Use WordPress Testing Tools to Build Your Plugin & Run Unit Tests in Github Actions


We’ve been talking a bunch about unit tests lately, starting with the basics, and then moving up to adding tests to a plugin, so you could see it in practice. We’ve also covered how to use Github Actions to deploy your site automatically to your host. Today, we’re going to take pieces from both of these concepts and combine them, so that we’re running our tests automatically with Github Actions when we push new code to the repository.

The easiest way I’ve found to get started is with WP Testing Tools from Valu Digital. The team at Valu Digital has provided a fairly easy way to get your tests up and running on Github Actions. We’re only going to cover how to use their base template to start from scratch with your plugin development so that you can run tests. Adding their test setup to an existing plugin takes a bit more work.

Start Your Plugin from the WP Testing Tools Template

To start, clone the repository onto your local computer.

git clone git@github.com:valu-digital/wp-testing-tools.git your-plugin-name

Next, we need to grab the plugins folder from inside the repository as that will be the base for our plugin. Migrate that folder to where you want your new plugin and rename it to match the plugin name you want to use. 

Dealing with Composer

This testing setup requires Composer, which you don’t need to be intimately familiar with today. I’ll cover Composer in detail in a future post. For now, you’ll need to run composer install to install the required dependencies for WP Testing Tools.

Unfortunately, I’ve found that the repository is missing some required Composer packages, so we’ll need to make sure these are also installed with the following commands.

composer require codeception/module-rest --dev 

composer require codeception/module-phpbrowser --dev

composer require codeception/module-db --dev

composer require codeception/module-asserts --dev

composer install

Wait, I got memory errors with Composer. Help! Locally, they usually don’t matter, and you shouldn’t see them in Github Actions, so you can ignore them for now.
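If the memory errors do block you locally, Composer honors the COMPOSER_MEMORY_LIMIT environment variable; setting it to -1 removes the limit for a single run:

```shell
# Let Composer use unlimited memory for this one invocation only.
# COMPOSER_MEMORY_LIMIT is a documented Composer environment variable.
COMPOSER_MEMORY_LIMIT=-1 composer install
```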

Now that we have the proper tools installed, you can push to Github and you’ll see that the unit tests run without issue. This plugin still isn’t ready for us to build on, though, so let’s rename the other strings in it and make it our own.

Setting Up Plugin Files

We can start by renaming the plugin header information found in plugin.php. Name it whatever suits your plugin and make the author yourself. We’ll also need to change the namespace and class entries so that we’ve named them properly for our project. I’m changing the namespace to my company name and using PluginBase as my class name for this tutorial. You can see my working renamed file below. I’ve also cleaned up the comments to make it easier to read.



<?php
/**
 * Plugin Name: Hostdedi - Github Actions Unit Tests
 * Plugin URI:
 * Description: Plugin base that runs unit tests with Github Actions
 * Author: Curtis McHale
 * Version: 0.1.0
 *
 * @package example
 */

if (!class_exists('Sfndesign\PluginBase')) {
    require_once __DIR__ . '/vendor/autoload.php';
}

Sfndesign\PluginBase::init();



Now in composer.json, we have a few things to change around as well. Make sure that you’re listed as the author of the plugin and change the links to Github Issues and Source to match your repository. You’ll also need to change the namespace of your plugin under the autoload entry. I’m using my company name so mine says Sfndesign. You can see my changed composer.json file below.


{
  "name": "sfndesign/pluginbase",
  "description": "Actions Plugin",
  "type": "wordpress-plugin",
  "license": "GPL-2.0-or-later",
  "authors": [
    {
      "name": "Curtis McHale",
      "email": "[email protected]",
      "role": "developer"
    }
  ],
  "require-dev": {
    "valu/wp-testing-tools": "^0.4.0",
    "lucatume/wp-browser": "~2.2",
    "codeception/module-rest": "^1.2",
    "codeception/module-phpbrowser": "^1.0",
    "codeception/module-db": "^1.0",
    "codeception/module-asserts": "^1.3"
  },
  "autoload": {
    "psr-4": {
      "Sfndesign\\": "src/"
    }
  },
  "scripts": {
    "wp-install": "wp-install --full --env-file .env --wp-composer-file composer.wp-install.json",
    "wpunit": "codecept run wpunit",
    "functional": "codecept run functional",
    "test": [
      "@wpunit",
      "@functional"
    ]
  },
  "config": {
    "optimize-autoloader": true
  },
  "support": {
    "issues": "",
    "source": ""
  }
}



Now we need to change the name of the Example.php file found in the src directory. I’m going to call it PluginBase.php to stick with the format we’ve been using. Next, open that file and change the namespace to Sfndesign and the class name to PluginBase. You can see the adjusted file below.


<?php

namespace Sfndesign;

class PluginBase {

    public static function init() {
        define( 'EXAMPLE', 'initialized' );

        add_action('the_title', function () {
            return 'EXAMPLE TITLE MOD';
        });
    }
}




Now that we’ve made these adjustments we need to run composer update again so that Composer registers the new autoload paths that are needed with our renamed files.
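If you only need Composer to pick up the renamed namespace and files, regenerating the autoloader alone is enough, and faster than a full update:

```shell
# Rebuild only the autoload files; no dependencies are re-resolved.
composer dump-autoload
```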

Finally, to make sure the whole thing is working well, I find it easier to change their initial test found in tests/ExampleTest.php to something that will return true no matter what. You can see this code below.

public function testInit()
{
    $this->assertTrue(true);
}

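With the scripts section from composer.json in place, you can also run the suites locally (assuming you’ve run the wp-install script to set up the WordPress test install first):

```shell
# These map to the "scripts" entries in composer.json.
composer wpunit      # run only the wpunit suite
composer functional  # run only the functional suite
composer test        # run both suites
```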
Now that we’re set up, you can initialize your plugin as a git repository and then push it to Github. Once you’ve done this you should see an action running under the Actions tab for your repository and everything will come back green because your unit test has been run.

Throughout the last few posts, we’ve written tests and used Github Actions to automate parts of our process. Now, it’s up to you to use these tools in your client projects. Realistically, you won’t go back and write tests later, so make sure you start your projects with tests from the beginning. If you want to go even deeper with testing, there is an excellent course by Fränk Klein that explains unit testing in WordPress. It’s already on my list to go through so that I can get better at my testing practices.


What is the Best Contact Form Plugin for WordPress?


Contact forms are needed on almost all websites. Luckily, there are a number of easy-to-use WordPress form plugins on the market.

A contact form is a way to gain feedback from customers or allow them to contact you directly through your site. The most commonly used form plugins all have a pretty similar set of features and integrations:

  • Allows you to view form entries
  • Works with a captcha service
  • Works with Zapier
  • Works with Mailchimp and other marketing services

The marketplace has shifted a fair bit from the days when free contact form options were limited: there are now plenty of feature-complete form plugins available at no cost. With so many solid options to pick from, choosing a contact form plugin for your site comes down to which one you prefer and how well it works for you.

Ask yourself these questions to work out which form plugin solution is the right one for you.

  • Do I need the paid version of this plugin or will the free version work?
  • Will I need more add-ons and features for my forms down the line?

Caldera Forms

Caldera Forms is a solid, free form plugin that has more options than most. Caldera Forms is developed by Saturday Drive, the same company behind Ninja Forms. Caldera Forms has an easy-to-use drag-and-drop interface for building forms. You could use Caldera Forms for building a simple contact form, a simple booking form, a credit card payment form, or a rating form.

  • One of the best free form plugins
  • It contains all of the features you will need
  • Easy to use

Everest Forms

Everest Forms plugin has a complete set of features and is another option worth checking out on your site. WPEverest is the company behind Everest Forms.

  • Drag and drop
  • Clean UI
  • Easy to use


Forminator

Forminator is a pretty new form plugin from WPMU DEV. It offers a very complete set of features in the free version.

  • Drag and drop form blocks
  • Complete set of integrations with common services
  • Includes a block for the block editor
  • Email routing
  • Front-end post submissions
  • Multi-file upload field

The only feature that is not in the free version of the Forminator plugin is the e-signature feature, which is offered in the Pro version. Forminator is worth checking out.

  • Feature complete
  • Is not a limited form plugin
  • Solid for a free plugin


weForms

weForms is another solid plugin that comes with a huge range of features built-in. BoldGrid is the company behind weForms. If you are using the block editor, then weForms comes with a block.

  • It just works
  • Easy UI
  • Has a number of built-in features


MetForm

The MetForm plugin works with Elementor, which means you can control the form from within the Elementor page builder on your site. Wpmet is the company behind MetForm.

  • Clean UI
  • Works in Elementor which means you can edit forms in Elementor
  • Has a number of integrations built-in

Contact Coldform

One of the easiest-to-use simple form plugins for a contact form is Contact Coldform. Jeff Starr is the developer behind the Contact Coldform plugin.

  • Easy to use
  • Works well
  • Basic and does exactly what it says

Gutenberg Forms

If you are using the block editor on your site, then one of the recommended form plugins for the block editor is Gutenberg Forms.

  • Works with the block editor
  • Native in the block editor
  • Easy to use

Contact Forms by Gutenforms

If you are using the block editor on your site then another recommended form plugin for the block editor is Contact Forms by Gutenforms.

  • Simple forms
  • Block editor compatible
  • Simple to use

Honorable mentions for more well-known form plugins for WordPress include Gravity Forms, Ninja Forms, and Formidable Forms. Ninja Forms and Formidable Forms both have free versions, while Gravity Forms is a premium form plugin that comes with many features and add-on plugins for extra features and integrations. All three offer add-on features, pretty similar levels of functionality, and well-priced plans.

Please take the time to test drive any of the contact form plugins we’ve mentioned on your staging site. After you have tested the form plugin of your choice, you can now begin creating all of the needed forms on your site.


The Business Case for Testing Your Code


We’ve talked a bunch about writing tests for your WordPress code, but one thing we haven’t touched on yet is why you should spend the extra time and money to write them. If you’re a manager or business owner, why should your developers ship features a bit slower as they take time to write tests?

Testing as an Investment

Today we’re going to tackle that management question. I want you to start viewing testing practices as an investment instead of an expense to your business.

Reduce Regressions

How much do you like it when your site breaks? If you’re like me, you hate it. I’m betting that at some point you wonder how on earth your developer could break the site. They must be terrible…right?


They’re just human like you and they make mistakes.

Good testing strategies can stop your projects from breaking. When you’re writing code and running tests, a good test suite will show you when something breaks. Then you can fix it right away when the work is fresh in your head.

Good testing can also cut down on debugging as you fix issues. Instead of wondering where on earth a problem is happening in the code, a failing test can show you where exactly you need to look. 

For those times you do find old bugs that break stuff, writing a test to catch this scenario means that in the future the code won’t break in the same way. No more chasing a bug around thinking you’ve fixed it. Good tests will tell you that the bug is squashed.

Deployments Are Easier

There was a time when I refused to deploy client sites on Thursday or Friday because I didn’t want to work on weekends. Two things fixed this issue. First, a repeatable deployment process let me know that I couldn’t mess things up with some silly FTP error. Second, writing tests let me know that my code didn’t affect anything else on my client sites that needed to work.

Now you’ll find me deploying code many times each day, even on Friday afternoon.

For clients, that means that if we get a feature approved on Thursday, they don’t have to wait until Monday for it to ship. My clients are happier because I can release features as soon as they’re ready to go.

Changes Become Easier

Have you ever worked on a project where some portion of the code was a black box that one person knew about, but who didn’t work there anymore? You’re terrified that if you even think about touching this code you’ll awaken the Kraken and lose a month of your life as you wrestle a mythological beast back into submission.

Good testing stops this from happening.

When you have good test coverage, anyone can swap in and work on a part of your system. When anyone can work on the code in your project, you’ve decreased the risk to your business because you’re not reliant on that single developer who can manage the Kraken.

This also extends to changing out big parts of your system, like the database you choose to use. If you’ve got a good test suite written you can change the layers of your application independently and know that they’ll still interact properly because your tests pass.

Tighter Developer Feedback Loops

It’s far easier to fix issues that have just been created. Curtis of 6 months ago must have had no idea what he was doing, because I’ve seen that code and had no idea what was going on. Writing decent tests can prevent this from happening because you find those bugs as you’re writing code instead of stumbling across them months later. Instead of trying to find the same mind space you were in months ago, you’re sitting in the code, fully understanding it and ready to fix any issues that come up.

The Only Thing Developers Write Less of is Documentation

I’ve looked at lots of code in my career and spent lots of time trying to figure out what on earth is going on with some websites I’m working on. If there is one thing developers write less than tests, it would be good documentation. Sure many say it’s important, but very few write any documentation at all.

While I’d love to say you need to write both tests and documentation, I’d settle for tests because they act like documentation.

When I take over a project with tests I can easily jump into the project and start writing code without hours spent trying to figure out what is happening. I know that when I break something, the tests should tell me I broke something. If I find an issue later, I add it to the tests, thus adding to the documentation on how the code should be working.

Tests make it easier for any developer to pick up your project without you needing to worry that everything will break because they haven’t been working on it from the start.

Improved Reputation

The reputation of your business is everything. If you have a reputation for shipping good work on time that doesn’t break, then you’re going to get more work. Testing can help you build this reputation.

Instead of breaking code as you “fix” things, you’ll see a failing test and fix it before the client knows there was an issue. Happy clients refer new clients, who can, in turn, become happy clients.

When you step back and think about it, your job isn’t to simply write code for customers. Your job is to write code that works to fulfill the needs of customers. When you add testing practices to your workflow you will be able to deliver on that better.

Your code will break less. You’ll be able to ship working features more often.

Your customers will be happier as you serve them better.

Stop making excuses and use testing practices to provide a better service to your customers.


Making Old Stogies New Again: A Magento 1 to WooCommerce Migration Story


If you were running a retail store circa 2010, chances are you had an experience like many others at the time. The Web offered a new opportunity to expand your physical store, and Magento was the best solution for the job. So you found your platform, built your strategy, asked one of your trusted employees to moonlight as a product photographer, and fired up the office computer to get to work. 

As you started building your site, you soon realized that creating the perfect store meant sitting in the office trying to learn a new piece of software instead of selling. Hiring a local developer wasn’t cheap, but eventually the site was everything you thought you wanted, and you started marketing it everywhere. Sales trickled in, but never really lived up to your expectations.

Over the next couple of years, you realized through customer feedback and your own testing that your slow sales weren’t about inventory or addressing consumer needs. The site had slowed to a crawl, your web developer had become more difficult to get a hold of, and product pages weren’t coming up in search engines. Something needed to change. But rebuilding your website is expensive and time consuming and you didn’t want to take on another project. Until you had to. 

An end for Magento 1

At the end of 2018, the Magento organization announced that support for Magento 1 (likely the version you’ve been using) would cease on June 30th, 2020. So after you finally found the right person to upgrade, optimize, and rebuild your store to drive the sales you were looking for, the software your store runs on will no longer be modernized, optimized, or updated. So what should you do next? Carpe diem! See the grand opportunity in front of you to upgrade – and develop the site of your dreams that’s bigger, stronger, and faster than before.

Case in Point: The path forward for a small business in Houston

At the end of last year, Stogies World Class Cigars in Houston realized that after five years, they weren’t seeing the sales benefit they’d hoped for in their online store. Even worse, hiring the talent needed to fix page speed and search engine issues was cost-prohibitive. Since the team at Stogies wanted to reduce their maintenance costs, as well as manage future updates, content, and layout changes in-house, WooCommerce was the strongest option for migrating from M1. Building atop the Hostdedi Managed WooCommerce platform immediately reduced future software update costs. Built-in automatic plugin / update testing and upgrades meant that Stogies could focus on merchandising and optimizing the purchasing path for buyers. Speed was also a big concern. When they came to Hostdedi, pages on their website often took 15+ seconds to load. As a result, sales were low online but great in stores. So another priority was to decrease load speed – because they knew by speeding up the site, more traffic and increased sales were soon to follow. 

Content and Creative are King

After finding the right platform in Managed WooCommerce, they knew it was important to bring forward the visual aspects of the old website while still keeping the site snappy. After all, the crew at Stogies was proud of how their website looked, just not how it was performing. After testing 50+ themes, we recommended Astra, the best-performing theme for their site. From there, we paired Astra with the Beaver Builder plugin to allow for easy future editing of layouts and sales pages.

Serving up a Seamless Customer Experience

It’s important that when a previous customer returns to a website, they recognize the landscape. The website should operate the same (or better) than it did before. We evaluated every bit of the customer experience from the old Stogies website, and were able to duplicate most of the functionality with off-the-shelf plugins included as a part of the Hostdedi WooCommerce platform.

Moving customer accounts and orders

The last step before testing the entire site was to make sure that customer accounts, previous orders, product data, and content were all transferred to the new website. Magento and WooCommerce are extremely different in the way that they store information. Using an easy-to-use import plugin for WordPress, we were able to successfully recreate all customer accounts, orders, and other data within their new WooCommerce site.

The moment of truth

After almost five years of dealing with the frustration of a slow, underperforming website, it was time to pop the cork on a bottle of bubbly, re-launch the website, and see whether or not the work to rebuild in WooCommerce was successful. 

The results were staggering

Within a month after launch, traffic increased 20%-50% per-day (over the previous year). Time spent by potential customers on the site increased by minutes, and average page load speed decreased from 5.11 to 2.14 seconds. Traffic from search engines increased by 181%, and new visitors were up by 67%. Most importantly, revenue started to double month-over-month.

Why WooCommerce and not Shopify? 

As we set out on the journey to help Stogies turn their stale store into an online powerhouse, we took a deep look at what it would take to build it on Shopify. While it’s possible to build a simple, beautiful store with Shopify, we ran into problems with even small customizations. We found that customization options were either free and limited or detailed and expensive. We also found that some of the features or customizations needed for our build would require ongoing support from a third-party developer – something we’d set out to eliminate for the Stogies crew.

We matched each site feature with its Shopify counterpart, and here is the fully-loaded annual cost estimate:

  • Non-Negotiables (product reviews, homepage slideshow, brand bar, from-the-blog section, recent products, product variations): WooCommerce $0, Shopify $371
  • Custom Core Features (mega menu, multi-tier header, multi-tier footer, real-time USPS rates, real-time UPS rates, gateway): WooCommerce $266, Shopify $119
  • Custom Features (store locator, gift cards, event calendar, quick view, faceted filter, pricing tables, loyalty points, custom strength indicators, linked product attribute archives, email to a friend, menu cart, SEO optimization, lazy load images, caching, forms, advanced search, PDF invoicing, email customization, ConvertKit integration, bulk product editing, wholesale pricing rules, import/export data tool, URL redirects): WooCommerce $704, Shopify $2,499
  • Themes (core theme, page customizations, theme customization, advanced customization): WooCommerce $147, Shopify $150 + custom dev
  • Hosting / Plan Cost: WooCommerce $948, Shopify $3,588 + custom dev
  • Total Annual Cost: WooCommerce $2,065, Shopify $6,727

In short, WooCommerce is roughly a third of the cost of Shopify and doesn’t require as much custom development.

We’re here to help you move forward

While Stogies’ results are extraordinary, they’re not unique. Modernizing, updating, and migrating your store to a fast WooCommerce platform will bring years of frustration with your online store to an end. Whether you’re working with an expert or managing your own store, we’re here to help.


The Ultimate Magento 2 Performance Checklist


At Hostdedi, we spend a considerable amount of time optimizing our infrastructure to make your Magento 2 store faster. After years of research and development, we’ve pulled together the ultimate Magento 2 performance checklist:

  1. Remove unused modules: Magento 2 comes with many preinstalled modules that aren’t always needed. Yireo created a great module to disable the optional modules you don’t need through Composer. The idea behind the module is quite simple: you replace any unused module with nothing to avoid loading unused modules and classes. This module and a complete how-to can be found here:
  2. Enable CSS/JS minification and merging: Minifying and merging CSS and JS files can greatly improve load times and the general performance of your store by cutting the number of requests your site makes when loading a page. You can minify and merge CSS and JS files from the admin panel by navigating to the Developer tab under Stores > Configuration > Advanced (keep in mind this tab only shows when you are using developer mode). Magento recommends using a third-party tool like Baler or MagePack for JS bundling, given that Magento’s built-in bundling mechanism is not optimal and should only be used as a fallback.
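If you prefer the CLI, the same settings can be flipped with bin/magento config:set; the paths below are the standard Magento 2 configuration paths for CSS/JS minification and merging:

```shell
# Enable CSS minification and merging
php bin/magento config:set dev/css/minify_files 1
php bin/magento config:set dev/css/merge_css_files 1

# Enable JS minification and merging
php bin/magento config:set dev/js/minify_files 1
php bin/magento config:set dev/js/merge_files 1

# Flush the cache so the changes take effect
php bin/magento cache:flush
```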
  3. Enable production mode: While this one might seem simple, the number of sites we see running in the wrong mode is staggering. No one should be running Magento 2 in production in any other mode, yet we still see too many stores running in either default or developer mode. The best way to switch modes is via the CLI:

php bin/magento deploy:mode:show

to see which mode your store is using, and

php bin/magento deploy:mode:set production

to set production mode.

  4. Use Redis for session, default, and full page cache: Redis is one of the most widely used key/value database engines, and Magento 2 comes with integrated support for using it as both the session storage and the default/full page cache. To configure your store to use Redis, run the following commands from your root folder:

bin/magento setup:config:set --cache-backend=redis --cache-backend-redis-<parameter_name>=<parameter_value>...

bin/magento setup:config:set --session-save=redis --session-save-redis-<parameter_name>=<parameter_value>...

You can find a complete list of Redis configuration parameters and values for sessions here and for the full page cache here
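As a concrete sketch, assuming Redis is running locally on the default port (adjust the host, port, and database numbers for your environment):

```shell
# Default and full page cache in Redis database 0
bin/magento setup:config:set --cache-backend=redis \
  --cache-backend-redis-server=127.0.0.1 \
  --cache-backend-redis-port=6379 \
  --cache-backend-redis-db=0

# Session storage in Redis database 2, kept separate from the cache
bin/magento setup:config:set --session-save=redis \
  --session-save-redis-host=127.0.0.1 \
  --session-save-redis-port=6379 \
  --session-save-redis-db=2
```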

  5. Use Elasticsearch for Magento’s catalog search: As of Magento 2.4, the MySQL catalog search engine has been deprecated (and removed) and Elasticsearch became the catalog search engine, greatly improving the speed and quality of search results. To enable Elasticsearch, navigate to your admin panel; under Stores > Settings > Configuration > Catalog > Catalog > Catalog Search you will find a tab called Search Engine. Configure your store to use your Elasticsearch endpoint, click Test Connection, and if everything worked, you’re all set. You can find the complete list of parameters to configure Elasticsearch here.
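The same settings can be written from the CLI; the example below assumes Elasticsearch 7 running on localhost (the config paths are the standard Magento 2 catalog search settings):

```shell
# Use Elasticsearch 7 as the catalog search engine
php bin/magento config:set catalog/search/engine elasticsearch7
php bin/magento config:set catalog/search/elasticsearch7_server_hostname 127.0.0.1
php bin/magento config:set catalog/search/elasticsearch7_server_port 9200

# Rebuild the search index against Elasticsearch
php bin/magento indexer:reindex catalogsearch_fulltext
```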
  6. Use Varnish to speed up your response time/TTFB: You either love or hate Varnish, but at the end of the day it greatly improves TTFB, and if configured correctly it can do wonders for the general usability and user experience of your site. Magento 2 features an out-of-the-box integration, making Varnish configuration really simple. To configure Varnish, navigate to Stores > Settings > Configuration > Advanced > System > Full Page Cache, select Varnish from the Caching Application list, and configure the rest of the options. A full list of all the parameters you can use to configure Varnish can be found here.

You can also configure Varnish from the CLI by running:

php bin/magento config:set --scope=default --scope-code=0 system/full_page_cache/caching_application 2

  7. Use a CDN: A content delivery network stores media and static assets on edge servers near your customers for faster delivery. This means your assets are physically closer to your customers, resulting in faster response times. Configuring a CDN for Magento is not as straightforward as it should be, but it can be done from the admin by navigating to Stores > Settings > Configuration. Under General, click on Web and expand the Base URLs sections. Once there, update the Base URL for Static View Files and the Base URL for User Media Files with the URL of the CDN endpoint where your static view and JavaScript files are stored. Do the same under Base URLs (Secure) and, once done, click Save Config. You might need to flush your cache for this change to take effect. If everything worked as expected, you should see your CDN URL being used to serve most of your site’s static files.
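You can also set these base URLs from the CLI. The CDN hostname below is a hypothetical example; the config paths are the standard Magento 2 base URL settings the admin screens write to:

```shell
# Point static and media base URLs at the CDN (HTTP and HTTPS)
php bin/magento config:set web/unsecure/base_static_url https://cdn.example.com/static/
php bin/magento config:set web/unsecure/base_media_url https://cdn.example.com/media/
php bin/magento config:set web/secure/base_static_url https://cdn.example.com/static/
php bin/magento config:set web/secure/base_media_url https://cdn.example.com/media/

# Flush the cache so pages start referencing the CDN URLs
php bin/magento cache:flush
```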
  8. Enable asynchronous email notifications and asynchronous order data processing: During times of high concurrency, you might want to move the processes that handle checkout, order processing, email notifications, and stock updates to the background. To enable async email notifications, go to Stores > Settings > Configuration > Sales > Sales Emails > General Settings > Asynchronous Sending.

You can activate Asynchronous order data processing from Stores > Settings > Configuration > Advanced > Developer > Grid Settings > Asynchronous indexing

When enabled, orders will be placed in temporary storage and moved in batch to the Order grid without any collisions.
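Both toggles can also be set from the CLI; the paths below are the standard Magento 2 config paths behind those admin screens:

```shell
# Asynchronous sending of sales emails
php bin/magento config:set sales_email/general/async_sending 1

# Asynchronous indexing for the Order grid
php bin/magento config:set dev/grid/async_indexing 1

php bin/magento cache:flush
```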

While there are no real magic tricks here, we tried this guide on our cloud hosts and ended up with an A grade and a page load under 2 seconds on GTmetrix 🥳

If you’d like assistance enacting these changes, or are interested in our Managed Magento offering, please reach out to our award-winning support team 24/7/365 at [email protected].


Everything You Wanted to Know About Auto Scaling


One of the greatest advantages of hosting your website in the cloud is the ability to scale up or down quickly. Usually scaling will just take a few minutes and you can double or triple your server capacity.

Compare this kind of scalability to traditional hosting solutions where you have your own server (where it can take days or weeks to get a new server online), and you can immediately see why hosting in the cloud makes good sense.

Generally speaking, you choose your hosting plan based on where you are in your business lifecycle. If you know you’re near the limits of your plan you can upgrade quickly and prevent any downtime or slowness for your users. But what happens if your plan is the right size for you most of the time, but you have occasional traffic spikes – like when you launch a new product? That’s when auto scaling can save the day.

What is Auto Scaling? 

If you’ve never heard of auto scaling before you can think about it like an HOV lane on the highway. When Hostdedi Auto Scaling is enabled, if our servers detect a surge of traffic, we’ll automatically open up an ‘HOV lane’ to manage the expanding traffic. The added resources (or lanes in our example) will keep your website experience fast & snappy, ensuring an undisrupted experience for your visitors.

How does Auto Scaling Work?

Auto scaling works by allocating additional resources from a resource pool. It gets triggered by analyzing PHP threads (also known as PHP workers) every minute to see if demand outstrips supply. Once demand exceeds capacity, PHP threads are automatically scaled. Auto scaling then re-tests requirements every 10 minutes until it is no longer needed.

Back to the HOV lane example – if we see that your highway is in bumper-to-bumper traffic, we’ll add extra lanes to the highway to make sure each car can go as fast as it wants.

The Benefits of Auto Scaling

Address Variable Traffic Demand – Without Having to Upgrade

Few sites see consistent traffic 24/7/365. Auto Scaling provides you with the ability to address these traffic fluctuations. For example, imagine that a small business owner named Jerry finds himself with consistent performance issues on Saturdays. 

After analyzing the traffic, Jerry finds that Saturdays are particularly busy periods on his website. He thinks he might need to upgrade to a larger hosting solution, but after a cost-benefit analysis, he doesn’t see the ROI because the other six days of the week don’t have any issues. If he could adjust his cloud hosting resources only on Saturdays, he could stick with the plan that makes sense while still addressing the traffic issue that needs to be managed.

He can do exactly that with Hostdedi Cloud Auto Scaling. All Jerry needs to do is leave the feature enabled (our Auto Scaling is enabled by default). On Saturdays, Auto Scaling will automatically add more resources to his site and he’ll reap the rewards of satisfied visitors that could turn into customers. 

Eliminate Additional Costs

Every Hostdedi plan across Managed Magento, Managed WooCommerce, and Managed WordPress gets 24 hours of auto scaling for free. So, if you have traffic spikes for less than 24 hours a month, it won’t cost you a single penny. You get the benefits of a higher plan at your current plan’s cost.

Other hosts might force you to upgrade or, even worse, let your site crash, forcing you to lose sales. We’re your business partner. And we’re not in the business of letting you down.

Want to learn more? Check out our Hostdedi Cloud Auto Scaling documentation in our Knowledge Library.

What’s Next for the Enterprise? Advanced Auto Scaling

On a larger scale, if you’re looking for predictable performance for extreme traffic spikes, our Hostdedi Advanced Auto Scaling offering provides unlimited support for the heaviest of traffic loads without having to buy, configure, deploy and migrate to larger environments. 

As an example, if you’re planning a national TV appearance, or you have an army of influencers who can drive traffic your way for a flash sale, Advanced Auto Scaling lets you add as many resources as you need. Not just the 10 PHP workers you’d get from the next plan up, but 10, 20, 30, 40, or even more PHP workers to cover you without worry.

Advanced Auto Scaling costs $99/mo and shifts all of the PHP workers from your current infrastructure into a PHP container. As you need more resources (to handle more concurrent users), you can add additional containers with 10 PHP workers apiece for $50/day. No commitment, no long-term contract.
With Hostdedi Auto Scaling and Advanced Auto Scaling, you are at the ready to handle traffic spikes whenever they occur.

Source link

Git Hooks – Hostdedi Blog


Git is a powerful version control system that we’ve barely scratched the surface on over our last few posts. Today, we’re going to look at the automation power that Git can give you with Git Hooks.

Every repository gets hooks built in when you use the git init command. When a repository is initialized you get a hidden .git directory and inside that is a directory called hooks that will contain all your hooks. Open any git repository you have handy and use ls -a to see the hidden directory, then open it up in your favorite code editor.

To start, you’ll see a bunch of files with .sample file extensions. These are exactly what they say: sample scripts that you could use in your projects. The files are named to correspond with the hook they run on, so post-commit.sample runs on the post-commit hook.
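
Activating a sample is just a rename plus an executable bit. Here’s a quick sketch you can try against a throwaway repository (the temp-dir setup is only there so the example is safe to run anywhere):

```shell
# Start from a throwaway repository so we don't touch real hooks
tmp=$(mktemp -d)
cd "$tmp"
git init -q .

# List the sample hooks git created for this repository
ls .git/hooks

# Activate the pre-commit sample by dropping the .sample extension,
# then make sure it stays executable
mv .git/hooks/pre-commit.sample .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```

Once the .sample extension is gone, git runs the file on the matching hook automatically.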

You can use pretty much any language to write a hook. The file is parsed according to the shebang notation at the top of the file. If you wanted to use Node you’d use #!/usr/bin/env node and your file would be parsed as a Node script.
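
For instance, the smallest useful shell hook is just a shebang plus a command. This sketch writes a post-commit hook that only prints a message, so it’s harmless to drop into any repository:

```shell
# Write a minimal post-commit hook: a shebang line, then one command
mkdir -p .git/hooks
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
echo "post-commit hook fired"
EOF
chmod +x .git/hooks/post-commit

# Hooks are ordinary executables, so you can run one directly to try it
.git/hooks/post-commit
```

Running the file directly like this is a handy way to debug a hook without making throwaway commits.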

Before we dive into what you can do with git hooks, let’s take a look at some of the hooks that are available to you.

Types of Git Hooks

Commit Workflow Hooks

pre-commit is run before you even enter your commit message and it can be bypassed with git commit --no-verify.

prepare-commit-msg can be used to edit the default message you see in your commit message. Use it to give instructions to developers about what type of commit message they should be leaving. It can also be used to automate the contents of commit messages that are generated for you, like merges, or to add an issue number to your commit message automatically.

commit-msg can be used to validate the commit message for your project. Maybe you don’t want anyone to be able to put in a commit message that simply says “dealing with white space”. You can use this hook to detect the presence of the words white space and then exit and provide a warning to the user that they need to have a better commit message.
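
A sketch of that commit-msg hook might look like the following. Git passes the path of the file holding the commit message as the first argument; the phrase we reject here is just this article’s example, and the /dev/null fallback is our own addition so the script is safe to run standalone:

```shell
#!/bin/sh
# .git/hooks/commit-msg — $1 is the path to the commit message file.
# Fall back to /dev/null so running the script with no argument is harmless.
msg_file="${1:-/dev/null}"
if grep -qi "white space" "$msg_file"; then
  echo "That commit message needs more detail than 'white space'." >&2
  exit 1  # a non-zero exit aborts the commit
fi
```

Exiting non-zero is what actually blocks the commit; exiting zero lets it through.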

post-commit runs after all the commit hooks above. It’s most useful for a notification that a commit has been made.

Client Hooks

post-checkout runs after you’ve run a successful git checkout command. If you had a set of large files used on the site but didn’t want them in source control, you could use this hook to move the files into place for you.

pre-push runs during a git push command before any objects are transferred to the remote repository.

Server Hooks

pre-receive runs when a client has pushed code to a remote repository. This can be used to check the code that is being pushed to make sure that it meets the criteria of your project before you accept the push.

post-receive runs after your remote repository has received the updates. This could be used to call a web hook which triggers a deployment process, or to notify a chat room that a commit has been received and is ready for review.

Many of the hooks above can be set to run only on specific branches. That might mean running a post-receive hook only when someone has pushed code to the main branch, which is supposed to be ready to deploy. A list of developers could be notified to review the code and then deploy it. This way you would always have two sets of eyes on a deploy, which can mean catching mistakes that a single developer can easily miss.

I’ve skipped some of the hooks that are available because I’ve never seen a need to use them. One set of hooks I didn’t talk about is the email workflow hooks. If you’re not accepting patches to your code via email, then you’ll likely never need them. You can find all the available hooks in the documentation.

In practice, the hooks I’ve used most are:

  • pre-commit
  • pre-push
  • commit-msg
  • pre-receive
  • post-commit
  • post-receive

Now let’s do something with these hooks.

Activating a WordPress Plugin with WP Cli and Git Hooks

For one client project this year I was adding a store, and still doing a few tasks on the main site. That meant the main site did not have any of our WooCommerce plugins installed or activated. I needed to develop the WooCommerce store on one branch and only once I was ready to push it all live, did I want to move WooCommerce over to main.

To start we’ll need a new branch called store. We can get this by using git checkout -b store. This creates a new branch and checks it out for us. Now let’s get the hook ready.

First we need to create the post-checkout hook with this command: touch .git/hooks/post-checkout.

Next we need to make it executable. We can do this with the chmod command from the terminal: chmod +x .git/hooks/post-checkout.

Now open the file in your code editor of choice and copy the code below into your post-checkout file.

#! /bin/bash

wp plugin activate woocommerce

echo "activated WooCommerce"

wp plugin activate automatewoo

echo "activated AutomateWoo"

You can demo this by changing to any branch via terminal. You should see two lines telling you that WooCommerce and AutomateWoo have been activated. We know it’s working, but it’s not quite what we want because it will turn the plugins on every single time we change to any branch.

What we really want is to turn them on when we move to our store branch, and then turn them off when we are on our main branch. To do that we’ll need the hook to detect which branch we are on. Swap the contents of post-checkout with the code below.

#!/bin/bash

branch_name="$(git symbolic-ref HEAD 2>/dev/null)"

if [ "refs/heads/store" = "$branch_name" ]; then
  wp plugin activate woocommerce
  echo "activated Woo"

  wp plugin activate automatewoo
  echo "activated AutomateWoo"
fi

if [ "refs/heads/main" = "$branch_name" ]; then
  wp plugin deactivate woocommerce
  echo "deactivated Woo"

  wp plugin deactivate automatewoo
  echo "deactivated AutomateWoo"
fi

This code starts by assigning the branch we are checking out to the branch_name variable. Then we have two if statements. The first checks to see if we have moved to the store branch. If we have, it uses WP CLI to activate WooCommerce and AutomateWoo.

The next if statement checks to see if we are on the main branch. If we are, it will deactivate the plugins with WP CLI and tell us about it in the terminal.

Controlling Git Workflows with Git Hooks

In a previous post on Git I talked about different Git workflows. One very common use case for hooks is to stop anyone from committing code directly to the main branch. You can use a hook to make sure that all code is merged from a different branch into main.

Start by renaming pre-commit.sample to pre-commit and then make it executable as I’ve described above. Next, grab the code below and use it in the pre-commit file.

#!/bin/bash

branch="$(git symbolic-ref HEAD 2>/dev/null)"

if [ "$branch" = "refs/heads/main" ]; then
  echo "WHOA, that was '${branch}'. You should not do that. Stop doing silly stuff, create your own branch, and merge it."
  exit 1 # remove this line to print the warning without blocking the commit
fi

This checks to see if we’re on the main branch, and if we are, the commit is stopped. Then it prints a reminder to the user that they shouldn’t be committing directly to the main branch.

Remember, many projects are switching to main as their default branch. Older projects may need master in place here if they haven’t updated.

You could even take this a step further and use cURL to access the API of a chat app and then complain publicly that someone tried to commit to main.
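
A hedged sketch of that idea is below. The webhook URL is a placeholder, not a real endpoint, so the curl call is left commented out; the hook still prints the payload it would send:

```shell
#!/bin/sh
# Extra step for .git/hooks/pre-commit: report main-branch commit attempts.
branch="$(git symbolic-ref --short HEAD 2>/dev/null)"
payload="{\"text\": \"Someone tried to commit directly to ${branch}!\"}"
echo "$payload"
# Placeholder endpoint — swap in your chat app's incoming-webhook URL:
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$payload" https://chat.example.com/hooks/incoming
```

Most chat apps (Slack, Mattermost, and the like) accept a JSON body shaped roughly like this on their incoming-webhook endpoints, but check your app’s docs for the exact format.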

The only limit of git hooks is your imagination. You could use them to stop someone from committing if a TODO is present in their code, or to stop trailing whitespace at the end of a file.

If you have some part of your workflow that is a continual stumbling block, look at hooks to automate it, so that you don’t have to remember.


Understanding WordPress Unit Testing Jargon


In our last post, we took a basic look at testing your code in WordPress. Today, we’re going to take another step towards writing well-tested code, by introducing you to (and helping you understand) all the jargon that gets thrown around when you talk about unit testing for WordPress.

Types of Tests

Yup, unit tests are not the only types of tests we can have for our application so let’s look at the different types of tests you may use in your work.

Unit Tests

These types of tests are the first stage in a testing system. They specifically test to ensure that the code you wrote performs as expected. Unit tests form the foundation of your tests because if your code doesn’t work properly, then later types of tests are built on a shaky foundation.

If I wanted to limit the editing of permalinks to site administrators for posts that are over two weeks old my unit test would use WP_Mock to ensure that I have a user of the proper type and a post with the proper date. Then I’d make sure that my function returned the expected true or false value.

Unit tests should have no dependencies outside of the code you’ve written. They should interact with no other system. That means our last post wasn’t a true unit test, it was an integration test because we interacted with the database as we created our user.

When you use unit tests as the foundation of your work, you help ensure that each method/function has a single responsibility because it gets progressively harder to test as you add more conditions to a single function.

Some of the tools you’ll encounter in unit testing WordPress projects are:

Integration Tests

This is the second type of testing you’ll do in your work. Integration tests make sure that the interactions between two systems function as expected. Unlike unit tests, the goal here is to see the interaction between multiple systems.

WordPress itself has more integration tests than unit tests because, when most of the application was written, best practices were different. The functions inside WordPress are much larger than those in many newer PHP CMSes and do much more work. These bigger functions are hard to unit test properly because they do so much. That means we rely more on integration testing as we check how our code works directly with WordPress.

None of this is bad, it’s simply a product of an application that wasn’t built when testing was as common as it is now.

Another example of an integration test would be if you were building an integration with an email marketing platform. You may use unit tests to make sure you validate the email properly, and submit the form as expected. Integration tests would be used to ensure that when you submit your valid email to the email platform you deal with the response properly.

Tools for integration testing:

Mutation Testing

Mutation testing is only going to be used for projects that have some sort of test coverage already. This type of testing creates “mutants” of your code by introducing common coding errors. The goal is that your unit tests break when these errors are introduced which means you’ve caught and killed the mutant. Mutation testing tools will report back on how many mutants you’ve killed and the places in which you haven’t caught mutants.

Mutation testing can be used to measure the quality of your test coverage. If you’ve got lots of surviving mutants alongside 100% test coverage, you have a big problem. It means that your tests don’t catch common errors programmers make. You’ve got code breakage waiting to happen.

Essentially, you’re testing your tests.

Tools for mutation testing:

Acceptance Tests

These can also be called functional tests or browser tests. Where unit tests start with your code and work outwards, acceptance tests take the view of the person using your software. With acceptance tests, you may automate the web browser to interact with your site to make sure that users see what they expect to see.

Acceptance tests are harder to maintain because a small wording change in the UI of your software can mean that the test breaks because the automation can no longer find the UI elements it expects to find. For that reason, only invest in acceptance tests for business-critical infrastructure, like the checkout process on your shopping cart.

Tools for acceptance testing:


Now that we’ve covered the types of tests you can do, we need to make sure we understand the other language that developers will use as they write tests.

Test Doubles

The term test doubles is a generic term that refers to any time you replace a production object/function/thing for testing purposes. We did this in our previous post on Getting Started with Unit Testing in WordPress when we used WP_UnitTestCase to add a user that didn’t exist in our production database. It can be helpful to think of it like a stunt double who “stands in” for the actor when things get dangerous. Test doubles “stand-in” for our data and code to make testing easier. They should look and behave like their production counterparts but be simplified to reduce complexity when we’re testing.


Mocks

Mocks are used when you don’t want to interact with the API or database. You use a mock to fake database interactions so you can test a single unit of code without adding the complexity of a database.

A mock doesn’t have to be data in a database. A tool like WP_Mock has the ability to fake the hook system inside WordPress. This lets you test to see if a hook was called in your function, without needing to interact with WordPress itself.

Below we can see an example in WP_Mock where we fake the get_permalink function. We provide the number of times we expect the function to be called, arguments we expect, and the value we expect returned.

WP_Mock::userFunction( 'get_permalink', array(
  'args'   => 42,
  'times'  => 1,
  'return' => 'https://theanswertoeverything.fourtytwo/guide',
) );

We’ll cover how to use WP_Mock in a future post.


Stubs

Stubs are hard-coded values for our tests, like canned answers to the questions our test may ask. You would be using a stub if you instructed your test to assume that a user is logged in while running a test. Another test may assume that the user is logged out. Both tests would be making sure that your functions returned the proper values for the given branch in your code.

It’s very likely that you’re using stubs and mocks together. In the example above, you use a stub to assume the logged in value, and then a mock to make sure that the proper values are returned.


Dummies

Test dummies are used when you don’t care about what the code does. Maybe you use a dummy to fill in an array or parameter list so that the code works properly. Dummies are extra information that likely doesn’t matter in the context of the specific test you’re writing.

For a logged-in user, maybe part of the function you’re testing expects a name. Even if your current test doesn’t need that name, you need to make sure it’s filled in so that your test passes. Of course, you should also test the result of your function without that name so you’re sure that you handle failure conditions properly.


Factories

A factory is a tool that lets us populate valid objects in our data model so that we can test the data. In our last post we used a factory when we added a user to our code inside WP_UnitTestCase.

One thing to be wary of here is changing the data model in your code. If your user suddenly needs an extra field, then you’ll need to head into every test and make sure that you have added that extra field every time you’ve used a factory.

Monkey Patch

This is a catch-all term for dynamically replacing attributes and functions at runtime. WP_Mock is “monkey patching” by replacing the default WordPress functions for testing. This means that we don’t have to call into WordPress directly when we’re unit testing.


Assertions

An assertion is a boolean expression that will be true unless there is an error. An example would be using assertFileEquals to check if a file has the expected content. It will return true if the file matches expectations, and you have a passing test.


Now, what overall system are you going to approach your tests with? Does it matter more that individual functions are valid, or that the behavior the end-user sees is valid?


TDD

TDD, or Test Driven Development, is when you write tests first to validate that the code functions as expected. Like unit tests, you’re starting with the expectation of the code and then working towards the customer/user as you go. TDD doesn’t care about the outputs; it’s only concerned that the code functions as expected.

Tests written under TDD are only going to be readable by developers who understand testing and what an assertion means.


BDD

BDD, or Behavior Driven Development, grew out of the shortcomings of TDD. Here you start with what the end-user expects to happen as a result of the code. You still write your tests first, but you focus them on the result of your code. BDD doesn’t care how you arrive at outputs, as long as the expected behavior is the result.

One big benefit to BDD tools like Cucumber is that the language used to write the test is easily readable by customers and developers. It’s easy enough to understand that customers can write feature requests using Cucumber after a brief introduction.

Now, which one should you use? The correct answer is probably a bit of both. TDD methods can be used to ensure that the code the developer writes runs as expected. BDD can be used to make sure that the output of that code is what the customer expects to happen.

That’s all, folks! Understanding unit testing for WordPress just got a lot easier. With all that jargon under your belt, you’ll be ready for the next post where we’ll build a plugin using TDD and BDD with all the tools at our disposal.


What Is a Platform As a Service (PaaS)?


Once upon a time, software as a service was the only “as a service” acronym floating around. As the industry flourished though, forks came off of it into related spaces, creating a whole slew of aaS companies in numerous technological categories.

One of those forks is PaaS.

PaaS stands for platform as a service. It’s a service that provides and maintains a platform for developing, testing, and deploying applications for developers. All of the back-end infrastructure is managed by the PaaS, so that the developers can focus on their projects.

PaaS providers are able to reduce the amount of coding you need to do by providing you with middleware to use directly on the platform, with no dependencies on operating system compatibility (aw, yis).

A PaaS makes sense to use when you have multiple developers working on the same project. Like any other team-based SaaS app, PaaS apps allow you to add multiple users in a web-based development environment to co-work remotely on the same project.

Similar to more infrastructure-focused service providers like Hostdedi, PaaS providers include the basic infrastructure required to deploy apps, such as servers, networking, storage, and reference architectures. 

However, PaaS is arguably a more complete solution for app developers, providing an environment which allows you to build, collaborate, test, deploy, and manage your applications, all in one place.

You may not be familiar with the term PaaS, but if you’re a developer, you may already be using one:

  • Beanstalk
  • Heroku
  • Microsoft Azure

Platform as a service companies do one thing: save app developers time and money by bundling and automating a bunch of the things they’re used to doing manually. Then, when you run into problems, there’s a team of experts just behind the curtain to help you out.

PaaS companies are to app developers what Hostdedi is to ecommerce site developers – an all-in-one solution with a team of experts who specialize in your field. You don’t need tier-one support at this stage in the game; you need someone who’s at least as informed as you are to help solve these problems.

Check out the PaaS providers above to help build your next application, and when it’s time to launch, talk to Hostdedi about managing your ecommerce site.



What Is PCI Compliance? – Hostdedi Blog


When it comes to processing payments online these days, most people don’t even bat an eye. Shoppers are paying with credit cards, over email, and through Facebook, but for ecommerce sites, payment security risk aversion is integral to how they do business.

Here’s how to make sure that your clients’ sites are staying compliant, and what to do when you’re dealing with an out of date application that’s reached end-of-life.

What does it mean?

First of all, let’s get our heads around what PCI compliance even means.

Originally set by the major credit card companies, the PCI Security Standards Council formed these parameters for payment processing compliance to protect their cardholders from security threats and fraud.

Using a set of qualifications to determine the safety of a point-of-sale terminal or ecommerce website, these standards are now mandatory best practices for businesses that process card payments from their customers.

The standards for PCI compliance are as follows:

  • Install and maintain a firewall configuration to protect cardholder data
  • Do not use vendor-supplied defaults for system passwords and other security parameters
  • Protect stored cardholder data
  • Encrypt transmission of cardholder data across open, public networks 
  • Use and regularly update anti-virus software or programs
  • Develop and maintain secure systems and applications
  • Restrict access to cardholder data by business need to know
  • Assign a unique ID to each person with computer access
  • Restrict physical access to cardholder data
  • Track and monitor all access to network resources and cardholder data
  • Regularly test security systems and processes
  • Maintain a policy that addresses information security for all personnel

For developers, a separate set of standards has been set by the PCI SSC to ensure websites are processing electronic payments securely:

  1. Do not retain full magnetic stripe, card verification code or value (CAV2, CID, CVC2, CVV2), or PIN block data
  2. Protect stored cardholder data 
  3. Provide secure authentication features 
  4. Log payment application activity
  5. Develop secure payment applications
  6. Protect wireless transmissions
  7. Test payment applications to address vulnerabilities
  8. Facilitate secure network implementation
  9. Cardholder data must never be stored on a server connected to the Internet
  10. Facilitate secure remote access to payment application
  11. Encrypt sensitive traffic over public networks
  12. Encrypt all non-console administrative access
  13. Maintain instructional documentation and training programs for customers, resellers, and integrators

Penalty fines for noncompliance can range between $5,000 and $100,000 a month, and inevitably wind up being the merchant’s responsibility. Additionally, noncompliant merchants can face steeper transaction processing fees, or even lose the ability to process electronic payments for their customers in the future.

What Developers Need to Know About PCI Compliance

Thankfully, payment applications and payment gateways have taken care of much of the technical side of ensuring that payments are processed securely. As a developer or site builder, your primary responsibility where PCI compliance is concerned is to ensure that your applications meet the PCI SSC’s standards and stay up to date.

PCI compliance standards are determined by the volume of transactions which a merchant processes. The merchant is assigned a compliance level requirement based on the volume of business that he or she does, and the security of their sites may be tested by an approved scanning vendor, or ASV.


Ecommerce sites fall under PCI SAQ 3.1 and have their own set of standards.

Whether your client requires an ASV really depends on which payment processors and ecommerce applications you’re running their site on. The data-flow charts in the PCI SSC documentation can help you determine whether your client’s site will need an ASV or not.

The burden of site security is ultimately on the site administrator, which may be you. If that’s the case, the strongest prevention for noncompliance is pretty straightforward:

  • Make sure plugins stay up to date
  • Ensure that software updates and security patches get installed
  • Maintain stringent server security standards
  • Make sure ecommerce applications are up to date

What End of Life Means for PCI Compliance

Recently, Magento 1 reached end-of-life, putting thousands of ecommerce sites into a compliance grey area when Adobe stopped issuing official security updates.

While the ecommerce application itself represents only a small part of what PCI compliance truly entails, for merchants still running their ecommerce sites on Magento 1, the important thing to note is that there will no longer be security patches and updates issued for the platform. They’re on their own unless they’ve invested in a solution like Hostdedi Safe Harbor.

This primarily applies to number seven in the list of PCI compliance measures for developers:

Test payment applications to address vulnerabilities.

With Magento no longer looking after security updates for Magento 1 users, it raises the question: can an ecommerce site be PCI compliant on an ecommerce application that’s reached end of life?

Yes. Hostdedi has done it with Safe Harbor. 

What to Do When a Platform Reaches End of Life

Magento was built on Hostdedi servers. When Magento 1 started approaching end of life, our engineering team jumped to work developing a solution that would allow merchants to decide for themselves when to migrate.

For many Magento 1 store owners, making the move to Magento 2 in the wake of COVID-19 wasn’t financially realistic. Site migrations are expensive and complex, and with so much upheaval and uncertainty, many were understandably scared to make the leap.

So the engineering team at Hostdedi came up with a compromise. Hostdedi Safe Harbor was built to address Magento 1 end-of-life, keeping ecommerce sites and store owners PCI compliant until at least the end of 2021, so they can migrate on their own time.

With regular security patches made by the team who literally started with Magento, Hostdedi is able to keep Magento 1 sites and stores PCI compliant until they’re ready to make the switch.

End of life doesn’t have to mean the end of PCI compliance.

Get more time, and keep customer data safe with Hostdedi Safe Harbor.

Click here to learn more about Hostdedi Safe Harbor, or open the chat window at the bottom right of your screen to speak to sales.
