Introduction to Unit Testing for WordPress

While many of us have heard of unit testing, it’s not a big topic of discussion in the WordPress community. Yes, there are tests for WordPress core. Yes, many of the major plugins like WooCommerce have unit tests, but that doesn’t make it easy to jump on the unit testing train. 

While you can find lots of content about unit testing PHP applications, there aren't many people talking about unit testing specifically for WordPress. Precious little has been written about where to start for developers who are ready to increase their code quality and want to learn how to add tests to their work. Even worse, some of the tools for unit testing in WordPress don't work as advertised, or use older versions of PHPUnit. These are problems you'll have to discover on your own as you try to start testing.

That leaves many people who want to get started with testing on a multi-hour journey to get even the basic default tests running for their plugin.

Today we’re going to solve that problem by giving you an up-to-date look at how to start unit testing your WordPress code.

Things to Have Installed

Before we dive into unit testing, I'm going to assume you have Laravel Valet and WP CLI installed. I use these excellent directions from WP Beaches to install Valet, though I use the MySQL instructions, not the MariaDB ones.

You can find the instructions to install WP CLI on the WP CLI site.

If you’re on Windows, you may have a bit of extra digging to do so you can run unit tests. The WordPress Handbook has some instructions on the extra steps you’ll need to take.

Install PHPUnit for WordPress on Laravel Valet

Our first step is to install PHPUnit. While PHPUnit is currently on the 9.x branch, WordPress only supports 7.5.x, which means we'll need to install this older version.

Open up the terminal and use these commands.


wget https://phar.phpunit.de/phpunit-7.5.9.phar

chmod +x phpunit-7.5.9.phar

sudo mv phpunit-7.5.9.phar /usr/local/bin/phpunit

phpunit --version

The commands above download PHPUnit. Then we make the file executable so it can be run. Next, we use the sudo command to move the file to the proper location on our computer. Finally, we check the version number, which should return 7.5.9.

Now we’re ready to set up our plugin with WP CLI and take a look at our first tests.

Setting Up Our Plugin

We can make starting a plugin, and getting our unit test scaffold, easy with the WP CLI scaffold command. This command will create a plugin for us, and add all the files we need to have a foundation for our tests.

In the terminal, make sure you're in the proper WordPress directory you want to use and then type wp scaffold plugin nexcess-unit-tests. That should give you a folder structure that looks like this.

– bin

    – install-wp-tests.sh

– tests

    – bootstrap.php

    – test-sample.php

– .distignore

– .editorconfig

– .phpcs.xml.dist

– .travis.yml

– Gruntfile.js

– nexcess-unit-tests.php

– package.json

– phpunit.xml.dist

– readme.txt

If you want to build out the basics of a plugin but not include tests, then you'd use the --skip-tests flag, which will skip the generation of test files. You can see all the options available for this command in the WP CLI documentation.

Depending on how your code editor and file system are set up, you may not see the files that begin with a . because they're considered hidden files. For our purposes today they don't matter, so don't worry if you don't see them.

Now we need to hook our unit tests up to MySQL so that they can create dummy data for us to test against. Open up the wp-config.php file for your WordPress installation now, because you're going to need the DB_USER and DB_PASS values for the next command.

To finish installing the tests, change to your plugin directory and run the following command.

bin/install-wp-tests.sh <db-name> <db-user> <db-pass> localhost latest

Make sure you use a different db-name than your WordPress install. You don't want your unit tests messing with your data; you want them creating their own test data in their own database. The db-user and db-pass will be the same as in your wp-config.php file.

The final two parameters tell the testing framework to connect to localhost and to install the latest version of WordPress to test with. Unless you know better, just leave those settings as they are.

If we were to run phpunit now, we'd still have a few errors to resolve. First, for some reason WP CLI doesn't add the name attribute to the <testsuite> block in the sample tests. Open phpunit.xml.dist and change <testsuite> to <testsuite name="Unit Tests"> and save the file.

Also note here that inside the <testsuite> block PHPUnit is told to ignore the tests/test-sample.php file. Let’s duplicate that file and rename it to test-nexcess.php so that we have a file which will run. Then open the new file and change the class name to NexcessTest. The file should look like this now.



<?php
/**
 * Class NexcessTest
 *
 * @package Nexcess_Unit_Tests
 */

/**
 * Sample test case.
 */
class NexcessTest extends WP_UnitTestCase {

	/**
	 * A single example test.
	 */
	public function test_sample() {
		// Replace this with some actual testing code.
		$this->assertTrue( true );
	}
}

Now we're ready to run phpunit from the terminal. Once you've done that, you should see PHPUnit report that the test passed.

Now that we’ve confirmed that PHPUnit is running our tests, let’s dive a little deeper and write a test to make sure that WordPress is adding users properly.

setUp and tearDown

Most unit tests will require specific data to be added to the testing database. Then you run your tests against this fake data. To do this you use the setUp and tearDown functions in your testing class.

Let’s create a user and then make sure that the user has the edit_posts capability. Add the following function to the top of your class.

public function setUp() {
	// set up the WordPress testing fixtures first
	parent::setUp();
	// make a fake user
	$this->author = new WP_User( $this->factory->user->create( array( 'role' => 'editor' ) ) );
}

Then add this tearDown function to the end of the class.

public function tearDown() {
	// remove the fake user
	wp_delete_user( $this->author->ID, true );
	parent::tearDown();
}

This adds a user with the role of editor before we run our tests, and then removes the user after we’ve run our tests.

Now let’s write a simple test to verify that the user was added properly and has the proper capabilities.

public function testUser() {
	// make sure setUp user has the cap we want
	$user = get_user_by( 'id', $this->author->ID );
	$this->assertTrue( user_can( $user, 'edit_posts' ), 'The user does not have the edit_posts capability and they should' );
	$this->assertFalse( user_can( $user, 'activate_plugins' ), 'The user can activate plugins and they should not be able to' );
}

Above we start by getting the user object based on the fake user we created. We’ll need this object to test our capabilities which we do next with the assertTrue and assertFalse statements.

In this instance assertTrue is expecting that our user_can( $user, 'edit_posts' ) returns true. We're testing to make sure that the user object provided is given the edit_posts capability, as an editor should have. If this were to return false, we'd see the message provided in our unit test output in the terminal.

Next, we test to make sure that the same user doesn't have the capabilities of an admin user. Here we use assertFalse while checking for the activate_plugins capability. This should return false because activate_plugins is for the Administrator role in WordPress, not for Editors.

Once you have that code added after your setUp function, head to the terminal and run phpunit. You should see that 2 tests passed with 3 assertions.

PHPUnit considers our testUser function to be a test, and the assertTrue/assertFalse statements inside to be assertions.

What does that factory thing mean?

Before we finish here, let me draw your attention back to our setUp function, specifically the factory property we use when we create a new user.

When you use the WP CLI scaffold, it gives you access to the WP_UnitTest_Factory class. This class is there as a helper to create the data you’ll need to run your tests properly. You can use this factory to create posts, attachments, comments, users, terms, and some other things for WordPress Multisite.

This is not the only tool you can use to mimic WordPress for your tests though. In a future post we’ll look at WP_Mock to test parts of WordPress that the built-in factory doesn’t reach very well.

Today, we covered a fair bit of complex ground as we looked at unit testing your WordPress projects. When I started unit testing it looked daunting, but it's worth it if you want code that works and that lets you know when you've broken something, so your customers don't have to find the problem for you. Over the long term, you'll save time and headaches by writing testable code and aiming for decent test coverage in your projects.


Deploy WordPress with Github Actions

A while back I showed you how to deploy your WordPress site with Deploybot. I’ve used this service for years now, but when I started, it was out of my price range, so I used nothing and made many mistakes deploying my sites which cost me many lost hours and some tears.

Today, I'm going to walk you through how to use Github Actions to deploy your site automatically for no cost. The setup is more complex than Deploybot, but it's going to be free for most projects.

This post has a bunch of moving parts, so set aside some time, especially if you haven't worked with Git or SSH keys before. We're going to cover:

  • Creating a Hosting Account
  • Getting your code into Git
  • Creating special deploy SSH keys for Github to use
  • Configuring the Github Actions YAML file
  • Using Github repository secrets to keep private information safe
  • rsync via ssh

What Are Github Actions?

Github Actions is a feature of Github that allows you to automatically perform tasks based on the state of your code. Today, we’ll look at deployment, but you could also have an action to run your unit tests or notify a Slack chat when someone makes a new PR on your project.

While they may take a bit of time and effort to set up your first time, they return the investment by letting you get back to code instead of worrying about repeating the same work over and over. 

Getting Your Site Set up

The first thing you’ll need is to create a hosting account on Hostdedi, so head over and check out the plans available. The Maker WordPress plan is a good plan if you’re hosting a few sites.

With your site set up, go to the backups section and create a backup of the site before you do anything else. Make sure you download this backup as well so you have a copy on your computer, just in case.

Unzip the backup and then copy out the public_html folder to use as our git repository. You can remove the wp-config.php file because we won’t need it today.

Now that we have our copy of the repository downloaded, let's add it to Github. To start, we'll need to create a new repository in Github. Add your title and whatever description you want. I usually don't initialize the repository with a README because I'll provide my own with any project-specific notes and documentation for a client project.

Next, open up Terminal and cd into the downloaded copy of WordPress so you can initialize the repository with git init. The first thing we want to do is add our .gitignore file so that we don’t add any files that may overwrite what Hostdedi needs to run your site. You can use the .gitignore file provided below.

The top eight lines are specific to Hostdedi, so make sure you copy those lines if you have your own preferred ignore configuration.

If you’re not familiar with Git, check out my post on Introduction to Git.
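The embedded .gitignore didn't survive in this copy of the post, so here is a rough, hypothetical sketch rather than Hostdedi's exact file. The host-specific entries at the top mirror the locations the deploy script excludes later in this post:

```
# Host-managed files that should not be in Git or deployed
wp-config.php
.htaccess
wp-content/uploads/
wp-content/cache/
wp-content/advanced-cache.php
wp-content/object-cache.php
wp-content/mu-plugins/
wp-content/upgrade/

# Local tooling and OS noise
node_modules/
.DS_Store
```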

Now that you have the ignore file ready, use git add and git commit to add the WordPress files to your repository. Then push those files to your Github repository.

Creating a Deploy SSH Key for Github

To deploy our site via Github Actions, we’ll need an SSH key for the action to use. Do not use your regular SSH key.

To generate the SSH keys needed, run the command below with your email. When prompted for a passphrase, press enter to leave it blank. When you're asked for a location, choose somewhere to store the keys temporarily. You can find Github's documentation on SSH keys, with extra details, here.

ssh-keygen -t rsa -b 4096 -C ""

Before we take the next step, make sure your new SSH keys are stored safely. I store them in my 1Password vault alongside the other server credentials. Don't leave them sitting in some folder on your computer, because if someone gets your keys they get access to any server those keys are used on.

Now, open the public key (the one that ends in .pub) in the text editor of your choice. In your Hostdedi account, click on your account name in the top right corner and select SSH Keys.

Next, select Add Key and copy/paste the whole key out into the main text field. Make sure to label your key properly so you can tell what the key is being used for.

Finally, click Add to save the key to your account.

Adding and Configuring Your Github Action

Before we start to build our action, we’ll need some secret information stored in our account. Stuff like our private key for the key pair we just created, the location of our server, and the entry for the known_hosts file.

Start this by choosing Settings from the top right of your repository. Then choose Secrets from the left column. We’re going to add 3 different values.

  1. DEPLOY_SSH_KEY: This is the private key we generated (doesn’t have .pub at the end)
  2. NEXCESS_LOCATION: The location SSH will access on our server plus the file path to your html directory. Making a mistake here could delete your site, but this is why we took a backup.
  3. NEXCESS_HOST: The allowed host file we need from our server

You already have the first one, so open your key file and copy its entire contents into a secret called DEPLOY_SSH_KEY.

Next, you can get the location of your server from your Hostdedi control panel for your site under the Access menu.

Finally, the easiest way to get the content for NEXCESS_HOST is to ssh into your server from your computer. This should prompt you to accept a new known server. Accept it, then open ~/.ssh/known_hosts and copy the last line in the file into the secret in your repository.

Why are we keeping this stuff secret? Each of these pieces of information has some security risk so you don’t want them in your repository. Putting them in a secret variable in Github makes sure you don’t expose information by accident.

We’re ready to head back over to our repository on Github and get our action going. Start by going to the Actions menu at the top of your repository and click on it. While there are lots of prebuilt actions, we’re going to start by creating our own action so select that from the Actions screen.

Doing this will give you a basic workflow file written in YAML. By default, it works only on your master branch, but by changing which branch is in the arguments on lines 9 & 11 you can make it work for a different branch. You may do this if you wanted to deploy from a staging branch to a staging site and from master to the live site.

Starting with the line below, delete the rest of your default workflow file.

# Runs a single command using the runners shell

    - name: Run a one-line script

      run: echo Hello, world

Now we need to get our SSH agent. On the right-hand side of the workflow screen, search for webfactory/ssh-agent.

Click on this and it will provide you with some commands to copy and paste into the bottom of your file, but we're going to use our own custom script. Copy the code below and paste it into your workflow file below the actions/checkout@v2 line.

    # Setting up SSH
    - name: Setup SSH agent
      uses: webfactory/ssh-agent@v0.4.0
      with:
        ssh-private-key: ${{ secrets.DEPLOY_SSH_KEY }}

    - name: Setup known_hosts
      run: echo '${{ secrets.NEXCESS_HOST }}' >> ~/.ssh/known_hosts

    - name: Sync project files
      run: rsync -uvzr --backup --backup-dir='~/deploy-backup/' --exclude 'wp-config.php' --exclude '.gitignore' --exclude '.git/*' --exclude 'wp-content/uploads/*' --exclude '.htaccess' --exclude 'wp-content/cache/*' --exclude 'wp-content/advanced-cache.php' --exclude 'wp-content/object-cache.php' --exclude 'wp-content/mu-plugins/*' ${GITHUB_WORKSPACE}/ ${{ secrets.NEXCESS_LOCATION }}

You can see the secrets we set up earlier in this file. It starts by pulling in the ssh-agent and loading our private DEPLOY_SSH_KEY into the agent it runs in the background. Next, it adds our server as a known host.

It finishes by syncing the project files over to the live server in a long rsync command. In short, it backs up the remote server into a directory called deploy-backup and then moves any new files in the Github project over to the live server. We exclude all the same locations we ignored in our original .gitignore file so that rsync doesn't touch them by accident.
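For reference, here's what the assembled workflow file might look like as a whole. This is a sketch: the workflow name, trigger branch, and default header lines are assumptions based on the standard Github Actions template, so compare it against the file Github generates for you.

```yaml
# Hypothetical .github/workflows/deploy.yml assembled from the steps above
name: Deploy

on:
  push:
    branches: [ master ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    # Setting up SSH
    - name: Setup SSH agent
      uses: webfactory/ssh-agent@v0.4.0
      with:
        ssh-private-key: ${{ secrets.DEPLOY_SSH_KEY }}

    - name: Setup known_hosts
      run: echo '${{ secrets.NEXCESS_HOST }}' >> ~/.ssh/known_hosts

    - name: Sync project files
      run: rsync -uvzr --backup --backup-dir='~/deploy-backup/' --exclude 'wp-config.php' --exclude '.gitignore' --exclude '.git/*' --exclude 'wp-content/uploads/*' --exclude '.htaccess' --exclude 'wp-content/cache/*' --exclude 'wp-content/advanced-cache.php' --exclude 'wp-content/object-cache.php' --exclude 'wp-content/mu-plugins/*' ${GITHUB_WORKSPACE}/ ${{ secrets.NEXCESS_LOCATION }}
```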

If you want to read up on each rsync option used there, I usually refer to this Ubuntu documentation on rsync.

Now all you need to do is make some changes in your repository and then use git to add and commit them to your repository. Push those changes to Github, and the action will automatically backup your files and then push your changes over to your site.

If you're new to automatic deployments, this does seem like a lot of work. The first time I used Github Actions to deploy my site, I spent a few days working on the script. It doesn't feel like a time saver until you've been using the same script for weeks on a project that deploys regularly.

Automating your deployment, and taking the time to get it right, means you never accidentally overwrite the wrong files, put files in the wrong directory, or make any of the other mistakes I've made moving files around in the 12 years I've been building sites.

A day or two of time spent configuring deployment for your first time is worth saving that pain.


How to Duplicate a WordPress Page or Post

WordPress includes a number of features in its core, but several features you would expect to be built in fall into the realm of plugins. One such feature is the ability to duplicate an existing page, post, or custom post type on a WordPress site.

Luckily, there are a number of solid, easy-to-use plugins that let you duplicate a page, post, or any custom post type. One of those plugins is Yoast Duplicate Post, which makes it easy to duplicate a site's pages or posts in WordPress.

How to Install and Activate Yoast Duplicate Post

Install it as you would any other plugin. From wp-admin, go to:

Plugins > Add New

Then search for Yoast Duplicate Post, and install and activate the plugin.

How to use Yoast Duplicate Post

Once the Yoast Duplicate Post plugin has been installed and activated it will add a new sub-menu item in settings within wp-admin on the site.

Settings > Duplicate Post

What to Copy


Yoast Duplicate Post plugin supports page, posts, and a number of custom post types. You can also use the plugin to duplicate products and coupons in WooCommerce.

Settings > Duplicate Post > Permissions

When you have Yoast Duplicate Post set up to cover all of the post types you might need to duplicate on a site, the next step is creating a duplicate page or post.

In the page list screen in wp-admin, mouse over the page you want to copy and select Clone. Once you have cloned the page, you will see a new page in draft status with the same title as the original.

The Clone link will show for posts, products, and any other post types on the site that have been enabled in the Yoast Duplicate Post plugin's settings.

The Yoast Duplicate Post plugin also includes a number of filters and actions, which can be found from this link. For example, there is a filter to exclude custom fields that you don't want copied when creating a duplicate, and if you need to alter custom fields after a duplicate is created, that is also possible.

How to Install and Activate Duplicate Page

There is another plugin option to be able to duplicate a page or post and it is called Duplicate Page.

Go to Plugins > Add New

Then search for duplicate page and you'll find the plugin to install.

How to use Duplicate Page

The Duplicate Page plugin will add a new sub-menu item under:

Settings > Duplicate Page

You can set which editor the page or post should be created with, the post status the new duplicate will be given, and where you're redirected after the page or post has been duplicated.

In either the post or page list screen, click the Duplicate This link next to the post or page, and a new duplicate will be created. The same applies to any custom post types that exist on the site, such as products if you're using WooCommerce.

Why Would I Want to Duplicate a Page?

One reason you might need to duplicate a page, post, or product is if you're trying out a different plugin on a specific page and want a copy of the page before you update it and use a different shortcode on it.

If you are using WooCommerce on the site, you might have a similar product to sell, and the easiest way to create it is to duplicate an existing product. You can update the new duplicate instead of having to input all of the data from scratch again. Likewise, if you had a number of changes to make to an existing product but wanted a backup copy, being able to easily clone the product will help.


What Is a Platform As a Service (PaaS)?

Once upon a time, software as a service was the only as-a-service acronym floating around. As the industry flourished, though, forks came off of it into related spaces, creating a whole slew of aaS companies in numerous technological categories.

One of those forks is PaaS.

PaaS stands for platform as a service. It’s a service that provides and maintains a platform for developing, testing, and deploying applications for developers. All of the back-end infrastructure is managed by the PaaS, so that the developers can focus on their projects.

PaaS providers are able to reduce the amount of coding you need to do by providing you with middleware to use directly on the platform, with no dependencies on operating system compatibility (aw, yis).

A PaaS makes sense to use when you have multiple developers working on the same project. Like any other team-based SaaS app, PaaS apps allow you to add multiple users in a web-based development environment to co-work remotely on the same project.

Similar to more infrastructure-focused service providers like Hostdedi, PaaS providers include the basic infrastructure required to deploy apps, such as servers, networking, storage, and reference architectures. 

However, PaaS is arguably a more complete solution for app developers, providing an environment which allows you to build, collaborate, test, deploy, and manage your applications, all in one place.

You may not be familiar with the term PaaS, but if you’re a developer, you may already be using one:

  • Beanstalk
  • Heroku
  • Microsoft Azure

Platform as a service companies do one thing: save app developers time and money by bundling and automating a bunch of the things they’re used to doing manually. Then, when you run into problems, there’s a team of experts just behind the curtain to help you out.

PaaS companies are to app developers what Hostdedi is to ecommerce site developers: an all-in-one solution with a team of experts who specialize in your field. You don't need tier-one support at this stage in the game; you need someone at least as informed as you are to help solve these problems.

Check out the PaaS providers above to help build your next application, and when it’s time to launch, talk to Hostdedi about managing your ecommerce site.


Advanced Git Usage & Workflows

Recently we looked at the basics of getting started with using Git for source control in your projects. While that’s a great starting point, there are a bunch of other commands and Git workflows that will help you wrap your head around using Git in your daily coding work.

Git Workflows

When I started using Git, I figured I knew how to use it properly. My approach was to make all my changes in one spot without branches, commit them to the repository, and keep working.

While it was better than not using version control, it took me a while to realize that I wasn’t using most of the power that Git provided. To get Git working for me, I needed to have a strategy to branch and merge my changes.

Then git-flow came out and I adopted it as my strategy. I still remember feeling like I was peeking behind some curtain to where the amazing developers were. I now had insight into how they worked and could start to become one of them.

But git-flow doesn’t fit every scenario, so while we’re going to look at it we’ll also take a look at a few other methods of keeping your Git projects organized including how I manage my projects as a lone developer.


Looking at git-flow now, I acknowledge that the software landscape has changed greatly in 10 years and git-flow may not be the best option for your team. When git-flow was written, it was rare to deploy an application many times in a day. Instead, you probably did a major version number and release every few months or weeks if you were on a particularly agile team.

Let’s take a look at what git-flow is.

If you want to see the full deep explanation with charts and Git commands for Git Flow, you should read this post.

In git-flow, two branches have an infinite lifetime. First, there's main, which should reflect code that is ready to be deployed to your live/production environment.

Second, we have our develop branch. This branch should have the latest changes that are ready for the next release of our software. When the changes in develop are ready to be deployed to our live application, we merge them into the main branch and tag them with the version number that corresponds with the release number.
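Here's what that develop-to-main release merge can look like in a throwaway repository (the branch names match git-flow's conventions, but the commits and version number are illustrative):

```shell
# Throwaway repo to demonstrate the develop -> main release merge
demo=$(mktemp -d) && cd "$demo"
git init -q
git config user.email demo@example.com && git config user.name demo
git checkout -q -b main                  # our long-lived production branch
git commit -q --allow-empty -m "initial"
git checkout -q -b develop               # our long-lived integration branch
git commit -q --allow-empty -m "feature work for the next release"
# release time: merge develop into main and tag it with the release number
git checkout -q main
git merge -q --no-ff -m "Release 1.2.0" develop
git tag -a 1.2.0 -m "Release 1.2.0"
git tag                                  # prints: 1.2.0
```

The --no-ff flag forces a merge commit, so the history shows exactly when each release landed on main.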

Outside of the two major branches, there are three types of supporting branches.

1. Feature

A feature branch may be made from the develop branch only. It must be merged back into the develop branch. Naming can be anything descriptive of the feature you’re working on.

When the work is ready for the next release it gets merged back into the develop branch where it waits for release time.

2. Release

Release branches are made from the develop branch and must merge back into both develop and main. You name a release branch with the release-x convention. In practice that usually means you’d name a branch with the release number you’re planning to use like this: release-2.2

You use a release branch as a way to do the final prep to release your software. This may include bumping the version number of files, making sure that your translations are done, or final code checks.

3. Hotfix

The hotfix branch is made from the main branch and is used to contain changes that need to be dealt with in the live application right away. This may be a bug that has to be fixed or a security issue that needs to be dealt with.

Once the problem is fixed and deployed to your main branch you’d tag your code with the proper version number.

The biggest reason that teams don’t use git-flow now is that the way we release software has changed. Instead of larger releases less often, you may release a change to an application a few times in a day. I know that I push work to my client’s sites many times a week as soon as it’s ready. We don’t do version numbers of the site, we just keep improving it.

Standard git-flow isn’t meant to accommodate this.

Github Flow

The second option that many people use is Github Flow.

The one big rule of Github Flow is that whatever code is on the main branch can be deployed at any time because it’s production-ready.

All branches are created off of the main branch with a descriptive name for whatever you’re doing.

Once you have your changes ready you create a pull request.

Pull requests tell others working on the same code that the work you’re doing is ready to be reviewed before those changes are merged into the main code.

Once you have a pull request submitted, the team you’re working with can review the changes and provide feedback. If the pull request is deemed ready to merge, then it’s merged into the main branch for your project.

One drawback to Github flow for a single developer or very small team is that the administration of a pull request can create extra overhead in managing the project. This is why I don’t use them in my work.

How I Use Git with Client Projects

In my client work, I'm usually the only one writing code daily for a project. My client may update WordPress plugins or change some CSS, but they don't do any major coding work. That means if I went with Github flow, I'd be reviewing my own pull requests, which would only create extra management headaches. Let's look at the system I use, which bears some resemblance to both git-flow and Github flow.

I have two main branches called main and staging. The main branch tracks with whatever code is currently running on the production site. The staging branch tracks with whatever is being tested on the staging site we use to test changes before we push them to the live site.

Every branch is created from the main branch. New features are given a name like this: feature/32-new-feature. In this context, the number 32 corresponds to the ticket number in our project management system and the words after it are a short description of what’s being worked on. Bug fixes get named similarly, bug/20-bug-name.

Every branch created gets merged into our staging branch first, and then, once approved by the client or tested by myself, gets merged into the main branch. That workflow may look like this.

# merge feature into staging branch
git checkout staging
git merge feature/32-new-feature

# deploy and test the feature

git checkout main
git merge feature/32-new-feature

# deploy feature to the live site

In my projects, I have continuous deployment set up which means any time I push code to main it gets pushed to the live site automatically. The same process is set up for the staging branch.

I’ve already written about continuous deployment with Deploybot.

If I were working with a team that could check my code in a pull request workflow, then I'd use that system because it works well in a team. For a developer mostly working alone, it's simply management overhead that's going to slow down your workflow.

Advanced Git Commands I Use

Now that we have some idea of how we can use Git in a practical workflow, let’s take a look at more advanced commands that will be needed to make this happen. I use each of these commands at least a few times a week as I work with my customer’s code.

Even if you're going to use a GUI application (I mentioned some good ones in my last post on Git), it's still important to understand what is happening in the background. Many times I've had to work in the terminal to fix an issue that was created by a Git GUI application.

Adding Changes by Line

The one command that made Git usage via the terminal click for me was git add -p. Until I learned this command, I used GUI applications because I'd make changes in my code but want to break specific lines up into different commits so that I could explain why I had made each change. I used a GUI to select those lines, but git add -p walks you through an interactive interface to add changes in chunks.

I use this many times every day.

Track Remote Git Branch

If you’re pulling down an existing repository that already has remote branches like main and staging, you need to tell your local versions of those branches to track the remote versions.

There are a few ways to do this.

# Set upstream when pushing to remote

git push -u origin staging

# Set upstream

# assumes you’re currently on the branch you want to track with the remote

git branch -u origin/staging

git branch --set-upstream-to=origin/staging

# If you’re not on the branch you want to track, specify the branch at the end

git branch --set-upstream-to=origin/staging staging

Save Changes without Committing Them

Sometimes you’ll be in the middle of some work that’s not ready to be committed yet, but you need to save its state. That’s where git stash is useful. This command stashes changes away for you by removing the changes. You can get them back by using git stash pop. There are a few more commands to make stash useful, but those are the two I use regularly.
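Here is a small sketch of that flow in a throwaway repository:

```shell
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo "committed work" > file.txt
git add file.txt && git commit -qm "initial commit"

# Make an in-progress change, then stash it away
echo "work in progress" >> file.txt
git stash

# The working tree is back to the last commit
cat file.txt

# git stash pop brings the change back
git stash pop
cat file.txt
```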

Pull a Specific Git Commit

Sometimes you need to pull a specific commit into your current work. With a clean HEAD (you haven’t made any changes yet) you can pull in a specific commit with git cherry-pick <SHA>. The git documentation covers cherry-pick in full.

For each commit, Git builds a hash of its contents, a long sequence of letters and numbers stored as a Git Object and commonly referred to as a SHA. Since each commit gets one, you can reference a commit by its SHA value. Read more about Git Objects.
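A minimal sketch of cherry-picking one commit from another branch (branch and file names are made up):

```shell
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo base > base.txt
git add base.txt && git commit -qm "base commit"

# Make a commit on a side branch and remember its SHA
git checkout -q -b side
echo fix > fix.txt
git add fix.txt && git commit -qm "the fix"
SHA=$(git rev-parse HEAD)

# Back on the previous branch, bring just that commit over
git checkout -q -
git cherry-pick "$SHA"
```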

Throw Away Changes You Don’t Need

At some point in any project, we’re going to make changes and then realize that it’s not working and we need to simply scrap them and start over. Instead of just trying to undo until we’re back where we want to be we can use the following Git command to remove any changes that have not been tracked yet.

git reset --hard

The command above resets your code back to the most recent commit on the branch you’re currently working on. You can also pass a commit SHA to reset to a specific commit if you want to get back to a state before the latest one. Maybe you’d use this to reset to the initial branch checkout because the entire branch’s worth of work isn’t something you want to keep, but you had already committed some of it.

To take it one step further, we can throw away any files or directories that have not been tracked in git yet with the git clean command.

git clean -fd: the flags f and d tell git to throw away files and directories that are untracked.
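The two commands work together like this in practice:

```shell
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

echo keep > keep.txt
git add keep.txt && git commit -qm "initial commit"

# Modify a tracked file and add an untracked one
echo scrap >> keep.txt
echo scrap > untracked.txt

git reset --hard   # tracked changes are discarded
git clean -fd      # untracked files and directories are removed
```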

Remove Branches

Every week or two I look at the output of a git branch command and find I have way too many branches to reasonably understand what’s going on in my repository. That means I can remove any branches that correspond to resolved tickets with the following commands.

# removes the local version

git branch -d $branchname

# removes the branch on my remote

git push $remotename --delete $branchname

Use Version Control

While you may not be an expert at all the Git commands you’ll use, one important thing to remember is that you should be using version control. Even if you’re the only person working on a project, using Git and a Git workflow will help you keep your projects organized. You won’t need to hold CTRL + Z to back out code that didn’t work.

You’ll be able to trust your system and keep producing work for your projects.

Introduction to Git – Hostdedi Blog

When I started building websites, I “cowboy coded”, which means I was often editing files live on the server. It only took me a few broken sites to realize this was a terrible idea. Then I started building sites locally on my computer. More than once I edited a local file only to migrate the file to the wrong location in my FTP client. Occasionally that would mean I’d overwrite a file that I couldn’t fix without digging around for some backup copy I had hopefully kept.

If you’re still dealing with FTP and not being able to roll your files back, then it’s time to learn about using Git for version control.

What is Version Control?

A version control system (VCS) is a type of software that helps software developers manage changes to their code over time. A good VCS keeps track of each change made to the code. This means if you break something, you can roll back to a previous version of the code that was working, without trying to hit “Undo” until things work.

In a team environment, a VCS will help you work with different members by giving you tools that will allow you to merge changes in code together when different members update files.

One of the things that I do in Git is to create a new branch for each feature I build. This means I can keep track of the changes I make on a branch, but still get back to the current state of the site by moving back to the main branch. We’ll talk more about this workflow later.

What is Git?

Git is a version control system, but it’s not the only one. The main WordPress repository is run via SVN, though you can find a Git copy as well. There are also Mercurial, Visual SourceSafe, Vesta, and many other options.

Despite all these options, Git is what almost everyone uses so it’s the version control we’re going to learn about today.

Basic Git Terms and Commands

Before we dig into the mechanics of how to use Git, we need to understand a few terms. We’re only going to cover the terms that you’ll encounter regularly.

For a more complete list of everything you could encounter, look at this Git reference or this complete list of Git commands.

Add: When you’ve made changes to your code, you use the command git add to add the changes to the staging area so that they can be committed.

Branch: A branch is a version of your repository that differs from the main working project. All repositories come with a main or, more commonly in older projects, a master branch. Recently Git and Github have started to change the default branch name from master to main due to the historical issues with the word master. Git 2.28 also allows you to set your default branch name for any new project.

Checkout: You use the git checkout command to switch between different branches in a repository. When you use this command, Git changes the contents of the files or adds and removes files that differ between branches.

Clone: The git clone command is used to make a copy of a repository from the source repository. You’d use this command to get a local copy of a remote repository so that you can work on the code.

Commit: Once you’ve used git add you need to use git commit to be able to save the state of your files in git.

init: git init creates an empty repository for you with all the basic files that Git needs to operate.

Merge: Once you’ve made changes on one branch and added and committed them, you use the git merge command to migrate those changes into other branches.

Origin: This is the default name for the primary version of the repository. I usually change mine to be more descriptive than origin though. If I’m working with Github then I change the settings in Git so that origin becomes github. This helps me keep things clear in my head.

Push: Updates the remote branch with the commits that have been made in your local version of the repository.

Repository: This may also be called a “Repo” and is a directory of all the files, and a Git history of changes to those files.

Status: git status shows you the current status of your working repository.

.gitignore: This is a hidden file that contains patterns of files that Git will not bother tracking. If you have .DS_Store in your .gitignore file, Git will ignore all the pesky .DS_Store files that macOS often puts inside folders.

Hosting Git Repositories

One other thing to understand before diving in is that, while you don’t need a remote location for your repository, not having one will reduce some of the benefits of using Git. Without a remote repository hosted somewhere else, you won’t have a backup of your code if your computer dies or gets stolen.

Github and Bitbucket are two of the more popular places to host your Git repositories because they’re mostly free, and you can have private repositories. That does mean your code is on someone else’s server so if you don’t like that idea, you can use Gitlab on your server to host repositories.

Installing Git

On macOS, the simplest way to install Git is to open Terminal and type git, which will prompt you to download the Xcode Command Line Tools to install Git. Once that has finished, you can run git --version to see which version of Git you have. If that’s not working, there are a few other ways to install Git on macOS.

For Windows users, you can install Git with the official Git installer. Git also comes bundled with the Github Desktop application, which we’ll talk about later.

If you’re on Linux, git should be bundled with your package manager, or you can look at these ways to install git on Linux.

Configuring Git Defaults

Once you have Git installed, you need to configure it so that each commit uses your name and email and commit messages to use your preferred editor to enter any comments that go with the commit. We’ll look at the way to set these in macOS via the Terminal application.

git config --global user.name "Your Name" will set the name that goes with every commit made on your computer.

git config --global user.email "you@example.com" will set the email address that is associated with every commit you make.

git config --global core.editor vim will make vim the default editor for Git. While I love vim, it’s not the editor that everyone loves. If you use Atom then you’d use git config --global core.editor "atom --wait", or git config --global core.editor "subl -n -w" for Sublime Text.

If you’re into IDEs, Visual Studio Code also lets you work with Git directly from within the application, as does PHPStorm.

Establishing a Repository

Now that we have git installed and configured, let’s start a basic repository. Open your Terminal and create a folder called test-repository by typing mkdir test-repository. Then type cd test-repository to change into your test-repository directory and type git init.

At this point, you’ll have one hidden directory in your folder called .git. Since it’s hidden, you’ll need to type ls -a in Terminal to see it.
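Put together, the steps above look like this (run from a disposable directory):

```shell
cd "$(mktemp -d)"          # work somewhere disposable for the demo

mkdir test-repository
cd test-repository
git init -q

# .git is hidden, so list all files to see it
ls -a
```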

Using git add

Now let’s create a file by typing touch test.txt into Terminal. Next type git status to see the file you just added.

As you can see the new file we created shows up in red and tells us that its status is untracked. That means that Git sees the file, but doesn’t have any record of it.

Type git add test.txt to tell Git to stage this file, then type git status again and Git should tell you that it knows about a changed file.

Committing Files to Git

Now that we’ve added our file we need to commit it so that Git saves the status of the file. We can do this in a single line, without opening our default editor with the following command.

git commit -m 'adding our first file'

The -m flag tells Git that the text inside the single quotes is the message that goes with the commit.

Now our repository has a single file in it with its status saved.

Create a Branch

The real power of Git comes when you start branching. Let’s say you want to write a bunch of text in your test.txt file but aren’t sure you’ll keep it, and you want to make sure you can get back to the currently blank file. We can do this with a branch.

To create a branch we can type git checkout -b new-branch. This is a shortcut to create a branch at the same time as we checkout the branch, and it’s what I use every time I need to create a branch.

Now open our test.txt file, add some text to it and save it. Then use git add and git commit as above to save the state of the file.

Next, type git checkout main (or git checkout master if that’s your default branch name) to switch back to the default branch and then look at the contents of your test.txt file again. You’ll notice that all the text you typed has been removed. Git would even remove a new file that exists only on the other branch, though it keeps a record of it, so the file isn’t gone.

Merge a Branch

Now, we love whatever it was we wrote in our file, so let’s integrate it with our main branch. Make sure you’re on the main branch and type git merge new-branch to integrate your changes.

If you look at the contents of test.txt now you’ll find your changes on the main branch just as you left them.
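The whole walkthrough above, condensed into one throwaway script (it uses git checkout - to return to the default branch, whatever its name):

```shell
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name "Demo"

touch test.txt
git add test.txt
git commit -qm "adding our first file"

# Branch, change the file, and commit on the branch
git checkout -q -b new-branch
echo "some text" > test.txt
git add test.txt
git commit -qm "add some text"

# Back on the default branch the file is empty again
git checkout -q -
cat test.txt

# Merging the branch brings the change over
git merge -q new-branch
cat test.txt
```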

Using Git with WordPress

While the example above was extremely simple, that’s all you need to get started with Git in your projects. So let’s talk about exactly how you get a WordPress project using Git.

The first consideration is what level in your folder hierarchy should be the root of your Git repository. If you’re building a theme, then you could make the theme folder your repository. The same logic applies if you’re building a plugin.

I’m usually working in themes and plugins at the same time, so I often use the wp-content folder as the root of my repository. When I do this I make sure to ignore the uploads folder so that I don’t add all the images and uploaded files to the repository. They clutter up the repository and can slow Git down because it’s not great at compressing image files.

If I’m handling an entire deployment workflow, then I make the root WordPress folder the main location for my Git repository. Then I make sure to add wp-content/uploads and wp-config.php to my .gitignore file. wp-config.php is specific to each WordPress install, so I don’t want it deployed over any other version of the file which would cause the site to stop working.

The .gitignore file I use as a starting point for every project assumes that you’re using wp-content as the root of your Git repository, so I change some of the ignore patterns if I’m at the root of WordPress.

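Sketching that starting point (illustrative patterns only, not the author’s exact file), a wp-content-rooted .gitignore along those lines might contain:

```
# Uploaded media and WordPress working directories
uploads/
upgrade/
cache/

# macOS cruft
.DS_Store

# Logs and local environment files
*.log
```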

Git GUI Applications

While we’ve covered the basics of Git via the command line, not everyone is comfortable there; I know I wasn’t when I started using Git. Even now, I occasionally want to see a visual representation of what Git is doing before I make any changes to my setup.

Luckily there are several great GUI clients for Git that you can use so let’s highlight a few.

Github Desktop (Windows/macOS)

One great spot to start as you look at Git GUI clients is with the Github Desktop application.

Many open-source projects use Github as their code repository for collaboration and use the standard Github flow to do their work. The Github Desktop client is built to help you handle this flow so it’s going to make creating pull requests easier.

If you’re not sure what a pull request is, check out Github’s documentation on pull requests.

Unfortunately for Linux users, there is no official Github Desktop application, but there is a fork of Github Desktop that will install on Linux systems.

Git Tower (Windows/macOS)

The Git GUI I use is Git Tower. Git Tower is available for macOS and Windows. When I was getting started with Git, I found it way easier to resolve conflicts and see what was different between files inside this GUI.

Working Copy (iOS/iPadOS)

If you mainly work from an iPad, as I do, then you should look at Working Copy. Working Copy is a full-featured Git client that works with iOS and iPadOS. It even features Shortcuts integration so you can automate parts of your Git workflow.

Wrapping Up

While we’ve covered a lot of ground in your Git knowledge today, there is no way a single blog post could be exhaustive on the topic. You can continue your learning with the Hostdedi help documentation as well as these excellent resources.

By using Git to manage your client projects, you will save yourself headaches since you can roll back changes or discard entire branches if you no longer want the work you’ve done. No more Ctrl + Z until you think you’ve rolled back far enough; Git will keep track of it all for you.

Getting Started with the WordPress Transients API

When I started building WordPress sites over 10 years ago things were much simpler than they are now. Install WordPress and help the client get their content into the site. Build them a theme and install some plugins. Maybe build a custom plugin if it was a complex site.

Today, though, it’s much more likely that I’ll be building multiple custom plugins, a few of which call out to some API for information we’ll use on the site. Then we need to make sure the site is fast, and slowness almost always comes down to the many API calls the site is making.

Today we’re going to look at one of the built-in methods that WordPress provides developers to help speed up those API calls, by caching the results with the WordPress Transients API.

What are Transients Anyway?

WordPress transients are a way of storing information in your database (or your object cache) for a limited amount of time. We’re going to focus on saving data that comes from external API calls today, but the WordPress Transients API can be used for any type of data that takes a while to generate, like a long WP Query call that doesn’t need to get live data.

When you’re accessing an API without any form of caching, your site is going to call that API for each user that visits each page on your site. Each user will have to wait for the API call to complete before the content renders on their screen. Not only is waiting annoying for users, but it can also take up huge amounts of resources on your server if you have a site with even medium amounts of traffic. 

If you’re unfamiliar with an API call data flow, a basic API request involves these steps: a visitor requests a page, your server calls the external API and waits for the response, and only then does the server build and return the page.

This is where transients step in. When using transients, the logic flow above changes slightly: the server checks for a saved copy of the API data first, and only calls the API when that copy is missing or expired.

The reason that transients are awesome is that they take out the whole waiting for information part of the equation and let your users get back to viewing your content. A simple way to use this would be to use your email marketing list API to get the number of subscribers that you have to your list and then save that data so it can be displayed on your site. Updating this every few hours, or days still shows people the size of your list without an API request every time someone visits a page on your site.

There are six functions that deal specifically with the Transients API: set_transient, get_transient, delete_transient, and their multisite counterparts set_site_transient, get_site_transient, and delete_site_transient.

We’re going to focus on set_transient, get_transient, and delete_transient for this tutorial. The site variations do the same thing, except they make a transient available network-wide in a WordPress Multisite environment.

When you set your transient you need three parameters:

  1. key: The name of the transient
  2. value: The data you want to store
  3. timeframe: The time in seconds until the transient should expire

One big note here is that if you don’t set the timeframe for your transient, it will never expire and will be autoloaded on every page load for every user. I have never encountered a situation where that was the behaviour I wanted, so make sure you set a timeframe for your transients.

To use get_transient or delete_transient you only need the key to use the corresponding functions.
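Assuming WordPress is loaded, the three functions fit together like this (the key name and subscriber count here are made-up examples):

```php
// Store a subscriber count for four hours.
set_transient( 'newsletter_subscriber_count', 1234, 4 * HOUR_IN_SECONDS );

// Later: returns the stored value, or false if it's missing or expired.
$count = get_transient( 'newsletter_subscriber_count' );

if ( false === $count ) {
    // The transient expired or was evicted, so fetch fresh data here.
}

// Remove the transient manually, e.g. when the list changes.
delete_transient( 'newsletter_subscriber_count' );
```

Note that get_transient returns false on a miss, so checking with `false ===` avoids confusing a missing transient with a legitimately falsy stored value.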

How to Use Transients

Now that we have a bit of an idea of what a transient is and what it can do, let’s take a look at how to use transients in your WordPress code. An easy API to use for this is Github’s REST API. We’re going to use it today to get a list of my public repositories. To do this for your Github account you’ll need to generate a personal access token.

I’m going to assume you have some understanding of the HTTP API built into WordPress. We’re going to be using it to make our calls to Github.

To make a basic request you’ll need the code below. Make sure you add your username and access token.

function github_repo(){

        $url      = ''; // the Github endpoint you want to query goes here
        $username = 'YOUR_USER_NAME';
        $token    = 'ACCESS_TOKEN';

        $args = array(
            'headers' => array(
                'Authorization' => 'Basic ' . base64_encode( $username . ':' . $token ),
            ),
        );

        $response = wp_remote_request( $url, $args );
        $repos    = json_decode( $response['body'], true );

        $html = '<ul>';
        foreach ( $repos as $r ) {
            $html .= '<li><a href="' . esc_url( $r['html_url'] ) . '" target="_blank">' . esc_attr( $r['name'] ) . '</a></li>';
        }
        $html .= '</ul>';

        return $html;
}

Currently, that code will call out the Github API every time the page loads so let’s add a transient to this so that we can save the final HTML and simply get it and show it if the transient is found.

function github_repo(){

        $html = get_transient( 'repository_html' );

        if ( empty( $html ) ){

            $url      = ''; // the Github endpoint you want to query goes here
            $username = 'YOUR_USER_NAME';
            $token    = 'ACCESS_TOKEN';

            $args = array(
                'headers' => array(
                    'Authorization' => 'Basic ' . base64_encode( $username . ':' . $token ),
                ),
            );

            $response = wp_remote_request( $url, $args );
            $repos    = json_decode( $response['body'], true );

            $html = '<ul>';
            foreach ( $repos as $r ) {
                $html .= '<li><a href="' . esc_url( $r['html_url'] ) . '" target="_blank">' . esc_attr( $r['name'] ) . '</a></li>';
            }
            $html .= '</ul>';

            set_transient( 'repository_html', $html, DAY_IN_SECONDS );
        }

        return $html;
}

Now the code above checks to see if we have a transient set. If we find it, we simply move to the end and return it so it can be displayed. If we don’t find it, we run through our API call and generate the list of public repositories. Then we save the generated HTML to our transient so we don’t have to call the API next time.

The code above saved 2 HTTP requests when our transient was populated and around 200ms on page load. While this isn’t a huge speed increase in the grand scheme of things, many other API calls will take much longer and have more data so your savings will be bigger.

Pitfalls and Tips for Transients

There are a few things to take into account when you’re dealing with transients. First, remember that a transient can disappear at any time. Yes, even if it was set 15 seconds ago and you told it to stay around for a month, it might be gone. For really big API calls, I’ve occasionally saved the results to a regular option in WordPress as well, and if my transient fails, grabbed what I wanted out of the option to display it. Then I regenerate the transient in the background with the shutdown action hook so that next time the data comes out of the cache.
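A rough sketch of that fallback pattern, assuming WordPress is loaded (the transient key, option name, and build_html_from_api helper are all hypothetical):

```php
function get_cached_api_html() {
    $html = get_transient( 'api_html' );

    if ( false === $html ) {
        // The transient vanished: fall back to the permanent copy right away.
        $html = get_option( 'api_html_backup', '' );

        // Rebuild the transient after the response has been sent.
        add_action( 'shutdown', 'refresh_api_html' );
    }

    return $html;
}

function refresh_api_html() {
    $html = build_html_from_api(); // hypothetical helper wrapping the slow API call

    set_transient( 'api_html', $html, DAY_IN_SECONDS );
    update_option( 'api_html_backup', $html, false ); // false: don't autoload the backup
}
```

The visitor who hits the expired cache still gets the (possibly stale) option data instantly, and the slow API call happens on the shutdown hook instead of blocking the page.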

If you’re dealing with an API that always needs a fresh response you wouldn’t use transients to speed it up. This could be a payment gateway where you need to ensure that you get the real payment status for a client instead of some cached value. Make sure that any data you’re saving in a transient needs to be shown again to multiple users over time.

When you’re using a transient in a plugin you’re releasing to the world, make sure you deal with clearing any transients that would have been generated for the older version of the plugin. Since they expire on their own, I find the best way to do this is to include the version number of your plugin in the transient name.


set_transient( 'repository_html' . VERSION_NUMBER, $html, DAY_IN_SECONDS );

When I release a new version of a plugin I can update the VERSION_NUMBER constant and know that all clients will be getting the newest versions of any data. I’d also need to ensure that I delete the old transients by calling delete_transient( 'repository_html' . OLD_VERSION_NUMBER ); so that they don’t build up in your database. This is specifically a problem on sites that are not using object caching, because every single transient sits in the options table, which can cause site speed issues.

While you can pretty much add anything you want to a transient name to make it unique, there is a 172 character limit on the name. Don’t get too carried away with your transient names or you’ll hit that limit and the world will stop spinning.

I have no idea how long a year is in seconds, and I’m betting most of you don’t either. Lucky for us WordPress provides a bunch of time constants to use for defining the time a transient should live. I almost always use these constants because they fit with the life my data should have.
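For reference, these are the time constants WordPress defines, shown with an illustrative set_transient call (the key and value are made up):

```php
MINUTE_IN_SECONDS // 60
HOUR_IN_SECONDS   // 60 * MINUTE_IN_SECONDS
DAY_IN_SECONDS    // 24 * HOUR_IN_SECONDS
WEEK_IN_SECONDS   // 7 * DAY_IN_SECONDS
MONTH_IN_SECONDS  // 30 * DAY_IN_SECONDS
YEAR_IN_SECONDS   // 365 * DAY_IN_SECONDS

// For example, cache something for two weeks:
set_transient( 'some_key', $value, 2 * WEEK_IN_SECONDS );
```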

Finally, be careful what data you store in your transients. In the example above I could store the result of wp_remote_request but that’s not useful to me. I’d still have to loop through the data and build out the list of repositories I wanted to list. Instead, I always prefer to store the final output I want to show to the frontend of the site so that I can save any extra processing time that may be required.

Plugins for Dealing with Transients

There are a few plugins that can make working with transients a bit easier during development. First, Transients Manager lets you see the content stored in your transients. One caveat is that this isn’t showing you transients stored in your object cache, only transients in your database will be available in this plugin. Still, during local development, I’m not running an object cache so this proves useful.

For a plugin that does a bit more than deal with just transients, you can look at Query Monitor. In addition to seeing the transients in your database and the object cache, it provides lots of information about what’s going on with your WordPress site in many other areas. This is one of the first plugins I install when I’m working on a WordPress project.

If you find a bunch of expired transients that are sitting around in your database then you can use Delete Expired Transients. This plugin will schedule a daily task which deletes any transients that should have expired but haven’t been called yet.

By using the WordPress Transients API you can speed up your site so that your users get the best experience possible. While it can seem a bit daunting at first, as I’ve shown above it’s quite simple to put into practice.

Creating A Smart & Scalable Tag & Category Infrastructure

When you first set up your WordPress site it’s easy to get caught up in what it looks like, and which plugins you should be using. This often leaves people with little time to think about how they’ll set up their categories and tags on their content. A few months later you end up with 230 categories and no tags, or 2 categories that don’t truly apply to all the content in them.

Today we’re going to talk about the difference between tags and categories. We’ll also provide some SEO tips so that your rankings aren’t affected by duplicate content.

We’ll finish today with a method I’ve used many times to help people focus on a few categories for their site so they can keep their content from getting scattered.

What is a Category?

Categories are meant to contain broad groups of posts. If you’re writing about some coding topics then you’ll likely have a category to contain all the posts that deal with code in some fashion.

Each post you have on your site must have at least one category. If you don’t choose one WordPress will assign your default category to the post. You can change this under Settings->Writing in the WordPress Admin.

A good rule of thumb when you start producing content on your site: if you’re assigning more than 2 categories to a post, some of your categories should probably be tags, or your content isn’t focused enough. Once you’ve developed your categories (we’ll talk about that in a bit), if a piece of content doesn’t fit them, that probably means you need to go back to the drawing board and rethink it so you can stay with your content plan.

One final difference between tags and categories is that categories are hierarchical. So my Code category above could have sub-categories for WordPress, WooCommerce, Laravel, or any other broad coding topic I want. From a data perspective a post in a sub-category is also in the parent category so you could use this type of data structure to show all your Code posts, and then apply color-coding, or headings, to show when a post is in a specific sub-category.

What is a Tag?

Unlike categories, tags are not hierarchical. They each exist as a top-level way of structuring your content. Maybe that code post above uses WooCommerce Teams, WooCommerce Memberships, and Teams for WooCommerce Memberships. That means I’d tag it with all of those items and it would exist inside the WooCommerce category.

If I wrote a post about running in my local area it would get tags for:

  • running
  • trail running
  • mountains
  • mountain running
  • Chilliwack (because this is the city I live in)

Tags are more free form than categories. So tag your content with pretty much anything relevant to it. Most sites will have 5 – 10 categories, but hundreds of tags as they continue to put out high-quality content.

What about Tag and Category SEO?

From an SEO perspective, a good category structure can help your content get found but tags can be a problem because they can be judged by Google as duplicate content on your site. You can see below in the screenshot that I have a Book Reviews category on my site that comes up if you search for book reviews and my name. That category gets traffic regularly and contributes to people finding my book reviews.

Since your content exists at both its post URL and its tag archive URL, Google can look at that and penalize your site for using the same content over and over. Luckily there are plugins like Elevate SEO or Yoast SEO that can help you noindex your tags so that you don’t get penalized for duplicate content.

In Elevate, head to the Advanced menu and scroll down until you see the option for indexing of tags. Most people will want to set it to Don’t index but allow link following.

This will mean that Google won’t show your tag pages in its index, but it will look at the links on your tag pages and follow them to other content.

Choosing Your Categories

Since it’s important to keep your categories focused, let’s walk through the process I use to define the categories for my sites so that they don’t end up with 22 categories, most with only 2 posts in them.

Let’s say that you specialize in building membership sites and are looking to build out your category structure so that you can produce content and attract clients. We’ll start by developing at least 15 categories by thinking about topics we can write about and looking at what some of the other sites in the membership niche are doing right now. This should take you around 20 – 30 minutes to do well, though you may generate most of your ideas right at the beginning and only add a few as you browse other sites towards the end.


  1. Membership Engagement
  2. Getting New Members
  3. Traffic Generation
  4. How to…(setup platforms, plugins, code…)
  5. The Business of Membership Sites (what is a membership site, finding content for your first course…)
  6. Content Generation
  7. Software Reviews (of membership software)
  8. Course Building
  9. Speaking (to build authority)
  10. Writing (to build authority)
  11. Membership Site Marketing
  12. Book Writing (to gain authority)
  13. Doing Market Research for a Site
  14. Email Marketing
  15. Membership Site Problems

More than a few of those are tags. Specifically, items 9 through 14 are all about marketing. Instead of all those different categories, convert them into a single category called Membership Site Marketing. The other ideas above would become tags on any relevant content.

Out of those 15 categories, I’d narrow it down to these 4.

  1. Getting and Keeping Members
  2. Membership Site Marketing
  3. How To…
  4. Reviews (membership plugins, marketing tools…)

Now that you have these four categories you can start to develop some content ideas that fit in each category. To start, aim for 8 – 10 ideas in each category and then plan them out on a calendar so that you have a content production schedule.

With a bit of planning, you can build a great tag and category structure that helps you stay focused on your content. With a great set of categories defined, generating content becomes easier, which means you’ll produce more content that’s focused on your customers, and that will bring more customers to your site.


Deploying to Live and Staging with Deploybot


If you’ve been in web development for a while, you’ve probably screwed up a file transfer while trying to update a site. In the best case, you add a bunch of easily identifiable files to a directory and you remove them to fix the error. Yes, it costs you time and it’s annoying, but no harm done.

In the worst case, you transfer a bunch of theme files improperly. Then you have to figure out which ones were overwritten, which don’t belong at all, and how on earth to recover your theme’s proper working state.

Today we’re going to tackle solving this problem using Git and Deploybot to automate your deployment process.

What is Automated Deployment?

A basic automated deployment has four pieces as shown in this diagram.

Most developers start with just their code and the server. They make changes to their working copy of the site, and then push those changes directly to the server via FTP. Tools like Coda or Dreamweaver have direct FTP integration so that you can do this from inside your coding environment.

The next step many developers take is to add a staging site so that they’re not modifying the live server directly. You can do this with something like VVV or MAMP. Often this also means you’re using a version control system like Git to manage the changes you make to your local working site.

When you add a staging site, you also add complexity. How do you get your code changes from your local working site to a staging site where your client can see the changes? Yes, as I already said, you can use a basic FTP client like FileZilla, Transmit, or Forklift to move the files as you make changes, but this is error-prone, and this is where automating your deployment process will save you so much time.

Instead of you taking the files you change and pushing them to your staging server, you use another system to automatically detect the changes in your Git repository and push only those changes to the staging site your client can use to check the work.

That still leaves your live site as a manual deploy though, which is much scarier because it can mean the loss of real money if you take down a live working site. Instead, let’s assume that you’re going to set up your deployment system to automatically deploy to staging, and then your system will deploy with a single click to the live environment when you’re ready to go.

So now you have a system that looks like this.

Let’s dive in so I can show you how I set up this deployment process for every client I work with. These are the steps I take as soon as I start a new project. I always make sure that my deployment process is set up and working before I start doing any other work on a client project.

How to Structure your Git Repository

Your first choice is which directory to set up your automated deployment in. Unless my client specifically requests full source control of their WordPress install, I use the wp-content directory to set up my automated deployment system. That starts in the terminal by issuing this command, which initializes a Git repository.

git init

Now it’s time to tell Git to ignore the files you won’t want to deploy. These are files like backups, images, and the custom project files that many code editors add to a directory.
Feel free to add or remove entries as needed. Almost every project I work on needs a custom entry or two to ignore files that are specific to my local working site, where the staging and live sites have their own versions I don’t want to overwrite.
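If you don’t have a baseline of your own yet, a typical wp-content-focused .gitignore might look something like this. Every entry here is illustrative; keep only what applies to your project:

```gitignore
# Logs, database dumps, and archives created during local work
*.log
*.sql
*.zip

# OS cruft
.DS_Store
Thumbs.db

# Editor/IDE project files
.idea/
.vscode/
*.sublime-project
*.sublime-workspace

# wp-content paths you usually don't want to deploy
uploads/
upgrade/
cache/
backup-db/
```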

From here it’s time to set up the branches you’ll need to get your deployment system going. I use two main branches. First is the master branch, which corresponds to my live production site. Second is a branch I label staging, which corresponds to the staging site I want my client to use as a way to check the changes we’re making.

When you initialized your Git repository you already got your master branch, so use this command to add a staging branch and check it out.

git checkout -b staging

This command creates and checks out a new branch. If you’re new to git, you can find more information on the available commands in the Git documentation.

Now you’ll need to push your project into your source control system. GitHub and Bitbucket are two popular choices, and both work with the automated deployment system we’re going to use, Deploybot. When you create a new repository, either site will give you directions for connecting your local repository to the online one.
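The exact commands vary by host, but the pattern is the same: register the remote, then push both branches. In this sketch a local bare repository stands in for Bitbucket or GitHub so the commands run anywhere; in real use, the remote URL is the one your Git host shows you:

```shell
# A local bare repo stands in for your Git host; in real use REMOTE_URL
# would be something like git@bitbucket.org:youruser/yourrepo.git
REMOTE_URL="/tmp/demo-remote.git"
git init --bare -q "$REMOTE_URL"

# A stand-in for your local wp-content repository with both branches.
git init -q /tmp/demo-wp-content
cd /tmp/demo-wp-content
git config user.email "demo@example.com"
git config user.name "Demo"
git checkout -q -b master
echo "demo" > readme.txt
git add readme.txt
git commit -q -m "Initial commit"
git checkout -q -b staging

# The part that matters: register the remote and push both branches.
git remote add origin "$REMOTE_URL"
git push -q -u origin master
git push -q -u origin staging
```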

Setting up Deploybot

When I was first getting into more complex work as a developer my friend Duane kept recommending Deploybot to me when I complained online about messing up manual FTP deployment. It took a number of recommendations before I finally did what I was told, but I’ve now been a happy Deploybot customer for years.

While there are other ways to deploy your sites many of them involve interfacing with Git Webhooks or some automated deployment configuration files via your code editor. There is lots of power in those other tools, but if you’re just getting started with automated deployment, then going with something straight forward like Deploybot is the place to start.

To get started sign up for a Deploybot account and connect Github or Bitbucket to your account. I’ll use my existing Bitbucket account today. Start by adding a new repository to your Deploybot account.

Once you’ve found the repository you want to set up for automated deployment, click the Connect button at the bottom of the page. This sends you back to your repository page while Deploybot finishes initializing your repository. This generally takes a minute or two, so fill up your coffee and come back to finish setting up your deployment process.

Once your repository is set up, click on it to get taken to its main page. Since we have no sFTP information set up yet, it will show a big box telling you to set up a server. Click the button to create an environment and server.

Let’s start with deployment to our staging environment. So label your server as staging. Choose automatic deployment and make sure you set the branch to staging.

When you’re done, click the Save button at the bottom of the page to move on to your server configuration.

On the next page, label it as a Staging server again and put in the sFTP information for your site. If you’re not sure where to find it, read this helpful guide.

With your sFTP information entered you can scroll down to the bottom and save it. Deploybot will then test your connection to make sure that the information you provided works. Now it’s time to do our initial deploy for the site to make sure it all works. I often add a test.txt file to the deploy as an easy way to verify that the deploy worked properly.

To start your deploy, go to your environment history and click Deploy.

Now you’ll see a page with your last Git commit message on it as the note that will appear inside Deploybot next to this deploy. For big changes I’ll change this, but if I’m just changing CSS or something minor, the commit message can stay. Since this is staging, every commit to our staging branch will be deployed automatically, which means your commit messages are what will show up. It’s only this initial deploy that we need to trigger manually for our staging site.

Now verify that your files have been published to the staging site and we can set up the live deployment.

For your live deployment, make sure that you don’t choose automatic deployment and make sure that you choose the master branch as the source of your deployment. We want this to be a manual deployment when we’re ready to push changes to our live site.

To do this you’ll need to check out your master branch then merge your changes from your staging branch into master.

You can do that with these commands.

git checkout master

git merge staging

git push origin master

Now when you go to your Deploybot account you’ll be able to manually deploy your changes just like we did with our initial deployment to our staging environment. For your live environment, make sure you change the deployment message to suit the changes that are being pushed to your live site. You should also create a backup of your site. You can do this by accessing the backups navigation on your site and then creating a manual backup.

That’s it: you’ve got your automated deployment system set up for both staging and live environments.

Other Deployment Considerations

While this system is a big step forward for most developers, it’s not without its issues. The biggest one is that, if you have a bunch of changes, you’re still waiting for FTP to finish transferring the changed files. This can mean someone visits your site while not all of the files your site needs to run are present.

For many clients this won’t be an issue, but if it is for your site, then you’ll need to look at setting up an atomic deployment system. This type of system moves all the files, verifies that they’re working correctly, and then switches the link on your server so that the new directory is the one that runs your site.

The process of linking to a new folder takes so little time that only a computer would notice. It also means that if you find a problem later, you can point the link back at the old version of the site to roll back to the version that was working, again with almost no downtime.
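The symlink-swap idea behind atomic deployments can be sketched in a few lines of shell. The paths here are illustrative; real atomic-deploy tools manage release directories and cleanup for you:

```shell
# Each deploy lands in its own timestamped release directory; the web
# server's docroot points at "$BASE/current", a symlink. On a real
# server, BASE might be something like /var/www/yoursite.
BASE="/tmp/atomic-demo"
RELEASE="$BASE/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"

# ...upload or extract the new files into "$RELEASE" here...
echo "new version" > "$RELEASE/index.html"

# Point "current" at the new release in one step; visitors never see a
# half-transferred directory. Rolling back is just re-pointing the
# symlink at the previous release directory.
ln -sfn "$RELEASE" "$BASE/current"
```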

No matter what you choose to do, stop using an FTP client to deploy your client files today. The small monthly cost of Deploybot is recovered every time you don’t make a mistake deploying your files.


When is it time to leave Shared Hosting & upgrade to Managed WordPress?


One of the best things about shared hosting is the low monthly price. One of the worst things about shared hosting is the low monthly price. The reality that both statements are correct presents a constant challenge to customers who are slowly outgrowing their initial decision to use shared hosting. 

Before we start talking about when it makes sense to leave shared hosting and upgrade to a Managed WordPress solution, let’s highlight why so many people start off with shared hosting.

The 3 reasons people start with shared hosting

While there may be many reasons why people choose shared hosting for their first WordPress and WooCommerce sites, there are three that rise to the surface anytime you find yourself talking about hosting.

First, the low price can’t be beat. 

Ask anyone and they’ll tell you they’re looking for lower prices. This isn’t anything new. 

In the days before wireless phones, when people paid for phone lines, there was a constant desire to find lower prices for both local and long-distance calls. That’s partly because no one understood the complexity that was hidden from them.

Hosting is very similar. Since everything technical has been abstracted away, it all seems easy and therefore, it shouldn’t cost that much. Shared hosting offers monthly hosting at prices lower than a complicated Starbucks order. 

Second, no one knows what resources they’ll eventually need. 

Another dynamic when it comes to hosting is that few people can predict how well their site will do (in terms of traffic) and how that relates to the resources they’ll need. 

This is similar to the challenge that homeowners face when considering solar panels. They’re often asked by professionals to evaluate how many kilowatts of energy they’ll consume in a day or month. Most of us have no idea because it’s a resource that we don’t measure directly or need to keep track of.

When it comes to hosting, it’s hard to know if you’ll need a lot of CPU or a little, whether you will see consistently high RAM utilization or whether it will peak at random intervals. When you don’t know, sometimes it’s just easier to buy an inexpensive plan to start with and see how it goes.

Third, most of us underestimate the need for advanced support. 

The third and final reason most people get their start with shared hosting is that they don’t place a high value on advanced support. If you’ve never hosted anything before, it’s especially easy to hope that everything will work out and you’ll never need to make a phone call.

Most customers of shared hosting assume that support will be there when they need it and rarely test to see if that’s actually true. Then, when they really need support, it’s somewhat shocking to discover that it doesn’t perform the way we assumed it would.

Signs that it’s time to shift to Managed WordPress Hosting

As you can imagine, the signs that it’s time to shift to managed hosting are the very reasons why someone may have chosen shared hosting to begin with:

Low prices create slow performance

Those low monthly prices are possible because your website is placed on shared infrastructure that houses thousands of other sites. The assumption is that you won’t get enough traffic to create a problem, and that when you do have a problem, you won’t notice it. Often you’ll notice your site getting slower over time. That simply means the server your site is on is getting more and more packed; that’s what high-density shared hosting is all about: packing the most sites onto a set of servers. Slow performance is a sure sign that it’s time to think about making a move.

Slow performance and connection errors require more resources

Even worse than a site that gets slower and slower over time is a site that stops loading or presents 502 errors (or 503, 504, etc.). Even if you don’t see these errors, your customers will. More importantly, your website will be “down” for those customers, which can impact your brand or revenue. These errors tell you that you need more server resources and likely a different configuration of your setup, but that isn’t available for $4/month.

Poor support experiences mean you need better expertise

The third way to figure out that you need to shift from shared hosting to managed WordPress hosting is potentially the easiest to spot. If you submit a ticket and the majority of the work is put back on your plate, you know it’s potentially time to make a change. Hosting companies that offer managed WordPress plans staff their support with experts who understand what you’re going through and can help you. Shared hosts often don’t want you getting on the phone at all, redirect you to their knowledge base articles, and invite you to solve your own problem.

When is it time to make the move to Managed WordPress?

The answer to the question is rather simple – the time to make the move from shared hosting to Managed WordPress is whenever you experience any of the following:

  • A site that is so slow that customers leave before the page loads
  • A site that seems to consistently get slower, month over month
  • A site that gets connection errors / becomes unavailable for others
  • When support organizations want you to do most of the work yourself

When you experience any of these situations, you may want to check out Hostdedi Managed WordPress or WooCommerce hosting.
