The web is rich with images and video, but it is primarily a medium dominated by the written word. The web is all about reading, and that means we have to pay attention to typography.
Typography concerns itself with all aspects of displaying text on a page, but the typeface is its fundamental building block and choosing a typeface is the first step in creating attractive and readable text.
Thanks to web fonts and font hosting services like Typekit and Google Fonts, we can choose any of thousands of fonts for our WordPress sites, but there is a price to be paid for all that choice — web fonts inflate the size of web pages and increase the time it takes for them to download.
We weren’t always given so much choice. In the early days of the web, designers could use only web-safe fonts: typefaces that were already installed on the majority of devices. That’s why Times New Roman, Arial, and Verdana were ubiquitous on the early web.
Introducing Web Fonts
Web fonts were introduced to overcome the limitations of web-safe fonts. Fonts could be packaged up and added to a web page. Later, font hosting services made using web fonts even easier. And with free font hosting services like Google Fonts, there is little reason not to use web fonts.
But web fonts aren’t without critics. They have been vilified as unnecessary, overly large, and unjustified because users don’t care about them. Designers certainly care about the typefaces that appear on the pages they design, but it’s the rare user who will abandon a site for using a web-safe font. They do, however, abandon sites that take too long to load and render because of a huge font file.
The designer Adam Morse made this point forcefully in 2016 when he wrote:
Typography is not about aesthetics, it’s about serving the text … webfonts cause more problems than they solve and weren’t worth the cost to my users or myself.
There is some truth to this argument, but it’s not a view typographers are likely to endorse. Historically, web-safe fonts were poorly implemented copies of earlier typefaces: the Palatino system font is a bad copy of Hermann Zapf’s original work, and Microsoft’s Book Antiqua is an uninspired copy of that.
There is nothing unique, original, or inspiring about a web page set in Times New Roman, and although these are not things that the average web user is consciously concerned about, there is a felt difference between a site with carefully selected high-quality typography and a site with old-fashioned fonts that have been seen a million times before.
That said, today’s system fonts are far superior to their ancestors. Microsoft’s Segoe, Apple’s San Francisco, and Google’s Roboto are fine typefaces. A font stack that takes advantage of them is adequate if uninspired.
WordPress site owners should balance the time taken to load web fonts with their design and readability benefits to come to a decision that best reflects the goals of their site.
When something goes wrong in a data center, its disaster recovery plan kicks in. A good disaster recovery plan aims to reduce data center risk as close to zero as possible by implementing a range of redundancies and protections. To do that, it’s important to first walk through the data center risk factors out there.
What’s the biggest risk to a data center? Many facilities imply that fire is the biggest concern and highlight their fire suppression systems. Yet fire isn’t the only risk to a data center.
Data centers maintain a huge number of redundancies and protections – no matter how unlikely it is that they will be needed.
This article will cover the types of risks that data centers typically prepare for, with a detailed look at:
External risks: Natural disasters and supplier outages.
Facility risks: Infrastructure and risks involving the facility itself.
Data system risks: Data management and architecture.
External risks are those outside of a data center’s control. They include natural disasters, supplier outages, and human-caused events.
Many disaster recovery plans start by covering natural disasters, largely because their potential for damage is the highest. Luckily, many meteorological threats can be forecast before they become a problem, and knowledgeable staff can be put on standby. This can mitigate a lot of the potential damage.
Large-scale damage and downtime from earthquakes and floods can be mitigated with water penetration protection, a fire suppression system, and power backups. For a more detailed list of the protections put in place, reach out to your hosting provider.
What if I Host in a Natural Disaster-Prone Area?
We understand that sometimes hosting in an area with frequent natural disasters is unavoidable. How you choose a data center is influenced by a number of different factors including proximity, convenience, and risk.
Most data center facilities located in such an area incorporate special infrastructure features, including reinforced buildings and stringent design plans. A good example is the Hostdedi Miami facility, which is Category 5 rated and designed to withstand flood damage and winds of up to 185 mph.
We highly recommend asking your facility about the history of natural disasters in their area and how they have affected the data center in the past. This will give you a good idea of what to expect and prepare for in the future.
As a rough guideline, the above map provides an overview of natural disaster frequency in the US. You can use this to identify susceptible areas.
Supplier outages occur when suppliers of power, connectivity, or another important deliverable are unable to deliver. They are unavoidable, but a suitably prepared data center can prevent them from causing downtime.
For example, downtime from a loss of connectivity or a downed power line is prevented by preparing multiple redundancies: additional power generators, multiple connections, and enough onsite fuel to last for several days.
It is important to have a backup pool of suppliers in the event one fails.
There are seven main areas where you don’t want anything to go wrong in a data center facility: power, water, climate, structure, fire, communication, and security. These should all be incorporated in a disaster risk assessment.
Take a look below for a better idea of how and why each of these factors is important.
Disasters will likely cause a power outage. No power means no data center (at least not one that works). Having multiple power sources available means that a data center (and so your website) will stay online through the worst.
Data centers are allergic to water. Even the smallest amount can cause a lot of damage. Water penetration protection can help to prevent the destruction of mission-critical infrastructure.
Conversely, because cooling and fire suppression systems depend on water, losing the water supply is also a risk, which is why multiple, secure water sources are required.
A data center requires a precise climate. Not too hot, not too cold, and without too much humidity in the air. A high-quality and adaptable climate control system adds to reliability.
This is the data center building itself. If poorly constructed, risk and exposure to the elements will be increased.
Fire damages pretty much everything it comes into contact with (apart from a good steak). Keeping it away from a data center is a top priority. All data center facilities you host in should come with a fire suppression system.
A line to the outside is a big advantage for a data center in the middle of an emergency. Not only does it let you contact your provider, it also allows them to contact backup suppliers.
Security procedures should remain in effect during a disaster to prevent unauthorized access to any part of the facility.
Data System Risks
Data system risks are those that involve shared infrastructure. It is vital to pay attention to all single points of failure in the system’s architecture and see how those failures can be avoided.
Look at how the data center protects against contamination between servers and how effective it is at blocking attacks. Understanding how vulnerable a data center is means understanding how easily it can be targeted. Hostdedi facilities block over 3 million attacks per day.
Other areas to ask your hosting provider about include:
Data Communications Network
Ask specifically about the network’s architecture and what security procedures have been put in place.
How do accounts interact with each other? How shielded is one account from others held on the same server? This is especially important with cloud technology and virtualized resources.
If something bad does happen, what can be done to make sure your website doesn’t disappear? How often do backups take place? How long does it take to restore a backup, and what is the procedure for restoration?
Software Applications and Bugs
Unless your data center also creates the applications you’re going to run on your server, they don’t have a lot of control over this. However, they can tell you best practices, provide bug fixes, and generally stay up to date with how the application is being handled by other professionals.
Website migrations can be scary, but they don’t have to be. Here are five steps for making your moving experience as seamless as possible, starting with knowing what you need to back up and finishing with full DNS propagation and your new hosting solution going live.
It’s not every day you decide to change hosting providers or upgrade your solution. If you’re with a high-quality provider and haven’t had any problems, you may only ever do this a handful of times as your site grows. When you do decide to go through with a migration, you will likely go through the five stages below.
Backing up your website
Moving your website’s data
Testing the new website
Migrating your DNS
Enjoying your new hosting environment
We believe in seamless website migrations for everyone, which is why we’ve put together five steps for making sure your site migration is as easy and relaxed as possible.
You may be moving somewhere new because you were unhappy with your old provider, but don’t rush. Canceling your old hosting provider before completing a migration can mean days or weeks of downtime, depending on how complex your migration is and whether you encounter any issues.
Unless your old hosting provider performs daily backups and keeps them after you leave, you could lose your entire site. Even if you do have a backup, your SEO value can plummet, and a whole host (pun intended) of other problems can occur.
A good migration should mean consistent site traffic, not a sudden drop or decline.
That’s why we always suggest making sure to…
One of the first things you should do during a migration is to create a local backup of your website. Despite everyone’s best intentions, technology doesn’t always go to plan and a small database corruption can cause issues.
If you haven’t canceled with your previous provider, they may still have backups located on a third-party server. Hostdedi offers daily hosting backups and archives them for 30 days. In most cases, you can use these backups to restore your site. However, it’s always a good idea to make sure you have a local one as well.
If you’re coming from a hosting provider with a cPanel interface, you can head to the ‘Backup’ page in your control panel. Here you’ll be able to download a copy of your public_html directory (which contains most of your site information), and you can grab a backup of your MySQL database too.
Hostdedi provides full backups through our control panel: click Backups -> Backup Now, then click Continue. You can also choose to perform a partial backup if you prefer.
Most hosting providers will have an easy-to-access backup feature available. If you can’t find one, get in touch with their support team.
“No, I don’t need to check. It’s ready, let’s go live,” is something every migration expert dreads hearing.
Going live without testing a site after a migration is like playing a game of Risk without knowing which pieces you have in play. While there’s a chance everything will work out well, there’s also a chance something will go wrong, leaving you with nowhere to go but back to the start.
A short checklist of what to test includes:
There may also be things you should check specific to your site. If you’re an eCommerce store, for instance, you may want to test the checkout process.
You can migrate your DNS by heading to your domain registrar’s control panel and then to “Domain Name Servers”. From here you’ll be able to see what your nameservers actually are.
If you’re interested in checking this out on your own machine, open up a command prompt and enter
dig +short NS yoursite.com | sort
If you’re using the Hostdedi DNS service and have successfully repointed your domain, you should see at least one of the name servers below:
If you don’t, don’t panic. It may be that you’re with an alternate DNS provider. It can also help to know how far along the path to full website migration you are (if you’re not the one in charge).
Remember that DNS record changes can take 12 to 24 hours, so don’t be surprised if this information doesn’t change immediately after you’ve altered your DNS. Just like with our first point, don’t cancel your old service before your new one is good to go.
Once you’ve changed your DNS, you’re going to want to let it complete propagation. You shouldn’t experience any downtime during this period, but you will want to make sure that you don’t make any changes to your site.
There’s nothing worse than posting new content during the propagation cycle and finding you’ve lost it the next day.
If you’re interested in checking the status of your DNS propagation, try the Hostdedi DNS checker to see how far it’s gotten.
Making Migration Easy
Remember, Hostdedi offers free migration assistance on all of our solutions, meaning that making the switch from one provider to another couldn’t be easier or more seamless.
This week’s 30-minute session was with Doug, the Hostdedi data center facilities manager, covering everything you need to know about mission critical environments. He began by saying that maintaining reliability and security for mission critical environments is… mission critical. He then took marker to wall to expand on that.
Mission critical environments are hosting environments integral to the consistent and reliable running of a data center. This primarily includes servers, but data centers need to maintain other elements too.
Redundancies (backup generators, etc.)
Tools (disaster recovery, maintenance)
Other unknowns that may be a danger to reliability and uptime.
Factors Important to Mission Critical Environments
For mission critical environments to remain stable, professionals have to ensure the stability and security of onsite equipment. A few of the factors that are most important for doing this are included below.
In the event of a disaster, your data center should have a disaster recovery plan ready. A good disaster recovery plan will minimize downtime and ensure your site is back online as soon as possible after a disaster event. This can include, but isn’t limited to:
Tools for solving problems
Trained onsite staff
Prevention is the best cure, and nowhere is that more evident than with data centers. Waiting for something to fail, whether it’s a server, power supply, or something else, is a recipe for reduced uptime and low-quality hosting.
Preventative maintenance means keeping an eye on hardware and infrastructure to ensure they remain operating at full capacity, with failing elements replaced before they become a problem.
Risk is managed everywhere, but nowhere is it more critical than in a data center facility. As indicated above, risk is something to be avoided, and finding a solution before a risk becomes a problem is a top priority.
Redundancies are backups used when primary sources of power, connectivity, or other essentials go offline. For data center facilities trying to maximize uptime, they are crucial. In many cases, data centers have no control over when something goes wrong; redundancies help mitigate any issues that arise.
Keeping mission critical environments secure and reliable is one of the most important tasks in a data center and involves looking at what might go wrong and finding the best way to prevent it. Thanks to Doug for showing us some of the ways in which that is done.
Want to know more about how we maintain mission critical environments? Contact our sales team.
We are excited to announce Hostdedi Global DNS, a globally distributed name service that puts DNS closer to your website visitors.
What is DNS?
The domain name system (DNS) is the phonebook of the Internet. Whenever you load a website, open a mobile app, or click on a cat GIF, your device usually looks up a web address using DNS.
The Internet is made up of connected devices with Internet Protocol (IP) addresses. The domain name system sits on top of the Internet and allows convenient, easy-to-remember names such as nexcess.net to be translated into hard-to-remember IP addresses such as 22.214.171.124. The problem is made worse by the Internet’s next generation of addresses, known as IPv6, with long-string addresses such as 2607:f7c0:1:af00:d045:7800:0:1b.
Hostdedi DNS, Today
When you host your DNS with Hostdedi, as about half our customers currently do, DNS requests from your website visitors are answered from servers located in the US. Even if we host your services in London, Australia, or other international locations, our DNS services are still located in the US.
We go to great lengths to put our DNS servers on third-party networks, which isolates them from potential failures. We also host eight name servers in total, which is double the number typically found among web service providers. At the end of the day, it’s still a US-based DNS infrastructure.
To be clear, concentrating DNS servers in a particular location is a common setup. Due to the nature of DNS, when a user visits your website, their browser or device caches the results and doesn’t need to check DNS again for an extended period of time.
For new visitors from international locations, this can cause something known as first-visit page load delay. These geographically distant users may experience as much as a half-second delay. This may sound trivial, but visitors are quick to notice sluggish load times and tend to avoid sites that suffer from them.
Administrators and developers work tirelessly to shave even fractions of seconds from page load time. A research paper by Google last year found that when delays drift beyond 3 seconds, visitors quickly lose interest and start abandoning sites.
All things being equal – faster is better.
Hostdedi Global DNS
We’ve been hard at work the last couple of months deploying a footprint of 15 DNS servers distributed around the world. These servers are strategically positioned so that they provide a local DNS server option for visitors to your site, and significantly reduce first-visit load times.
Hostdedi Global DNS uses a technology called Anycast routing, which allows us to broadcast the IP addresses of our DNS server from multiple global locations at the same time. When a visitor loads your website, this technology allows their Internet service provider (ISP) to route the visitor’s DNS requests to the Hostdedi DNS server closest to that visitor.
When we stood up the proof of concept and looked at the latency differences between Global DNS and our existing DNS, it floored us! The results were significantly better than we expected at reducing first-visit DNS latency. That was some two months ago, and it validated our all-in commitment to launching a Global DNS platform.
The following is a real-world example of Global DNS in action. Using a tool provided by KeyCDN.com, we tested latency (round trip time) from 16 global locations, then compared Classic DNS and Global DNS.
Hostdedi Global DNS, Going Live!
If you’re a Hostdedi customer, you will enjoy the benefits of our Global DNS for no additional cost, and no action is required.
We will begin transitioning Hostdedi DNS to the Global DNS system on Thursday, August 30th. The first maintenance will migrate ns7.nexcess.net and ns8.nexcess.net, with other name servers to follow in the coming weeks. Our goal is to have Global DNS operational for all nexcess.net name servers by the end of September.
There will be no downtime as a result of this maintenance. The existing Hostdedi DNS servers will continue to operate and respond to DNS queries until we confirm all traffic has moved away from them.
Will other Hostdedi Global DNS locations be added?
Yes! We are currently looking at adding Bangalore, Hong Kong, Johannesburg, Sao Paulo, and Toronto. These locations will help close important gaps and continue to improve the experience for your website visitors.
How does a browser load a web page? It uses a phonebook. Not an old-fashioned leatherbound book or a switchboard operator, but a service known as DNS. Each page of that DNS “phonebook” is what is known as a DNS record.
In other words, when you look for nexcess.net, your computer looks in the DNS “phonebook”, finds the number for the site, and connects you to it. Of course, the whole process is much quicker than this.
This article looks at what DNS records are, the different types you’ll find, and why they’re incredibly important for the success of any website.
It was 1983. The internet was young, and IT professionals had begun to get fed up with having to remember long series of numbers in order to connect to other machines. Networks had spread beyond just a few units, and in an effort to future-proof them, longer series of numbers were proposed. There was just one problem: how to make these numbers more consumer friendly?
Paul Mockapetris published two papers on the subject, creatively named RFC 882 and RFC 883. Mockapetris’ system expanded prior use of a hosts.txt file into a large system capable of managing multiple domains in a single location. That system is known as DNS, or Domain Name System.
Without DNS, the Internet wouldn’t be what it is today. We might even need a Rolodex to visit our favorite sites!
With DNS, computers still require the IP (Internet Protocol) address number sequence in order to connect with a server. Yet with 4,294,967,296 possible IPv4 addresses, it makes a lot more sense to convert those numbers into something more easily recognizable.
DNS gives IP addresses unique names for computers, services or other resources that are either part of a private network or part of the Internet.
The Hostdedi DNS network has 100% uptime with multiple redundancies in place
The domain name system saves users from having to remember a long series of numbers. Users type in a domain name, and the domain name system automatically matches that name with an IP address and routes the connection.
At the center of all this, the role of the hosts.txt file lives on in the form of vast servers for managing domain names, and at the heart of these servers are DNS records.
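To make the hosts.txt idea concrete, here is a toy parser in Python. The file format is real (an IP address followed by one or more names), but the addresses and names below are illustrative examples, not real records.

```python
HOSTS_TXT = """\
# A miniature hosts.txt: the ancestor of today's DNS zone data.
192.0.2.1   nexcess.net
192.0.2.2   example.com  www.example.com
"""

def parse_hosts(text):
    """Build a name -> IP lookup table from hosts.txt-style lines."""
    table = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            table[name] = ip  # every listed name maps to the same address
    return table

# parse_hosts(HOSTS_TXT)["nexcess.net"] -> "192.0.2.1"
```

Every machine kept its own copy of a file like this and had to refresh it by hand; DNS replaced the single file with a distributed, queryable database.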
IP addresses work in a similar fashion to that of street addresses or phone numbers in an address book. While people browse the Internet, they look up their favorite site much like they look up a friend’s number. From there, the system provides them with the friend’s number and they can contact them. With DNS, the second part of this sequence is automated. This requires DNS records from a DNS server.
During the creation of DNS, servers were manufactured solely for the purpose of managing DNS and related information. Within each of these servers are DNS records that tie entries to a domain.
Any device connected to a computer network, whether it is a PC, router, printer, or any other device with an IP address, is referred to as a ‘host’. With the sheer number of hosts around the world, engineers needed a way to track devices without resorting to memorizing numbers.
As explained earlier, DNS records came along with DNS as a tool for system admins and users to seek out authoritative information on websites or other services they’re trying to access.
There are two types of DNS Records. These are:
Records stored in Domain Name System servers
Records stored on a user’s machine
Records stored on a Domain Name System server are covered in more detail below, including what types of records exist and how they function.
Records stored on a user’s machine are also known as the DNS cache. This cache lists the machine’s lookup history for all websites previously visited, including failed visit attempts.
When you watch a crime drama and a culprit’s computer is taken to be analyzed for the sites they have visited, a DNS cache is usually what would be checked for unauthorized activity.
However, a DNS cache is usually temporary and has a limited lifespan before being removed.
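That limited lifespan is the record’s time-to-live (TTL), and the behavior can be sketched with a small Python class. This is an illustration of the caching idea, not how any particular operating system implements its cache; the hostnames and addresses are made up.

```python
import time

class DnsCache:
    """Toy DNS cache: answers repeat lookups until the record's TTL expires."""

    def __init__(self):
        self._store = {}  # hostname -> (ip, expiry time on the monotonic clock)

    def put(self, hostname, ip, ttl_seconds):
        self._store[hostname] = (ip, time.monotonic() + ttl_seconds)

    def get(self, hostname):
        entry = self._store.get(hostname)
        if entry is None:
            return None  # miss: a real resolver would now query a DNS server
        ip, expiry = entry
        if time.monotonic() >= expiry:
            del self._store[hostname]  # expired: treat as a miss
            return None
        return ip
```

For example, `cache.put("nexcess.net", "192.0.2.1", 300)` keeps the answer for five minutes; after that, `get()` misses and the resolver has to ask a DNS server again.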
DNS Syntax Types Explained
While there are an abundance of record types in existence, below you’ll find nine of the most commonly used DNS records. For more information, don’t forget to check our DNS Records knowledge base, as well as how to configure DNS records for your site.
A – A records are usually referred to as address records, and occasionally host records. They are the most commonly used records, mapping the hostnames of network devices to IPv4 addresses: a website address book.
AAAA – Serves the same purpose as A records, except that hostnames are mapped to IPv6 addresses instead of IPv4. As opposed to the 32 bits of an IPv4 address, an IPv6 address contains 128 bits. An example of an IPv6 address is FE80:0000:0000:0000:0202:B3FF:FE1E:8329.
CNAME – Acts as an alias for a domain. A CNAME record points an alternate name at the actual domain name; for example, www.nexcess.net can be set up as a CNAME that resolves to nexcess.net.
MX – MX records map a domain name to the message transfer agents (mail servers) responsible for receiving its email, each with an assigned preference value. Large organizations often use multiple mail servers to process messages en masse. Through the use of SMTP (Simple Mail Transfer Protocol), emails are routed properly to their intended hosts.
NS – Also known as name server records; designates a name server for a given host.
PX – The technical description in RFC 2163 defines the PX record as a ‘pointer to X.400/RFC822 mapping information’. Currently, it is not used by any application.
PTR – Referred to as reverse-lookup pointer records. PTR records are used to look up domain names based on IP addresses.
TXT – A type of DNS record that stores text-based information. It is primarily used to verify the ownership of a domain and to hold SPF (Sender Policy Framework) data, which helps prevent the delivery of fake emails that appear to originate from a user’s domain.
SOA – Possibly the most critical record of them all, the Start of Authority record notes when the zone was last updated.
The general purpose of a DNS lookup is to pull information from a DNS server. This is akin to someone looking up a number in a phone book (hence the term ‘lookup’ in conjunction with DNS).
Computers, mobile phones, and servers that are part of a network need to be configured to know how to translate domain names and email addresses into discernable information. A DNS lookup exists solely for this purpose. There are primarily two types of DNS lookups: forward DNS lookups and reverse DNS lookups.
Forward DNS Lookups
Forward DNS allows networked devices to translate an email address or domain name into the address of the device that will handle the communications process. Though invisible to users, forward DNS lookup is an integral function of IP networks, in particular the Internet.
Reverse DNS Lookups
Reverse DNS (rDNS/RDNS) pulls domain name info from an IP address. It is also known as Inverse DNS. Reverse DNS lookups are used to filter undesirable data such as spam. Spam can be sent through any domain name that a spammer desires. Spammers can use this technique to fool regular customers into thinking that they’re dealing with legitimate entities. This can include organizations such as Bank of America or Paypal.
Email servers that receive email can validate it by checking the sender’s IP with a reverse DNS request. If the email is legitimate, the reverse DNS result should match the domain of the sending address. While this is useful for verifying the integrity of email, it does not come without a cost: an ISP has to set the records up if the legitimate mail servers do not have the appropriate records on hand to respond properly.
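This check is commonly called forward-confirmed reverse DNS. The sketch below uses hard-coded dictionaries in place of real PTR and A lookups (which in Python would be `socket.gethostbyaddr` and `socket.gethostbyname`); the hostnames and addresses are made up for illustration.

```python
# Stand-ins for real DNS zones: PTR (reverse) and A (forward) records.
REVERSE = {"192.0.2.10": "mail.example.com"}   # IP -> PTR hostname
FORWARD = {"mail.example.com": "192.0.2.10"}   # hostname -> A record

def passes_fcrdns(sender_ip):
    """A sender passes only if its PTR name resolves back to the same IP."""
    hostname = REVERSE.get(sender_ip)
    if hostname is None:
        return False  # no PTR record at all: many mail servers reject this
    return FORWARD.get(hostname) == sender_ip

# passes_fcrdns("192.0.2.10") -> True; an unknown IP -> False
```

The round trip matters: a spammer can put any name in a PTR record they control, but they cannot make the legitimate domain’s forward records point back at their IP.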
What Are Your DNS Records?
You can check your own DNS records with the Hostdedi DNS Checker. Simply enter the site address you want to check and the type of record you want to see.
You can also use this tool to check third-party DNS records and confirm the identity of certain domains to make sure they are not fake.
Ultimately, DNS makes life easier for the end user that can’t memorize 32-bit or 128-bit IP addresses. It’s easier to just type a name into the browser bar and let DNS figure out the rest. DNS resource records are fundamental for DNS to be able to work, and the Internet wouldn’t be what it is today without them.
If you’re looking for more information on site performance and benchmarking, don’t forget to check our article on TTFB (Time To First Byte) and why it may not be as important as you’ve been led to believe. Also, check out our summary of data center tiers and use the stats to figure out which data center tier you’re hosting with.
Time To First Byte (TTFB) is the time it takes for a web server to respond to a request. It’s a metric reported by several page speed testers, and is often quoted as a primary means for measuring how fast a site is. The idea being that the faster a web server responds, the quicker a site will load.
However, numerous groups have found that TTFB isn’t that important. When looked at in isolation, the figure provides an appealing way to grade your site or hosting provider, but when looked at in conjunction with other metrics, there seems to be a disconnect. This is especially true with regards to SEO rankings and improved user experience.
Here, we’re going to look at why TTFB can be easily manipulated, what metrics actually matter, and how knowing these things can help you to improve your site’s SEO, user experience, and more.
TTFB measures the time between a user making an HTTP request and the first byte of the page being received by the user’s browser.
The basic model of how TTFB works
The model is simple. The faster a web server responds to a user request, the faster the site will load. Unfortunately, things get a little more complicated.
In some cases of testing site speed, you’ll find TTFB durations far longer than you would expect, even though actual page load times seem much faster. This is the first indication that something is wrong with how TTFB measures speed.
A deeper look shows that this is because TTFB actually measures the time it takes for the first HTTP response to be received, not the time it takes for the page itself to be sent.
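The gap between “first byte” and “usable page” is easy to demonstrate. The sketch below is a self-contained Python experiment, not a production benchmark: it starts a local HTTP server that sends its headers immediately but delays the body, then times both the first byte and the full response.

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowBodyHandler(BaseHTTPRequestHandler):
    """Responds with headers right away, then stalls before the body."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "5")
        self.end_headers()        # first bytes hit the wire here: great TTFB
        time.sleep(0.2)           # simulate slow page generation
        self.wfile.write(b"hello")
    def log_message(self, *args):
        pass                      # silence per-request logging

def measure(host, port):
    """Return (time to first byte, time to full response) in seconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(b"GET / HTTP/1.1\r\nHost: test\r\nConnection: close\r\n\r\n")
        sock.recv(1)                   # block until the very first byte arrives
        ttfb = time.perf_counter() - start
        while sock.recv(4096):         # drain the rest of the response
            pass
        total = time.perf_counter() - start
    return ttfb, total

server = HTTPServer(("127.0.0.1", 0), SlowBodyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ttfb, total = measure(*server.server_address)
server.shutdown()
```

On a typical run, `ttfb` comes back in a few milliseconds while `total` takes over 0.2 seconds: the server scores a superb TTFB even though the page itself is slow.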
A test of Time To First Byte and page load times
In the Time To First Byte test above, TTFB is measured at 0.417 seconds, which seems very quick. However, looking at the waterfall, we can see that this figure only correlates with the HTML loading time. Afterward, page load speed takes much longer for other assets on the page and we’re seeing DOM content loaded at around 1.6 seconds.
This is because the TTFB value is incredibly easy to manipulate. HTTP response headers can be generated and sent very quickly, but they have absolutely no bearing on how fast a user will be able to see or interact with a page. For all practical purposes, they are invisible.
By sending HTTP response headers early to speed up TTFB, it’s easy to create a ‘false’ view of a site’s speed. A fast TTFB also doesn’t necessarily mean that the rest of the waterfall will load quickly.
A good example of how Time To First Byte testing can be skewed by HTTP headers is NGINX’s page load times in conjunction with compression.
Compressed pages are smaller and so download from a server faster than uncompressed pages, which means page load times to interactivity are much faster. From the perspective of TTFB, however, none of that improvement is visible; the server may even spend slightly longer before sending the first body byte because it is compressing the response.
Time To First Byte compared with actual page loading times
This is because HTTP headers can be generated and sent relatively quickly before the main page content.
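The size difference compression makes is easy to demonstrate. This sketch uses Python’s gzip module on a synthetic HTML payload (the exact ratio will vary with real content):

```python
import gzip

# Synthetic, highly repetitive HTML standing in for a real page
html = b"<html>" + b"<p>Hello, world. This line repeats.</p>" * 2000 + b"</html>"
compressed = gzip.compress(html)

print(f"uncompressed: {len(html):>7} bytes")
print(f"compressed:   {len(compressed):>7} bytes")
# The compressed body is a fraction of the original, so the full page
# downloads far faster. The response headers, and therefore TTFB,
# are barely affected either way.
```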
This is especially significant for users of the Hostdedi Cloud Accelerator, which uses NGINX to accelerate caching on optimized Hostdedi platforms.
Continue reading to find out what metrics you should be using to check page load times.
A 2013 study by Moz found that Time To First Byte does have a significant correlation with SEO rankings: the faster the TTFB, the higher pages ranked.
That being said (and as Moz themselves make clear), correlation and causation are not the same thing. The actual methods Google and other search engines use to crawl web pages and build out SERPs are not known to the public.
Many have concluded that page load times to interactivity are actually far more important. When looking at page speed tests, it’s important to consider all the available figures as a whole, not just TTFB.
So, with regards to TTFB tests, SEO, and user experience:
Google Does Not Measure Page Speed for SEO (Entirely)
Ok, it sounds like we’ve gone back on what we just said, but bear with us.
Google doesn’t treat page speed as all-important; it measures user behavior. Google has said in the past that if users are willing to wait for content to load, it will not downgrade a website for being slow.
This is because Google weighs usability and experience as more important than speed. Back in 2010, Matt Cutts said that including site speed as a ranking factor “affects outliers […] If you’re the best resource, you’ll probably still come up.” It just happens to be that the less time a user has to wait for a page, the more likely they are to stay on the page.
So when it comes to using speed testing services such as PageSpeed Insights, make sure to consider your page load times from a practical perspective as well. How do you feel about the time it takes for your page to load when you type it in your browser? Do you think the content quality is worth the wait?
PageSpeed Insights provides actionable speed intel for SEO such as that above
Simple checks like this are easy and can provide you with a lot of insight into what your users will think.
Practical Page Load Times Matter – Not TTFB
A faster Time To First Byte does not mean a faster website.
TTFB is not a practical measurement. It doesn’t really affect the user experience. The time it takes for a browser to communicate back and forth with a server doesn’t affect a user’s experience of that server’s content as much as the time it takes for them to actually interact with it.
Instead, measurements that test time to interactivity are inherently more important. Improvements here don’t always match the results of web page speed tests or scores.
So, the main takeaway here? High-quality content and a great user experience are still two of the most significant factors in SEO. Site speed can influence both, but it is far from the most important factor.
Again, TTFB and raw page load times matter less than high-quality content and usability. The user experience on mobile devices has long been a key area Google and other search engines have tried to improve, and load times are just a small part of that.
Responsive design and easily readable and scalable text and images are much more important.
Google highly recommends its PageSpeed Insights tool for properly seeing how your page speed may affect SEO ranking.
Slow and Steady Wins the Race
Ok, none of this means you should let your site crawl to a halt. This isn’t a childhood fable or a call to slow down the web; fast internet is one of the wonders of the modern age, and you still want your site to load as quickly as possible.
What we’re saying is this: if you’re trying to work out how to improve Time To First Byte in isolation, stop.
It’s far more important to start looking at page load times in their entirety, not just the time it takes for a server to respond. At Hostdedi, we’re proud of how fast our data center serves content, and we work hard to make sure our servers are optimized to provide a great user experience and boost your SEO as much as a hosting company can.
We highly recommend checking out the Hostdedi Cloud and seeing how Hostdedi can help.
In the world of data centers, reliability is one of the most important factors. The more reliable you are, the more likely clients are going to want to use you. After all, who wants a data center that isn’t online?
Luckily, the Telecommunications Industry Association (TIA) published a standard defining four tiers of data center reliability. The aim was for this standard to inform potential data center users about which center is best for them. While brief, the standard laid the groundwork for how some data centers would pull ahead of others in the future.
This article breaks data centers down into the four tiers and looks at how they differ. Combine this with our article on how to choose a data center location, and you’ll know the best place to host your website.
Check out our Infographic below to quickly see the main differences between data center tiers, or keep reading for more detail.
The Classification of Data Centers
Data centers are facilities used to house computer systems and associated components. A data center comprises redundant power supplies, data communications connections, environmental controls, and various security devices.
Tier one data centers have the lowest uptime, and tier four the highest. The tiers are progressive: a tier four data center incorporates the requirements of the lower three tiers in addition to conditions of its own.
The requirements of a data center refer to the equipment needed to create a suitable environment. This includes the reliable infrastructure necessary for IT operations, which increases security and reduces the chance of breaches.
What to Consider When Choosing a Data Center
When choosing a data center to store data for your business, it is important to have a data center checklist. This is a list of the most important things you should keep in mind – such as the physical security of a prospective data center – when making your choice.
Typically, a good data center checklist would include the various data center pricing policies and extra amenities provided. A straightforward pricing policy, for instance, should have no hidden charges, and a data center with additional facilities is preferable to one without.
Data Center Specifications
Data center specifications refer to information about the setup in a data center. This can include the maximum uptime, redundant power systems that allow the platform to stay up regardless of power outages, the qualification of technical staff at the data center, and more.
It is common that higher data center tiers have better-qualified staffing since more expertise is required to maintain the whole platform. Data center specifications should be on the data center checklist of a customer looking at prospective data centers to store their data.
What Is a Tier One Data Center?
This is the lowest tier in the Tier Standard. A data center in this tier is simple in that it has only a single path for servers, network links, and other components.
Redundancy and backups in this tier are minimal or non-existent, including power and storage redundancy.
As such, the specifications for a data center in this tier are not awe-inspiring. If a power outage were to occur, the system would go offline, since there are no failsafes to kick in and save the day.
The specifications of a tier one data center allow for uptime of approximately 99.671%. The lack of backup mechanisms makes this tier seem like a risk for many businesses, but it can work for small internet-based companies with no real-time customer support. For companies that rely heavily on their data, however, a tier one data center would not be practical.
One of the advantages of tier one data centers is that they provide the cheapest service offering for companies on a budget.
However, the lack of redundancy means that uptime is considerably lower than in tiers two, three, and four, and maintenance requires shutting down the entire facility, causing further downtime.
What is a Tier Two Data Center?
This is the next level up after tier one. Tier two features more infrastructure and measures to ensure less susceptibility to unexpected downtime. The requirements for this data center tier include all those of the first tier, plus some redundancy.
For instance, tier two data centers typically have a single path for power and cooling. However, they also have a backup generator and a backup cooling system to keep the data center environment optimal.
The specifications for the second tier allow for higher uptime than tier one data centers: approximately 99.741%.
What is a Tier Three Data Center?
The requirements for tier three data centers include all those of the lower tiers, plus a more sophisticated infrastructure allowing for redundancy and backups in case of unexpected events that could cause downtime.
All server equipment has multiple power sources and cooling distribution paths. In case of failure of any of the distribution paths, another takes over ensuring the system stays online. Tier three data centers must have multiple uplinks and must be dual powered.
These specifications allow for uptime of approximately 99.982%, or a maximum of roughly 1.6 hours of downtime annually. Some of the equipment in tier three systems is fully fault-tolerant.
Some procedures are put in place to ensure maintenance can be done without any downtime. Tier three data centers are the most cost-effective solution for the majority of businesses.
What is a Tier Four Data Center?
Tier 4 is the highest level when it comes to data center tiers, with an availability of approximately 99.995%. A tier 4 data center has the most sophisticated infrastructure, with the full capacity, support, and procedures in place to ensure maximum uptime.
A tier 4 data center fully meets all the specifications of the other three tiers. It is also fault tolerant: it can operate normally even when a piece of infrastructure equipment fails.
A tier 4 data center is fully redundant, with multiple cooling systems, power sources, and backup generators. Its roughly 99.995% uptime translates to an estimated downtime of only about 26 minutes annually.
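The downtime figures quoted for each tier follow directly from the uptime percentages. A quick sketch converts an uptime percentage into a yearly downtime allowance (the tier three and four percentages here are the commonly cited availability targets, assumed for illustration):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(uptime_percent):
    """Maximum minutes of downtime per year implied by an uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

# Commonly cited availability targets per tier (assumed figures)
for tier, uptime in {1: 99.671, 2: 99.741, 3: 99.982, 4: 99.995}.items():
    print(f"Tier {tier} ({uptime}%): ~{annual_downtime_minutes(uptime):.0f} min/year")
```

Running this shows why the jump between tiers matters: tier one allows well over a day of downtime per year, while tier four allows under half an hour.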
Those are the four data center tiers and a summary of the requirements used in their design. Anyone building a data center checklist can find the essential elements to look for in these specifications and requirements.
Hostdedi Is a Tier 4 Data Center
Between an uptime of 99.9975%, multiple redundancies, and an annual downtime of less than 18 minutes, the Hostdedi data center is regarded as a tier 4 data center. If you would like to know more, check out the different data centers offered by Hostdedi around the world or take a more detailed look at our Southfield, Michigan data center (in an easy-to-read infographic).
We’ve just returned from IRCE 2018. Between the marketplace and the sessions, there was a lot happening. eCommerce and marketing professionals from around the world were in attendance, and everyone seemed to have something to bring to the table.
However, throughout the show, we found that three things seemed to be present in almost all of the conversations going on.
Here are what we think were the three main takeaways from IRCE this year.
With huge marketplaces such as Amazon dominating, speakers such as Seth Godin warned that “you will lose on price” if you try to compete there.
Instead, small companies should start to look at fringe groups that are likely to grow with time. Effectively building a business is about making change happen. It’s about taking something and increasing its value in the public consciousness.
This led Godin to prompt everyone to ask themselves two questions about their brand:
Who’s it for?
What’s it for?
Throughout IRCE, this theme sprang up time and time again.
The speech Institutionalize Innovation by Roe Macfarlane talked about how market segmentation required specific actions based on age, including the type of leader different groups are more inclined to follow.
Counter the Amazon Effect also talked about how it was important to innovate and inspire change in order to compete with the eCommerce giants of today. How did many people suggest this change and niche focus should come about? Personalization.
Godin’s second standout statement during his keynote was also repeated by speakers throughout IRCE 2018. The importance is not in marketing to a mainstream audience, but in appealing to those who are already a friend to your brand. These connections should be nurtured in a way that creates a “tribe” that follows one thing: you.
This tribe should be nurtured through personalization techniques.
Personalization 2.0: Making the Move to Individualization by Brendan Witcher talked about the ultimate destination of personalization techniques: individualization, not segmentation. He also went over how to make use of big data to do this (without becoming ‘creepy’).
We also saw David Blades of Jenson USA talk about the importance of user-generated content in boosting sales. The community wants the brand to be about them, and what better way to make it about them than by having them generate the content?
With the first Magento Straight Talk during IRCE came conversations about machine learning and its place in eCommerce. For many businesses, the idea of machine learning has become something that is spoken about a lot but hasn’t shown enough value to be applied independently.
Anita Andrew’s talk inspired a different perspective, with stats on how effective machine learning has been for some huge brands. Target saw a 30% growth in revenue after applying machine learning techniques. Amazon saw a 55% increase in sales from personal recommendations, and USAA saw a 76% improvement in customer support contact and product offering fit.
Yet Anita did mention the issue with what she termed ‘dirty data’. Throughout the big data sessions, dirty data became a central point of interest. How do you take outliers and unpredictable variables and apply them to machine learning algorithms? Many of the IRCE speakers gave their own perspectives and approaches to cleaning data for different purposes. Anita talked about cleaning data in order to boost product offerings. In Personalization 2.0, the focus was on how to clean data to truly individualize your brand. In the merchandising track, Carter Perez talked about how machine learning could be used to improve product discovery.
Regardless of where you heard it, the message was clear: machine learning is the future and it’s here now.
Outside of the sessions, the marketplace was abuzz with activity. Many of those exhibiting at the show had something to offer that linked into the topics mentioned above.
Hostdedi met with several old, new, and future clients during the show and had a great time with all of them. We also went to go see the Cubs vs. Phillies game in Wrigleyville, with over 250 RSVPs to the rooftop event. We’ll leave you with the view we had and look forward to seeing you next time!
The GDPR (General Data Protection Regulation) is set to usher in the next era of European digital compliance this May. As the latest set of European Union (EU) regulations regarding consumer rights, the GDPR has been proposed in order to strengthen and unify data protection for individuals, and address issues with exporting data outside of the EU.
This will mean changes to the way in which many businesses which operate within the EU handle and process customer data. Keep reading to find out how.
What is the General Data Protection Regulation (GDPR)?
The GDPR is a new set of online data security regulations which have been adopted by the EU and will be put in place by May 25.
The main things you need to know are that the GDPR will broaden the definition of what constitutes personal data, change the way in which you handle that data, and give individual EU consumers increased control over their personal information.
While online data security and consumer rights protections have existed for a long time – in the form of the Data Protection Directive – its definitions and mechanisms date back to 1995. The internet has changed a lot since then and new regulations have long been needed.
The GDPR will apply to all EU member states and any business which is active within them. For many companies both inside and outside of the EU, this will mean a change of strategy in order to continue working within Europe.
Why do we need the GDPR?
In a sentence: because data protection and privacy issues are increasingly becoming a problem.
As internet technology continues to grow, so too do the frequency and impact of data breaches. In 2013, over 575 million records were breached. By the first half of 2017, that number had increased to over 1.9 billion. Over 95% of those breaches involved unencrypted data that was not being suitably protected. How does this affect consumers and organizations? By 2019, the total global annual cost of data breaches is expected to exceed $2.1 trillion in damages.
The GDPR aims to reduce these figures by creating a set of data security standards, which organizations and businesses that operate or have an entity in Europe will need to follow. For some, these increased protections are just “common sense” data security practices that should have been implemented long ago. For others, they are serious concerns their business has yet to fully address. A Deloitte survey found that just 15% of respondents expected to be fully GDPR compliant by the deadline.
Who Will Be Affected by the GDPR?
Your business will be affected by the GDPR if you are storing or processing information on EU citizens, even if your business or processing centers are not located in the EU. As the regulation itself states:
“This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the [European] Union, regardless of whether the processing takes place in the [European] Union or not.”
How Will the GDPR Work?
Current data security regulations already require security for names, addresses, and basic ID numbers (i.e. social security). The GDPR aims to take this and provide similar protection for individual IP addresses, cookie data, and more.
By securing this information in a more stringent manner, data breaches and information theft will hopefully decrease. However, you should note that the GDPR does not just address what type of information is protected; it also addresses how it is protected.
Data the GDPR Will Protect Includes:
Names, addresses, and ID numbers
Location data, IP addresses, cookie data and RFID tags
Racial and ethnicity data
Additional GDPR Roles
There are three main roles which have been defined by the GDPR which will need to be filled. These roles are responsible for implementation and compliance with the GDPR. They include:
A Data Controller – Responsible for deciding on how personal data is processed and why it is processed.
A Data Processor – Responsible for maintaining and processing personal data records, as well as ensuring that processing partners also comply.
A Data Protection Officer – Responsible for overseeing the data security strategy and making sure that you are GDPR compliant.
GDPR Consent
According to the new GDPR guidelines, consent will become a major factor in the storing of personal information. Consent must be explicitly given by those providing personal information, and data controllers must be able to prove this. Furthermore, if an individual would like to withdraw consent, they may do so at any time, whereupon the data must be deleted.
GDPR Pseudonymisation
Pseudonymisation is a process whereby information is transformed so as not to be attributable to a single individual without secondary verification. This means that personal data must be made “unintelligible” without the use of a secondary set of information by which to understand it. This may mean using encryption, or it may mean adopting a tokenization system.
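As an illustration of the tokenization approach, here is a minimal sketch (the names and structure are our own, not from the regulation): identifiers are swapped for random tokens, and the token-to-value mapping is the separate “secondary set of information” needed to re-identify a record.

```python
import secrets

class TokenVault:
    """Minimal pseudonymisation sketch via tokenization.

    The internal mapping is the secondary information without which
    tokenised records are unintelligible.
    """

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value):
        # Reuse the existing token for a value we've already seen
        if value not in self._value_to_token:
            token = secrets.token_hex(16)
            self._value_to_token[value] = token
            self._token_to_value[token] = value
        return self._value_to_token[value]

    def detokenize(self, token):
        return self._token_to_value[token]

vault = TokenVault()
record = {"name": "Jane Doe", "order_total": "49.99"}
pseudonymised = {**record, "name": vault.tokenize(record["name"])}
# `pseudonymised` can now be stored or processed without exposing the name;
# only a holder of the vault can map the token back to the individual.
```

A real system would protect the vault itself (access controls, encryption at rest); the point here is only the separation of data from the re-identification key.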
GDPR Data Portability
Data portability concerns “the right for a data subject to receive the personal data concerning them”. This means that data must be portable and easily transferred to its subject in a ‘commonly used and machine readable format’.
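In practice, satisfying data portability can be as simple as serialising a subject’s records into a standard interchange format. A hypothetical helper (the names here are ours) might look like this:

```python
import csv
import io
import json

def export_subject_data(records, fmt="json"):
    """Return a data subject's records in a commonly used,
    machine-readable format (JSON or CSV)."""
    if fmt == "json":
        return json.dumps(records, indent=2, sort_keys=True)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
        writer.writeheader()
        writer.writerows(records)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")

records = [{"email": "jane@example.com", "name": "Jane Doe"}]
portable = export_subject_data(records)  # JSON the subject can take elsewhere
```

JSON and CSV both qualify as “commonly used and machine readable”; the important part is that the subject receives their data in a form another service can ingest.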
By When Do I Have to Be GDPR Compliant?
GDPR compliance will be required by May 25, 2018.
What Are the GDPR fines?
Fines for those who are not GDPR compliant will vary depending on the severity of non-compliance. At this point in time, examples of GDPR fines have not been released.
However, it has been indicated that fines of up to €20 million, or 4% of worldwide annual revenue for the prior fiscal year (whichever is greater), are likely for those who have not followed the basic principles for processing or the conditions for consent.
For violations of other obligations, such as those of controllers, processors, or monitoring bodies under the GDPR, fines will instead be up to €10 million, or 2% of worldwide annual revenue for the prior fiscal year.
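The two fine tiers are usually read as “whichever is greater”, so the upper bound scales with company size. A small sketch of that reading (an interpretation for illustration, not legal advice):

```python
def max_gdpr_fine_eur(worldwide_annual_revenue_eur, severe=True):
    """Upper bound on a GDPR fine under the two-tier scheme:
    the greater of a fixed cap or a share of worldwide annual revenue.

    severe=True  -> basic principles / consent violations (EUR 20M or 4%)
    severe=False -> other obligations, e.g. of controllers
                    and processors (EUR 10M or 2%)
    """
    cap, share = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
    return max(cap, share * worldwide_annual_revenue_eur)

# A company with EUR 2 billion in revenue faces up to EUR 80 million
# for a severe breach; a small company is bounded by the fixed cap.
print(max_gdpr_fine_eur(2_000_000_000))
print(max_gdpr_fine_eur(100_000_000))
```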
Hostdedi and GDPR
In order to help clients who will be affected by the GDPR, Hostdedi will be GDPR compliant. We are currently working to ensure that our policies and procedures comply with the General Data Protection Regulation (GDPR).
In the coming weeks, we will make sure that you are informed of any changes to Hostdedi’s services. At this point in time, we fully believe that you will be satisfied with those changes.
Note that this guide does not constitute legal advice and is rather an overview of the regulation changes which will take effect. For a full breakdown of the changes taking place, please consult the agreed text from the EUGDPR.org website.