Data Center Risk Factors and Recovery

When something goes wrong in a data center, its disaster recovery plan kicks in. A good disaster recovery plan aims to reduce data center risk as close to zero as possible by implementing a range of redundancies and protections. To do that, it’s important to first walk through the data center risk factors out there.

What’s the biggest risk to a data center? Many facilities imply that fire is the biggest concern and highlight their fire suppression systems. Yet fire isn’t the only risk to a data center.

Data centers prepare a huge number of redundancies and protections – no matter how likely it is they will be needed.

This article will cover the types of risks that data centers typically prepare for, with a detailed look at:

  • External risks: Natural disasters and supplier outages.
  • Facility risks: Infrastructure and risks involving the facility itself.
  • Data system risks: Data management and architecture.

External Risks

External risks are those outside of a data center’s control. They include natural disasters, supplier outages, and human-caused events. 

Natural Disasters

Many disaster recovery plans start by covering natural disasters, largely because their potential for damage is the highest. Luckily, many meteorological threats can be forecast before they become a problem, and knowledgeable staff can be put on standby. This can mitigate a lot of the potential damage.

Large-scale damage and downtime from earthquakes and floods can be mitigated with water penetration protection, a fire suppression system, and power backups. For a more detailed list of the protections in place, reach out to your hosting provider.

What if I Host in a Natural Disaster-Prone Area?

We understand that sometimes hosting in an area with frequent natural disasters is unavoidable. How you choose a data center is influenced by a number of different factors including proximity, convenience, and risk.

Most data center facilities located in such an area incorporate special infrastructure features, including reinforced buildings and stringent design plans. A good example is the Hostdedi Miami facility, which is Category 5 rated and designed to withstand flood damage and winds of up to 185 mph.

We highly recommend asking your facility about the history of natural disasters in their area and how they have affected the data center in the past. This will give you a good idea of what to expect and prepare for in the future.

[Map: natural disaster risk and frequency across the US]

As a rough guideline, the above map provides an overview of natural disaster frequency in the US. You can use this to identify susceptible areas.

Supplier Outages

Supplier outages occur when suppliers of power, connectivity, or another important deliverable are unable to deliver. They are unavoidable, but a suitably prepared data center can mitigate their impact.

For example, downtime from a loss of connectivity or a downed power line can be prevented by maintaining multiple redundancies: additional power generators, multiple network connections, and enough onsite fuel to last for several days.

It is important to have a backup pool of suppliers in the event one fails.

Facility Risks

There are seven main areas where you don’t want anything to go wrong in a data center facility: power, water, climate, structure, fire, communication, and security. These should all be incorporated in a disaster risk assessment.

Take a look below for a better idea of how and why each of these factors is important.

  • Power: Disasters will likely cause a power outage, and no power means no working data center. Multiple power sources mean a data center (and so your website) will stay online through the worst.
  • Water: Data centers are allergic to water. Even the smallest amount can cause a lot of damage, so water penetration protection helps prevent the destruction of mission-critical infrastructure. Conversely, losing the water supply for cooling or fire suppression systems is also a risk, which is why multiple, secure water sources are needed.
  • Climate: A data center requires a precise climate: not too hot, not too cold, and without too much humidity in the air. A high-quality, adaptable climate control system adds to reliability.
  • Structure: The data center’s building itself. If poorly constructed, risk and exposure to the elements increase.
  • Fire: Fire damages pretty much everything it comes into contact with (apart from a good steak). Keeping it away from a data center is a top priority, and every facility you host in should come with a fire suppression system.
  • Communication: A line to the outside is a big advantage for a data center in the middle of an emergency. Not only does it let you contact your provider, it also allows them to contact backup suppliers.
  • Security: Security procedures should exist for use during a disaster to prevent unauthorized access to any part of the facility.

 

Data System Risks

Data system risks are those involving shared infrastructure and data architecture. It is vital to identify every single point of failure in the system’s architecture and determine how those failures can be avoided.

Look at how the data center protects against contamination between servers and how effective it is at blocking attacks. Understanding how vulnerable a data center is also means understanding how frequently it is targeted. Hostdedi facilities block over 3 million attacks per day.

Other areas to ask your hosting provider about include:

Data Communications Network

Ask specifically about the network’s architecture and what security procedures have been put in place.

Shared Servers

How do they interact with each other? How shielded is one account from others held on the same server? This is especially important with cloud technology and virtualized resources.

Data Backup

In the case that something bad does happen, what can be done to make sure your website doesn’t disappear? How often do backups take place? How long does a restore take? What is the procedure for backup restoration?

Software Applications and Bugs

Unless your data center also creates the applications you’re going to run on your server, they don’t have a lot of control over this. However, they can tell you best practices, provide bug fixes, and generally stay up to date with how the application is being handled by other professionals.


Posted in:
General


5 Steps to a Successful Website Migration

Website migrations can be scary, but they don’t have to be. Here are 5 steps for making your moving experience as seamless as possible, starting from knowing what you need to back up and finishing with full DNS propagation and your new hosting solution going live.

It’s not every day you decide to change hosting providers or upgrade your solution. If you’re with a high-quality provider and haven’t had any problems, you may only ever do this a handful of times as your site grows. When you do decide to go through with a migration, you will likely go through the five stages below.

  1. Backing up your website
  2. Moving your website’s data
  3. Testing the new website
  4. Migrating your DNS
  5. Enjoying your new hosting environment

We believe in seamless website migrations for everyone, which is why we’ve put together 5 steps for making sure your site migration is as easy and relaxed as possible.

You may be moving somewhere new because you were unhappy with your old provider, but don’t rush. Canceling your old hosting provider before completing a migration can mean days or weeks of downtime, depending on how complex your migration is and whether you encounter any issues.

Unless your old hosting provider engages in daily backups and maintains them after you leave, you could lose your entire site. Even if you do have a backup, your SEO value can plummet, and a whole host (pun intended) of other problems can occur.

A good migration should mean consistent site traffic, not a sudden drop or decline.

[Chart: steady traffic after a good migration]

[Chart: traffic drop after a bad migration]

That’s why we always suggest making sure to…

One of the first things you should do during a migration is to create a local backup of your website. Despite everyone’s best intentions, technology doesn’t always go to plan and a small database corruption can cause issues.

If you haven’t canceled with your previous provider, they may still have backups located on a third-party server. Hostdedi offers daily hosting backups and archives them for 30 days. In most cases, you can use these backups to restore your site. However, it’s always a good idea to make sure you have a local one as well.

If you’re coming from a hosting provider with a cPanel interface, you can head to the ‘Backup’ page in your control panel. Here you’ll be able to download a copy of your “public_html” directory (which contains most of your site information). You can also grab a backup of your MySQL database.

Hostdedi provides full backups through our control panel. Click on Backups -> Backup Now, and then click continue. You can also select to only perform a partial backup if you prefer.


Most hosting providers will have an easy to access backup feature available. If you can’t find one, get in touch with their support team.
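
If you prefer to make that local copy from the command line, here’s a minimal Python sketch of one way to do it. It assumes a Unix-like host where tar and mysqldump are available, and the web root path, database user, and database name below are placeholders you would replace with your own values; it’s an illustration, not a Hostdedi-specific procedure.

import subprocess
from datetime import date

stamp = date.today().isoformat()

# Archive the web root (for cPanel hosts this is usually public_html).
subprocess.run(
    ["tar", "-czf", f"site-files-{stamp}.tar.gz", "/home/youruser/public_html"],
    check=True,
)

# Dump the site's MySQL database; mysqldump will prompt for the password.
with open(f"site-db-{stamp}.sql", "w") as dump_file:
    subprocess.run(
        ["mysqldump", "-u", "db_user", "-p", "db_name"],
        stdout=dump_file,
        check=True,
    )

print(f"Backup written: site-files-{stamp}.tar.gz and site-db-{stamp}.sql")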

“No, I don’t need to check. It’s ready, let’s go live,” is something every migration expert dreads hearing.

Going live without testing a site after a migration is like playing a game of Risk without knowing what pieces you have in play. While there’s a chance everything will work out well, there’s also a chance something will go wrong and you’ll end up with nowhere to go but back to the start.

A short checklist of what to test includes:

There may also be things you should check specific to your site. If you’re an eCommerce store, for instance, you may want to test the checkout process.

To find out which nameservers your domain is using, head to your domain registration control panel and then “Domain Name Servers”. From here you’ll be able to see what your nameservers actually are.

Find Out Your Nameservers

If you’re interested in checking this on your own machine, open up a terminal and enter:

dig +short NS yoursite.com | sort

If you’re using the Hostdedi DNS service and have successfully repointed your domain, you should see at least one of the nameservers below:

ns1.nexcess.net

ns2.nexcess.net

ns3.nexcess.net

ns4.nexcess.net

ns5.nexcess.net

ns6.nexcess.net

ns7.nexcess.net

ns8.nexcess.net

If you don’t, don’t panic. It may be that you’re with an alternate DNS provider. It can also help to know how far along the path to full website migration you are (if you’re not the one in charge).

Remember that DNS record changes can take 12 to 24 hours, so don’t be surprised if this information doesn’t change immediately after you’ve altered your DNS. Just like with our first point, don’t cancel your old service before your new one is good to go.

Once you’ve changed your DNS, you’re going to want to let it complete propagation. You shouldn’t experience any downtime during this period, but you will want to make sure that you don’t make any changes to your site.

There’s nothing worse than posting new content during the propagation cycle and finding you’ve lost it the next day.

If you’re interested in checking the status of your DNS propagation, try the Hostdedi DNS checker to see how far it’s gotten.
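
If you’d rather script the check than use a web-based tool, the sketch below asks a few well-known public resolvers which nameservers they currently return for your domain; once they all report your new nameservers, propagation is effectively complete. It assumes the third-party dnspython package is installed (pip install dnspython), and yoursite.com is a placeholder.

import dns.resolver

domain = "yoursite.com"  # placeholder: use your own domain

# Well-known public DNS resolvers to sample propagation from.
public_resolvers = {
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
    "Quad9": "9.9.9.9",
}

for name, ip in public_resolvers.items():
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [ip]  # query only this resolver
    answer = resolver.resolve(domain, "NS")
    nameservers = sorted(record.to_text() for record in answer)
    print(f"{name}: {nameservers}")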

Making Migration Easy

Remember, Hostdedi offers free migration assistance on all of our solutions, meaning that making the switch from one provider to another couldn’t be easier. We make migrations easy and seamless.

Posted in:
General


Mission Critical Environments

This week’s 30-minute session was with Doug, the Hostdedi data center facilities manager, covering everything you need to know about mission critical environments. He began by saying that maintaining reliability and security for mission critical environments is… mission critical. He then took marker to wall to expand on that.

What Are Mission Critical Environments?

Mission critical environments are hosting environments integral to the consistent and reliable running of a data center. This primarily includes servers, but data centers need to maintain other elements too.

  • Infrastructure (buildings)
  • Redundancies (backup generators, etc)
  • Tools (disaster recovery, maintenance)
  • Other unknowns that may be a danger to reliability and uptime.

Factors Important to Mission Critical Environments

For mission critical environments to remain stable, professionals have to ensure the stability and security of onsite equipment. A few of the factors that are most important for doing this are included below.

Disaster Recovery

In the event of a disaster, your data center should have a disaster recovery plan ready. A good disaster recovery plan will minimize downtime and ensure your site is back online as soon as possible after a disaster event. This can include, but isn’t limited to:

  • Backup generators
  • Infrastructure features
  • Tools for solving problems
  • Trained onsite staff

Preventative Maintenance

Prevention is the best cure, and nowhere is that more evident than with data centers. Waiting for something to fail, whether it’s a server, power supply, or something else, is a recipe for reduced uptime and low-quality hosting.

Preventative maintenance means monitoring hardware and infrastructure to ensure they remain operating at full capacity, with failing elements replaced before they become a problem.

Risk Management

Risk management takes place everywhere, but nowhere is it more critical than in a data center facility. As indicated above, risk is something to be avoided, and finding a solution before a risk becomes a problem is a top priority.

Redundancy

Redundancy includes backups used if primary sources of power, connectivity, or something else go offline. For data center facilities trying to maximize uptime, redundancies are crucial. In many cases, data centers do not have control over when something goes wrong. Redundancies can help to mitigate any issues that arise.


Final Thoughts

Keeping mission critical environments secure and reliable is one of the most important tasks in a data center and involves looking at what might go wrong and finding the best way to prevent it. Thanks to Doug for showing us some of the ways in which that is done.

Want to know more about how we maintain mission critical environments? Contact our sales team.

Posted in:
General


Introducing Hostdedi Global DNS

We are excited to announce Hostdedi Global DNS, a globally distributed name service that puts DNS closer to your website visitors.

What is DNS?

The domain name service (DNS) is the phonebook of the Internet. Whenever you load a website, open a mobile app, or click on a cat GIF, your device usually searches for a web address using DNS.
 
The Internet is made up of connected devices with Internet Protocol (IP) addresses. The domain name service sits on top of the Internet and allows convenient, easy-to-remember names, such as nexcess.net, to be translated to hard-to-remember IP addresses such as 208.69.120.21. The problem is made worse by the Internet’s next generation of addresses, known as IPv6, with long-string addresses such as 2607:f7c0:1:af00:d045:7800:0:1b.

Hostdedi DNS, Today

When you host your DNS with Hostdedi, as about half our customers currently do, DNS requests from your website visitors are answered from servers located in the US. Even if we host your services in London, Australia, or other international locations, our DNS services are still located in the US.
 
We go to great lengths to put our DNS servers on third-party networks, which isolates them from potential failures. We also host eight name servers in total, which is double the number typically found among web service providers. At the end of the day, it’s still a US-based DNS infrastructure.
 
To be clear, concentrating DNS servers in a particular location is a common setup. Due to the nature of DNS, when a user visits your website, their browser or device caches the results and doesn’t need to check DNS again for an extended period of time.
 
For new visitors from international locations, this can cause something known as first-visit page load delay. These geographically distant users may experience as much as a half-second delay. This may sound trivial, but visitors are quick to notice sluggish load times and tend to avoid sites that suffer from them.
 
Administrators and developers work tirelessly to shave even fractions of seconds from page load time. A research paper by Google last year found that when delays drift beyond 3 seconds, visitors quickly lose interest and start abandoning sites.

All things being equal – faster is better.
 

Hostdedi Global DNS

We’ve been hard at work the last couple of months deploying a footprint of 15 DNS servers distributed around the world. These servers are strategically positioned so that they provide a local DNS server option for visitors to your site, and significantly reduce first-visit load times.
 
Hostdedi Global DNS uses a technology called Anycast routing, which allows us to broadcast the IP addresses of our DNS server from multiple global locations at the same time. When a visitor loads your website, this technology allows their Internet service provider (ISP) to route the visitor’s DNS requests to the Hostdedi DNS server closest to that visitor.
 
When we stood up the proof-of-concept and looked at the latency differences of Global DNS against our existing DNS, it floored us! The results were significantly better than we expected in reducing DNS first-visit latency. This was some two months ago and it validated our all-in commitment to launching a Global DNS platform.
 
Following is a real-world example of Global DNS in action. Using a tool provided by KeyCDN.com, we tested latency (round trip time) from 16 global locations, then compared Classic DNS and Global DNS.

Hostdedi Global DNS, Going Live!

If you’re a Hostdedi customer, you will enjoy the benefits of our Global DNS for no additional cost, and no action is required.
 
We will begin transitioning Hostdedi DNS to the Global DNS system on Thursday, August 30th. The first maintenance will migrate ns7.nexcess.net and ns8.nexcess.net, with other name servers to follow in the coming weeks. Our goal is to have Global DNS operational for all nexcess.net name servers by the end of September.
 
There will be no downtime as a result of this maintenance. The existing Hostdedi DNS servers will continue to operate and respond to DNS queries until we confirm all traffic has moved away from them.
 
For instructions on pointing your domain to Hostdedi Global DNS, please see our how-to guide for details.

Where are Hostdedi Global DNS servers located?

  • Amsterdam
  • Atlanta
  • Chicago
  • Dallas
  • Frankfurt
  • London
  • Los Angeles
  • Miami
  • New York
  • Paris
  • San Francisco
  • Seattle
  • Singapore
  • Sydney
  • Tokyo

 

Will other Hostdedi Global DNS locations be added?

Yes! We are currently looking at adding Bangalore, Hong Kong, Johannesburg, Sao Paulo, and Toronto. These locations will help close important gaps and continue to improve the experience for your website visitors.

Posted in:
General, Hostdedi


Everything You Need To Know

What to Know about DNS Records

How does a browser load a web page? It uses a phonebook. Not an old-fashioned leatherbound book or a switchboard operator, but a service known as DNS. Each page of that DNS “phonebook” is what’s known as a DNS record.

In other words, when you look for nexcess.net, your computer looks in the DNS “phonebook”, finds the number for the site, and connects you to it. Of course, the whole process is much quicker than this.

This article looks at what DNS records are, the different types you’ll find, and why they’re incredibly important for the success of any website.

Don’t forget, for those using Hostdedi hosting services, it’s possible to use Hostdedi DNS for free. We manage all the hard work once the service is in place; you just have to point your domain name to the Hostdedi nameservers.

It was 1983. The internet was young, and IT professionals had begun to get fed up with having to remember long series of numbers in order to connect with other machines. Networks had spread beyond just a few units, and in an effort to future-proof, longer series of numbers were proposed. There was just one problem: how to make these numbers more consumer friendly?

Paul Mockapetris published two papers on the subject, creatively named RFC 882 and RFC 883. Mockapetris’ system expanded prior use of a hosts.txt file into a large system capable of managing multiple domains in a single location. That system is known as DNS, or Domain Name System.

Without DNS, the Internet wouldn’t be what it is today. We may even need a Rolodex to visit our favorite sites!

With DNS, computers still require an IP (Internet Protocol) address in order to connect with a server. Yet with 4,294,967,296 possible IPv4 addresses, it makes a lot more sense to convert those numbers into something more easily recognizable.

DNS gives unique names to the IP addresses of computers, services, and other resources that are either part of a private network or part of the Internet.

 

 

[Diagram: the Hostdedi DNS network setup, which has 100% uptime with multiple redundancies in place]

The domain name system saves users from having to remember a long series of numbers. Users are able to type in a domain name, and the domain name system automatically matches that name with an IP address and routes the connection.

At the center of all this, the role of the hosts.txt file lives on in the form of vast servers for managing domain names, and at the heart of these servers are DNS records.

IP addresses work in a similar fashion to street addresses or phone numbers in an address book. When people browse the Internet, they look up their favorite site much like they look up a friend’s number. From there, the system provides them with the friend’s number and they can make contact. With DNS, the second part of this sequence is automated. This requires DNS records from a DNS server.

During the creation of DNS, servers were set up solely for the purpose of managing DNS and related information. Within each of these servers are DNS records that tie entries to a domain.

Any device connected to a computer network, whether it is a PC, router, printer, or any other device with an IP address, is referred to as a ‘host’. With the sheer number of hosts around the world, engineers needed a way to track devices without resorting to memorizing numbers.

As explained earlier, DNS records came along with DNS as a tool for system admins and users to seek out authoritative information on websites or other services they’re trying to access.

There are two types of DNS Records. These are:

  • Records stored in Domain Name System servers
  • Records stored on a user’s machine

Records stored on a Domain Name System server are covered in more detail below, including what types of records exist and how they function.

Records stored on a user’s machine are also known as the DNS cache. This cache lists the sites a user has previously visited or attempted to visit.

When you watch a crime drama and a culprit’s computer is taken to be analyzed for the sites they have visited, a DNS cache is usually what would be checked for unauthorized activity.

However, a DNS cache is usually temporary and has a limited lifespan before being removed.

DNS Record Types Explained

While there is an abundance of record types in existence, below you’ll find nine of the most commonly used DNS records. For more information, don’t forget to check our DNS records knowledge base, as well as how to configure DNS records for your site.

A – A records are usually referred to as address records, and occasionally host records. They are the most commonly used record type, mapping hostnames of network devices to IPv4 addresses. In effect, a website address book.

AAAA – Serves the same purpose as A records, except that hostnames are mapped to an IPv6 address instead of an IPv4 address. As opposed to the 32 bits of an IPv4 address, an IPv6 address contains 128 bits. An example of an IPv6 address is FE80:0000:0000:0000:0202:B3FF:FE1E:8329.

CNAME – Acts as an alias for domains. The CNAME record is tied to the actual domain name. If the address nexcess.net were typed into your internet browser, it would resolve to the URL www.nexcess.net.

MX – MX records map a domain name to its message transfer agents. A mail server is responsible for managing the reception of emails, and preference values are assigned to each record. In the case of large organizations, multiple email servers are used to process messages en masse. Through the use of SMTP (Simple Mail Transfer Protocol), emails are routed properly to their intended hosts.

NS – Also known as name server records; designate the authoritative name servers for a given domain.

PX – The technical description based on RFC 2163 details the PX DNS record as a ‘pointer to X.400/RFC822 mapping information’. Currently, it is not used by any application.

PTR – Referred to as reverse-lookup pointer records. PTR records are used to search names of domains based on IP addresses.

TXT – A type of DNS record that stores text-based information. It’s primarily used to verify the ownership of a domain as well as to hold SPF (Sender Policy Framework) data, which helps prevent the delivery of fake emails that give the appearance of originating from a user.

SOA – Possibly the most critical one of them all, the Start of Authority record notes when the domain was last updated, among other zone-level details.
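
As a rough illustration of what these record types look like in practice, here is a small Python sketch that queries each of them for a domain. It assumes the third-party dnspython package (pip install dnspython), and nexcess.net is used purely as an example domain.

import dns.resolver

domain = "nexcess.net"  # example domain

for record_type in ("A", "AAAA", "CNAME", "MX", "NS", "TXT", "SOA"):
    try:
        answer = dns.resolver.resolve(domain, record_type)
        for record in answer:
            print(f"{record_type}: {record.to_text()}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        # No record of this type is published (or the name does not exist).
        print(f"{record_type}: none published")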

The general purpose of a DNS lookup is to pull information from a DNS server. This is akin to someone looking up a number in a phone book (hence the term ‘lookup’ in conjunction with DNS).

Computers, mobile phones, and servers that are part of a network need to be configured to know how to translate domain names and email addresses into discernable information. A DNS lookup exists solely for this purpose.

There are primarily two types of DNS lookups: forward DNS lookups and reverse DNS lookups.

Forward and Reverse DNS

Forward DNS Lookups

Forward DNS allows networked devices to translate an email address or domain name into the address of the device that will handle the communication. Despite being invisible to the user, forward DNS lookups are an integral function of IP networks, in particular the Internet.

Reverse DNS Lookups

Reverse DNS (rDNS) pulls domain name information from an IP address. It is also known as inverse DNS. Reverse DNS lookups are used to filter undesirable traffic such as spam. Spam can be sent from any domain name a spammer desires, and spammers use this technique to fool regular customers into thinking that they’re dealing with legitimate entities, such as Bank of America or PayPal.

Email servers that receive emails can validate them by checking IPs with reverse DNS requests. The reverse DNS result should match the domain of the sender’s email address if the email is legitimate. While this is useful in verifying the integrity of emails, it does not come without a cost: an ISP has to set the records up if the legitimate mail servers themselves do not have the appropriate records on hand to respond properly.
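
As a quick sketch of both directions using only the Python standard library, the example below performs a forward lookup and then a reverse lookup on the resulting address; nexcess.net is just an example name, and the reverse lookup only succeeds if a PTR record is published for the IP.

import socket

# Forward lookup: hostname -> IPv4 address (uses an A record).
ip = socket.gethostbyname("nexcess.net")
print("Forward:", ip)

# Reverse lookup: IP address -> hostname (uses a PTR record).
try:
    hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
    print("Reverse:", hostname)
except socket.herror:
    print("No PTR record published for", ip)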



What Are Your DNS Records?

You can check your own DNS records with the Hostdedi DNS Checker. Simply enter the site address you want to check and the type of record you want to see.

You can also use this tool to check third-party DNS records and confirm the identity of certain domains to make sure they are not fake.


Ultimately, DNS makes life easier for the end user, who can’t memorize 32-bit or 128-bit IP addresses. It’s easier to just type a name into the browser bar and let DNS figure out the rest. DNS resource records are fundamental to how DNS works, and the Internet wouldn’t be what it is today without them.

If you’re looking for more information on site performance and benchmarking, don’t forget to check our article on TTFB (Time To First Byte) and why it may not be as important as you’ve been led to believe. Also, check out our summary of data center tiers and use the stats to figure out which data center tier you’re hosting with.

Hostdedi DNS Solutions

Posted in:
General, Web Hosting Basics


Why Time To First Byte (TTFB) Isn’t as Important as You Think

Time To First Byte (TTFB) is the time it takes for a web server to respond to a request. It’s a metric reported by several page speed testers, and is often quoted as a primary means of measuring how fast a site is. The idea is that the faster a web server responds, the quicker a site will load.

However, numerous groups have found that TTFB isn’t that important. When looked at in isolation, the figure provides an appealing way to grade your site or hosting provider, but when looked at in conjunction with other metrics, there seems to be a disconnect. This is especially true with regards to SEO rankings and improved user experience.

Here, we’re going to look at why TTFB can be easily manipulated, what metrics actually matter, and how knowing these things can help you to improve your site’s SEO, user experience, and more.

TTFB measures the time between a user making an HTTP request and the first byte of the page being received by the user’s browser.

[Diagram: the basic model of how TTFB works]

The model is simple. The faster a web server responds to a user request, the faster the site will load. Unfortunately, things get a little more complicated.

When testing site speed, you’ll sometimes find TTFB durations far longer than you would expect, despite actual page load times seeming much faster. This is the first indication that something is wrong with how TTFB measures speed.

A deeper look shows that this is because TTFB actually measures the time it takes for the first HTTP response to be received, not the time it takes for the page itself to be sent.

[Waterfall chart: a test of Time To First Byte and page load times]

In the Time To First Byte test above, TTFB is measured at 0.417 seconds, which seems very quick. However, looking at the waterfall, we can see that this figure only correlates with the HTML loading time. Afterward, the other assets on the page take much longer to load, and DOM content isn’t loaded until around 1.6 seconds.

This is because the TTFB value is incredibly easy to manipulate. HTTP response headers can be generated and sent incredibly quickly, but they have absolutely no bearing on how fast a user will be able to see or interact with a page. For all practical purposes, they are invisible.

By sending HTTP response headers early to speed up TTFB, it’s easy to create a ‘false’ view of a site’s speed. A fast TTFB also doesn’t necessarily mean that the rest of the waterfall will load quickly.
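
To see the difference for yourself, a rough sketch like the one below separates the time to the first response (roughly TTFB) from the time to download the full HTML body. It uses the third-party requests package (pip install requests); the URL is a placeholder, and this measures only the HTML document, not images, scripts, or rendering.

import time
import requests

url = "https://www.example.com/"  # placeholder URL

start = time.monotonic()
# stream=True defers downloading the body, so response.elapsed roughly
# reflects the time until the response headers arrived (close to TTFB).
response = requests.get(url, stream=True)
ttfb = response.elapsed.total_seconds()

_ = response.content  # force the full body download
total = time.monotonic() - start

print(f"Approximate TTFB:   {ttfb:.3f}s")
print(f"Full HTML download: {total:.3f}s")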

A good example of how Time To First Byte testing can be misleading is looking at the page load times of NGINX in conjunction with compression.

Compressed pages are smaller, so they download from a server faster than uncompressed pages. This ultimately means that page load times to interactivity are much faster. However, from the perspective of TTFB, this improvement doesn’t show up.

[Chart: Time To First Byte with NGINX compared with actual page loading times]

This is because HTTP headers can be generated and sent relatively quickly, before the main page content, regardless of whether that content is compressed.

This is especially significant for those who use the Hostdedi Cloud Accelerator, which relies on NGINX to accelerate caching on optimized Hostdedi platforms.

Continue reading to find out what metrics you should be using to check page load times.

In a 2013 study by Moz, it was found that Time To First Byte does have a significant correlation with SEO rankings: the faster the TTFB, the higher pages tended to rank.

This being said (and as Moz themselves make clear), correlation and causation are not the same thing. The actual methods Google (and other search engines) use to crawl web pages and build out SERPs are not known to the public.

Many have concluded that page load times to interactivity are actually a lot more important. When looking at page speed tests, it’s important to look at all the figures available as a whole and not just TTFB.

So, with regards to TTFB tests, SEO, and user experience:

Google Does Not Measure Page Speed for SEO (Entirely)

Ok, it sounds like we’ve gone back on what we just said, but bear with us.

Google doesn’t treat page speed as incredibly important; it measures user behavior. They have said in the past that if users are willing to wait for content to load, they will not downgrade a website for being slow.

This is because Google weighs usability and experience as more important than speed. Back in 2010, Matt Cutts said that including site speed as a ranking factor “affects outliers […] If you’re the best resource, you’ll probably still come up.” It just happens to be that the less time a user has to wait for a page, the more likely they are to stay on the page.

So when it comes to using speed testing services such as PageSpeed Insights, make sure to consider your page load times from a practical perspective as well. How do you feel about the time it takes for your page to load when you type it in your browser? Do you think the content quality is worth the wait?

[Screenshot: PageSpeed Insights provides actionable speed intel for SEO]

Simple checks like this are easy and can provide you with a lot of insight into what your users will think.

Practical Page Load Times Matter – Not TTFB

A faster Time To First Byte does not mean a faster website.

TTFB is not a practical measurement. It doesn’t really affect the user experience. The time it takes for a browser to communicate back and forth with a server doesn’t affect a user’s experience of that server’s content as much as the time it takes for them to actually interact with it.

Instead, measurements that test time to interactivity are inherently more important. Improvements here don’t always match the results of web page speed tests or scores.

So, the main takeaway here? High-quality content and a great user experience are still two of the most significant factors involved in SEO; site speed can influence them, but it is far from the most important factor.

Mobile Speed is Now a Ranking Factor

As of July 2018, mobile page load speed has become a factor in SEO ranking. TTFB can be included in this.

However, again, TTFB and page load times aren’t as important as high-quality content and usability. The user experience on mobile devices has long been a key area Google and other search engines have tried targeting and improving. Load times are just a small part of this.

Responsive design and easily readable and scalable text and images are much more important.

Google highly recommends its PageSpeed Insights tool for properly seeing how your page speed may affect SEO ranking.

Slow and Steady Wins the Race

Ok, all this doesn’t mean that you should let your site crawl to a halt. This isn’t a childhood fable or a call to give up fast internet. Fast internet is one of the wonders of the modern age, and you still want your site to load as quickly as possible.

What we’re saying is that if you’re trying to find how to improve Time To First Byte, stop.

It’s far more important for you to start looking at page load times in their entirety and not just the time it takes for a server to respond. At Hostdedi, we’re proud of how fast our data centers serve content, and we work our hardest to make sure that our servers are optimized for providing a great user experience and helping to boost your SEO as much as a hosting company can.

We highly recommend checking out the Hostdedi Cloud and seeing how Hostdedi can help.

Faster Cloud Hosting

Posted in:
General


Data Center Tiers Explained

In the world of data centers, reliability is one of the most important factors. The more reliable you are, the more likely clients are going to want to use you. After all, who wants a data center that isn’t online?

Luckily, the Telecommunications Industry Association (TIA) published a standard for data centers defining four levels of data centers with regard to their reliability. The aim was that this standard would inform potential data center users about which center is best for them. While brief, the standard laid the groundwork for how some data centers would manage to pull ahead of others in the future.

But the TIA’s standard wasn’t enough. Several years later, the Uptime Institute introduced what is now known as the ‘Tier Standard’. The Tier Standard describes four different data center tiers based on the availability of data processing as a result of the hardware at a location.

This article breaks data centers down into the four tiers and looks at how they differ. Combine this with our article on how to choose a data center location, and you’ll know the best place to host your website.

TL;DR:

Check out our Infographic below to quickly see the main differences between data center tiers, or keep reading for more detail.

[Infographic: the main differences between data center tiers]

The Classification of Data Centers

Data centers are facilities used to house computer systems and associated components. A data center is comprised of redundant power supplies, data communications connections, environmental controls, and various security devices.

Tier one data centers have the lowest uptime, and tier four have the highest. The requirements of a data center are progressive in that tier four data centers incorporate the data center requirements of the first three tiers in addition to other conditions that classify it as a tier four data center.

The requirements of a data center refer to the equipment needed to create a suitable environment. This includes reliable infrastructure necessary for IT operations, which increases security and reduces the chances of security breaches.

What to Consider When Choosing a Data Center

When choosing a data center to store data for your business, it is important to have a data center checklist. This is a list of the most important things you should keep in mind – such as the physical security of a prospective data center –  when making your choice.

Typically, a good data center checklist would include the various data center pricing policies and extra amenities provided. An excellent, straightforward pricing policy, for instance, should have no hidden charges, and a data center with additional facilities is better than one without.

Data Center Specifications

Data center specifications refer to information about the setup in a data center. This can include the maximum uptime, redundant power systems that allow the platform to stay up regardless of power outages, the qualification of technical staff at the data center, and more.

Higher data center tiers commonly have better-qualified staff, since more expertise is required to maintain the whole platform. Data center specifications should be on the checklist of any customer evaluating prospective data centers to store their data.

What Is a Tier One Data Center?

This is the lowest tier in the Tier Standard. A data center in this tier is simple in that it has only a single, non-redundant set of servers, network links, and other components.

Redundancy and backups in this tier are minimal or non-existent. That includes power and storage redundancies.

As such, the specifications for a data center in this tier are not awe-inspiring. If a power outage were to occur, the system would go offline, since there are no failsafes to kick in and save the day.

The specifications of a tier one data center allow for uptime of approximately 99.671%. The lack of backup mechanisms makes this data center tier seem like a risk for many businesses, but it can work for small internet-based companies with no real-time customer support. However, for companies that rely heavily on their data, a tier one data center would not be practical.

One of the advantages of tier one data centers is that they provide the cheapest service offering for companies on a budget.

However, a lack of redundancy means that server uptime is considerably lower than in tiers two, three, and four, and maintenance requires shutting down the entire facility, resulting in more downtime.

What is a Tier Two Data Center?

This is the next level up after tier one. Tier two features more infrastructure and measures to ensure less susceptibility to unexpected downtime. The requirements of a data center for this tier include all those of the first tier, but with some redundancy.

For instance, they typically have a single path for power and cooling. However, they also have a backup generator and a backup cooling system to keep the data center environment optimal.

The specifications of a second-tier data center allow for higher uptime than tier one data centers: approximately 99.741%.

What is a Tier Three Data Center?

The requirements for tier three data centers include all those of the lower tiers, but with a more sophisticated infrastructure to allow for redundancy and backups in case of unexpected events that may cause downtime.

All server equipment has multiple power sources and cooling distribution paths. In case of failure of any of the distribution paths, another takes over ensuring the system stays online. Tier three data centers must have multiple uplinks and must be dual powered.

These specifications limit downtime to a maximum of roughly 1.6 hours annually (99.982% uptime). Some of the equipment in tier three systems is fully fault-tolerant.

Some procedures are put in place to ensure maintenance can be done without any downtime. Tier three data centers are the most cost-effective solution for the majority of businesses.

What is a Tier Four Data Center?

Tier 4 is the highest level when it comes to data center tiers, with an availability of 99.995%. A tier 4 data center is more sophisticated regarding its infrastructure, as it has the full capacity, support, and procedures in place to ensure maximum and optimum uptime levels.

A tier 4 data center fully meets all the specifications of the other three tiers. It is fault tolerant, as it can operate normally even when a piece of infrastructure equipment fails.

A tier 4 data center is fully redundant, with multiple cooling systems, sources of power, and generators to back it up. It has an uptime level of 99.995%, with an estimated downtime of only about 26 minutes annually.

These are the four data center tiers and a summary of the data center requirements used in their design. Anyone building a data center checklist can find the essential elements to look for in these specifications and requirements.
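
To put those percentages in perspective, a quick back-of-the-envelope calculation converts an availability figure into expected downtime per year. The tier availability values below are the commonly cited Uptime Institute figures, used here only for illustration.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def annual_downtime_minutes(availability_percent: float) -> float:
    """Expected minutes of downtime per year for a given availability."""
    return (1 - availability_percent / 100) * MINUTES_PER_YEAR

for tier, availability in [("Tier 1", 99.671), ("Tier 2", 99.741),
                           ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    minutes = annual_downtime_minutes(availability)
    print(f"{tier} ({availability}%): ~{minutes / 60:.1f} hours (~{minutes:.0f} minutes) per year")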

Hostdedi Is a Tier 4 Data Center

Between having an uptime of 99.9975%, multiple redundancies, and an annual downtime of less than 18 minutes, the Hostdedi data center is regarded as a tier 4 data center. If you would like to know more about the Hostdedi data center, don’t hesitate to check out the different data centers offered by Hostdedi around the world or take a more detailed look at our Southfield, Michigan data center (in an easy to read infographic).

Host in a Tier 4 Data Center

Posted in:
General, Hostdedi, Web Hosting Basics


3 Key Takeaways from IRCE 2018


We’ve just returned from IRCE 2018. Between the marketplace and the sessions, there was a lot happening. eCommerce and marketing professionals from around the world were in attendance, and everyone seemed to have something to bring to the table.

However, throughout the show, we found that three things seemed to be present in almost all of the conversations going on.

Here are what we think were the three main takeaways from IRCE this year.

 

With huge marketplaces such as Amazon dominating, speakers such as Seth Godin stated that “you will lose on price” if you try to compete there.

Instead, small companies should start to look at fringe groups that are likely to grow with time. Effectively building a business is about making change happen. It’s about taking something and increasing its value in the public consciousness.

This led Godin to prompt everyone to ask themselves two questions about their brand:

  • Who’s it for?
  • What’s it for?

Throughout IRCE, this theme sprang up time and time again.

The speech Institutionalize Innovation by Roe Macfarlane talked about how market segmentation required specific actions based on age, including the type of leader different groups are more inclined to follow.

Counter the Amazon Effect also talked about how it was important to innovate and inspire change in order to compete with the eCommerce giants of today. How did many people suggest this change and niche focus should come about? Personalization.


Godin’s second standout statement during his keynote was also repeated by speakers throughout IRCE 2018. The importance is not in marketing to a mainstream audience, but in appealing to those who are already a friend to your brand. These connections should be nurtured in a way that creates a “tribe” that follows one thing: you.

This tribe should be nurtured through personalization techniques.

Personalization 2.0: Making the Move to Individualization by Brendan Witcher talked about the ultimate destination of personalization techniques: individualization, not segmentation. He also went over how to make use of big data to do this (without becoming ‘creepy’).

We also saw David Blades of Jenson USA talk about the importance of user generated content in boosting sales. The community wants the brand to be about them, and what better way to make it about them than by having them generate the content.

Magento and Machine Learning

With the first Magento Straight Talk during IRCE came conversations about machine learning and its place in eCommerce. For many businesses, the idea of machine learning has become something that is spoken about a lot but hasn’t shown enough value to be applied independently.

Anita Andrew’s talk inspired a different perspective, with stats on how effective machine learning has been for some huge brands. Target saw a 30% growth in revenue after applying machine learning techniques. Amazon saw a 55% increase in sales from personal recommendations, and USAA saw a 76% improvement in customer support contact and product offering fit.

Yet Anita did mention the issue with what she termed ‘dirty data’. Throughout the big data sessions, dirty data became a central point of interest. How do you take outliers and unpredictable variables and apply them to machine learning algorithms? Many of the IRCE speakers gave their own perspectives and approaches to cleaning data for different purposes. Anita talked about cleaning data in order to boost product offerings. In Personalization 2.0, the focus was on how to clean data to truly individualize your brand. In the merchandising track, Carter Perez talked about how machine learning could be used to improve product discovery.

Regardless of where you heard it, the message was clear: machine learning is the future and it’s here now.

Outside of the sessions, the marketplace was abuzz with activity. Many of those exhibiting at the show had something to offer that linked into the topics mentioned above.

Hostdedi met with several old, new, and future clients during the show and had a great time with all of them. We also went to go see the Cubs vs. Phillies game in Wrigleyville, with over 250 RSVPs to the rooftop event. We’ll leave you with the view we had and look forward to seeing you next time!

Posted in:
General, Magento


Everything You Need to Know About GDPR

The GDPR (General Data Protection Regulation) is set to usher in the next era of European digital compliance this May. As the latest set of European Union (EU) regulations regarding consumer rights, the GDPR has been proposed in order to strengthen and unify data protection for individuals, and address issues with exporting data outside of the EU.

This will mean changes to the way in which many businesses which operate within the EU handle and process customer data. Keep reading to find out how.

What is the General Data Protection Regulation (GDPR)?

The GDPR is a new set of online data security regulations which have been adopted by the EU and will be put in place by May 25.

The main things you need to know are that the GDPR will broaden the definition of what constitutes personal data, change the way in which you handle that data, and provide individual EU consumers with increased control over their personal information.

While online data security and consumer rights protections have existed for a long time – in the form of the Data Protection Directive – its definitions and mechanisms date back to 1995. The internet has changed a lot since then and new regulations have long been needed.

The GDPR will apply to all EU member states and any business which is active within them. For many companies both inside and outside of the EU, this will mean a change of strategy in order to continue working within Europe.

Why do we need the GDPR?

In a sentence: because data protection and privacy issues are increasingly becoming a problem.

As internet technology continues to grow so too does the frequency and effect of data breaches. In 2013, there were over 575 million of them. By the first half of 2017, that number had increased to over 1.9 billion. Over 95% of those breaches involved unencrypted data which was not being suitably protected. How does this affect consumers and organizations? By 2019, the total global annual cost of all data breaches is expected to exceed $2.1 trillion in damages.

The GDPR aims to try and reduce these figures by creating a set of data security standards. These are standards which organizations and businesses which operate or have an entity in Europe will need to follow. For some, these increased protections are just “common sense” data security ideas which should have been implemented long ago. For others, they are serious concerns which their business has yet to fully address. In a survey by Deloitte, it was found that just 15% of respondents expected to be fully GDPR compliant by the deadline.

Who Will Be Affected by the GDPR?

Your business will be affected by the GDPR if you are storing or processing information on EU citizens, even if your business or processing centers are not located in the EU.

As the GDPR documentation states:

“This Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the [European] Union, regardless of whether the processing takes place in the [European] Union or not.”

How Will the GDPR Work?

Current data security regulations already require security for names, addresses, and basic ID numbers (i.e. social security). The GDPR aims to take this and provide similar protection for individual IP addresses, cookie data, and more.

By securing this information in a more stringent manner, data breaches and information theft will hopefully decrease. However, you should note that the GDPR does not just address what type of information is protected; it also addresses how it is protected.

Data the GDPR Will Protect Includes:

  • Names, addresses, and ID numbers
  • Location data, IP addresses, cookie data and RFID tags
  • Biometric data
  • Health-related data
  • Political opinions
  • Sexual orientation
  • Racial and ethnicity data

Additional GDPR Roles

There are three main roles which have been defined by the GDPR which will need to be filled. These roles are responsible for implementation and compliance with the GDPR. They include:

  • A Data Controller – Responsible for deciding on how personal data is processed and why it is processed.
  • A Data Processor – Responsible for maintaining and processing personal data records, as well as ensuring that processing partners also comply.
  • A Data Protection Officer – Responsible for overseeing the data security strategy and making sure that you are GDPR compliant.

GDPR Consent

According to the new GDPR guidelines, consent will become a major factor in the storing of personal information. Consent must be explicitly given by those providing personal information and data controllers must be able to prove this. Furthermore, if an individual would like to withdraw consent, they are able to at any time, whereupon data must be deleted.

GDPR Pseudonymisation

GDPR Pseudonymisation is a process whereby information is transformed so as to not be attributable to a single individual without secondary verification. This means that personal data must be made “unintelligible” without the use of a secondary set of information by which to understand it. This may mean using encryption, or it may mean adopting a tokenization system.

GDPR Data Portability

Data portability concerns “the right for a data subject to receive the personal data concerning them”. This means that data must be portable and easily transferred to its subject in a ‘commonly used and machine readable format’.

By When Do I Have to Be GDPR Compliant?

GDPR compliance will be required by May 25, 2018.

What Are the GDPR fines?

Fines for those who are not GDPR compliant will vary depending on the severity of non-compliance. At this point in time, examples of GDPR fines have not been released.

However, it has been indicated that fines of up to €20 million, or 4% of the worldwide annual revenue of the prior fiscal year, are likely for those who have not followed the basic principles for processing or conditions for consent.

For those who have not met their obligations as controllers, processors, or monitoring bodies under the GDPR, fines will instead be up to €10 million, or 2% of the worldwide annual revenue of the prior fiscal year.

Hostdedi and GDPR

In order to help clients who will be affected by the GDPR, Hostdedi will be GDPR compliant. We are currently working to ensure that our policies and procedures comply with the General Data Protection Regulation (GDPR).

In the coming weeks, we will be making sure that you are informed of any changes which take place to Hostdedi’s services. At this point in time, we fully believe that you will be satisfied with those changes.

Note that this guide does not constitute legal advice and is rather an overview of the regulation changes which will take effect. For a full breakdown of the changes taking place, please consult the agreed text from the EUGDPR.org website.

Posted in:
General


What Caused Your Site’s Search Rank To Crash?

I don’t encourage site owners to spend their time obsessively scrutinizing search rankings: there are more positive ways to increase traffic to your site. Nevertheless, a drop in search position can have a substantial impact on the number of visitors your site receives, and hence on revenue.

Every site is different, and there’s no one-size-fits-all solution to the problem of declining search position, but, in my experience, these are the five areas that you should focus on if your site has recently tanked in the SERPs.

Backlink Erosion

Although Google’s algorithms have come a long way since the days they entirely depended on incoming links to assess the value of a web page, backlinks still matter.

If a site loses a lot of the links that were propping it up in the SERPs, it’s likely to take a dive. But it doesn’t have to be a lot of links: a small number of links from high-authority pages have an outsized effect, and if they’re removed, the drop in rank can be substantial.

Use a tool like Moz’s Open Site Explorer to assess your site’s backlink profile at regular intervals, so that you can compare over time. It should help you identify potential problems with the site’s link profile.

The opposite of losing good links is gaining bad links, and that can be harmful too. So-called negative SEO could be the culprit, so look carefully at your backlink profile for evidence of links from bad neighborhoods. The Disavow Backlinks tool might come in handy, but use it with caution or you may shoot yourself in the foot.

The Competition Has Stepped Up Their Game

Rankings are relative. For a site to go up, another site has to go down. If you’re losing position relative to a competitor, take a close look at any recent changes they’ve made to their site: improved content, better backlinks, and anything else that might cause their site to look better to Google.

A thorough competitor analysis can point the way to improvements you might make to your own site.

Penalties And Algorithm Changes

Google is opinionated about what it does and does not like. Site owners who aren’t familiar with the rules can accidentally damage their ranking potential. The first step is to take a look at Google Search Console, which will tell you about any manual penalties that have been applied.

If there are no manual penalties, familiarize yourself with Google’s Webmaster Guidelines. If you’re doing something Google doesn’t like, that’s where you’ll find out what it is.

Server Issues / Poor hosting

Google wants to send its users to websites that provide a positive experience. Slow-loading sites, excessive latency, and unresponsive pages do not a good experience make. For the sake of both SEO and user experience, it’s worth making sure that your site is as fast and responsive as possible.

Google’s PageSpeed Insights, Pingdom Tools, and GTmetrix can analyze your site and offer tips for performance improvements.

If your web host isn’t up to the job, no amount of performance optimization will make much of a difference. If performance is a problem, consider upgrading to faster hosting or migrating to a web hosting provider with a platform that can support the needs of your site.

Random Fluctuations In Rank

This is perhaps the most frustrating type of rank change: it’s often called the Google Dance or Google Flux. A site will lose and gain ranking with no discernible reason as Google tweaks its algorithm or some other factor changes.

There’s really nothing you can do about random fluctuations other than redoubling your SEO efforts, following best practices, and ensuring that your site offers the best possible user experience.

Posted in:
General
