
Keep Your Site Fast with Mod_PageSpeed, Now Available for Hostdedi Cloud

Slow sites crush eCommerce. Your customers will rebound quickly, forgetting your lumbering load times as they flee to your competition. The same can't be said for your site. Even if you dropped time and money on a sleek interface, marketing, and captivating copy, a 2-second load time will send your customers for the hills and drive down your page ranking.

If you’re a developer, or have access to one, Mod_PageSpeed provides a relatively easy path toward addressing speed bumps before they drive away your business, not after.

Even better, if you're a Hostdedi Cloud client, we can help you get Mod_PageSpeed up and running, or your developer can accomplish the same by modifying your .htaccess file:

<IfModule pagespeed_module>
    ModPagespeed on
    ModPagespeedRewriteLevel CoreFilters
</IfModule>

Slow websites wish they were as pretty as this gargantuan gastropod.

What is Mod_PageSpeed?

PageSpeed, or Mod_PageSpeed, is an open source module for web servers running Apache or NGINX. Developed by Google as a counterpart to its PageSpeed Insights tool, which suggests ways to optimize your site, Mod_PageSpeed automatically deploys many of those same optimizations.

These optimizations span five categories, and generally look for ways to reduce file sizes and apply best practices without changing your content:

  • Stylesheets (CSS)
  • JavaScript (JS)
  • Images
  • HTML
  • Tracking activity filters

Each of these categories is divided into multiple filters, giving you more direct control over the scope of optimization. For a detailed list of these filters, see the Google PageSpeed Wiki.
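If the defaults don't suit your site, individual filters can be switched on and off. Here's a minimal sketch of what that looks like in .htaccess (the two filter choices are illustrative examples, not recommendations):

<IfModule pagespeed_module>
    ModPagespeed on
    ModPagespeedRewriteLevel CoreFilters
    # Enable an optional filter that CoreFilters leaves out
    ModPagespeedEnableFilters remove_comments
    # Opt out of one of the CoreFilters optimizations
    ModPagespeedDisableFilters combine_javascript
</IfModule>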

Not for Everyone

As you might guess, Mod_PageSpeed isn’t a good option for everyone. If you answer “no” to any of these questions, you may need another approach.

    1. Does your site use mostly static content? Mod_PageSpeed optimizations have almost no effect on dynamic content — content that adapts to how your site visitors behave. Sites that use static content — content that doesn't change from visitor to visitor — will see far better results.
    2. Are you done making short-term changes to your site’s content? Each change you make diminishes the effect of Mod_PageSpeed optimizations. If you’re still making changes, the need to re-configure Mod_PageSpeed each time can bury your development team under additional work and complicate the process.
    3. Do you already have active website acceleration technology? If so, it tends not to play nice with Mod_PageSpeed, especially when both are optimizing your HTML. While it's possible to disable HTML optimization in either Mod_PageSpeed or your alternate tech, any misstep will lead to HTML errors and an unpleasant experience for your visitors.
    4. Do you have access to a developer? PageSpeed is open source, and so it takes some developer know-how to deploy and maintain properly. If you’re not planning upcoming changes to your site, this need is somewhat reduced — just remember any future changes will likely slow down your site without a developer’s assistance.
    5. If you aren't running your own Apache or NGINX server, do you host with a company that gives you the tools required to install Mod_PageSpeed? If you're running your own show, you have root access. See Point #4. We can't speak for other companies, but if you're a Hostdedi Cloud client, we'll install it for you and even assist with basic configuration. Or, if you know a developer, they can do it themselves by modifying your .htaccess file.

If you’re not a Hostdedi client, but think Mod_PageSpeed might be a good fit, we once again recommend enlisting the services of a developer to both avoid potential pitfalls and get the most out of it.

If you are a Hostdedi Cloud client, or are just the curious sort, read on to learn a little about what even the default configuration of Mod_PageSpeed can accomplish.

“CoreFilters” for Mod_PageSpeed

For non-developers and for review, remember “filter” is just PageSpeed jargon for a subcategory of the five available categories for optimization: CSS, JS, Images, HTML, and tracking activity filters. If a filter is present, then Mod_PageSpeed is optimizing that element.

We use the default "CoreFilters" mode because it is considered safe for use on most websites. It includes the following filters:

add_head – Adds a <head> tag to the document if not already present

combine_css – Combines multiple CSS elements into one

combine_javascript – Combines multiple script elements into one

convert_meta_tags – Adds a response header for each meta tag with an HTTP-equivalent attribute

extend_cache – Extends cache lifetime of CSS, JavaScript, and image resources that have not otherwise been optimized by signing URLs with a content hash.

fallback_rewrite_css_urls – Rewrites resources referenced in any CSS file that cannot otherwise be parsed and minified

flatten_css_imports – Inlines imported CSS by flattening all @import rules

inline_css – Inlines small CSS files into the HTML document

inline_import_to_link – Inlines <style> tags with only CSS @imports by converting them to equivalent <link> tags

inline_javascript – Inlines small JS files into the HTML document

rewrite_css – Rewrites CSS files to remove excess whitespace and comments and, if enabled, rewrites or cache-extends images referenced in CSS files

rewrite_images – Optimizes images by re-encoding them, removing excess pixels, and inlining small images

rewrite_javascript – Rewrites JavaScript files to remove excess whitespace and comments

rewrite_style_attributes_with_url – Rewrites the CSS in style attributes containing the text "url(" by applying the configured rewrite_css filter

If you're already using Hostdedi Cloud, contact our 24/7 support team with questions, or to have Mod_PageSpeed installed for you today!

Posted in:
General



Introducing the State of Hosting 2019: Trends That Defined the Industry

In the nineteen years we’ve been in the hosting industry, we’ve seen a lot of different sites grow and prosper. Over the last few years, however, we’ve started to see a shift in the way that sites are doing so. New technology and infrastructure options, combined with industry changes to security and privacy, have seen development and hosting take on a whole new meaning.

Released today, the State of Hosting 2019 marks the first annual deep dive into the hosting solutions site owners and merchants are choosing, along with their hopes and concerns for the future. The aim of this report is to make business owners aware of how hosting solutions are changing for the better, and how they can keep up.

Below you’ll find a quick look at some of the most compelling takeaways from this year’s report. Alternatively, you can download the full report now.

 

Magento Continues to Dominate the eCommerce Market

eCommerce applications have long been in competition for the top spot. Each offers its own experience, with unique selling points that appeal to specific merchants. Coming into 2019, Magento continues to lead the charge as the application of choice for 64% of hosting solutions, dominating competitor WooCommerce.

There are several reasons for this, one being the functionality and flexibility offered by Magento solutions. Magento also lines up with site owners' top priority: development. However, a new competitor entered the market in 2019, and with it a potential new candidate for the top eCommerce spot. Read the report to find out who, and what it may mean for your eCommerce store.

 

PWA Is the Future

PWA took the world by storm in 2018, and adoption is only going to increase. We found that 67% of store owners plan to adopt PWA development in the future. The reasons are many, with development capabilities topping the list.

However, PWA development will likely lead to a number of organizational changes in how websites and online properties are managed. Many agencies are still working out what this will look like, and trying to decide which clients will really benefit from PWA. Download the report to see what else merchants and developers have to say about PWA.

 

Uptime Remains a Primary Concern for Content Producers

Site outages and downtime can lead to huge losses in revenue. Just a 1-second delay in load time can lead to a 7% decrease in conversions. For content producers, that number can have a huge effect on conversion goals and is a very real threat to the success of a website.

Consequently, uptime remains a primary concern for content application owners. Price, however, is still the top consideration. This means that while site owners are looking for reliable hosting solutions, they are still aiming to keep costs down. Finding the right balance between the two is integral, with many site owners reporting that their move to Hostdedi came after reliability concerns with cheaper providers.

A Significant Number of Websites Run On WordPress

Automattic places the number of sites that use WordPress at 32.5% of all websites globally. Internally, we have found that number to be closer to 24% across all solutions, and 67% across content solutions. That is still no small number.

Site owners choose WordPress for its ease of use and how simple it makes creating and publishing content. Read the report to find out why WordPress was also 2018's fastest adopter of cloud technology.


We invite you to learn more about hosting in 2018 and the decisions other merchants and site owners made throughout the year. Download the report now.

Posted in:
Content, eCommerce, General


Does Your WordPress Site Really Need Web Fonts?

The web is rich with images and video, but it is primarily a textual medium dominated by the written word. The web is all about reading, and that means we have to pay attention to typography.

Typography concerns itself with all aspects of displaying text on a page, but the typeface is its fundamental building block and choosing a typeface is the first step in creating attractive and readable text.

Finding Fonts

Thanks to web fonts and font hosting services like Typekit and Google Fonts, we can choose any of thousands of fonts for our WordPress sites, but there is a price to be paid for all that choice — web fonts inflate the size of web pages and increase the time it takes for them to download.

We weren’t always given so much choice. In the early days of the web, designers could use only web-safe fonts: typefaces that were already installed on the majority of devices. That’s why Times New Roman, Arial, and Verdana were ubiquitous on the early web.

Introducing Web Fonts

Web fonts were introduced to overcome the limitations of web-safe fonts. Fonts could be packaged up and added to a web page. Later, font hosting services made using web fonts even easier. And with free font hosting services like Google Fonts, there seemed to be no reason not to use web fonts.

But web fonts aren’t without critics. They have been vilified as unnecessary, overly large, and unjustified because users don’t care about them. Designers certainly care about the typefaces that appear on the pages they design, but it’s the rare user who will abandon a site for using a web-safe font. They do, however, abandon sites that take too long to load and render because of a huge font file.

The designer Adam Morse made this point forcefully in 2016 when he wrote:

Typography is not about aesthetics, it’s about serving the text … webfonts cause more problems than they solve and weren’t worth the cost to my users or myself.

There is some truth to this argument, but it’s not a view typographers are likely to endorse. Historically, web-safe fonts were poorly implemented copies of earlier typefaces: the Palatino system font is a bad copy of Hermann Zapf’s original work, and Microsoft’s Book Antiqua is an uninspired copy of that.

There is nothing unique, original, or inspiring about a web page set in Times New Roman. The average web user may not be consciously aware of typography, but there is a felt difference between a site with carefully selected, high-quality type and a site with old-fashioned fonts that have been seen a million times before.

That said, today’s system fonts are far superior to their ancestors. Microsoft’s Segoe, Apple’s San Francisco, and Google’s Roboto are fine typefaces. A font stack that takes advantage of them is adequate if uninspired.
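If you decide web fonts aren't worth the cost for your site, a system font stack is a one-line change. A minimal sketch in CSS (the exact fallback order is a matter of taste):

body {
  /* Try each platform's modern UI font first, then fall back to a generic sans-serif */
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
}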

WordPress site owners should balance the time taken to load web fonts with their design and readability benefits to come to a decision that best reflects the goals of their site.

Posted in:
General


Data Center Risk Factors and Recovery

When something goes wrong in a data center, its disaster recovery plan kicks in. A good disaster recovery plan aims to reduce data center risk to zero by implementing a range of redundancies and protections. To do that, it's important to first walk through the data center risk factors out there.

What's the biggest risk to a data center? Many facilities suggest that fire is the biggest concern and highlight their fire suppression systems. Yet fire isn't the only risk to a data center.

Data centers prepare a huge number of redundancies and protections – no matter how likely it is they will be needed.

This article will cover the types of risks that data centers typically prepare for, with a detailed look at:

  • External risks: Natural disasters and supplier outages.
  • Facility risks: Infrastructure and risks involving the facility itself.
  • Data system risks: Data management and architecture.

External Risks

External risks are those outside of a data center’s control. They include natural disasters, supplier outages, and human-caused events. 

Natural Disasters

Many disaster recovery plans start by covering natural disasters, largely because they carry the greatest potential for damage. Luckily, many meteorological threats can be forecast before they become a problem, and knowledgeable staff can be put on standby. This can mitigate a lot of the potential damage.

Large-scale damage and downtime from earthquakes and floods can be prevented with water penetration protection, a fire suppression system, and power backups. For a more detailed list of protections put in place, reach out to your hosting provider.

What if I Host in a Natural Disaster-Prone Area?

We understand that sometimes hosting in an area with frequent natural disasters is unavoidable. How you choose a data center is influenced by a number of different factors including proximity, convenience, and risk.

Most data center facilities located in such an area incorporate special infrastructure features, including reinforced buildings and stringent design plans. A good example is the Hostdedi Miami facility, which is Category 5 rated and designed to withstand flood damage and winds of up to 185 mph.

We highly recommend asking your facility about the history of natural disasters in their area and how they have affected the data center in the past. This will give you a good idea of what to expect and prepare for in the future.

Map: natural disaster risk across the US

As a rough guideline, the above map provides an overview of natural disaster frequency in the US. You can use this to identify susceptible areas.

Supplier Outages

Supplier outages occur when suppliers of power, connectivity, or another important deliverable are unable to deliver. They are unavoidable, but a suitably prepared data center can stop them from causing downtime.

For example, downtime from a loss of connectivity or a downed power line is prevented by preparing multiple redundancies: additional power generators, multiple connections, and enough onsite fuel to last for several days.

It is important to have a backup pool of suppliers in the event one fails.

Facility Risks

There are seven main areas where you don’t want anything to go wrong in a data center facility: power, water, climate, structure, fire, communication, and security. These should all be incorporated in a disaster risk assessment.

Take a look below for a better idea of how and why each of these factors is important.

Power – Disasters will likely cause a power outage. No power means no data center (at least not one that works). Multiple available power sources mean a data center (and so your website) will stay online through the worst.

Water – Data centers are allergic to water. Even the smallest amount can cause a lot of damage. Water penetration protection can help prevent the destruction of mission-critical infrastructure. Conversely, any cooling or fire suppression systems that depend on water require multiple, secure water sources.

Climate – A data center requires a precise climate. Not too hot, not too cold, and without too much humidity in the air. A high-quality, adaptable climate control system adds to reliability.

Structure – The data center's building itself. If poorly constructed, risk and exposure to the elements increase.

Fire – Fire damages pretty much everything it comes into contact with (apart from a good steak). Keeping it away from a data center is a top priority. Any facility you host in should come with a fire suppression system.

Communication – A line to the outside is a big advantage for a data center in the middle of an emergency. Not only does it let you contact your provider, it also allows them to contact backup suppliers.

Security – Security procedures should exist during a disaster to prevent unauthorized access to any part of the facility.

 

Data System Risks

Data System risks are those that involve shared infrastructure. It is vital to pay attention to all single points of failure in the system’s architecture and see how those failures can be avoided.

Look at how the data center protects against contamination between servers and how effectively it blocks attacks. Understanding how vulnerable a data center is means understanding how easily it can be targeted. Hostdedi facilities block over 3 million attacks per day.

Other areas to ask your hosting provider about include:

Data Communications Network

Ask specifically about the network’s architecture and what security procedures have been put in place.

Shared Servers

How do they interact with each other? How shielded is one account from others held on the same server? This is especially important with cloud technology and virtualized resources.

Data Backup

In case something bad does happen, what can be done to make sure your website doesn't disappear? How often do backups take place, how long does it take to restore them, and what is the procedure for backup restoration?

Software Applications and Bugs

Unless your data center also creates the applications you’re going to run on your server, they don’t have a lot of control over this. However, they can tell you best practices, provide bug fixes, and generally stay up to date with how the application is being handled by other professionals.


Posted in:
General


5 Steps to a Successful Website Migration

Website migrations can be scary, but they don't have to be. Here are 5 steps for making your moving experience as seamless as possible, starting from knowing what you need to back up, and finishing with full DNS propagation and your new hosting solution going live.

It’s not every day you decide to change hosting providers or upgrade your solution. If you’re with a high-quality provider and haven’t had any problems, you may only ever do this a handful of times as your site grows. When you do decide to go through with a migration, you will likely go through the five stages below.

  1. Backing up your website
  2. Moving your website’s data
  3. Testing the new website
  4. Migrating your DNS
  5. Enjoying your new hosting environment

We believe in seamless website migrations for everyone, which is why we’ve put together 5 steps for making sure your site migration is as easy and relaxed as possible.

You may be moving somewhere new because you were unhappy with your old provider, but don't rush. Canceling your old hosting provider before completing a migration can mean days or weeks of downtime, depending on how complex your migration is and whether you encounter any issues.

Unless your old hosting provider engages in daily backups and maintains them after you leave, you could lose your entire site. Even if you do have a backup, your SEO value can plummet, and a whole host (pun intended) of other problems can occur.

A good migration should mean consistent site traffic, not a sudden drop or decline.


That’s why we always suggest making sure to…

One of the first things you should do during a migration is to create a local backup of your website. Despite everyone’s best intentions, technology doesn’t always go to plan and a small database corruption can cause issues.

If you haven’t canceled with your previous provider, they may still have backups located on a third-party server. Hostdedi offers daily hosting backups and archives them for 30 days. In most cases, you can use these backups to restore your site. However, it’s always a good idea to make sure you have a local one as well.

If you're coming from a hosting provider with a cPanel interface, you can head to the 'Backup' page in your control panel. Here you'll be able to download a copy of your "public_html" directory (which contains most of your site information), and grab a backup of your MySQL database too.

Hostdedi provides full backups through our control panel. Click on Backups -> Backup Now, and then click continue. You can also select to only perform a partial backup if you prefer.

How to back up your website for a migration

Most hosting providers will have an easy to access backup feature available. If you can’t find one, get in touch with their support team.
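If you prefer the command line and have SSH access, a manual backup can be as simple as the sketch below. The paths and database names are placeholders; swap in your own.

# Archive the web root (adjust the path to match your account)
tar -czf site-backup-$(date +%F).tar.gz public_html/

# Dump the site database (you'll be prompted for the password)
mysqldump -u dbuser -p dbname > db-backup-$(date +%F).sql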

“No, I don’t need to check. It’s ready, let’s go live,” is something every migration expert dreads hearing.

Going live without testing a site after a migration is like playing a game of Risk and not knowing what pieces you’ve got in play. While there’s a chance everything will work out well, there’s also a chance something will go wrong and you end up stuck with nowhere to go but start over.

A short checklist of what to test includes:

  • Your homepage and key landing pages load correctly
  • Internal links point to the new site rather than the old one
  • Forms, logins, and other interactive elements work
  • Images and other media display properly
  • Your SSL certificate is valid on the new server

There may also be things you should check specific to your site. If you’re an eCommerce store, for instance, you may want to test the checkout process.

You can check which nameservers your domain currently uses by heading to your domain registration control panel and then "Domain Name Servers". From here you'll be able to see what your nameservers actually are.

Find Out Your Nameservers

If you're interested in checking this out on your own machine, open up a command prompt and enter:

dig +short NS yoursite.com | sort

If you're using the Hostdedi DNS service and have successfully repointed your domain, you should see at least one of the nameservers below:

ns1.nexcess.net

ns2.nexcess.net

ns3.nexcess.net

ns4.nexcess.net

ns5.nexcess.net

ns6.nexcess.net

ns7.nexcess.net

ns8.nexcess.net

If you don't, don't panic. It may be that you're with an alternate DNS provider. Checking this can also help you work out how far along the path to full migration you are (if you're not the one in charge).

Remember that DNS record changes can take 12 to 24 hours, so don’t be surprised if this information doesn’t change immediately after you’ve altered your DNS. Just like with our first point, don’t cancel your old service before your new one is good to go.

Once you’ve changed your DNS, you’re going to want to let it complete propagation. You shouldn’t experience any downtime during this period, but you will want to make sure that you don’t make any changes to your site.

There’s nothing worse than posting new content during the propagation cycle and finding you’ve lost it the next day.

If you’re interested in checking the status of your DNS propagation, try the Hostdedi DNS checker to see how far it’s gotten.

Making Migration Easy

Remember, Hostdedi offers free migration assistance on all of our solutions, making the switch from one provider to another as easy and seamless as possible.

Posted in:
General


Mission Critical Environments

This week’s 30-minute session was with Doug, the Hostdedi data center facilities manager, covering everything you need to know about mission critical environments. He began by saying that maintaining reliability and security for mission critical environments is… mission critical. He then took marker to wall to expand on that.

What Are Mission Critical Environments?

Mission critical environments are hosting environments integral to the consistent and reliable running of a data center. This primarily includes servers, but data centers need to maintain other elements too:

  • Infrastructure (buildings)
  • Redundancies (backup generators, etc)
  • Tools (disaster recovery, maintenance)
  • Other unknowns that may be a danger to reliability and uptime.

Factors Important to Mission Critical Environments

For mission critical environments to remain stable, professionals have to ensure the stability and security of onsite equipment. A few of the factors that are most important for doing this are included below.

Disaster Recovery

In the event of a disaster, your data center should have a disaster recovery plan ready. A good disaster recovery plan will minimize downtime and ensure your site is back online as soon as possible after a disaster event. This can include, but isn’t limited to:

  • Backup generators
  • Infrastructure features
  • Tools for solving problems
  • Trained onsite staff

Preventative Maintenance

Prevention is the best cure, and nowhere is that more evident than with data centers. Waiting for something to fail, whether it’s a server, power supply, or something else, is a recipe for reduced uptime and low-quality hosting.

Preventative maintenance means keeping an eye on hardware and infrastructure to ensure they remain operating at full capacity, with failing elements replaced before they become a problem.

Risk Management

Managing risk takes place everywhere, but nowhere is it more critical than in a data center facility. As indicated above, risk is something to be avoided, and finding a solution before a risk becomes a problem is a top priority.

Redundancy

Redundancy includes backups used if primary sources of power, connectivity, or something else go offline. For data center facilities trying to maximize uptime, redundancies are crucial. In many cases, data centers do not have control over when something goes wrong. Redundancies can help to mitigate any issues that arise.

Design mission critical environments for these things

Final Thoughts

Keeping mission critical environments secure and reliable is one of the most important tasks in a data center and involves looking at what might go wrong and finding the best way to prevent it. Thanks to Doug for showing us some of the ways in which that is done.

Want to know more about how we maintain mission critical environments? Contact our sales team.

Posted in:
General


Introducing Hostdedi Global DNS

We are excited to announce Hostdedi Global DNS, a globally distributed name service that puts DNS closer to your website visitors.

What is DNS?

The domain name service (DNS) is the phonebook of the Internet. Whenever you load a website, open a mobile app, or click on a cat GIF, your device usually searches for a web address using DNS.
 
The Internet is made up of connected devices with Internet Protocol (IP) addresses. The domain name service sits on top of the Internet and allows convenient, easy-to-remember names, such as nexcess.net, to be translated to hard-to-remember IP addresses such as 208.69.120.21. The problem is compounded by the Internet's next generation of addresses, known as IPv6, with long-string addresses such as 2607:f7c0:1:af00:d045:7800:0:1b.

Hostdedi DNS, Today

When you host your DNS with Hostdedi, as about half our customers currently do, DNS requests from your website visitors are answered from servers located in the US. Even if we host your services in London, Australia, or other international locations, our DNS services are still located in the US.
 
We go to great lengths to put our DNS servers on third-party networks, which isolates them from potential failures. We also host eight name servers in total, which is double the number typically found among web service providers. At the end of the day, it’s still a US-based DNS infrastructure.
 
To be clear, concentrating DNS servers in a particular location is a common setup. Due to the nature of DNS, when a user visits your website, their browser or device caches the results and doesn’t need to check DNS again for an extended period of time.
 
For new visitors from international locations, this can cause something known as first-visit page load delay. These geographically distant users may experience as much as a half-second delay. This may sound trivial, but visitors are quick to notice sluggish load times and tend to avoid sites that suffer from them.
 
Administrators and developers work tirelessly to shave even fractions of seconds from page load time. A research paper by Google last year found that when delays drift beyond 3 seconds, visitors quickly lose interest and start abandoning sites.

All things being equal – faster is better.
 

Hostdedi Global DNS

We’ve been hard at work the last couple of months deploying a footprint of 15 DNS servers distributed around the world. These servers are strategically positioned so that they provide a local DNS server option for visitors to your site, and significantly reduce first-visit load times.
 
Hostdedi Global DNS uses a technology called Anycast routing, which allows us to broadcast the IP addresses of our DNS server from multiple global locations at the same time. When a visitor loads your website, this technology allows their Internet service provider (ISP) to route the visitor’s DNS requests to the Hostdedi DNS server closest to that visitor.
 
When we stood up the proof of concept and looked at the latency differences between Global DNS and our existing DNS, it floored us! The results were significantly better than we expected at reducing first-visit DNS latency. That was some two months ago, and it validated our all-in commitment to launching a Global DNS platform.
 
Below is a real-world example of Global DNS in action. Using a tool provided by KeyCDN.com, we tested latency (round-trip time) from 16 global locations, then compared Classic DNS and Global DNS.
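You can run a rough version of this test yourself. dig reports the round-trip query time for whichever nameserver you point it at; for example, against one of our nameservers (swap in your own domain):

dig nexcess.net @ns1.nexcess.net | grep "Query time"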

Hostdedi Global DNS, Going Live!

If you’re a Hostdedi customer, you will enjoy the benefits of our Global DNS for no additional cost, and no action is required.
 
We will begin transitioning Hostdedi DNS to the Global DNS system on Thursday, August 30th. The first maintenance will migrate ns7.nexcess.net and ns8.nexcess.net, with other name servers to follow in the coming weeks. Our goal is to have Global DNS operational for all nexcess.net name servers by the end of September.
 
There will be no downtime as a result of this maintenance. The existing Hostdedi DNS servers will continue to operate and respond to DNS queries until we have confirmed all traffic has moved away from them.
 
For instructions on pointing your domain to Hostdedi Global DNS, please see our how-to guide for details.

Where are Hostdedi Global DNS servers located?

  • Amsterdam
  • Atlanta
  • Chicago
  • Dallas
  • Frankfurt
  • London
  • Los Angeles
  • Miami
  • New York
  • Paris
  • San Francisco
  • Seattle
  • Singapore
  • Sydney
  • Tokyo

 

Will other Hostdedi Global DNS locations be added?

Yes! We are currently looking at adding Bangalore, Hong Kong, Johannesburg, Sao Paulo, and Toronto. These locations will help close important gaps and continue to improve the experience for your website visitors.

Posted in:
General, Hostdedi



What to Know about DNS Records

How does a browser load a web page? It uses a phonebook. Not an old-fashioned leatherbound book or a switchboard operator, but a service known as DNS. Each page of that DNS "phonebook" is what's known as a DNS record.

In other words, when you look for nexcess.net, your computer looks in the DNS "phonebook", finds the number for the site, and connects you to it. Of course, the whole process is much quicker than that.

This article looks at what DNS records are, the different types you’ll find, and why they’re incredibly important for the success of any website.

Don't forget, for those using Hostdedi hosting services, it's possible to use Hostdedi DNS for free. We manage all the hard work once the service is in place; you just have to point your domain name to Hostdedi nameservers.

It was 1983. The internet was young, and IT professionals had begun to get fed up with having to remember long series of numbers in order to connect with other machines. Networks had spread beyond just a few units, and in an effort to future-proof, longer series of numbers were proposed. There was just one problem: how to make these numbers more consumer friendly?

Paul Mockapetris published two papers on the subject, creatively named RFC 882 and RFC 883. Mockapetris’ system expanded prior use of a hosts.txt file into a large system capable of managing multiple domains in a single location. That system is known as DNS, or Domain Name System.

Without DNS, the Internet wouldn't be what it is today. We might even need a Rolodex to visit our favorite sites!

With DNS, computers still require the IP (Internet Protocol) address number sequence in order to connect with a server. Yet with 4,294,967,296 possible IPv4 addresses, it makes a lot more sense to convert those numbers into something more easily recognizable.

DNS gives IP addresses unique names for computers, services or other resources that are either part of a private network or part of the Internet.

 

 


The Hostdedi DNS network has 100% uptime with multiple redundancies in place

The domain name system prevents having to remember a long series of numbers. Users are able to type in a domain name and then the domain name system will automatically match those names with the IP address and route connections.

At the center of all this, the old hosts.txt file lives on in the form of vast servers for managing domain names, and at the heart of these servers are DNS records.

IP addresses work in a similar fashion to street addresses or phone numbers in an address book. While people browse the Internet, they look up their favorite site much like they look up a friend's number. From there, the system provides them with the friend's number and they can make contact. With DNS, the second part of this sequence is automated. This requires DNS records from a DNS server.

During the creation of DNS, servers were manufactured solely for the purpose of managing DNS and related information. Within each of these servers are DNS records that tie entries to a domain. 

Any device connected to a computer network, whether a PC, router, printer, or any other device with an IP address, is referred to as a 'host'. With the sheer number of hosts around the world, engineers needed a way to track devices without resorting to memorizing numbers.

As explained earlier, DNS records came along with DNS as a tool for system admins and users to seek out authoritative information on websites or other services they’re trying to access.

There are two types of DNS Records. These are:

  • Records stored in Domain Name System servers
  • Records stored on a user’s machine

Records stored on a Domain Name System server are covered in more detail below, including what types of records exist and how they function.

Records stored on a user's machine are also known as the DNS cache. This cache lists the domains the machine has recently resolved, covering every website the user visited or attempted to visit.

When you watch a crime drama and a culprit’s computer is taken to be analyzed for the sites they have visited, a DNS cache is usually what would be checked for unauthorized activity.

However, a DNS cache is usually temporary and has a limited lifespan before being removed.
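If you're curious what's sitting in your machine's cache, most operating systems let you inspect or flush it. On Windows, for example:

ipconfig /displaydns

ipconfig /flushdns

The first command prints the cached records; the second clears them. macOS and Linux have their own equivalents.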

DNS Syntax Types Explained

While there's an abundance of record types in existence, below you'll find nine of the most commonly used DNS records. For more information, don't forget to check our DNS Records knowledge base, as well as how to configure DNS records for your site.

A – A records are usually referred to as address records, and occasionally host records. They are the most commonly used records, mapping hostnames of network devices to IPv4 addresses. A website address book.

AAAA – Serves the same purpose as A records, except that hostnames are mapped to an IPv6 address instead of an IPv4 address. As opposed to the 32 bits of an IPv4 address, an IPv6 address contains 128 bits. An example of an IPv6 address is FE80:0000:0000:0000:0202:B3FF:FE1E:8329.

CNAME – Acts as an alias for domains. A CNAME record ties an alias to the actual domain name, so that an address such as www.nexcess.net can point to nexcess.net.

MX – MX records map a domain name to the message transfer agents that receive its email, with a preference value assigned to each. Large organizations typically use multiple mail servers to process messages en masse. Through SMTP (Simple Mail Transfer Protocol), emails are routed properly to their intended hosts.

NS – Also known as name server records; designates the authoritative name servers for a given domain.

PX – The technical description based on RFC 2163 details the PX DNS record as a ‘pointer to X.400/RFC822 mapping information’. Currently, it is not used by any application.

PTR – Referred to as reverse-lookup pointer records. PTR records are used to look up domain names based on IP addresses.

TXT – A type of DNS record that stores text-based information. It's primarily used to verify ownership of a domain and to hold SPF (Sender Policy Framework) data, which helps prevent the delivery of spoofed emails that appear to originate from your domain.

SOA – Possibly the most critical of them all, the Start of Authority record annotates, among other things, when the domain was last updated.

The general purpose of a DNS lookup is to pull information from a DNS server. This is akin to someone looking up a number in a phone book (hence the term ‘lookup’ in conjunction with DNS).

Computers, mobile phones, and servers that are part of a network need to be configured to know how to translate domain names and email addresses into discernable information. A DNS lookup exists solely for this purpose.

There are primarily two types of DNS lookups: forward DNS lookups and reverse DNS lookups.

Forward and Reverse DNS

Forward DNS Lookups

Forward DNS allows networked devices to translate an email address or domain name into the address of the device that will handle the communications process. Although it happens transparently, forward DNS lookup is an integral function of IP networks, the Internet in particular.

Reverse DNS Lookups

Reverse DNS (rDNS) pulls domain name info from an IP address, and is also known as inverse DNS. Reverse DNS lookups are used to filter undesirable data such as spam. Spam can be sent through any domain name a spammer desires, fooling regular customers into thinking they're dealing with legitimate entities such as Bank of America or PayPal.

Email servers receiving messages can validate them by checking IPs with reverse DNS requests. If an email is legitimate, the reverse lookup should resolve to a hostname matching the domain of the sender's address. While this is useful in verifying the integrity of emails, it doesn't come without cost: an ISP has to set up the records if the legitimate mail servers don't have the appropriate records on hand to respond properly.
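To see forward and reverse lookups in action, dig handles both. A quick sketch using the example addresses from earlier in this article (your answers will vary):

# Forward lookup: name to IPv4 address
dig +short A nexcess.net

# Mail servers for the domain
dig +short MX nexcess.net

# Reverse lookup: IP address back to a name
dig +short -x 208.69.120.21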



What Are Your DNS Records?

You can check your own DNS records with the Hostdedi DNS Checker. Simply enter the site address you want to check and the type of record you want to see.

You can also use this tool to check third-party DNS records and confirm the identity of certain domains to make sure they are not fake.


Ultimately, DNS makes life easier for the end user, who can't be expected to memorize 32-bit or 128-bit IP addresses. It's easier to type a name into the browser bar and let DNS figure out the rest. DNS resource records are fundamental to how DNS works, and the Internet wouldn't be what it is today without them.

If you’re looking for more information on site performance and benchmarking, don’t forget to check our article on TTFB (Time To First Byte) and why it may not be as important as you’ve been led to believe. Also, check out our summary of data center tiers and use the stats to figure out which data center tier you’re hosting with.


Posted in:
General, Web Hosting Basics


Why Time To First Byte (TTFB) Isn’t as Important as You Think

Time To First Byte (TTFB) is the time it takes for a web server to respond to a request. It's a metric reported by several page speed testers, and is often quoted as a primary means of measuring how fast a site is. The idea is that the faster a web server responds, the quicker a site will load.

However, numerous groups have found that TTFB isn’t that important. When looked at in isolation, the figure provides an appealing way to grade your site or hosting provider, but when looked at in conjunction with other metrics, there seems to be a disconnect. This is especially true with regards to SEO rankings and improved user experience.

Here, we’re going to look at why TTFB can be easily manipulated, what metrics actually matter, and how knowing these things can help you to improve your site’s SEO, user experience, and more.

TTFB measures the time between a user making an HTTP request and the first byte of the page being received by the user's browser.


The basic model of how TTFB works
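If you want to measure your own site's TTFB, curl can report it alongside the total transfer time, which makes the gap between the two easy to see. A sketch (substitute your own URL):

curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s  Total: %{time_total}s\n" https://example.com/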

The model is simple. The faster a web server responds to a user request, the faster the site will load. Unfortunately, things get a little more complicated.

In some site speed tests, you'll find TTFB durations far longer than you would expect, even though actual page load times seem much faster. This is the first indication that something is wrong with how TTFB measures speed.

A deeper look shows that this is because TTFB actually measures the time it takes for the first HTTP response to be received, not the time it takes for the page itself to be sent.


A test of Time To First Byte and page load times

In the Time To First Byte test above, TTFB is measured at 0.417 seconds, which seems very quick. However, looking at the waterfall, we can see that this figure only reflects the HTML loading time. The other assets on the page take much longer, with DOM content loaded at around 1.6 seconds.

This is because the TTFB value is incredibly easy to manipulate. HTTP response headers can be generated and sent incredibly quickly, but they have absolutely no bearing on how fast a user will be able to see or interact with a page. For all practical purposes, they are invisible.

By sending HTTP response headers early to speed up TTFB, it's easy to create a 'false' view of a site's speed. A fast TTFB also doesn't mean the rest of the waterfall will load quickly.

A good example of how Time To First Byte testing can be manipulated with HTTP headers is when looking at the page load times of NGINX in conjunction with compression.

Compressed pages are smaller, so they download from a server faster than uncompressed pages. This ultimately means that page load times to interactivity are much faster. From the perspective of TTFB, however, that improvement doesn't register.


Time To First Byte compared with actual page loading times

This is because HTTP headers can be generated and sent relatively quickly before the main page content.

This is especially significant for those using the Hostdedi Cloud Accelerator, which makes use of NGINX to speed up caching on optimized Hostdedi platforms.

Continue reading to find out what metrics you should be using to check page load times.

A 2013 study by Moz found that Time To First Byte does have a significant correlation with SEO rankings: the faster the TTFB, the higher pages ranked.

This being said (and as Moz themselves make clear), correlation and causation are not the same thing. The actual methods Google and other search engines use to crawl web pages and build out SERPs are not known to the public.

Many have concluded that page load times to interactivity are actually a lot more important. When looking at page speed tests, it's important to consider all the figures available as a whole, and not just TTFB.

So, with regards to TTFB tests, SEO, and user experience:

Google Does Not Measure Page Speed for SEO (Entirely)

Ok, it sounds like we’ve gone back on what we just said, but bear with us.

Google doesn't treat page speed itself as incredibly important; it measures user behavior. Google has said in the past that if users are willing to wait for content to load, it will not downgrade a website for being slow.

This is because Google weighs usability and experience as more important than speed. Back in 2010, Matt Cutts said that including site speed as a ranking factor “affects outliers […] If you’re the best resource, you’ll probably still come up.” It just happens to be that the less time a user has to wait for a page, the more likely they are to stay on the page.

So when it comes to using speed testing services such as PageSpeed Insights, make sure to consider your page load times from a practical perspective as well. How do you feel about the time it takes for your page to load when you type it in your browser? Do you think the content quality is worth the wait?


PageSpeed Insights provides actionable speed intel for SEO such as that above

Simple checks like this are easy and can provide you with a lot of insight into what your users will think.

Practical Page Load Times Matter – Not TTFB

A faster Time To First Byte does not mean a faster website.

TTFB is not a practical measurement. It doesn’t really affect the user experience. The time it takes for a browser to communicate back and forth with a server doesn’t affect a user’s experience of that server’s content as much as the time it takes for them to actually interact with it.

Instead, measurements that test time to interactivity are inherently more important. Improvements here don’t always match the results of web page speed tests or scores.

So, the main takeaway here? High-quality content and a great user experience are still two of the most significant factors in SEO. Site speed can influence them, but it is far from the most important factor.

Mobile Speed is Now a Ranking Factor

As of July 2018, mobile page load speed has been a factor in SEO ranking, and TTFB plays a part in this.

However, again, TTFB and page load times aren’t as important as high-quality content and usability. The user experience on mobile devices has long been a key area Google and other search engines have tried targeting and improving. Load times are just a small part of this.

Responsive design and easily readable and scalable text and images are much more important.

Google highly recommends its PageSpeed Insights tool for properly seeing how your page speed may affect SEO ranking.

Slow and Steady Wins the Race

Ok, all this doesn't mean that you should let your site crawl to a halt. This isn't a childhood fable or an argument against fast internet. Fast internet is one of the wonders of the modern age, and you still want your site to load as quickly as possible.

What we’re saying is that if you’re trying to find how to improve Time To First Byte, stop.

It's far more important to start looking at page load times in their entirety, not just the time it takes for a server to respond. At Hostdedi, we're proud of how fast our data centers serve content, and we work our hardest to make sure our servers are optimized for providing a great user experience and boosting your SEO as much as a hosting company can.

We highly recommend checking out the Hostdedi Cloud and seeing how Hostdedi can help.


Posted in:
General


Data Center Tiers Explained

In the world of data centers, reliability is one of the most important factors. The more reliable you are, the more likely clients are to want to use you. After all, who wants a data center that isn't online?

Luckily, the Telecommunications Industry Association (TIA) published a standard defining four levels of data centers in regards to their reliability. The aim was for this standard to inform potential data center users about which center is best for them. While brief, the standard laid the groundwork for how some data centers would pull ahead of others in the future.

But the TIA's standard wasn't enough. Several years later, the Uptime Institute introduced what is now known as the 'Tier Standard', which describes four data center tiers based on the availability of data processing resulting from the hardware at a location.

This article breaks data centers down into the four tiers and looks at how they differ. Combine this with our article on how to choose a data center location, and you'll know where the best place to host your website is.

TL;DR:

Check out our Infographic below to quickly see the main differences between data center tiers, or keep reading for more detail.

Infographic: the main differences between data center tiers

The Classification of Data Centers

Data centers are facilities used to house computer systems and associated components. A data center is comprised of redundant power supplies, data communications connections, environmental controls, and various security devices.

Tier one data centers have the lowest uptime, and tier four the highest. The requirements are progressive: a tier four data center incorporates the requirements of the first three tiers in addition to the conditions that classify it as tier four.

The requirements of a data center refer to the equipment needed to create a suitable environment. This includes reliable infrastructure necessary for IT operations, which increases security and reduces the chances of security breaches.

What to Consider When Choosing a Data Center

When choosing a data center to store data for your business, it is important to have a data center checklist. This is a list of the most important things you should keep in mind – such as the physical security of a prospective data center – when making your choice.

Typically, a good data center checklist would include the various pricing policies and extra amenities provided. A straightforward pricing policy, for instance, should have no hidden charges, and a data center with additional facilities is better than one without.

Data Center Specifications

Data center specifications refer to information about the setup in a data center. This can include the maximum uptime, redundant power systems that allow the platform to stay up regardless of power outages, the qualification of technical staff at the data center, and more.

It is common that higher data center tiers have better-qualified staffing since more expertise is required to maintain the whole platform. Data center specifications should be on the data center checklist of a customer looking at prospective data centers to store their data.

What Is a Tier One Data Center?

This is the lowest tier in the Tier Standard. A data center in this tier is simple in that it has only a single, non-redundant set of servers, network links, and other components.

Redundancy and backups in this tier are minimal or non-existent. That includes power and storage redundancies.

As such, the specifications for a data center in this tier are not awe-inspiring. If a power outage were to occur, the system would go offline, since there are no failsafes to kick in and save the day.

The specifications of a tier one data center allow for uptime of approximately 99.671%. The lack of backup mechanisms makes this tier seem like a risk for many businesses, but it can work for small internet-based companies with no real-time customer support. For companies that rely heavily on their data, however, a tier one data center would not be practical.
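Those availability percentages are easier to feel as downtime. With 8,760 hours in a year:

(1 − 0.99671) × 8,760 hours ≈ 28.8 hours of downtime per year for tier one
(1 − 0.99995) × 8,760 hours ≈ 26 minutes of downtime per year for tier four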

One of the advantages of tier one data centers is that they provide the cheapest service offering for companies on a budget.

However, a lack of redundancy means that uptime is considerably lower than in tiers two, three, and four, and maintenance requires shutting down the entire facility, adding more downtime.

What is a Tier Two Data Center?

This is the next level up after tier one. Tier two features more infrastructure and measures to ensure less susceptibility to unexpected downtime. The requirements for this data center tier include all those of the first tier, but with some redundancy.

For instance, they typically have a single path for power and cooling. However, they also have a backup generator and a backup cooling system to keep the data center environment optimal.

The specifications for the second tier allow for higher uptime than tier one data centers: approximately 99.741%.

What is a Tier Three Data Center?

Data center requirements for tier three data centers include all those of the lower tiers, plus a more sophisticated infrastructure that allows for redundancy and backups in case of unexpected events that may cause downtime.

All server equipment has multiple power sources and cooling distribution paths. In case of failure of any of the distribution paths, another takes over ensuring the system stays online. Tier three data centers must have multiple uplinks and must be dual powered.

These specifications keep downtime to roughly 1.6 hours annually (99.982% availability). Some of the equipment in tier three systems is fully fault-tolerant.

Some procedures are put in place to ensure maintenance can be done without any downtime. Tier three data centers are the most cost-effective solution for the majority of businesses.

What is a Tier Four Data Center?

Tier 4 is the highest level when it comes to data center tiers, with an availability of 99.995%. A tier 4 data center is more sophisticated regarding its infrastructure, with the full capacity, support, and procedures in place to ensure maximum and optimum uptime.

A tier 4 data center fully meets all the specifications of the other three tiers. It is also fault tolerant, meaning it can operate normally even when an item of infrastructure equipment fails.

A tier 4 data center is fully redundant, with multiple cooling systems, sources of power, and backup generators. Its 99.995% uptime works out to an estimated downtime of only about 26 minutes annually.

Those are the four data center tiers and a summary of the requirements behind each. Anyone putting together a data center checklist can treat these specifications and requirements as the essential elements to look for.

Hostdedi Is a Tier 4 Data Center

With an uptime of 99.9975%, multiple redundancies, and an annual downtime of less than 18 minutes, the Hostdedi data center is regarded as a tier 4 data center. If you would like to know more, don't hesitate to check out the different data centers offered by Hostdedi around the world, or take a more detailed look at our Southfield, Michigan data center (in an easy-to-read infographic).


Posted in:
General, Hostdedi, Web Hosting Basics
