Unless you’re a network administrator or tech enthusiast, you’ve probably never heard of BGP (Border Gateway Protocol), but the Internet could not exist without it.
A network is a group of computers linked together, either wired or wireless. Each computer is identified by a unique ID number called an IP address. BGP allows computers in different networks to communicate with one another, and we’ve come to know the combination of all the countless interconnected networks on the planet as the Internet.
A technology that defines how exactly to communicate messages is called a protocol. When you define a protocol, you mostly define what kinds of communication can take place. So what kinds of communication are possible with BGP?
First, it lets a network advertise blocks of IP addresses to neighboring networks, a process known as originating an advertisement. Second, it gives networks a way to pass along this information to other networks, who pass it on in turn, a process known as propagating an advertisement.
So why are these two kinds of communication needed?
If there was just one network in existence, there would be no need for a protocol that allows networks to share information about addresses. Figure 1 represents a network, labeled AS1.
You can think of this as just a name, but AS actually stands for Autonomous System, which is another word for network. A network is a system that is managed by a single person or group of people, and is thus “autonomous”. Generally, each organization has its own AS, and each has a unique number assigned to it. Networks use numbers rather than names because numbers are easier for computers, so AS1 is network 1.
If there were two networks, then each network would just need a way to tell the other, “These are my IP addresses.” This type of message is called an advertisement. This makes more sense when dealing with more than two networks, but for now, it’s worth knowing that creating an advertisement for your own IPs is called originating an advertisement. The red arrow in figure 2 represents AS2 originating an advertisement to AS1.
With three networks, we need an additional mechanism. In the diagram, AS3 is originating the advertisement for its addresses. But it would be nice if the people in AS1 could also reach AS3. So AS2 now takes the advertisement from AS3 and sends it on to AS1. This is called propagating an advertisement, and the diagram represents it with the green arrow. In this simple example, AS2 could just pass on the message it got from AS3 to AS1 without any changes. But in real-world BGP, AS2 would add its own AS number to the advertisement.
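The origination and propagation steps can be sketched in a few lines of Python. This is a toy model, not real BGP; the prefix and AS numbers are invented for illustration.

```python
# Sketch (not real BGP): each AS prepends its own number when propagating,
# so the receiver sees the full path back to the originator.

def originate(as_number, prefix):
    """An AS advertises its own prefix; the AS path starts with itself."""
    return {"prefix": prefix, "as_path": [as_number]}

def propagate(advertisement, as_number):
    """A neighboring AS passes the advertisement on, prepending its own number."""
    return {
        "prefix": advertisement["prefix"],
        "as_path": [as_number] + advertisement["as_path"],
    }

# AS3 originates its addresses; AS2 propagates the advertisement to AS1:
adv = originate(3, "203.0.113.0/24")
adv_seen_by_as1 = propagate(adv, 2)
print(adv_seen_by_as1["as_path"])  # [2, 3]
```

The prepended path is what lets AS1 see not just that the addresses exist, but how to reach them.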
A more complex example
In this image, AS5 now is originating its advertisements to both AS3 and AS4, which are in turn propagating the advertisements to the remaining networks. AS1 can reach the AS5 addresses through either AS2 or AS4.
How does AS1 decide which path is better? Because each AS has been adding its own AS number to the advertisement, each one looks different:
AS2’s advertisement says the path is AS2 > AS3 > AS5
AS4’s advertisement says the path is AS4 > AS5
All other things being equal, a path with fewer networks is usually more desirable, but other factors often come into consideration.
BGP Allows Networks to Set Their Own Policy
BGP allows a network to control how it sends and receives traffic: what your network advertises, what it accepts, and what paths it prefers to use. In the example from Figure 4, AS1 might decide that the path through AS4 looks shorter. However, it might also decide that AS4 is untrustworthy and prefer the other path through AS2 > AS3. How exactly the network uses the data depends on the policies configured by the people running the network.
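A toy sketch of that decision process: prefer the shortest AS path, but let local policy veto candidates first. The "distrusted" set is an invented stand-in for the many policy knobs real BGP implementations offer; this is not the actual BGP decision algorithm.

```python
# Toy path selection: policy filtering first, then shortest AS path wins.

def best_path(candidates, distrusted=()):
    """Pick a path, avoiding any path that traverses a distrusted AS."""
    allowed = [p for p in candidates if not set(p) & set(distrusted)]
    pool = allowed or candidates  # fall back if policy filters everything out
    return min(pool, key=len)

# AS1's two advertised paths to AS5, as in the example above:
paths_to_as5 = [[2, 3, 5], [4, 5]]
print(best_path(paths_to_as5))                  # [4, 5] — fewer networks wins
print(best_path(paths_to_as5, distrusted={4}))  # [2, 3, 5] — policy overrides length
```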
Ready, Set, Dive
It’s easy to take the Internet and the technology behind it for granted. If you made it this far, you’re showing interest in looking under the hood and going beyond being a strict consumer. Keep your eye on this space for opportunities to learn more!
Downtime is inevitable, but minimizing it will make your shoppers feel safe and keep them coming back to your store. Sluggish or unavailable sites annoy customers and may even leave them thinking your site isn’t up to the task of safely handling their credit card information.
This is why site monitoring is important. It keeps Hostdedi support technicians up to date with what, if anything, needs to be done to keep your site running at peak performance.
Site Monitoring Begins With Us But Ends With You
At Hostdedi, if we manage your server, then we monitor your website for trouble. However, like most things tech-related, redundancy means reliability and so it’s worthwhile to monitor your own site. Using a variety of methods, it’s possible to monitor traffic levels, sales, speed, errors, availability, and other critical factors. The more varied your methods, the more effective your monitoring.
Our monitoring service monitors very specific services on your server, such as the status of PHP, Apache, Nginx, MySQL, and so on. In addition to service-level monitoring, we also watch the server’s memory usage, load level, and general availability. Technicians monitor these sites around the clock for stability, and will also notify you if they detect other problems specific to your site, but beyond our direct control.
An example of logs in the Hostdedi Client Portal.
On your end, third-party applications can provide many of the essentials. For example, Google Analytics can reveal your site’s ranking in search results, gauge performance, and even suggest optimizations. Free uptime checkers such as Pingdom, while prone to some false positives, can track how frequently your site is unavailable or slow to respond to queries. Most modern content management systems (CMS) provide built-in charts for tracking traffic and sales data, or have plug-ins available to do so. As with any plug-in, do your research. Not all plug-ins are created equal, and some can pose additional security risks.
Too Much of a Good Thing?
As noted above with Pingdom and other uptime checkers, false positives are a possibility. The problem of “too much site monitoring” is a very real one. Using multiple uptime checkers can give conflicting data, skew the average, and even cause downtime. If not properly configured, certain uptime checkers generate so much traffic that they impair the performance of your site.
If you plan on using such a service, settle on one and only one. If you are supplementing the built-in analytics of Magento or another CMS, consider one that tracks overall traffic or sales, as these benefit most from additional monitoring. For example, the built-in sale and data collection within Magento can be reinforced by plug-ins that either provide more detail, or track it separately and log the information in a separate database or log file. Such backups can prove invaluable if and when you have any issues with corrupt data.
Analyzing Your Data
Once your monitoring systems are up and running, organizing your data takes priority. A single instance of downtime often doesn’t supply enough information to diagnose the root cause, but an organized collection of data helps to identify patterns in frequency and duration that can lead to issue resolution. With proper site management, this data can be compared with other logged information, such as time and nature of site changes, the timing of cron jobs, and other helpful information.
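As a small sketch of what that organization can reveal, the snippet below groups downtime incidents by hour of day to surface a recurring pattern, such as outages clustering around a nightly cron job. The timestamps are invented; real data would come from your monitoring tools.

```python
# Group hypothetical downtime incidents by hour to spot recurring patterns.
from collections import Counter
from datetime import datetime

incidents = [
    "2023-04-01T02:05:00", "2023-04-02T02:07:00",
    "2023-04-03T14:30:00", "2023-04-04T02:04:00",
]

by_hour = Counter(datetime.fromisoformat(t).hour for t in incidents)
worst_hour, count = by_hour.most_common(1)[0]
print(f"Most incidents at {worst_hour:02d}:00 ({count} of {len(incidents)})")
# Most incidents at 02:00 (3 of 4)
```

A cluster like this at 02:00 would prompt you to check what else runs at that hour.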
We’re Here to Help
If you have any questions about how to add plugins to your site, or want more details about their functionality, please feel free to contact our support team by email or through your Client Portal. Many server and service-level functions are configured by default when signing up for our service, but we can give guidance on services you may want to use for your own monitoring purposes.
Troy Evans has been a support technician at Hostdedi for just over 5 years, and has been helping to improve and refine the monitoring procedures and services.
Latency is the cause of lag, and lag is public enemy #1 in eCommerce. Nothing is better at killing an online experience and driving users away from a site, possibly never to return. Most of us probably assume lag is an issue with bandwidth, but often the problem can be traced back to high latency.
Read on to learn more about what causes latency and how web hosts like Hostdedi use Internet Exchanges to reduce it.
What Is Latency?
Latency refers to how long data takes to travel between the device requesting the data and the device providing it. Usually, the distance between these two points requires the use of other devices along the way. Each additional device, or hop, has the potential to increase latency. Indirect routes have more hops and are therefore undesirable.
How Does an Internet Exchange Work?
Local Network Service Providers (NSP, but also known as Internet Service Providers or ISP) typically have an inefficient infrastructure and do not reliably provide the most direct route. For example, a user in Detroit, MI attempts to contact a network in Ann Arbor, MI, but the NSP routes the traffic through Chicago, IL. This indirect path pushes traffic through multiple routers and hundreds more miles of fiber optic cable. Typically, each additional mile increases latency by about 9 microseconds due to the light in those cables having to travel farther.
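A back-of-the-envelope check of that figure, assuming light in fiber travels at roughly two-thirds of its vacuum speed: the result lands around 8 µs per mile, the same ballpark as the figure quoted above. The city distances are rough approximations for illustration.

```python
# Estimate one-way propagation delay over fiber from distance alone.
SPEED_OF_LIGHT_MPS = 299_792_458   # meters per second, in vacuum
FIBER_FACTOR = 2 / 3               # typical slowdown from the fiber's refractive index
METERS_PER_MILE = 1609.344

def fiber_latency_us(miles):
    """One-way propagation delay in microseconds over `miles` of fiber."""
    seconds = (miles * METERS_PER_MILE) / (SPEED_OF_LIGHT_MPS * FIBER_FACTOR)
    return seconds * 1e6

print(round(fiber_latency_us(1), 1))  # ≈ 8.1 µs per mile
# Detroit -> Chicago -> Ann Arbor (~520 mi) vs. a direct path (~45 mi):
print(round(fiber_latency_us(520) - fiber_latency_us(45)))  # extra µs for the detour
```

Propagation delay is only part of the story; each router hop adds queuing and processing time on top.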
Reputable web hosting companies are well aware of this problem, and even though they may be in competition with one another, their solution is cooperation. This cooperation is known in the industry as an Internet Exchange (IX) and provides a more direct path by eliminating the need for ISPs to carry local traffic.
By allowing companies to peer directly with one another and exchange traffic over a more direct path, there are fewer hops. This, in turn, generally means lower network latency.
At Hostdedi, we minimize latency for our clients by participating in the Detroit Internet Exchange (DET-IX).
Cage Match: DET-IX Versus NSP
IX participants are connected in a shared network with other members, allowing them to communicate locally and bypass the NSP. Using a tool like My traceroute (MTR), we can compare latency between DET-IX and NSP.
The IP addresses show traffic leaving our network in Southfield, MI and then traveling through Cleveland, OH before finally reaching https://cloudflare.com in Toronto. While a better route, this is still similar to the pathway described above, where a user in Detroit, MI attempts to contact a network in Ann Arbor, MI.
In the industry, the effect of indirect paths backtracking over their own route is often known as “tromboning” and it is universally viewed as unfavorable for latency.
These extra hops add latency as the traffic passes through each router. As the final hop’s row shows, the average response time is 10.4 ms. This is good, but it can be improved.
Traffic again leaves our network in Southfield, but through DET-IX, where Cloudflare is also participating. The path uses three fewer hops, avoids tromboning, and improves the average response time for 100 packets to 0.5 ms, nearly a 10 ms reduction.
In eCommerce, faster is almost always better. Shoppers have nearly no patience for lag, the modern-day equivalent of long lines. Fast stores sell more than slow stores, and better page-load times elevate your ranking on Google search engine results pages (SERPs), driving more traffic to your site.
Good and Getting Better!
This shared network relies on Border Gateway Protocol (BGP) to exchange routing and reachability information, and we use technology that both supports its use and accommodates future expansion. As DET-IX participation continues to grow, so does our ability to accept routes from new members.
Andrew has been working at Hostdedi for six years. He started out with the Data Center Operations department before making the leap to Network Operations. Andrew has eight Juniper Networks certifications, with the highest level achieved being JNCIP-DC.
Why did WordPress become so popular? Partly, it is because WordPress is easy to use and because its theme and plugin ecosystems are so large. But just as important are the values that fuel the project: freedom and control. Freedom to use WordPress as you see fit. Control over every aspect of your site.
That ethos of freedom and control was embraced by many other projects, including Magento, WooCommerce, and Craft CMS. If you build a site or store on Hostdedi web hosting with these applications, you have complete control.
There are alternatives to this model. SaaS publishing platforms are designed to hide the technical details of hosting and publishing. They provide a simple interface and an acceptable – if bland and uninspiring – design. But, unlike a WordPress site, this type of publishing platform does not exist to fulfill the needs of creatives and publishers. It exists to serve the needs of the business that owns the platform.
Custom Domains Are Not Optional For Publishers
Last year, a prominent SaaS publisher announced that it would no longer offer custom domains. This announcement came in the wake of others that “sunsetted” the features that attracted publishers to the platform in the first place.
Users with custom domains would be able to use them for the foreseeable future. New accounts would be served from the platform’s domain with the publication’s name demoted to the URL’s path component.
In your day-to-day experience as a publisher and writer it may not seem to matter much. However, a domain is hugely important for branding, for search engine optimization, and for control. Without a domain, you don’t own the name of your site. You can build a business and an audience around your content and have it taken away in an instant. That can’t happen with a properly registered domain.
A domain can be pointed anywhere. If you have control over your site’s domain, you can redirect to any server on the internet. You decide which company hosts the site, and you can change your mind. Without a domain, changing hosting providers means changing the name of your site.
Links move with the domain. Incoming links remain an important part of SEO. Incoming links persist when a site moves only if the site owner can control the domain and any redirects. A publication that builds a link profile on a platform that doesn’t offer custom domains cannot take those links with them when they leave.
The platform’s policies override the publisher’s needs. Without a custom domain, the cost of switching to a new platform or hosting provider is high. That cost may force publishers to stay with a platform as it changes in ways that don’t benefit the publisher.
It is hard to overestimate the importance of a custom domain to site owners. It isn’t an optional perk. It’s a necessity. For these reasons, many publishers and site owners who embraced SaaS publishing platforms are pulling out.
With WordPress and Craft CMS hosted on traditional web hosting or a platform like the Hostdedi Cloud, you will always have complete control over your site, your content, and the business you build on them.
This week’s Whiteboard Wednesday saw Jason, one of our hosting infrastructure experts, take marker to wall with an introduction to clusters: what they are, how they work, and why they may be right for you. Here’s our summary of what Jason’s 30-minute session revealed.
What Is a Cluster?
A cluster is an enterprise-level hosting solution that provides the necessary infrastructure for high-traffic sites that need flexibility. Clusters manage this by spreading the hosting load across what are called nodes, which increases performance and improves concurrent user capacity.
How Does a Cluster Work?
As already covered, clusters work by spreading incoming requests and hosting load across several different nodes. These nodes, also known as web application servers, are where your website is stored and served.
A load balancer is responsible for making sure that the nodes and their content are managed and served accurately and quickly. By using multiple nodes, server clusters are able to eliminate single points of failure and increase the availability of a website beyond that of single-server hosting solutions.
In addition to nodes, clusters can also include a range of other add-ons and elements. These include, but aren’t limited to:
Additional Web Application servers
How Does Load Balancing Work?
Think of load balancing like the line into your favorite venue. There are a lot of people wanting to get in but there isn’t enough capacity inside. The venue is your website.
Instead of trying to get as many people into the venue as possible – causing a cramped and less enjoyable experience – you split the line up and send groups to different parts of your venue (that is, to different nodes).
If you still find the venue filling up, then it’s very easy to expand the size of your venue. This means that you’re not restricted by a set number of nodes and add-ons, and can keep expanding as much as you need to meet your capacity requirements.
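The venue analogy maps onto a very small load-balancer sketch. This shows round-robin distribution only; real load balancers also weigh health checks and current load, and the node names here are invented.

```python
# Minimal round-robin load balancer: requests cycle through the node pool,
# and scaling up is just appending another node.

class LoadBalancer:
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._next = 0

    def add_node(self, node):
        """Expanding the venue: new capacity joins the rotation immediately."""
        self.nodes.append(node)

    def route(self, request):
        """Send each incoming request to the next node in rotation."""
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return f"{request} -> {node}"

lb = LoadBalancer(["node-1", "node-2", "node-3"])
for req in ["req-a", "req-b", "req-c", "req-d"]:
    print(lb.route(req))
# req-a -> node-1, req-b -> node-2, req-c -> node-3, req-d -> node-1
```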
Clusters are a great option for larger businesses with sites that need to meet high-volume traffic requirements and reliability standards. They are also flexible and capable of growing with your website and your business.
If you like the flexibility of clustered hosting but don’t think you need such a large solution, why not explore the promise of our cloud solutions?
How does a browser load a web page? It uses a phonebook. Not an old-fashioned leatherbound book or a switchboard operator, but a service known as DNS. Each page of that DNS “phonebook” is what are known as DNS Records.
In other words, when you look for nexcess.net, your computer looks in the DNS “phonebook”, finds the number for the site, and connects you to it. Of course, the whole process is much quicker than this.
This article looks at what DNS records are, the different types you’ll find, and why they’re incredibly important for the success of any website.
It was 1983. The internet was young, and IT professionals had begun to get fed up with having to remember long series of numbers in order to connect with other machines. Networks had spread beyond just a few units, and in an effort to future-proof, longer series of numbers were proposed. There was just one problem: how to make these numbers more consumer-friendly?
Paul Mockapetris published two papers on the subject, creatively named RFC 882 and RFC 883. Mockapetris’ system expanded prior use of a hosts.txt file into a large system capable of managing multiple domains in a single location. That system is known as DNS, or Domain Name System.
Without DNS, the Internet wouldn’t be what it is today. We might even need a Rolodex to visit our favorite sites!
With DNS, computers still require the IP (Internet Protocol) address in order to connect with a server. Yet with 4,294,967,296 possible IPv4 addresses, it makes a lot more sense to convert those numbers into something more easily recognizable.
DNS gives unique, human-readable names to the IP addresses of computers, services, and other resources that are either part of a private network or part of the Internet.
The Hostdedi DNS network has 100% uptime with multiple redundancies in place
The domain name system prevents having to remember a long series of numbers. Users are able to type in a domain name and then the domain name system will automatically match those names with the IP address and route connections.
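In miniature, the “phonebook” is just a mapping from names to addresses. The domains and IPs below are reserved documentation values, not real records, and a real resolver involves caching and a hierarchy of servers.

```python
# A toy resolver: look up a domain name, get back an IP address.
DNS_RECORDS = {
    "example.com": "192.0.2.10",
    "shop.example.com": "192.0.2.20",
}

def resolve(domain):
    """Return the IP for a domain, or fail the way DNS does (NXDOMAIN)."""
    ip = DNS_RECORDS.get(domain)
    if ip is None:
        raise LookupError(f"NXDOMAIN: no record for {domain}")
    return ip

print(resolve("example.com"))  # 192.0.2.10
```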
At the center of all this, the role of the hosts.txt file lives on in the form of vast servers dedicated to managing domain names, and at the heart of these servers are DNS records.
IP addresses work in a similar fashion to that of street addresses or phone numbers in an address book. While people browse the Internet, they look up their favorite site much like they look up a friend’s number. From there, the system provides them with the friend’s number and they can contact them. With DNS, the second part of this sequence is automated. This requires DNS records from a DNS server.
During the creation of DNS, servers were dedicated solely to managing DNS and related information. Within each of these servers are DNS records that tie entries to a domain.
Any device connected to a computer network, whether it is a PC, router, printer, or any other device with an IP address, is referred to as a ‘host’. With the sheer number of hosts around the world, engineers needed a way to track devices without resorting to memorizing numbers.
As explained earlier, DNS records came along with DNS as a tool for system admins and users to seek out authoritative information on websites or other services they’re trying to access.
There are two types of DNS Records. These are:
Records stored in Domain Name System servers
Records stored on a user’s machine
Records stored on a Domain Name System server are covered in more detail below, including what types of records exist and how they function.
Records stored on a user’s machine are also known as the DNS cache. This cache lists every website the machine has previously looked up, including failed or merely attempted visits.
When you watch a crime drama and a culprit’s computer is taken to be analyzed for the sites they have visited, a DNS cache is usually what would be checked for unauthorized activity.
However, a DNS cache is usually temporary and has a limited lifespan before being removed.
DNS Syntax Types Explained
While there are an abundance of record types in existence, below you’ll find nine of the most commonly used DNS records. For more information, don’t forget to check our DNS Records knowledge base, as well as how to configure DNS records for your site.

A – A records are usually referred to as address records, and occasionally host records. They are the most commonly used records, mapping hostnames of network devices to IPv4 addresses: a website address book.

AAAA – Serves the same purpose as A records, except that hostnames are mapped to an IPv6 address instead of an IPv4 address. As opposed to the 32 bits of an IPv4 address, an IPv6 address contains 128 bits. An example of an IPv6 address is FE80:0000:0000:0000:0202:B3FF:FE1E:8329.

CNAME – Acts as an alias for domains. A CNAME record points one name at the actual domain name. If the address nexcess.net were typed into your browser, it would redirect to the URL www.nexcess.net.

MX – MX records map a domain name to the message transfer agents (mail servers) responsible for receiving its email, each with an assigned preference value. Large organizations often use multiple mail servers to process messages en masse. Through SMTP (Simple Mail Transfer Protocol), emails are routed properly to their intended hosts.

NS – Also known as name server records; designates an authoritative name server for a given host.

PX – The technical description in RFC 2163 defines the PX record as a ‘pointer to X.400/RFC822 mapping information’. It is not currently used by any application.

PTR – Referred to as reverse-lookup pointer records. PTR records are used to look up domain names based on IP addresses.

TXT – A type of DNS record that stores text-based information. It is primarily used to verify ownership of a domain, and it also holds SPF (Sender Policy Framework) data, which helps prevent the delivery of fake emails that appear to originate from a user’s domain.
SOA – Possibly the most critical of them all, the Start of Authority record stores administrative details about the zone, including when the domain was last updated.
The general purpose of a DNS lookup is to pull information from a DNS server. This is akin to someone looking up a number in a phone book (hence the term ‘lookup’ in conjunction with DNS).
Computers, mobile phones, and servers that are part of a network need to be configured to know how to translate domain names and email addresses into discernible information. A DNS lookup exists solely for this purpose. There are primarily two types of DNS lookups: forward DNS lookups and reverse DNS lookups.
Forward DNS Lookups
Forward DNS allows networked devices to translate an email address or domain name into the address of the device that will handle the communication. Although invisible to most users, forward DNS lookup is an integral function of IP networks, in particular the Internet.
Reverse DNS Lookups
Reverse DNS (rDNS/RDNS) pulls domain name info from an IP address. It is also known as Inverse DNS. Reverse DNS lookups are used to filter undesirable data such as spam. Spam can be sent through any domain name that a spammer desires. Spammers can use this technique to fool regular customers into thinking that they’re dealing with legitimate entities. This can include organizations such as Bank of America or Paypal.
Email servers that receive emails can validate them by checking the sender’s IP with a reverse DNS request. If an email is legitimate, the reverse DNS result should match the domain of the email address. While this is useful in verifying the integrity of emails, it does not come without a cost: an ISP has to set up the records if the legitimate mail servers do not already have appropriate records in place to respond properly.
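The validation just described, sometimes called forward-confirmed reverse DNS, can be sketched with two toy lookup tables. All names and IPs here are invented stand-ins for real forward and reverse DNS zones.

```python
# Forward-confirmed reverse DNS in miniature: the sender's IP should
# reverse-resolve to a hostname whose forward lookup returns the same IP.
FORWARD = {"mail.example.com": "192.0.2.25"}
REVERSE = {
    "192.0.2.25": "mail.example.com",     # honest record
    "198.51.100.9": "mail.example.com",   # spammer claiming a name it doesn't own
}

def looks_legitimate(sender_ip):
    hostname = REVERSE.get(sender_ip)
    return hostname is not None and FORWARD.get(hostname) == sender_ip

print(looks_legitimate("192.0.2.25"))    # True  — forward and reverse agree
print(looks_legitimate("198.51.100.9"))  # False — forward lookup contradicts the claim
```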
What Are Your DNS Records?
You can check your own DNS records with the Hostdedi DNS Checker. Simply enter the site address you want to check and the type of record you want to see.
You can also use this tool to check third-party DNS records and confirm the identity of certain domains to make sure they are not fake.
Ultimately, DNS makes life easier for the end user that can’t memorize 32-bit or 128-bit IP addresses. It’s easier to just type a name into the browser bar and let DNS figure out the rest. DNS resource records are fundamental for DNS to be able to work, and the Internet wouldn’t be what it is today without them.
If you’re looking for more information on site performance and benchmarking, don’t forget to check our article on TTFB (Time To First Byte) and why it may not be as important as you’ve been led to believe. Also, check out our summary of data center tiers and use the stats to figure out which data center tier you’re hosting with.
In the world of data centers, reliability is one of the most important factors. The more reliable you are, the more likely clients are going to want to use you. After all, who wants a data center that isn’t online?
Luckily, the Telecommunications Industry Association (TIA) published a standard defining four levels of data centers with regard to their reliability. The aim was that this standard would inform potential data center users about which center is best for them. While brief, the standard laid the groundwork for how some data centers would pull ahead of others in the future.
This article breaks data centers down into the four tiers and looks at how they differ. Combine this with our article on how to choose a data center location, and you’ll know where the best place to host your website is.
Check out our Infographic below to quickly see the main differences between data center tiers, or keep reading for more detail.
The Classification of Data Centers
Data centers are facilities used to house computer systems and associated components. A data center comprises redundant power supplies, data communications connections, environmental controls, and various security devices.
Tier one data centers have the lowest uptime, and tier four data centers have the highest. The requirements are progressive: a tier four data center incorporates the requirements of the first three tiers in addition to the conditions that classify it as tier four.
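The uptime percentages that define the tiers translate directly into allowed downtime per year. The quick calculation below uses the commonly cited tier percentages; the sections that follow discuss each tier in turn.

```python
# Convert an uptime percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(uptime_pct):
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for tier, pct in [(1, 99.671), (2, 99.741), (3, 99.982), (4, 99.995)]:
    print(f"Tier {tier}: {annual_downtime_minutes(pct):7.1f} minutes/year")
```

The jump from tier one (about 29 hours) to tier four (under half an hour) shows why the higher tiers demand full redundancy.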
The requirements of a data center refer to the equipment needed to create a suitable environment. This includes reliable infrastructure necessary for IT operations, which increases security and reduces the chances of security breaches.
What to Consider When Choosing a Data Center
When choosing a data center to store data for your business, it is important to have a data center checklist. This is a list of the most important things you should keep in mind – such as the physical security of a prospective data center – when making your choice.
Typically, a good data center checklist would include the various data center pricing policies and extra amenities provided. A good, straightforward pricing policy, for instance, should have no hidden charges, and a data center with additional facilities is better than one without.
Data Center Specifications
Data center specifications refer to information about the setup in a data center. This can include the maximum uptime, redundant power systems that allow the platform to stay up regardless of power outages, the qualification of technical staff at the data center, and more.
It is common that higher data center tiers have better-qualified staffing since more expertise is required to maintain the whole platform. Data center specifications should be on the data center checklist of a customer looking at prospective data centers to store their data.
What Is a Tier One Data Center?
This is the lowest tier in the Tier Standard. A data center in this tier is simple in that it has only one set of servers, network links, and other components.
Redundancy and backups in this tier are minimal or non-existent, including power and storage redundancies.
As such, the specifications for a data center in this tier are not awe-inspiring. If a power outage were to occur, the system would go offline, since there are no failsafes to kick in and save the day.
The specifications of a tier one data center allow for uptime of approximately 99.671%. The lack of backup mechanisms makes this data center tier seem like a risk for many businesses, but it can work for small internet-based companies with no real-time customer support. However, for companies that rely heavily on their data, a tier one data center would not be practical.
One of the advantages of tier one data centers is that they provide the cheapest service offering for companies on a budget.
However, the lack of redundancy means that uptime is considerably lower than in tiers two, three, and four, and maintenance requires shutting down the entire facility, causing yet more downtime.
What is a Tier Two Data Center?
This is the next level up from tier one. Tier two features more infrastructure and measures to ensure less susceptibility to unexpected downtime. The requirements for this data center tier include all those of the first tier, but with some redundancy.
For instance, they typically have a single path for power and cooling. However, they also have a backup generator and a backup cooling system to keep the data center environment optimal.
The specifications for the second tier allow for higher uptime than tier one data centers: approximately 99.741%.
What is a Tier Three Data Center?
Tier three data center requirements include all those of tiers one and two, plus a more sophisticated infrastructure that allows for redundancy and backups in case of unexpected events that may cause downtime.
All server equipment has multiple power sources and cooling distribution paths. In case of failure of any of the distribution paths, another takes over ensuring the system stays online. Tier three data centers must have multiple uplinks and must be dual powered.
These specifications ensure a maximum of about two hours of downtime annually (approximately 99.982% uptime). Some of the equipment in tier three systems is fully fault-tolerant.
Some procedures are put in place to ensure maintenance can be done without any downtime. Tier three data centers are the most cost-effective solution for the majority of businesses.
What is a Tier Four Data Center?
Tier 4 is the highest level when it comes to data center tiers, with an availability of 99.995%. A tier 4 data center is the most sophisticated in terms of infrastructure, with the full capacity, support, and procedures in place to ensure maximum uptime.
A tier 4 data center fully meets the specifications of the other three tiers. It is fault tolerant: it can operate normally even when a piece of infrastructure equipment fails.
A tier 4 data center is fully redundant, with multiple cooling systems, power sources, and backup generators. Its 99.995% uptime corresponds to an estimated downtime of only about 26 minutes annually.
Those are the four data center tiers and a summary of the requirements used in their design. Anyone building a data center checklist can use these specifications to identify the essential elements to look for.
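As a quick sanity check, the annual downtime implied by each tier's uptime percentage can be computed directly. This is a rough illustration assuming a 365-day year; the percentages are the ones quoted above.

```python
# Annual downtime implied by an uptime percentage (365-day year assumed).

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def annual_downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime per year for a given uptime percentage."""
    return (100.0 - uptime_pct) / 100.0 * MINUTES_PER_YEAR

tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}

for name, uptime in tiers.items():
    minutes = annual_downtime_minutes(uptime)
    print(f"{name}: {uptime}% uptime -> "
          f"{minutes / 60:.1f} hours ({minutes:.0f} minutes) of downtime per year")
```

Tier one works out to roughly 28.8 hours of downtime a year, while tier four allows only about 26 minutes.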
Hostdedi Is a Tier 4 Data Center
With an uptime of 99.9975%, multiple redundancies, and an annual downtime of less than 18 minutes, the Hostdedi data center qualifies as a tier 4 data center. If you would like to know more about the Hostdedi data center, don’t hesitate to check out the different data centers offered by Hostdedi around the world or take a more detailed look at our Southfield, Michigan data center (in an easy-to-read infographic).
New web hosting clients often find the distinction between a web hosting provider and a domain name registrar confusing. After all, a website has a name and it’s not much use without one. Shouldn’t paying for a website be the same as paying for its name? Why would giving a website a name be complicated at all?
In fact, although some web hosting providers do offer domain name registration services, they’re actually quite different and the organizations that manage each service are separate.
Web hosting providers connect your site to the internet and provide the server it runs on. Domain name registrars reserve a domain name for use by your site.
What Is Web Hosting?
Web hosting provides a server (or part of a server) for a website’s files and database to be stored on. A server is just a powerful computer. Web hosting also provides the bandwidth that connects a site to the internet. Every computer that is connected to the Internet has an address — an IP number — that looks like this: “198.51.100.23”. It’s more or less like a phone number.
It wouldn’t be convenient for everyone who wants to visit your website to type in an IP number. They’re hard to remember, they’re in limited supply, and “nexcess.net” is nicer to look at than “184.108.40.206”.
So, we have domain names: a name that is easy for humans to understand. When you type a domain name into your browser, a Domain Name Server converts it into the associated IP address so that the servers and the routers on the internet know where to send your request.
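You can see this name-to-address mapping from Python's standard library, which asks the system's resolver to do the conversion. This is a minimal sketch; `localhost` is used so it works offline, and for a real site the address returned depends on the DNS records at the time you run it.

```python
import socket

def resolve(domain: str) -> str:
    """Return the IPv4 address the resolver maps this name to."""
    return socket.gethostbyname(domain)

# "localhost" conventionally maps to the loopback address:
print(resolve("localhost"))  # typically 127.0.0.1
# For a real site you would pass its domain, e.g. resolve("nexcess.net").
```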
Domain names are managed by a set of organizations that are not directly connected to web hosting providers.
Domain Name Registrars
When you need a domain name to use with your site, you go to a domain name registrar. These companies (which are sometimes web hosting providers too) will, for a small fee, reserve a domain name for you to use for a limited time.
The registrars don’t actually own the domain name registry, which has ultimate control over the domain names under a top-level domain like “.com” or “.net”, but we needn’t concern ourselves with that wrinkle here.
So what exactly do you get when you pay a domain name registrar? In a nutshell, you get an entry in the name servers of the top-level domain. Those entries mean only you can use the domain name. The records also point to a Domain Name Server, a server that holds all the domain name records for your domain.
That sounds complex, but the domain name records are really just like the contacts app on your phone, which has a list of names associated with a list of numbers. To find a person’s number, you look up a name.
In simplified terms, when someone puts your domain name in their browser, the browser asks the name server of the root domain (the .com bit) where to find the Domain Name Servers for that domain. The root name server tells the browser where to find your DNS server, which is often part of your web hosting. The browser then goes to your name server, which tells it the IP address of your website.
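The delegation chain described above can be sketched as a toy lookup. Every name and address below is invented purely for illustration; real resolution involves proper DNS messages, caching, and many more servers.

```python
# Toy model of DNS delegation: root -> TLD name servers -> your name server.
# All names and addresses here are made up for illustration only.

ROOT = {"com": "tld-ns.example"}               # root knows where .com's servers are
TLD = {"example.com": "ns1.hosting.example"}   # .com servers know each domain's name server
AUTHORITATIVE = {                              # your name server holds the actual records
    "ns1.hosting.example": {"example.com": "203.0.113.10"},
}

def lookup(domain: str) -> str:
    """Follow the delegation chain from root to the site's IP address."""
    tld = domain.rsplit(".", 1)[-1]
    _tld_server = ROOT[tld]              # step 1: ask root for the TLD's servers
    ns = TLD[domain]                     # step 2: ask the TLD servers for the domain's name server
    return AUTHORITATIVE[ns][domain]     # step 3: ask that name server for the IP address

print(lookup("example.com"))  # -> 203.0.113.10
```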
In reality, it’s more complicated than that, with layers of caching and hierarchies of name servers, but hopefully you now have a better understanding of what happens when someone uses your site’s domain name and how domain name registrars are different from web hosting companies like Hostdedi.