
Server Load Balancing: Algorithms

Published: Monday, May 17, 2004

Types of load balancing

Load balancing of servers by an IP sprayer can be implemented in different ways, and the method used is configured in the load balancer from the available load balancing types. Various algorithms are used to distribute the load among the available servers.

Random Allocation

In random allocation, each HTTP request is assigned to a server picked at random from the group. One server may therefore receive many more requests to process while other servers sit idle; however, on average, each server gets its share of the load thanks to the random selection.

Pros: Simple to implement.
Cons: Can overload one server while under-utilizing others.
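The behavior described above can be sketched in a few lines of Python (the server addresses are placeholders):

```python
import random

class RandomBalancer:
    """Assign each incoming request to a server picked uniformly at random."""

    def __init__(self, servers):
        self.servers = list(servers)

    def pick(self):
        return random.choice(self.servers)

# Hypothetical backend pool
balancer = RandomBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
counts = {s: 0 for s in balancer.servers}
for _ in range(9000):
    counts[balancer.pick()] += 1
# Each server handles roughly 3000 requests on average, but any single
# run can be noticeably uneven - exactly the pro and con listed above.
```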

Round-Robin Allocation

In the round-robin algorithm, the IP sprayer assigns requests to a list of servers on a rotating basis. The first request goes to a server picked at random from the group, so that if more than one IP sprayer is involved, not all first requests go to the same server. Subsequent requests follow the circular order: once a server is assigned a request, it moves to the end of the list, which keeps the servers equally assigned.

Pros: Better than random allocation because requests are divided equally among the available servers in an orderly fashion.
Cons: Not sufficient when requests differ in processing overhead, or when the servers in the group have non-identical specifications.
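A minimal sketch of this rotation, with the random starting offset modeling the multi-sprayer case described above (server names are placeholders):

```python
import itertools
import random

class RoundRobinBalancer:
    """Rotate through the server list; start at a random offset so that
    multiple sprayers do not all send their first request to the same
    server."""

    def __init__(self, servers):
        servers = list(servers)
        start = random.randrange(len(servers))
        # Rotate the list to the random start, then cycle forever.
        self._cycle = itertools.cycle(servers[start:] + servers[:start])

    def pick(self):
        return next(self._cycle)

rr = RoundRobinBalancer(["a", "b", "c"])
first_six = [rr.pick() for _ in range(6)]
# In any six consecutive picks, each of the three servers appears
# exactly twice, regardless of the random starting point.
```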

Weighted Round-Robin Allocation

Weighted round-robin is an advanced version of round-robin that eliminates the deficiencies of the plain algorithm. Each server in the group is assigned a weight, so that if one server can handle twice as much load as another, the more powerful server gets a weight of 2. The IP sprayer then assigns two requests to the powerful server for each request assigned to the weaker one.

Pros: Takes the capacity of the servers in the group into account.
Cons: Does not consider advanced load balancing requirements such as the processing time of each individual request.

The configuration of load balancing software or hardware should be decided by the particular requirement. For example, if a website is load balancing servers for static HTML pages or light database-driven dynamic pages, round robin is sufficient. However, if some requests take much longer than others to process, more advanced load balancing algorithms are needed: the load balancer should provide intelligent monitoring and direct requests to the servers in the cluster that are best able to handle them.
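The article does not specify an implementation, but one common way to realize the weighting, shown here as a sketch, is to expand each server into the rotation as many times as its weight:

```python
class WeightedRoundRobin:
    """Expand each server into the rotation `weight` times, so a server
    with weight 2 receives two requests for each one sent to a weight-1
    server."""

    def __init__(self, weighted_servers):
        # weighted_servers: list of (server, weight) pairs
        self._ring = [s for s, w in weighted_servers for _ in range(w)]
        self._i = 0

    def pick(self):
        server = self._ring[self._i]
        self._i = (self._i + 1) % len(self._ring)
        return server

# Hypothetical pool: "big" can handle twice the load of "small".
wrr = WeightedRoundRobin([("big", 2), ("small", 1)])
picks = [wrr.pick() for _ in range(6)]
# -> ['big', 'big', 'small', 'big', 'big', 'small']
```

Production balancers usually interleave the weighted picks more smoothly than this expanded-ring sketch, but the request ratio per server is the same.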

Server Fault


Why is DNS failover not recommended?

From reading, it seems like DNS failover is not recommended just because DNS wasn't designed for it. But if you have two webservers on different subnets hosting redundant content, what other methods are there to ensure that all traffic gets routed to the live server if one server goes down? To me it seems like DNS failover is the only failover option here, but the consensus is that it's not a good one. Yet services like DNSmadeeasy.com provide it, so there must be merit to it. Any comments?

tagged: dns, failover — asked Aug 30 '09 at 17:57 by Lin; edited Feb 22 '11 at 1:49 by John Gardeniers (40 votes)

13 Answers
By 'DNS failover' I take it you mean DNS Round Robin combined with some monitoring, i.e. publishing multiple IP addresses for a DNS hostname and removing a dead address when monitoring detects that a server is down. This can be workable for small, less-trafficked websites. By design, when you answer a DNS request you also provide a Time To Live (TTL) for the response you hand out. In other words, you're telling other DNS servers and caches "you may store this answer and use it for x minutes before checking back with me". The drawbacks come from this:

With DNS failover, an unknown percentage of your users will have your DNS data cached with varying amounts of TTL left. Until the TTL expires, these users may connect to the dead server. There are faster ways of completing failover than this.

Because of the above, you're inclined to set the TTL quite low, say 5-10 minutes. But setting it higher gives a (very small) performance benefit, and may help your DNS propagation work reliably even through a short glitch in network traffic. So DNS-based failover pushes you toward low TTLs, while high TTLs are a part of DNS and can be useful.
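A back-of-the-envelope simulation makes the first drawback concrete. Assuming (as a simplification) that clients refreshed their caches at uniformly random times, the remaining TTL on a cached answer is uniform, so:

```python
import random

def stale_fraction(ttl, failover_delay):
    """Estimate the fraction of clients still holding the dead server's
    address `failover_delay` seconds after the DNS record was updated.
    Model: each client last refreshed its cache at a uniformly random
    moment within the past `ttl` seconds, so the remaining lifetime of a
    cached answer is uniform on [0, ttl]."""
    trials = 100_000
    stale = sum(1 for _ in range(trials)
                if random.uniform(0, ttl) > failover_delay)
    return stale / trials

# With a 5-minute TTL, one minute after the record changes roughly
# 80% of previously cached clients still resolve the old address.
fraction = stale_fraction(ttl=300, failover_delay=60)
```

The numbers are illustrative only; real clients and resolvers refresh in bursts, and some ISP resolvers ignore TTLs entirely, as later answers note.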

The more common methods of getting good uptime involve:


- Placing servers together on the same LAN.
- Placing the LAN in a datacenter with highly available power and network planes.
- Using an HTTP load balancer to spread load and fail over on individual server failures.
- Getting the level of redundancy / expected uptime you require for your firewalls, load balancers and switches.
- Having a communication strategy in place for full-datacenter failures, and for the occasional failure of a switch / database server / other resource that cannot easily be mirrored.

A very small minority of web sites use multi-datacenter setups, with 'geo-balancing' between datacenters.

answered Aug 30 '09 at 18:39 by Jesper Mortensen (42 votes)

Comments:
- I think he's specifically trying to manage failover between two different data centres (note the comments about different subnets), so placing the servers together / using load balancers / extra redundancy isn't going to help him (apart from redundant data centres, but you still need to tell the internet to go to the one that's still up). (Cian, Aug 30 '09)
- Add anycast to the multi-datacenter setup and it becomes datacenter-failure proof. (petrus, Feb 22 '11)
- The Wikipedia entry on anycast (en.wikipedia.org/wiki/Anycast) discusses this in relation to DNS root server resilience. (dunxd, Apr 1 '11)
- Don't forget the several other reasons that DNS "round robin" resource record set shuffling is useless. (JdeBP, May 25 '11)

The issue with DNS failover is that it is, in many cases, unreliable. Some ISPs will ignore your TTLs; it doesn't happen immediately even if they do respect your TTLs; and when your site comes back up, it can lead to some weirdness with sessions when a user's DNS cache times out and they end up heading over to the other server. Unfortunately, it is pretty much the only option, unless you're large enough to do your own (external) routing.

answered Aug 30 '09 at 18:27 by Cian (13 votes)

Comment: +1 Slow and Unreliable. (Chris S, May 25 '11)

DNS failover definitely works great. I have been using it for many years to manually shift traffic between datacenters, or automatically when monitoring systems detected outages, connectivity issues, or overloaded servers. When you see the speed at which it works, and the volumes of real-world traffic that can be shifted with ease, you'll never look back. I use Zabbix for monitoring all of my systems, and the visual graphs that show what happens during a DNS failover situation put all my doubts to an end.
There may be a few ISPs out there that ignore TTLs, and there are some users still out there with old browsers, but when you are looking at traffic of millions of page views a day across 2 datacenter locations and you do a DNS traffic shift, the residual traffic coming in that ignores TTLs is laughable. DNS failover is a solid technique. DNS was not designed for failover, but it was designed with TTLs that work amazingly well for failover needs when combined with a solid monitoring system. TTLs can be set very short; I have effectively used TTLs of 5 seconds in production for lightning-fast DNS failover based solutions. You have to have DNS servers capable of handling the extra load, and named won't cut it. However, PowerDNS fits the bill when backed with MySQL replicated databases on redundant name servers. You also need a solid distributed monitoring system that you can trust for the automated failover integration. Zabbix works for me: I can verify outages from multiple distributed Zabbix systems almost instantly, update the MySQL records used by PowerDNS on the fly, and provide nearly instant failover during outages and traffic spikes.

But hey, I built a company that provides DNS failover services (www.freefailover.com) after years of making it work for large companies, so take my opinion with a grain of salt. If you want to see some Zabbix traffic graphs of high-volume sites during an outage, to see for yourself exactly how good DNS failover works, email me; I'm more than happy to share. Thanks, Scott@FreeFailover.com

answered Oct 20 '10 at 17:17 by Scott McDonald (5 votes)

The prevalent opinion is that with DNS RR, when an IP goes down, some clients will continue to use the broken IP for minutes. This was stated in some of the previous answers to the question, and it is also written on Wikipedia. Anyway, http://crypto.stanford.edu/dns/dns-rebinding.pdf explains that this is not true for most current browsers: they will try the next IP within seconds. http://www.tenereillo.com/GSLBPageOfShame.htm seems to be even stronger: "The use of multiple A records is not a trick of the trade, or a feature conceived by load balancing equipment vendors. The DNS protocol was designed with support for multiple A records for this very reason. Applications such as browsers and proxies and mail servers make use of that part of the DNS protocol." Maybe some expert can comment and give a clearer explanation of why DNS RR is not good for high availability. Thanks, Valentino. PS: sorry for the broken link but, as a new user, I cannot post more than 1.

answered Sep 29 '09 at 10:06 by vmiazzo; edited Aug 7 '11 at 8:13 by dtoubelis (5 votes)
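The fallback behavior the Stanford paper describes, trying the next A record after a short timeout, can be sketched in application code as well (the addresses below are placeholder documentation IPs, not real hosts):

```python
import socket

def connect_any(endpoints, timeout=2.0):
    """Try each (host, port) endpoint in turn with a short per-endpoint
    timeout, the way modern browsers fall back to the next A record when
    one address is unreachable. Returns a connected socket, or re-raises
    the last error if every endpoint fails."""
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as err:
            last_error = err  # dead or unreachable: try the next one
    raise last_error

# Usage (hypothetical addresses): if the first IP is down, the client
# transparently lands on the second within a couple of seconds.
# sock = connect_any([("192.0.2.10", 80), ("192.0.2.20", 80)])
```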

Comments:
- Multiple A records are designed in, but for load balancing rather than for failover. Clients will cache the results and continue using the full pool (including the broken IP) for a few minutes after you change the record. (Cian, Sep 29 '09)
- So, is what is written in crypto.stanford.edu/dns/dns-rebinding.pdf chapter 3.1 false? "Internet Explorer 7 pins DNS bindings for 30 minutes. Unfortunately, if the attacker's domain has multiple A records and the current server becomes unavailable, the browser will try a different IP address within one second." (vmiazzo, Sep 29 '09)
- Moved my subquestion here: serverfault.com/questions/69870/ (vmiazzo, Sep 30 '09)

There are a bunch of people that use us (Dyn) for failover. It's the same reason sites can either put up a status page when they have downtime (think of things like Twitter's Fail Whale)... or simply reroute the traffic based on the TTLs. Some people may think that DNS failover is ghetto... but we seriously designed our network with failover from the beginning... so that it would work as well as hardware. I'm not sure how DME does it, but we have 3 of 17 of our closest anycasted PoPs monitor your server from the closest location. When it detects from two of the three that it's down, we simply reroute the traffic to the other IP. The only downtime is for those clients that had already resolved the dead IP, for the remainder of that TTL interval. Some people like to use both servers at once... and in that case can do something like round robin load balancing... or geo-based load balancing. For those that actually care about performance... our real-time traffic manager will monitor each server... and if one is slower... reroute the traffic to the fastest one based on what IPs you link in your hostnames. Again... this works based on the values you put in place in our UI/API/Portal. I guess my point is... we engineered DNS failover on purpose. While DNS wasn't made for failover when it was originally created... our DNS network was designed to implement it from the get-go. It can usually be just as effective as hardware... without depreciation or the cost of hardware. Hope that doesn't make me sound lame for plugging Dyn... there are plenty of other companies that do it... I'm just speaking from our team's perspective. Hope this helps...

answered May 25 '11 at 19:38 by Ryan (3 votes)

The alternative is a BGP-based failover system. It's not simple to set up, but it should be bulletproof. Set up site A in one location and site B in a second, all with local IP addresses, then get a class C or other block of IPs that are portable and set up redirection from the portable IPs to the local IPs. There are pitfalls, but it's better than DNS-based solutions if you need that level of control.

answered Aug 30 '09 at 21:40 by Kyle Hodgson (1 vote)

Comment: BGP-based solutions aren't available to everyone though, and are far easier to break in particularly horrible ways than DNS is. Swings and roundabouts, I suppose. (Cian, Aug 31 '09)

If you want to learn more, read the application notes at http://edgedirector.com. They cover failover, global load balancing, and a host of related matters. If your backend architecture permits it, the better option is global load balancing with the failover option. That way, all of the servers and bandwidth are in play as much as possible. Rather than inserting an additional available server on failure, this setup withdraws a failed server from service until it is recovered. The short answer: it works, but you have to understand the limitations.

answered Oct 6 '09 at 14:22 by spenser (1 vote)

I've been using DNS failover to protect our company website for a few years now. I've never had any issues with TTL, and from my tests the 30-second TTL works great. As soon as the TZO HA monitors detect the server down, it instantly switches the DNS record to the live server. I was leery about moving our DNS, but after speaking with the TZO sales rep for their failover service and seeing some reviews of this technology, I had faith, and it hasn't let me down. DNS wasn't designed to do this, but integrating ideas and technology together often solves many problems. I'm a happy customer of DNS failover using TZO and won't be spending thousands of dollars on hardware devices and training!

answered Jun 23 '10 at 15:32 by user46587 (1 vote)

Comment: Advertise much? That aside, others have pretty well covered why this might work for some, and why you're taking your chances using it for most production environments (though it's better than nothing). (Chris S, Jun 23 '10)

I've been using DNS-based site balancing and failover for the last ten years, and there are some issues, but those can be mitigated. BGP, while superior in some ways, is not a 100% solution either, with increased complexity, probably additional hardware costs, convergence times, etc. I've found that combining local (LAN-based) load balancing, GSLB, and cloud-based zone hosting works quite well to close up some of the issues normally associated with DNS load balancing.

answered Aug 23 '10 at 1:50 by Greeblesnort (1 vote)

One option for multi-datacenter failover is to train your users. We advertise to our customers that we provide multiple servers in multiple cities, and in our signup emails and such we include links directly to each "server", so that users know that if one server is down they can use the link to the other server. This totally bypasses the issue of DNS failover by just maintaining multiple domain names. Users who go to www.company.com or company.com and log in get directed to server1.company.com or server2.company.com, and have the choice of bookmarking either of those if they notice they get better performance using one or the other. If one goes down, users are trained to go to the other server.

answered Oct 11 '10 at 22:11 by thelsdj (1 vote)

I believe the idea of failover was intended for clustering, but because it could also run solo, it still made it possible to operate in a one-to-one availability.

answered Feb 22 '11 at 0:19 by Seth (1 vote)

Another option would be to set up name server 1 in location A and name server 2 in location B, but set each one up so that all A records on NS1 point traffic to IPs for location A, and on NS2 all A records point to IPs for location B. Then set your TTLs to a very low number, and make sure your domain record at the registrar has been set up for NS1 and NS2. That way, it will automatically load balance, and fail over should one server or one link to a location go down.

I've used this approach in a slightly different way. I have one location with two ISPs and use this method to direct traffic over each link. Now, it may be a bit more maintenance than you're willing to do... but I was able to create a simple piece of software that automatically pulls NS1 records, updates A record IP addresses for select zones, and pushes those zones to NS2.

answered Aug 7 '11 at 5:13 by Amal (1 vote)

"...and why you're taking your chances using it for most production environments (though it's better than nothing)." Actually, "better than nothing" is better expressed as "the only option" when the presences are geographically diverse. Hardware load balancers are great for a single point of presence, but a single point of presence is also a single point of failure. There are plenty of big-dollar sites that use DNS-based traffic manipulation to good effect. They are the type of sites who know on an hourly basis if sales are off. It would seem that they would be the last to be up for "taking your chances using it for most production environments". Indeed, they have reviewed their options carefully, selected the technology, and pay well for it. If they thought something was better, they would leave in a heartbeat. The fact that they still choose to stay speaks volumes about real-world usage. DNS-based failover does suffer from a certain amount of latency; there is no way around it. But it is still the only viable approach to failover management in a multi-PoP scenario. As the only option, it is far more than "better than nothing".

answered Oct 11 '10 at 21:52 by spenser; edited Oct 11 '10 at 21:58 (0 votes)
