
The following is a technique I’ve used over the last decade or so for distributing web traffic (or potentially any service) across multiple servers, using just DNS. Being an old DNS hack, I’ve called this technique Poor Man’s Anycast, although it doesn’t really use anycasting.

But before we get into the technique, we need to make a brief diversion into a little-known but rather neat feature of the DNS, or more accurately of DNS forwarders, which makes this a cool way to do stuff. The feature is name server selection.

Most DNS clients, and by this I include your home PC, make use of a DNS forwarder.  The forwarder is the thing that handles (and caches) DNS requests from end clients, while a DNS server carries authoritative information about a limited set of domains and only answers queries for them. These two functions have historically been conflated rather severely, mainly due to the use of BIND for both, and why this is a bad thing is the subject for a whole other post.

Moving right along. A DNS forwarder gets to handle lots of queries for any domain that its clients ask for. When you ask for foo.example.net, it asks one of the root servers (a.root-servers.net, b.root-servers.net et al) for that full domain name (let’s assume it’s just come up and doesn’t have anything cached). It gets back a delegation from the root servers, saying basically, “I don’t know, but the GTLD (.com, .net) servers will”, and telling you where to find the GTLD servers (a.gtld-servers.net et al). You ask one of the GTLD servers, and get back an answer that says that they don’t know either, but ns1.example.net and ns2.example.net do.

You then ask (say) ns1.example.net, and hopefully you’ll get the answer you want (e.g. the IP address).
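That chain of referrals can be sketched as a toy model. The zone data below is entirely made up for illustration (the addresses are from the 192.0.2.0/24 documentation range), and real resolvers do a great deal more, but the shape of the walk from root to GTLD to authoritative server is the same:

```python
# Toy model of iterative DNS resolution via delegations.
# The ZONES dict is a stand-in for the real root, GTLD, and
# example.net servers; all names and addresses are invented.

ZONES = {
    "a.root-servers.net": {
        "refer": {"net.": ["a.gtld-servers.net"]}},
    "a.gtld-servers.net": {
        "refer": {"example.net.": ["ns1.example.net", "ns2.example.net"]}},
    "ns1.example.net": {
        "answer": {"foo.example.net.": "192.0.2.10"}},
    "ns2.example.net": {
        "answer": {"foo.example.net.": "192.0.2.10"}},
}

def resolve(name, server="a.root-servers.net"):
    """Follow referrals from the root until we reach an authoritative answer."""
    zone = ZONES[server]
    if "answer" in zone and name in zone["answer"]:
        return zone["answer"][name]
    # Otherwise, follow a referral that covers the queried name.
    for suffix, servers in zone["refer"].items():
        if name.endswith(suffix):
            return resolve(name, servers[0])
    raise LookupError(name)

print(resolve("foo.example.net."))  # 192.0.2.10
```

A real forwarder would also cache every delegation and answer it sees along the way, which is where the next part comes in.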

Now, along the way, the forwarder has been caching everything it got. Every time it asks a name server for data, it stores the time it took to reply. That means that when looking up names in example.net, the forwarder has been collecting timing and reliability data, alongside the answers themselves, and it uses that data to choose which name server to ask next time. So if ns1.example.net answers in 20 ms, but ns2.example.net answers in 10 ms, roughly two thirds of the queries for something.example.net will be sent to ns2.example.net. If the timing difference is much greater, the split of queries will be even more marked. Similarly, if a name server fails to respond at all, that fact will be reflected in the accumulated preference assigned to that server, and it will get very few queries in future; just enough so that we know we can start sending it queries again when it comes back.

This is a powerful effect, and is of particular use when distributing servers over a wide geographical area. DNS specialists know about it, because poor DNS performance affects everything, and DNS people don’t like adversely affecting everything. (They’re really quite paranoid about it. Trust me, I’m one.) But it can also be used to pick the closest server for other things as well.

After all, closeness (in terms of round-trip time) is very important in network performance (see my post on bandwidth delay products).

The technique is as follows. Let’s say we have three web servers, carrying static content. Call them, say, auckland.example.net, chicago.example.net and london.example.net. Let’s say that they’re geographically far apart. All three servers carry content for http://www.example.com/.

So, we start by configuring, on the example.com name servers:

$ORIGIN example.com.
$TTL 86400
www     IN      NS      auckland.example.net.
        IN      NS      chicago.example.net.
        IN      NS      london.example.net.

We then run a DNS server on all three web servers.  We configure the servers with a zone for www.example.com along the lines of:

$ORIGIN www.example.com.
$TTL 86400                       ; Long (24 hour) TTL on NS records etc
@       IN      SOA     auckland.example.net. webmaster.example.com. (
                                2009112900 3600 900 3600000 300 )
        IN      NS      auckland.example.net.
        IN      NS      chicago.example.net.
        IN      NS      london.example.net.
$TTL 300                         ; Short (five minute) TTL on A record
@       IN      A       10.0.0.1 ; Set this to host IP address
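Since the three zone files differ only in the SOA primary and the A record, it’s natural to generate them from a template. This is just a hypothetical helper, not part of the technique itself; the host names and addresses are the ones from the example above:

```python
# Hypothetical helper: generate the per-host zone file for
# www.example.com, so each web server advertises its own address.

ZONE_TEMPLATE = """\
$ORIGIN www.example.com.
$TTL 86400                       ; Long (24 hour) TTL on NS records etc
@       IN      SOA     {primary}. webmaster.example.com. (
                                {serial} 3600 900 3600000 300 )
        IN      NS      auckland.example.net.
        IN      NS      chicago.example.net.
        IN      NS      london.example.net.
$TTL 300                         ; Short (five minute) TTL on A record
@       IN      A       {own_ip} ; This host's own address
"""

def make_zone(primary, own_ip, serial):
    """Render the zone file for one web server, with its own IP in the A record."""
    return ZONE_TEMPLATE.format(primary=primary, own_ip=own_ip, serial=serial)

zone = make_zone("auckland.example.net", "10.0.0.1", 2009112900)
print(zone)
```

Each server would run this (or the equivalent) with its own public address substituted in.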

Now the key is that each web server serves up its own IP address. When a DNS forwarder makes a query for www.example.com, it will be directed to one of auckland.example.net, chicago.example.net or london.example.net. But as more and more queries get made, one of those three will start handling the bulk of the queries, at least if that one is significantly closer than the other two. And if auckland.example.net gets the query, it answers with its own IP address, meaning that it also gets the subsequent HTTP request or other services directed to it. The short DNS TTL (5 minutes in the example) means that the address gets queried moderately often, allowing the name server selection to “get up to speed”. Much longer TTLs on the name servers mean the data doesn’t get forgotten too quickly.

The result is that in many cases, the best server gets the request.

The technique works best if there are lots of domains being handled by the same set of servers, and there are lots of requests coming through. That way the preferences get set quickly in the major ISPs’ DNS forwarders. The downside of the technique is that faraway servers will still get some queries. This non-determinism may be a reason for not deploying this technique. If you want determinism, you’ll need to look at more industrial-grade techniques.

Now, this isn’t what players like Akamai do, and it isn’t what anycasting is about. Akamai and (some) other content distribution networks work by maintaining a map of the Internet, and returning DNS answers based on the requester’s IP address. But this is a fairly heavyweight answer to the problem. It’s not something you can implement with just BIND alone.

Anycasting on the other hand relies on advertising the same IP address in multiple places, and letting BGP sort out the nearest path. This has three disadvantages:

  1. It potentially breaks TCP. If there are equal-cost paths to a given anycast node, it’s possible one packet from a stream might go one way, while the next packet is sent to a completely different host (at the same IP address). In practice, this has proven to be less of a problem than might be expected, but there is still scope for surprises.
  2. Each of your nodes has to be separately BGP peered with its upstream network(s). That’s a lot more administration than many ISPs will do for free.
  3. Most importantly, being close in BGP terms is not the same as being close physically or in terms of round-trip time. Many providers have huge reach within a single AS, so a short AS-path (the main metric for BGP) may actually be a geographically long distance, with a correspondingly long round-trip.

The other nice thing about poor man’s anycast is that it’s dynamic; if a node falls off the world, as long as its DNS goes away too, it’ll just disappear from the cloud as soon as the TTLs time out. If a path to it gets congested, name server selection will notice the increased round-trip time and de-prefer that server.

And of course you don’t need to be a DNS or BGP guru, or buy/build expensive, complex software systems to set it up.