Date:      Tue, 14 Nov 1995 12:51:44 -0600 (CST)
From:      Joe Greco <jgreco@brasil.moneng.mei.com>
To:        terry@lambert.org (Terry Lambert)
Cc:        terry@lambert.org, luigi@labinfo.iet.unipi.it, hackers@FreeBSD.org
Subject:   Re: Multiple http servers - howto ?
Message-ID:  <199511141851.MAA29115@brasil.moneng.mei.com>
In-Reply-To: <199511141728.KAA20264@phaeton.artisoft.com> from "Terry Lambert" at Nov 14, 95 10:28:06 am

> > > #1.  Via DNS.  The requesting hosts are rotored through a list of the
> > > addresses.
> > > 
> > > It isn't a very good scheme, mostly because caching exists.
> > 
> > Which is why you lower the TTL  :-)  or maybe just not worry about it,
> > because when you start examining the Bigger Picture, you realize that a site
> > large enough to require multiple servers is receiving zillions of requests,
> > and different data will be cached by each domain server, still effectively
> > spreading the load over multiple servers.
> 
> *My* cache doesn't have to honor *your* TTL.  In fact, if my provider
> is Sprint or one of several others, it *won't* honor your TTL.

If you are using Sprint for domain service, I pity you.  Nevertheless, the
TTL only assists in randomization.
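
(For the record, the TTL hack itself is trivial to set up -- here is a
hedged sketch of what the zone file entries might look like, with a
deliberately low TTL of 300 seconds; the name and addresses are made up
for illustration:

```
; Round-robin A records: the name server rotors through these answers.
; 300-second TTL so cached answers expire quickly and re-randomize.
www     300   IN  A   192.0.2.10
www     300   IN  A   192.0.2.11
www     300   IN  A   192.0.2.12
www     300   IN  A   192.0.2.13
```

The low TTL just means a caching server re-asks sooner, picking up a
different rotation of the list each time.)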

> You're still doing round-robin address assignment, which expects that
> clients will behave statistically identical to one another.  And they
> won't, even if the TTL is honored.

Somebody else who doesn't really understand that a random function which
may not look random for small values of x is still plenty random for
large values of x....  :-)

The TTL hack simply shrinks the range of "small values of x" where the
distribution may not look random.

If I have 4 addresses and 5,000 sites do a DNS lookup on me, I will
state that at least 1,000 sites will get assigned to each address.  That
does not imply that the loading will be identical or perfectly equal,
but it should be reasonably distributed.  I may not care if the
distribution is 1000/1000/1000/2000, because it is still better than
5000 against a single box - and I would bet that it would be more evenly
distributed than that, most of the time.

For smaller cases, you don't care because you don't need multiple server
platforms to begin with.
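
(That back-of-the-envelope claim is easy to check by simulation.  A quick
sketch -- the address labels and the assumption that each site's resolver
caches one independently-rotored answer are mine, not anything measured:

```python
import random
from collections import Counter

# Hypothetical model: 5,000 client sites each resolve a name served by
# 4 round-robin addresses. Each site's name server caches one answer,
# so we approximate the assignment as one uniform random pick per site.
ADDRESSES = ["addr1", "addr2", "addr3", "addr4"]
NUM_SITES = 5000

random.seed(1)  # fixed seed so the run is reproducible
assignments = Counter(random.choice(ADDRESSES) for _ in range(NUM_SITES))

# Each address lands near the 1,250 average; nothing close to the
# worst case of all 5,000 sites hammering one box.
for addr, count in sorted(assignments.items()):
    print(addr, count)
```

Run it a few times with different seeds and the buckets stay within a few
percent of each other -- exactly the "reasonably distributed, not perfectly
equal" behavior described above.)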

> > The case where you might lose is if a hundred workstations at the same site
> > suddenly decide to all run Netscape on a particular URL at once, all hundred
> > workstations receive the same cached answer from the local domain server,
> > and they proceed to pound the box into oblivion.  This is the "University
> > Intro to CS class" problem.  It's worse if they are pounding on your news
> > server  :-(  which HAS happened to me.
> 
> Or one of several server boxes with 40 X terminals hanging off it.

Both of which are cases where the sample size "x" isn't large enough (well,
of course, in the case of the news server, there was only one news server).

... Joe

-------------------------------------------------------------------------------
Joe Greco - Systems Administrator			      jgreco@ns.sol.net
Solaria Public Access UNIX - Milwaukee, WI			   414/342-4847


