Date:      Fri, 27 Oct 1995 12:03:47 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        rcarter@geli.com (Russell L. Carter)
Cc:        jkh@time.cdrom.com, hackers@FreeBSD.ORG
Subject:   Re: New lmbench available (fwd)
Message-ID:  <199510271903.MAA23671@phaeton.artisoft.com>
In-Reply-To: <199510270531.WAA05178@geli.clusternet> from "Russell L. Carter" at Oct 26, 95 10:31:23 pm

> So here is a market opportunity:  how do you scale a bunch of 
> httpd servers working in concert (maybe like timed?) so that when one
> goes down, the remainders elect a master and life goes on?

#ifdef TERRY_IN_FUTURIST_MODE

You connect to services instead of machines, of course, and the
underlying transport is permitted to load balance you off onto
another server.

I coined the term "server anonymity" for this four years ago.

The problem is in the protocols... they are server connection
rather than service connection oriented.

Consider: do I really give a damn where the next 512 frames of my
"Raiders of the Lost Ark" come from, as long as they get to me?  I
really want to be able to ask for them and not care *who* sends
them to me, as long as *someone* does.

The next step after the implementation of "server anonymity" is
implementation of "content addressable networking".
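To make the shape of that concrete (the structure, the replica list and
the pick_replica() stub below are strictly hypothetical, just an
illustration of naming the content instead of a host):

/*
 * Hypothetical sketch only: the client names the content and a span,
 * never a host.  The struct, the replica list and pick_replica() are
 * all invented for illustration.
 */
#include <stdio.h>
#include <stdlib.h>

struct content_request {
	const char    *title;		/* what I want */
	unsigned long  first_frame;	/* where in it to start */
	unsigned long  nframes;		/* how much of it */
};

/* Any replica holding the content will do; the client never picks one. */
static const char *replicas[] = { "vault-a", "vault-b", "vault-c" };

/* Stand-in for the transport deciding who actually serves the request. */
static const char *
pick_replica(void)
{
	return replicas[rand() % 3];
}

int
main(void)
{
	struct content_request req = { "Raiders of the Lost Ark", 1000, 512 };

	printf("want %lu frames of \"%s\" starting at %lu; served by %s\n",
	    req.nframes, req.title, req.first_frame, pick_replica());
	return 0;
}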

Domain proximity (how close, in network hops, the content sits to the
people who want it) is also what dictates network path congestion
probability.  The higher the proximity, the lower the probability.

At that point, the service that gets charged for is data vaulting
and service replication level.  If I release a movie into this type
of system, my ability to deliver it to 'N' people is based on how
many places the service exists.  So my costs are based on the number
of vaulting locations I rent and their domain proximity (in hops)
to the people who want the information.

Obviously, the wire companies have a lot of vested interest in
connection/circuit oriented delivery systems.  That's because everyone
has been killing each other and themselves to sell you the wire into
your house on the expectation that they will be able to meter your
usage and charge accordingly.

On the other side of the coin are the people who want to deluge you
with "junk email", which you'll refuse to pay metered rates for.
After all, you're not stupid.  You've paid air-time for unsolicited
cell phone calls, and you know that metering fails under those
conditions.

Turns out the real money is going to be in vaulting (distribution)
and production of content.

(BTW, I want credit for this if you repeat it or use it.  8-).)

#endif	/* TERRY_IN_FUTURIST_MODE */


For now, scaling is based on the assumption that session duration
for a series of transactions over a connection to a server will be
statistically normative, i.e., roughly the same from one connection
to the next.

So you "load balance" by rotoring the machine that gets the actual
connection by setting up a DNS rotor.  When the address is requested
for a given machine name, the query is responded to by iterating a
set of hosts such that the load is distributed round-robin.
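Purely as illustration (this is not any real name server's code), the
rotor itself is nothing more than:

/*
 * Minimal rotor sketch: each "query" for the service name gets the
 * next host in the set.  Host names are made up.
 */
#include <stdio.h>

static const char *hosts[] = {
	"www1.foo.com",
	"www2.foo.com",
	"www3.foo.com",
};
#define	NHOSTS	(sizeof(hosts) / sizeof(hosts[0]))

static const char *
next_host(void)
{
	static unsigned int next = 0;

	return hosts[next++ % NHOSTS];
}

int
main(void)
{
	int i;

	for (i = 0; i < 7; i++)		/* seven pretend DNS queries */
		printf("query %d -> %s\n", i, next_host());
	return 0;
}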

Of course, this fails to load balance under caching (which DNS does),
and it fails to load balance under non-homogeneous connection
duration (which humans do), and it fails to load balance under
disparate data served from a single server (which humans also do).

Basically, it fails to load balance.  But it *is* the way things are
currently done, and you do get some marginal increase in capability
from it.
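A toy example of the duration problem, with completely invented
numbers: hand connections out round-robin, let roughly one session in
ten run a hundred times longer than the rest, and the per-server
totals come out lopsided even though every server got the same number
of connections.

/*
 * Toy demonstration with invented numbers: connections are assigned
 * round-robin, but a few sessions run ~100x longer than the rest,
 * so the per-server totals come out uneven anyway.
 */
#include <stdio.h>
#include <stdlib.h>

#define	NSERVERS	3
#define	NCONNS		30

int
main(void)
{
	long	load[NSERVERS] = { 0, 0, 0 };
	int	i;

	srand(1);
	for (i = 0; i < NCONNS; i++) {
		/* most sessions are short; roughly one in ten is not */
		long duration = (rand() % 10 == 0) ? 6000 : 60;

		load[i % NSERVERS] += duration;	/* round-robin assignment */
	}
	for (i = 0; i < NSERVERS; i++)
		printf("server %d: %ld connection-seconds\n", i, load[i]);
	return 0;
}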



					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


