From owner-freebsd-hackers Sat Oct 28 00:06:26 1995
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.6.12/8.6.6)
	id AAA18603 for hackers-outgoing; Sat, 28 Oct 1995 00:06:26 -0700
Received: from rah.star-gate.com (rah.star-gate.com [204.188.121.18])
	by freefall.freebsd.org (8.6.12/8.6.6) with ESMTP id AAA18598
	for ; Sat, 28 Oct 1995 00:06:22 -0700
Received: from rah.star-gate.com (localhost.v-site.net [127.0.0.1])
	by rah.star-gate.com (8.6.12/8.6.9) with ESMTP id AAA01197;
	Sat, 28 Oct 1995 00:05:58 -0700
Message-Id: <199510280705.AAA01197@rah.star-gate.com>
X-Mailer: exmh version 1.6.2 7/18/95
To: Terry Lambert
cc: rcarter@geli.com (Russell L. Carter), jkh@time.cdrom.com,
	hackers@FreeBSD.ORG
Subject: Re: New lmbench available (fwd)
In-reply-to: Your message of "Fri, 27 Oct 1995 12:03:47 PDT."
	<199510271903.MAA23671@phaeton.artisoft.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Date: Sat, 28 Oct 1995 00:05:58 -0700
From: "Amancio Hasty Jr."
Sender: owner-hackers@FreeBSD.ORG
Precedence: bulk

I am a bit lost here.  What is the big deal with having a network
object server whose target objects are replicated across a network?

Oh, I don't know; there have been quite a number of papers and
architectures.  CORBA comes to mind, not to mention DECnet's remote
distributed object architecture, etc...

	Cheers,
	Amancio

>>> Terry Lambert said:
> > So here is a market opportunity: how do you scale a bunch of
> > httpd servers working in concert (maybe like timed?) so that when
> > one goes down, the remaining servers elect a master and life goes
> > on?
>
> #ifdef TERRY_IN_FUTURIST_MODE
>
> You connect to services instead of machines, of course, and the
> underlying transport is permitted to load balance you off onto
> another server.

Boy, isn't this very similar to Novell's service schemes?  Except that
I don't think Novell provides for load balancing; then again, that is
probably not too hard to implement if you can agree on the metrics...

> I coined the term "server anonymity" for this four years ago.
>
> The problem is in the protocols... they are server connection
> rather than service connection oriented.
>
> Consider: do I really give a damn where the next 512 frames of my
> "Raiders of the Lost Ark" come from, as long as they get to me?  I
> really want to be able to ask for them and not care *who* sends
> them to me, as long as *someone* does.
>
> The next step after the implementation of "server anonymity" is
> the implementation of "content addressable networking".
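
As an aside, here is roughly what "ask for the content, not the server"
could look like from the client side, as a minimal sketch in C.  The
replica names, the port number, and the one-line request format are all
invented for the example; this is nobody's real protocol, just the
shape of the idea: name a block of content, and take it from whichever
replica answers.

/*
 * Sketch: fetch a named block of content from any replica that
 * answers.  The caller never says (or cares) which machine served it.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

static const char *replicas[] = {       /* hypothetical mirrors */
    "vault1.example.com", "vault2.example.com", "vault3.example.com"
};
#define NREPLICAS (int)(sizeof(replicas) / sizeof(replicas[0]))
#define VAULT_PORT 8000                 /* made-up service port */

/* Ask any replica for nframes frames of title, starting at start. */
int fetch_frames(const char *title, long start, int nframes)
{
    char req[256];
    int i;

    for (i = 0; i < NREPLICAS; i++) {
        struct hostent *hp = gethostbyname(replicas[i]);
        struct sockaddr_in sin;
        int s;

        if (hp == NULL)
            continue;                   /* can't resolve, try the next one */
        if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            continue;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(VAULT_PORT);
        memcpy(&sin.sin_addr, hp->h_addr_list[0], hp->h_length);
        if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == 0) {
            /* We don't care *which* replica this is, only that it answers. */
            snprintf(req, sizeof(req), "GET %s %ld %d\n",
                title, start, nframes);
            write(s, req, strlen(req));
            /* ... read the frames here ... */
            close(s);
            return 0;
        }
        close(s);
    }
    return -1;                          /* no replica could serve it */
}

int main(void)
{
    /* e.g. the next 512 frames of the movie, starting at frame 1000 */
    if (fetch_frames("raiders-of-the-lost-ark", 1000, 512) < 0)
        fprintf(stderr, "no replica reachable\n");
    return 0;
}

A "content addressable" version of the same thing would push the
replica list out of the client entirely and let the network pick the
nearest copy, but the client-side shape is the same.
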
> Domain proximity is also what dictates network path congestion
> probability.  The higher the proximity, the lower the probability.
>
> At that point, the service that gets charged for is data vaulting
> and service replication level.  If I release a movie into this type
> of system, my ability to deliver it to 'N' people is based on how
> many places the service exists.  So my costs are based on the number
> of vaulting locations I rent and their domain proximity (in hops)
> to the people who want the information.
>
> Obviously, there's a lot of vested interest in the wire companies for
> connection/circuit oriented delivery systems.  That's because everyone
> has been killing each other and themselves to sell you the wire into
> your house on the expectation that they will be able to meter your
> usage and charge accordingly.
>
> On the other side of the coin are the people who want to deluge you
> with "junk email", which you'll refuse to pay metered rates for.
> After all, you're not stupid.  You've paid air-time for unsolicited
> cell phone calls, and you know that metering fails under those
> conditions.
>
> Turns out the real money is going to be in vaulting (distribution)
> and production of content.
>
> (BTW, I want credit for this if you repeat it or use it.  8-).)
>
> #endif /* TERRY_IN_FUTURIST_MODE */
>
>
> For now, scaling is based on the assumption that session duration
> for a series of transactions over a connection to a server will be
> statistically normative.
>
> So you "load balance" by rotoring the machine that gets the actual
> connection by setting up a DNS rotor.  When the address is requested
> for a given machine name, the query is responded to by iterating a
> set of hosts such that the load is distributed round-robin.
>
> Of course, this fails to load balance under caching (which DNS does),
> and it fails to load balance under non-homogeneous connection
> duration (which humans do), and it fails to load balance under
> disparate data served from a single server (which humans also do).
>
> Basically, it fails to load balance.  But it *is* the way things are
> currently done, and you do get some marginal increase in capability
> from it.
>
>
> Terry Lambert
> terry@lambert.org
> ---
> Any opinions in this posting are my own and not those of my present
> or previous employers.
>
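
P.S.  The DNS rotor Terry describes fits in a few lines of C.  This is
only a sketch of the idea (the address pool below is invented, and a
real rotor would of course live inside the name server rather than a
little demo program), but it shows both how rotating the answer spreads
naive clients around and why it is so fragile:

/*
 * Sketch of a DNS rotor: every query for the service name gets the
 * same set of addresses, rotated one position further, so clients
 * that simply take the first address land on the hosts round-robin.
 */
#include <stdio.h>

#define NHOSTS 3

static const char *pool[NHOSTS] = {     /* hypothetical web servers */
    "10.0.0.1", "10.0.0.2", "10.0.0.3"
};

/* Fill answer[] with the pool, rotated one more position per call. */
void rotor_answer(const char *answer[NHOSTS])
{
    static int next = 0;                /* current rotor position */
    int i;

    for (i = 0; i < NHOSTS; i++)
        answer[i] = pool[(next + i) % NHOSTS];
    next = (next + 1) % NHOSTS;
}

int main(void)
{
    const char *answer[NHOSTS];
    int q, i;

    /* Three queries: a client that takes answer[0] hits a different
       host each time. */
    for (q = 0; q < 3; q++) {
        rotor_answer(answer);
        printf("query %d:", q);
        for (i = 0; i < NHOSTS; i++)
            printf(" %s", answer[i]);
        printf("\n");
    }
    return 0;
}

/*
 * The failure modes Terry lists fall straight out of this: if a
 * resolver caches the first answer, the rotor never advances for that
 * client; and handing out connections evenly says nothing about how
 * long or how heavy each connection turns out to be.
 */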