From owner-freebsd-cluster Thu Jun 27 4:30:40 2002
Message-ID: <3D1AF747.6030803@nentec.de>
Date: Thu, 27 Jun 2002 13:30:15 +0200
From: Andy Sporner
To: Amar Takhar, freebsd-cluster
Subject: Re: host (cvs or otherwise) (about what phase1 means...)
References: <20020621210549.GA41195@drunkmonk.net> <3D16DDB3.1010202@nentec.de> <20020626024357.GA79555@drunkmonk.net>

Hi Amar,

I would suggest 'freebsdcluster' as a name--since there has never been much thought about others. The only point is that somebody might (rightly??) complain that 'failover' is only one kind of cluster and not, for instance, 'beowulf', which is so common in usage. My belief is that if you can make a reliable computing platform, then scalability is really only about scheduling. We are making a network switch that does this.

So I would suggest taking the newest (212) version and putting it into the CVS in the following layout:

    phase1/
    patches/

where 212 goes into phase1 and patches contains the CSE patch. I hope I will soon have my internet access again, so I can directly access such things. We should also decide who has commit access--which should probably be those who are doing the most to update and maintain the code base.

BTW: For those curious, "Phase 1" is failover. Phase 2 will provide NUMA (Non-Uniform Memory Access) like functionality, where processes can migrate between nodes. One aspect of this is network-migratable sockets. I have realized this by an allocation patch (which assures that conversational port numbers are unique across the cluster--it isn't available yet, but hopefully soon) and a feature of this switch device that keeps track of the movement of processes in the cluster. (This can also be realized by NAT, but not as efficiently.)

The idea is that unlike NUMA, where the OS is the single point of failure, several instances of an OS provide a solution to this and work cooperatively, sharing memory pages to allow processes to move (sort of like swapping to a remote machine). The advantage is that it should be theoretically possible to construct a cluster that achieves near-perpetual availability of an application. (I am not sure what the standard is for calculating availability--is it that at least a certain percentage of users can access the application, or simply that it has to be reachable at all?)
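To make the port-allocation idea above a bit more concrete, here is a rough sketch only--not the actual patch, which isn't published yet. One simple way to keep conversational (ephemeral) port numbers unique across the cluster is to carve the ephemeral range into disjoint per-node slices. The range bounds, MAX_NODES and the node-id parameter below are just assumptions made up for illustration:

    /*
     * Sketch: cluster-wide unique ephemeral ports by giving each node
     * its own disjoint slice of the range.  Not the real patch; the
     * constants and the node_id parameter are illustrative assumptions.
     */
    #include <stdio.h>

    #define EPHEMERAL_FIRST 49152
    #define EPHEMERAL_LAST  65535
    #define MAX_NODES       16

    static unsigned int next_offset;        /* this node's allocation cursor */

    /* Return the next port from the slice owned by node_id. */
    static unsigned short
    alloc_cluster_port(unsigned int node_id)
    {
            unsigned int span = (EPHEMERAL_LAST - EPHEMERAL_FIRST + 1) / MAX_NODES;
            unsigned int base = EPHEMERAL_FIRST + node_id * span;

            return ((unsigned short)(base + (next_offset++ % span)));
    }

    int
    main(void)
    {
            /* Node 3 and node 7 can never hand out the same number. */
            printf("node 3 allocates port %u\n", alloc_cluster_port(3));
            printf("node 7 allocates port %u\n", alloc_cluster_port(7));
            return (0);
    }

Because every node draws from its own slice, a socket can migrate to another node without its port colliding with one already in use there.

And on the availability question: one common back-of-the-envelope convention (not necessarily "the" standard) is to call the application available whenever at least one node is up. Assuming N independent nodes, each up a fraction P of the time, the figures below are only example values:

    /*
     * Back-of-the-envelope cluster availability: N independent nodes,
     * each up a fraction P of the time, application alive as long as
     * at least one node is up.  P and N are example values only.
     */
    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
            double p = 0.99;                /* assumed single-node availability */
            int    n = 10;                  /* nodes in the cluster */
            double a = 1.0 - pow(1.0 - p, (double)n);

            printf("cluster availability: %.12f\n", a);
            return (0);
    }

With ten 99%-available nodes that works out to 1 - 10^-20--so close to 1 that a double rounds it to exactly 1.0--which is what the 10-node example below is getting at.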
So if there are 10 nodes making up an application cluster, all load sharing, and one crashes, the processes there die, as do any processes that had any kind of context there. But the other machines go on, and the affected clients can immediately re-connect and reach a live machine that survived, while a NUMA-type machine would still be booting, or the cluster software would still be waiting to see whether it has in fact died.

Andy

Amar Takhar wrote:

>Well, the machine is up, and working good, does the program have an actual
>name?, if so i can get .stanford.edu as the host for the machine.. or
>freebsd-cluster.stanford.edu, either way, the host will be used for the mailing
>lists, cvs, web etc...
>
>amar.
>

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-cluster" in the body of the message