From: Eric Anderson <anderson@centtech.com>
Date: Fri, 27 May 2005 08:19:04 -0500
To: Scott Long
Cc: FreeBSD Current
Subject: Re: Disable read/write caching to disk?

Scott Long wrote:
> Andre Guibert de Bruet wrote:
>>
>> On Thu, 26 May 2005, Chad Leigh -- Shire.Net LLC wrote:
>>
>>> Agreed, but as you say, FreeBSD is not there yet, and since the OP is
>>> on FreeBSD and wants to have multiple computers attached, NFS would
>>> be one way of making that happen. And if you leave the other
>>> computers attached by the FC but not mounted, then if one goes down,
>>> you can replace it with another and switch your NFS server over. Not
>>> as ideal, but doable on FreeBSD.
>>
>> This hack would not be suitable in an HA environment -- it requires
>> human intervention or some really fugly scripts, not just for the NFS
>> server but also for the clients. These scripts would have to figure
>> out how to recover NFS file locking state and consistency when the
>> backup machine fails over.
>>
>> It seems as if NFS in this type of setup introduces more problems than
>> it solves.
>>
>> Andy
>>
>
> So what we need is some manpower. I estimate that a proof-of-concept
> port of GFS would take about 4-6 solid months. There is also a volume
> management aspect to GFS, but that is less important and the existing
> GEOM classes can largely fill the role already. Anyone interested in
> taking a serious look at it?

I'm not much of a coder, so I'd like to support anyone who would like to
work on this. I'll provide anything I can to help make this work.
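Just to make concrete what Andre means by "fugly scripts", here is a rough
sketch of the promotion glue a standby would need. Every device name,
address, and daemon flag below is a made-up placeholder, not anything from
this thread, and it punts entirely on the client side and on the lock-state
recovery problem he points out:

#!/usr/bin/env python
# Purely illustrative failover glue -- all device names, addresses, and
# daemon flags are assumptions.  It does nothing about recovering NFS
# lock state on the clients, which is the hard part.

import subprocess
import time

PRIMARY    = "10.0.0.1"       # hypothetical address of the active NFS server
SERVICE_IP = "10.0.0.100"     # hypothetical floating IP the clients mount from
IFACE      = "fxp0"           # hypothetical interface on the standby
SHARED_DEV = "/dev/da0s1a"    # hypothetical shared FC slice
MOUNTPOINT = "/export"

def primary_alive():
    # One ICMP probe; any failure counts as "down" (crude on purpose).
    return subprocess.call(["ping", "-c", "1", "-t", "2", PRIMARY]) == 0

def promote_standby():
    # Check and mount the shared volume, steal the service IP, start NFS.
    subprocess.check_call(["fsck", "-p", SHARED_DEV])
    subprocess.check_call(["mount", SHARED_DEV, MOUNTPOINT])
    subprocess.check_call(["ifconfig", IFACE, "alias", SERVICE_IP,
                           "netmask", "255.255.255.255"])
    subprocess.check_call(["rpcbind"])
    subprocess.check_call(["mountd", "-r"])
    subprocess.check_call(["nfsd", "-u", "-t", "-n", "4"])

if __name__ == "__main__":
    while primary_alive():
        time.sleep(5)
    promote_standby()

That is about the best you can do without a real cluster filesystem, which
is why the GFS idea is so appealing.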
As a datapoint, I currently have a 5-node cluster running SuSE Linux
(Enterprise) with Polyserve's cluster filesystem and distributed lock
manager. They ship two software components (well, more, but two bundles):
the core is a distributed filesystem with a distributed lock manager, and
there is also an NFS bundle that lets multiple NFS servers access and share
the same data over shared storage. This is very useful to us (and to many
other, much larger companies), and we pay a premium for it.

I'm not sure what else I could do, but I'm certain I could provide at
minimum an environment to build and test all of this.

Just curious -- what are the pros and cons of porting GFS versus mangling
UFS? I know GFS already has an established architecture behind it, but in
the long run, what would be best? A BSD-licensed filesystem would be
completely awesome.

Eric

-- 
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
A lost ounce of gold may be found, a lost moment of time never.
------------------------------------------------------------------------