From: Scott Long <scottl@freebsd.org>
Date: Thu, 02 Dec 2004 14:43:20 -0700
To: Stephan Uphoff
Cc: Andre Oppermann, hackers@freebsd.org, current@freebsd.org
Subject: Re: My project wish-list for the next 12 months
Message-ID: <41AF8C78.8050806@freebsd.org>
In-Reply-To: <1102022838.11465.7735.camel@palm.tree.com>

Stephan Uphoff wrote:
> On Thu, 2004-12-02 at 09:41, Andre Oppermann wrote:
>
>> Scott Long wrote:
>>
>>> 5. Clustered FS support. SANs are all the rage these days, and
>>> clustered filesystems that allow data to be distributed across many
>>> storage endpoints and accessed concurrently through the SAN are
>>> very powerful. RedHat recently bought Sistina and re-opened the GFS
>>> source code, so exploring this would be very interesting.
>>
>> There are certain steps that can be taken one at a time. For
>> example, it should be relatively easy to mount snapshots (ro) from
>> more than one machine. The next step would be to mount a full 'rw'
>> filesystem as 'ro' on other boxes. This would require cache and
>> sector invalidation broadcasting from the 'rw' box to the 'ro'
>> mounts.
>
> Mhhh .. if you plan to invalidate at the disk block cache layer then
> you will run into race conditions with UFS/FFS (especially with
> remove operations).
>
> I was once called in to evaluate such a multiple-reader/single-writer
> system based on a UFS-like file system and block-layer invalidation,
> and had to convince management to kill it. (It appeared to work and
> actually made it through internal and customer acceptance testing
> before failing horribly in the field.)
>
> If you send me more details on your proposed cache and sector
> invalidation/cluster design, I will be happy to review it.
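For concreteness, the kind of broadcast Andre describes might look
something like this on the wire. The message and its fields below are
made up for illustration; they are not an existing interface:

    #include <stdint.h>

    /*
     * Hypothetical invalidation message, broadcast by the single 'rw'
     * node after a write reaches the disk.  Each 'ro' node drops any
     * cached copies of the named blocks and acknowledges; the 'rw'
     * node holds back dependent writes until all acks are in.
     */
    struct blk_inval_msg {
            uint64_t        bim_seq;        /* sequence no., detects loss */
            uint64_t        bim_blkno;      /* first invalidated block */
            uint32_t        bim_nblks;      /* length of the run */
            uint32_t        bim_flags;      /* data vs. metadata, etc. */
    };

Even if that part works, invalidating at the block layer says nothing
about which filesystem objects the blocks belonged to, which I suspect
is where the races you describe come from: a reader can follow a stale
directory entry or indirect block into a block that has already been
freed and reused.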
>> The holy grail, of course, is to mount the same filesystem 'rw' on
>> more than one box, preferably more than two. This requires some more
>> involved synchronization and locking on top of the cache
>> invalidation. And it has to make sure that the multi-'rw' cluster
>> stays alive if one of the participants freezes and stops responding.
>>
>> Scrolling through the UFS/FFS code, I think the first one is 2-3
>> days of work, the second 2-4 weeks, and the third 2-3 months to get
>> right. If someone would put up the money...

Although I don't know the specifics of your experience, I can easily
imagine how hard it would be to make this work on UFS. Common
operations like walking a file path to the root are nearly impossible
to do reliably without an overbearing amount of synchronization. Then
you have all of the problems of synchronizing buffered data and
metadata. Softupdates would be a nightmare, if not impossible.

Scott
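P.S. To make the lookup problem concrete, here is roughly the race I
have in mind. The types and helpers below are simplified stand-ins for
the real namei()/VFS machinery, not actual interfaces:

    #include <sys/types.h>          /* ino_t */

    struct inode;                   /* stand-in, not the kernel's */
    ino_t dir_search(struct inode *dvp, const char *name);
    struct inode *iget(ino_t ino);

    /*
     * Resolve one component of a path on one node of the cluster.
     */
    struct inode *
    lookup_component(struct inode *dvp, const char *name)
    {
            ino_t ino;

            ino = dir_search(dvp, name);    /* 1: read the directory */

            /*
             * Race: another node can unlink the entry and reuse the
             * inode number for an unrelated file between steps 1 and
             * 2.  Closing that window means holding a cluster-wide
             * lock across both steps, for every component of every
             * path, on every lookup.
             */
            return (iget(ino));             /* 2: read the inode */
    }

Multiply that by rename, rmdir, and ".." traversal and you get the
overbearing amount of synchronization that I mean.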