From owner-freebsd-current Sun Dec 5 1:40:15 1999
From: Wilko Bulte
To: mckusick@flamingo.McKusick.COM (Kirk McKusick)
Cc: msmith@FreeBSD.ORG, match@elen.utah.edu, current@FreeBSD.ORG
Date: Sun, 5 Dec 1999 00:45:12 +0100 (CET)
Subject: Re: Mounting one FS on more than one system
In-Reply-To: <199912042044.MAA05073@flamingo.McKusick.COM> from Kirk McKusick at "Dec 4, 1999 12:44:43 pm"
X-Organisation: Private FreeBSD site - Arnhem, The Netherlands
X-pgp-info: PGP public key at 'finger wilko@freefall.freebsd.org'

As Kirk McKusick wrote ...

> Mounting on more than one system is generally problematical unless
> you are willing to have all systems read-only. The problem is cache
> coherence between the machines. If one changes a block, the other
> machines will not see it. Basically, this is why we have the NFS
> filesystem. That lets a disk be mounted on one machine, but shared
> out to others. If you wanted to write a protocol that would allow
> for multiple machines, then you would need to have some central
> coordinator running some sort of coherency protocol with a complexity
> akin to that of NFS.

I wonder how Tru64 does it. IIRC, Tru64 V5.0 can do a cluster
filesystem (CFS), so it must have solved the coherency issue in some
way. Older releases had distributed raw devices (DRDs, for Oracle and
the like), but there all I/O for a given DRD went through a single
cluster member that served it; I/O from the other cluster members was
forwarded over the Memory Channel to the DRD-serving machine.

Interesting..

Wilko

--
|   / o / /  _      Arnhem, The Netherlands   - Powered by FreeBSD -
|/|/ / / /( (_)     Bulte   WWW : http://www.tcja.nl  http://www.freebsd.org
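
[Editor's illustration, not part of the original thread.] To make the cache
coherence problem and the "central coordinator" idea from the quoted text
concrete, here is a minimal sketch in Python. All class and function names
(Disk, Node, Coordinator) are hypothetical; this is not how Tru64's CFS or
NFS is implemented, only an illustration of the principle. Two nodes cache
the same block from a shared disk: without any coherency protocol the second
node keeps serving stale data after the first one writes, while a trivial
coordinator that broadcasts invalidations on every write keeps the caches
coherent.

    # Hypothetical sketch: why uncoordinated caches go stale, and how a
    # central invalidation service fixes it.

    class Disk:
        """Shared backing store, e.g. a dual-ported disk."""
        def __init__(self):
            self.blocks = {}

        def read(self, blkno):
            return self.blocks.get(blkno, b"")

        def write(self, blkno, data):
            self.blocks[blkno] = data


    class Node:
        """A cluster member with its own local buffer cache."""
        def __init__(self, name, disk, coordinator=None):
            self.name = name
            self.disk = disk
            self.cache = {}
            self.coordinator = coordinator
            if coordinator is not None:
                coordinator.register(self)

        def read(self, blkno):
            if blkno not in self.cache:            # cache miss: go to disk
                self.cache[blkno] = self.disk.read(blkno)
            return self.cache[blkno]

        def write(self, blkno, data):
            self.cache[blkno] = data
            self.disk.write(blkno, data)
            if self.coordinator is not None:       # tell others to drop their copy
                self.coordinator.invalidate(blkno, writer=self)

        def drop(self, blkno):
            self.cache.pop(blkno, None)


    class Coordinator:
        """Central coherency service: broadcasts invalidations on writes."""
        def __init__(self):
            self.nodes = []

        def register(self, node):
            self.nodes.append(node)

        def invalidate(self, blkno, writer):
            for node in self.nodes:
                if node is not writer:
                    node.drop(blkno)


    # Without a coordinator: node B keeps serving stale data after A's write.
    disk = Disk()
    disk.write(0, b"v1")
    a, b = Node("A", disk), Node("B", disk)
    a.read(0); b.read(0)
    a.write(0, b"v2")
    assert b.read(0) == b"v1"      # stale: B never saw A's update

    # With a coordinator: B's cached copy is invalidated, next read hits disk.
    disk2 = Disk()
    disk2.write(0, b"v1")
    coord = Coordinator()
    a2, b2 = Node("A", disk2, coord), Node("B", disk2, coord)
    a2.read(0); b2.read(0)
    a2.write(0, b"v2")
    assert b2.read(0) == b"v2"     # coherent: invalidation forced a re-read

A real cluster filesystem would additionally have to hand out and revoke
read/write tokens, survive node failure, and carry the forwarded I/O over an
interconnect such as the Memory Channel mentioned above; that is where the
NFS-like complexity Kirk refers to comes from.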