From owner-freebsd-geom@FreeBSD.ORG Tue Dec 13 13:44:44 2005
Date: Tue, 13 Dec 2005 07:43:42 -0600
From: Eric Anderson <anderson@centtech.com>
To: Michael Schuh
Cc: Ivan Voras, freebsd-geom@freebsd.org
Subject: Re: Questions about geom-gate and RAID1/10 and CARP

Michael Schuh wrote:
> Hello,
>
> Thanks to Ivan for his suggestions and his experiences.
> I first mailed this to -stable, but I now think that was the wrong place.
> Is this the right place, or at least a better place than -stable?
> I hope so.
>
> From Ivan's suggestions I have learned that I must think a little in
> other directions. The first change to the setup is this: I would have
> one machine that does the writer job; I call this machine W. W binds
> the shared disks (exported via ggated) from A and B together with its
> own shared disk into a RAID1 or RAID10. Incoming traffic like FTP
> uploads, password changes and account changes must all be done on W.
> The machines A and B mount the ggate-exported RAID read-only on a
> common mountpoint, so that all software works on that shared RAID.
> On the other side, all logging goes over the private interfaces to W;
> it is the only machine that can write the logfiles to the RAID.
>
> For the rest of the functionality, see the original posting, which I
> pasted below this mail.
>
> Does anyone have any suggestions? Should this work as described, or is
> there another way to make it work?

If I understand correctly, you would like one node to do all the writes,
and the other nodes to mount, read-only, the same device that node 'W'
is writing to. This would work, in the sense that it should not cause
immediate problems, but it may not behave as you expect. The nodes
mounting the device read-only would only see the data that had been
committed to disk on the writer side at the time the read-only side
mounted the filesystem. Any changes made after the read-only side
mounted would give it an inconsistent view of the filesystem, since the
read-only side caches the inodes and dirents and does not expect them
to change.
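For concreteness, here is a rough sketch of the setup being described.
The hostnames, device names and mountpoints (a/b/w.example.net, /dev/da1,
/export, /import) are placeholders of mine, not taken from the thread,
and the ggate unit numbers in the comments will vary:

  # --- on A and B: export each local data disk to the writer node W ---
  echo "w.example.net RW /dev/da1" > /etc/gg.exports
  ggated

  # --- on W: attach the remote disks and mirror them with the local disk ---
  ggatec create -o rw a.example.net /dev/da1    # shows up as e.g. /dev/ggate0
  ggatec create -o rw b.example.net /dev/da1    # shows up as e.g. /dev/ggate1
  gmirror load
  gmirror label shared da1 ggate0 ggate1        # three-way RAID1 named "shared"
  newfs /dev/mirror/shared
  mkdir -p /export
  mount /dev/mirror/shared /export

  # --- on W: re-export the finished mirror so A and B can attach it ---
  echo "a.example.net RO /dev/mirror/shared"  > /etc/gg.exports
  echo "b.example.net RO /dev/mirror/shared" >> /etc/gg.exports
  ggated

  # --- on A and B: attach W's mirror and mount it read-only ---
  ggatec create -o ro w.example.net /dev/mirror/shared   # e.g. /dev/ggate0 locally
  mkdir -p /import
  mount -o ro /dev/ggate0 /import

gmirror sees the two ggate providers as ordinary disks, so a dropped
network connection to A or B simply looks like a failed mirror component
from W's point of view.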
That stale cache *could* cause real issues. For instance, if the
read-only machine caches an inode entry and the inode then changes on
the writer without the read-only side knowing, the read-only side still
loads the data blocks referenced by the old inode. Those may no longer
be the correct blocks, so you would get a corrupt file. The only way I
can think of to prevent this is to use synchronous writes for both
metadata and data (the 'sync' option on the writer side), *AND* also
somehow disable the buffer cache on the read-only side, so that every
access goes to the block device for the newest information; that,
however, would most likely result in very poor performance. You might
be able to make something with NFS work instead, but I am not sure how
you would implement that completely (a rough sketch of both ideas
follows after my signature).

Eric

--
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
Anything that works is better than anything that doesn't.
------------------------------------------------------------------------
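As a rough illustration of the two workarounds mentioned above (the
'sync' mount and the NFS route), again with placeholder host, device and
path names, and without any claim that this is a complete or tested
recipe:

  # --- on W: mount the mirror with fully synchronous writes ---
  mount -o sync /dev/mirror/shared /export

  # --- NFS alternative: export the filesystem from W, not the device ---
  # /etc/exports on W (read-only for the two readers):
  #   /export -ro a.example.net b.example.net
  # /etc/rc.conf on W:
  #   rpcbind_enable="YES"
  #   nfs_server_enable="YES"

  # --- on A and B: mount it over NFS instead of ggate ---
  mount -t nfs -o ro w.example.net:/export /import

The NFS route sidesteps the stale-cache problem because the readers
never touch the raw block device at all; the cost is that every read
from A and B has to go through W.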