From: "Rick C. Petty" <rick@kiwi-computer.com>
To: Dimitri Aivaliotis
Cc: freebsd-geom@freebsd.org
Date: Fri, 19 Dec 2008 10:16:02 -0600
Subject: Re: gvinum raid10 stale
Message-ID: <20081219161602.GA80859@keira.kiwi-computer.com>
In-Reply-To: <55c107bf0812190250x434e468cy2fb19956f36b5958@mail.gmail.com>
Reply-To: rick-freebsd2008@kiwi-computer.com
List-Id: GEOM-specific discussions and implementations

On Fri, Dec 19, 2008 at 11:50:22AM +0100, Dimitri Aivaliotis wrote:
> Hi Rick,
>
> > Were the plexes and subdisks all up before you restarted?  After you
> > create stuff in gvinum, sync'd subdisks are marked as stale until you
> > start the plexes or force the subdisks up.  I'm not sure if you did
> > this step in between.  Also, it is possible that gvinum wasn't marked
> > clean because a drive was "disconnected" at shutdown or not present
> > immediately at startup.
> > Other than that, I've not seen gvinum mark things down inexplicably.
>
> This wouldn't explain why all the subdisks on one plex of the server
> that wasn't restarted were marked as stale.  As far as the logs show,
> there's no reason for it.  I also don't know how long the one plex has
> been down, as the volume itself remained up.  Both plexes were up
> initially, though.

gvinum is pretty noisy about these things.  I would check
/var/log/messages* to see if you find any lines containing "gvinum".

> Is a 'gvinum start' necessary after a 'gvinum create'?  I know that I
> hadn't issued a start until just now, but I didn't see the need for
> it, as gvinum was already started.  Perhaps this is a naming issue.

It is in some cases, I believe.  Ulf has a patch that no longer requires
this, at least in the case of mirrors, and it has saved me loads of time!
If you're saying the plexes were up at one point, my suspicion is that
you did start the plexes at some time.

> The volume was up before the restart.  I can't speak to the state of
> the individual plexes.  The new drives had not used vinum in the past.

If the volume is up, it should be mountable.

> > Agreed.  This is the proper procedure if one plex is good, but you
> > should be able to mount that volume -- you can mount any volume that
> > isn't down.  A volume is only down if all of its plexes are down.  A
> > plex is down if any of its subdisks are down.  You can also mount a
> > plex, which I've done before when I didn't want vinum state to be
> > changed but wanted to pull my data off.  You can also mount subdisks,
> > but when you use stripes (multiple subdisks per plex), this won't
> > work.  This is one of the many reasons I gave up using stripes long
> > ago.  =)
>
> What would you recommend in a situation like this?  I had followed the
> "Resilience and Performance" section of
> http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-examples.html
> when initially creating the volume.
> I want a RAID10-like solution which can be easily expanded in the
> future.

Well, I personally always sacrifice performance for resilience, and I
just use gvinum for mirrors and volume management.  With the speed and
cost of SATA drives, I hardly need those few extra seconds per day of
use.  You should be able to add mirrors pretty easily, as long as those
are mirrors of stripes (since you can't stripe your mirrors in gvinum),
but you have to make sure each mirror (plex) is the right size,
regardless of your stripe (subdisk) size.  I add to my mirrors
regularly, usually just to move volumes around.  I think the confusion
here is the number of subdisks per plex you have, which is unnecessary.

-- 
Rick C. Petty
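[For reference, the handbook-style configuration being discussed looks
roughly like this: a "RAID10" volume built as two striped plexes that
mirror each other.  Device names and sizes below are illustrative only,
not taken from this thread.]

```
drive a device /dev/da1s1h
drive b device /dev/da2s1h
drive c device /dev/da3s1h
drive d device /dev/da4s1h
volume raid10
 plex org striped 512k
  sd length 102480k drive a
  sd length 102480k drive b
 plex org striped 512k
  sd length 102480k drive c
  sd length 102480k drive d
```

After feeding this to 'gvinum create', the point raised in the thread is
that the newly created subdisks may show as stale until the plexes are
started (e.g. 'gvinum start raid10') or the subdisks are forced up.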