From: "Dimitri Aivaliotis" <aglarond@gmail.com>
To: rick-freebsd2008@kiwi-computer.com
Cc: freebsd-geom@freebsd.org
Date: Fri, 19 Dec 2008 11:50:22 +0100
Subject: Re: gvinum raid10 stale

Hi Rick,

On Thu, Dec 18, 2008 at 7:23 PM, Rick C. Petty wrote:
> On Thu, Dec 18, 2008 at 06:57:53PM +0100, Ulf Lilleengen wrote:
>> On Thu, Dec 18, 2008 at 12:20:26pm +0100, Dimitri Aivaliotis wrote:
>
> I agree with Ulf.  Why are you creating so many subdisks?  It's pretty
> unnecessary and just adds confusion and trouble.

I agree with you about the confusion and trouble. :)

> Were the plexes and subdisks all up before you restarted?  After you
> create stuff in gvinum, sync'd subdisks are marked as stale until you
> start the plexes or force the subdisks up.  I'm not sure if you did
> this step in between.  Also, it is possible that gvinum wasn't marked
> clean because a drive was "disconnected" at shutdown or not present
> immediately at startup.  Other than that, I've not seen gvinum mark
> things down inexplicably.

This wouldn't explain why all the subdisks on one plex of the server
that wasn't restarted were marked as stale. As far as the logs show,
there's no reason for it.

I also don't know how long the one plex has been down, as the volume
itself remained up. Both plexes were up initially, though.

Is a 'gvinum start' necessary after a 'gvinum create'? I know that I
hadn't issued a start until just now, but I didn't see the need for it,
as gvinum was already started. Perhaps this is a naming issue.
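If a start is indeed required, my understanding is that the missing
step after 'gvinum create' would have looked roughly like the
following. All object names and the config path below are placeholders,
not my real configuration:

    # create the objects from a config file (path is a placeholder)
    gvinum create /etc/gvinum.conf
    # check which subdisks came up stale
    gvinum list
    # force the subdisks of one plex up (fresh volume, no data on it yet)
    gvinum setstate -f up raid10.p0.s0
    gvinum setstate -f up raid10.p0.s1
    # then start the other plex so it syncs from the good one
    gvinum start raid10.p1

Is that the intended procedure, or is a 'start' on the volume itself
enough?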
>> I don't see how the subdisks could go stale after inserting the disks
>> unless they changed names, and the new disks you inserted were named
>> with the old disks' device numbers.
>
> This shouldn't happen unless the new disks had used vinum in the past
> and there was a name collision.  Unless a drive was marked down for a
> period of time or you didn't bring the plexes up after creating them,
> I don't know why this would happen.

The volume was up before the restart. I can't speak to the state of the
individual plexes. The new drives had not used vinum in the past.

> Agreed.  This is the proper procedure if one plex is good, but you
> should be able to mount that volume-- you can mount any volume that
> isn't down.  A volume is only down if all of its plexes are down.  A
> plex is down if any of its subdisks are down.  You can also mount a
> plex, which I've done before when I didn't want vinum state to be
> changed but wanted to pull my data off.  You can also mount subdisks,
> but when you use stripes (multiple subdisks per plex), this won't
> work.  This is one of the many reasons I gave up using stripes long
> ago. =)

What would you recommend in a situation like this? I had followed the
"Resilience and Performance" section of
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-examples.html
when initially creating the volume. I want a RAID10-like solution which
can be easily expanded in the future.

> Good luck,

Thanks!

- Dimitri
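P.S. For reference, the config I fed to 'gvinum create' was modeled on
the handbook's striped-and-mirrored example, along these lines. The
drive names, device paths, sizes and stripe size below are
placeholders, not my actual layout:

    # two striped plexes mirroring each other (RAID-10)
    drive a device /dev/da1s1a
    drive b device /dev/da2s1a
    drive c device /dev/da3s1a
    drive d device /dev/da4s1a
    volume raid10
      plex org striped 512k
        sd length 20g drive a
        sd length 20g drive b
      plex org striped 512k
        sd length 20g drive c
        sd length 20g drive d

Each plex stripes across two drives and the two plexes mirror each
other, so losing any single drive should only take down one plex.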