Date: Thu, 11 Sep 2008 19:45:55 +0200
From: Ulf Lilleengen <lulf@stud.ntnu.no>
To: Daniel Scheibli
Cc: freebsd-geom@freebsd.org
Subject: Re: Interaction of geom_vinum & geom_eli

On Thu, Sep 11, 2008 at 12:56:56AM -0700, Daniel Scheibli wrote:
> Ulf Lilleengen wrote:
> > On Sun, Sep 07, 2008 at 06:07:28PM -0700, Daniel Scheibli wrote:
> > [...]
> >> My question is: how does geom_vinum react to this?
> >>
> >> I suspect it will reconstruct the data from the parity written
> >> to the other disks to service the request.
> >>
> >> But how is the disk - with the corrupt block - handled? Is the
> >> entire disk marked as bad? Or does it only mark that single
> >> block? Does it attempt to rewrite the corrupt data with the
> >> reconstructed data?
> >>
> > Hi,
> >
> > Gvinum will set the state of the drive to "down" (and you will get
> > a "GEOM_VINUM: lost drive XXX" message). It will then, as you say,
> > reconstruct the data if it's part of a RAID-5 plex. It will not,
> > however, "salvage" the data on the drive like, for instance, ZFS
> > does.
>
> Hi,
>
> thanks for your reply; that's what I feared.
>
> I tend to run a "checksum all data" script every time I do a backup
> (to ensure that the backup worked, but also to check that only
> expected files changed since the last checksum run).
>
> If a single corrupt block results in the entire disk being flagged
> "down", then I worry that I'm only one more corrupt block (on any
> other disk) away from the entire plex being considered broken.
>
> Are there any future plans to write the reconstructed data back to
> the "failed" disk (in geom_vinum or geom_raid5), or is this
> something where one should look towards the ZFS port? Also, would
> it be of interest to have the "escalation" mode configurable?
>
That would be a neat feature to have, but I won't start on
implementing it until the 2007 SoC work on gvinum has been integrated
(it's hard enough to review as it is); afterwards I might try. It
would also have to be optional, so as not to break the old behaviour.
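In the meantime, recovery after a drive goes "down" is manual: you
replace the disk and let gvinum rebuild the plex from the parity
data. Roughly (from memory, and the object names here are only
examples), something like:

  # see which drives, plexes and subdisks are down
  gvinum list

  # after the new disk has been configured as the replacement drive,
  # start the plex to resynchronize it from parity
  gvinum start myvol.p0

See gvinum(8) for the exact steps for your setup.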
Regarding geom_raid5, you should ask the author, as it's not in the
tree at the moment. Right now, only ZFS provides this functionality.
Remember that you can use a ZFS pool and create GEOM providers
(ZVOLs) from it if you want to run another file system on top.
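For example (the disk, pool and volume names are only placeholders),
something like:

  # redundant pool across three disks
  zpool create tank raidz da1 da2 da3

  # carve out a 20 GB ZVOL and put UFS on it
  zfs create -V 20g tank/vol0
  newfs /dev/zvol/tank/vol0
  mount /dev/zvol/tank/vol0 /mnt

That way the pool handles redundancy and self-healing, while the
ZVOL shows up as an ordinary GEOM provider.

--
Ulf Lilleengen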