Date: Thu, 11 Sep 2008 19:45:55 +0200
From: Ulf Lilleengen <lulf@stud.ntnu.no>
To: Daniel Scheibli <daniel.scheibli@edelbyte.org>
Cc: freebsd-geom@freebsd.org
Subject: Re: Interaction of geom_vinum & geom_eli
Message-ID: <20080911174555.GA2992@nobby.lan>
In-Reply-To: <48C8CF48.1060808@edelbyte.org>
References: <48C47AD0.50905@edelbyte.org> <20080908135741.GA2567@nobby.lan> <48C8CF48.1060808@edelbyte.org>
On Thu, Sep 11, 2008 at 12:56:56AM -0700, Daniel Scheibli wrote:
> 
> Ulf Lilleengen wrote:
> > On Sun, Sep 07, 2008 at 06:07:28PM -0700, Daniel Scheibli wrote:
> > [...]
> >> My question is how does geom_vinum react to this?
> >>
> >> I suspect it will reconstruct the data from the parity written
> >> to the other disks to service the request.
> >>
> >> But how is the disk - with the corrupt block - handled? Is the
> >> entire disk marked as bad? Or does it only mark that single block?
> >> Does it attempt to rewrite the corrupt data with the reconstructed
> >> data?
> >>
> > Hi,
> >
> > Gvinum will set the state of the drive to "down" (and you will get a
> > "GEOM_VINUM: lost drive XXX" message). It will then, as you say,
> > reconstruct the data if it's part of a RAID-5 plex. It will not,
> > however, "salvage" the data on the drive like, for instance, ZFS does.
> 
> Hi,
> 
> thanks for your reply, that's what I feared.
> 
> I tend to run a "checksum all data" script every time I do
> a backup (to ensure that the backup worked, but also to check
> that only expected files changed since the last checksum run).
> 
> If a single corrupt block results in the entire disk being
> flagged "down", then I worry that I'm only one more corrupt
> block (on any other disk) away from the entire plex being
> considered broken.
> 
> Are there any future plans to write the reconstructed
> data back to the "failed" disk (in geom_vinum or geom_raid5),
> or is this something where one should look towards
> the ZFS port? Also, would it be of interest to have the
> "escalation" mode configurable?
> 
That would be a neat feature to have, but I won't start implementing it
until the 2007 SoC work on gvinum has been integrated (it's hard enough
to review as it is); afterwards I might try. It would have to be
optional too, so as not to break the old behaviour.

Regarding geom_raid5, you should ask its author, as it's not in the
tree at the moment. For the moment, only ZFS pools provide this
functionality. Remember that you can use a ZFS pool and create geom
providers (ZVOLs) from it if you want to run another file system.

-- 
Ulf Lilleengen
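To watch the "down" transition described above and to bring the array
back afterwards, gvinum(8)'s list and start commands should suffice.
A sketch, assuming a RAID-5 volume raid5vol whose affected subdisk is
raid5vol.p0.s2 (names are hypothetical, and the exact revive behaviour
of start is assumed to match the old vinum(8)):

    # Show drive/plex/subdisk states; after the read error the
    # affected drive is reported as "down" and the subdisk as stale.
    gvinum list

    # Once the disk is usable again (or replaced), revive the stale
    # subdisk; its contents are rebuilt from parity on the other disks.
    gvinum start raid5vol.p0.s2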
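The "checksum all data" run mentioned above can be built from the base
system's sha256(1). A minimal sketch, assuming the data lives under
/data and the manifests are kept under /var/backups (both paths are
hypothetical):

    #!/bin/sh
    # Checksum every file and diff the manifest against the previous
    # run; only files expected to have changed should show up.
    DATA=/data
    NEW=/var/backups/sha256.new
    OLD=/var/backups/sha256.old

    find "$DATA" -type f -print0 | xargs -0 sha256 -r | sort -k 2 > "$NEW"

    # Any unexpected line in this diff points at silent corruption.
    [ -f "$OLD" ] && diff -u "$OLD" "$NEW"
    mv "$NEW" "$OLD"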
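For the ZVOL suggestion at the end: a pool-backed provider can carry
geom_eli and UFS just like a plain disk. A minimal sketch, assuming a
pool named tank and a 10 GB volume vol0 (both names hypothetical):

    # Create a 10 GB ZVOL; it appears as /dev/zvol/tank/vol0.
    zfs create -V 10g tank/vol0

    # Layer geom_eli on top (cf. the thread subject), then put UFS
    # on the encrypted provider.
    geli init /dev/zvol/tank/vol0
    geli attach /dev/zvol/tank/vol0
    newfs /dev/zvol/tank/vol0.eli
    mount /dev/zvol/tank/vol0.eli /mnt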