Message-ID: <48C8CF48.1060808@edelbyte.org>
Date: Thu, 11 Sep 2008 00:56:56 -0700
From: Daniel Scheibli
To: lulf@stud.ntnu.no
Cc: freebsd-geom@freebsd.org
In-Reply-To: <20080908135741.GA2567@nobby.lan>
Subject: Re: Interaction of geom_vinum & geom_eli

Ulf Lilleengen wrote:
> On Sun, Sep 07, 2008 at 06:07:28PM -0700, Daniel Scheibli wrote:
> [...]
>> My question is how does geom_vinum react on this?
>>
>> I suspect it will reconstruct the data from the parity written
>> to the other disks to service the request.
>>
>> But how is the disk - with the corrupt block - handled? Is the
>> entire disk marked as bad? Or does it only mark that single block?
>> Does it attempt to rewrite the corrupt data with the reconstructed
>> data?
>>
> Hi,
>
> Gvinum will set the state of the drive to "down" (and you will get a
> "GEOM_VINUM: lost drive XXX" message). It will then, as you say,
> reconstruct the data if it is part of a RAID-5 plex. It will not,
> however, "salvage" the data on the drive like, for instance, ZFS does.

Hi,

thanks for your reply; that is what I feared.

I tend to run a "checksum all data" script every time I do a backup
(to ensure that the backup worked, but also to check that only the
expected files changed since the last checksum run).

If a single corrupt block results in the entire disk being flagged
"down", then I worry that I am only one more corrupt block (on any
other disk) away from the entire plex being considered broken.

Are there any future plans to rewrite the reconstructed data back to
the "failed" disk (in geom_vinum or geom_raid5), or is this something
where one should look towards the ZFS port?

Also, would it be of interest to have the "escalation" mode configurable?
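For reference, here is a rough sketch of the kind of "checksum all data"
run I mean: hash every file under a tree and compare the result against
the manifest from the previous run, so silent corruption shows up as an
unexpected change. The paths, manifest format and use of SHA-256 are
just placeholders for illustration, not the actual script.

#!/usr/bin/env python
# Sketch: build a checksum manifest for a tree and diff it against the
# manifest from the previous run. File/manifest names are placeholders.

import hashlib
import os
import sys

def hash_file(path, blocksize=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while True:
            block = f.read(blocksize)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    """Map relative file path -> digest for everything under root."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            manifest[rel] = hash_file(full)
    return manifest

def load_manifest(path):
    """Read a manifest written by save_manifest()."""
    manifest = {}
    with open(path) as f:
        for line in f:
            digest, rel = line.rstrip('\n').split('  ', 1)
            manifest[rel] = digest
    return manifest

def save_manifest(manifest, path):
    """Write 'digest  relative-path' lines, sorted by path."""
    with open(path, 'w') as f:
        for rel in sorted(manifest):
            f.write('%s  %s\n' % (manifest[rel], rel))

if __name__ == '__main__':
    root, manifest_file = sys.argv[1], sys.argv[2]
    new = build_manifest(root)
    if os.path.exists(manifest_file):
        old = load_manifest(manifest_file)
        for rel in sorted(new):
            # A changed digest is either an expected edit or silent corruption.
            if rel in old and old[rel] != new[rel]:
                print('CHANGED: %s' % rel)
        for rel in sorted(set(old) - set(new)):
            print('MISSING: %s' % rel)
    save_manifest(new, manifest_file)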