Date: Thu, 4 Dec 2008 07:34:10 +0100
From: Ulf Lilleengen <lulf@stud.ntnu.no>
To: Hilko Meyer
Cc: adnan@hochpass.uni-hannover.de, freebsd-geom@freebsd.org
Subject: Re: System freeze with gvinum

On Thu, Dec 04, 2008 at 03:02:39AM +0100, Hilko Meyer wrote:
> Ulf Lilleengen wrote:
> >On Mon, Dec 01, 2008 at 12:32:22am +0100, Hilko Meyer wrote:
> >> Is gvinum in 7.1RC and 7.x the same? We were considering updating to
> >> 7.1 before it's released anyway, because we need nfe(4), and wanted
> >> to try gvinum and zfs there.
> >Yes, they are the same.
> >>
> >> But we can test a patch against 6.4 before the big update if you want.
> >>
> >It's really up to you. If you're going to upgrade anyway, it will at
> >least save me a little bit of work :)
> 
> Unfortunately I have some other work for you. After changing the
> BIOS setting to AHCI, I tried gvinum with 6.4 again, and strangely
> enough it worked: no freeze with newfs, and I could copy several GB
> to the volumes. But after a reboot, gvinum list looks like this:
> 
> | D sata3              State: up     /dev/ad10   A: 9/476939 MB (0%)
> | D sata2              State: up     /dev/ad8    A: 9/476939 MB (0%)
> | D sata1              State: up     /dev/ad4    A: 9/476939 MB (0%)
> |
> | 2 volumes:
> | V homes_raid5        State: down   Plexes: 1    Size: 465 GB
> | V dump_raid5         State: down   Plexes: 1    Size: 465 GB
> |
> | 2 plexes:
> | P homes_raid5.p0  R5 State: down   Subdisks: 3  Size: 465 GB
> | P dump_raid5.p0   R5 State: down   Subdisks: 3  Size: 465 GB
> |
> | 6 subdisks:
> | S homes_raid5.p0.s0  State: stale  D: sata1     Size: 232 GB
> | S homes_raid5.p0.s1  State: stale  D: sata2     Size: 232 GB
> | S homes_raid5.p0.s2  State: stale  D: sata3     Size: 232 GB
> | S dump_raid5.p0.s0   State: stale  D: sata1     Size: 232 GB
> | S dump_raid5.p0.s1   State: stale  D: sata2     Size: 232 GB
> | S dump_raid5.p0.s2   State: stale  D: sata3     Size: 232 GB
> 
> Then we updated to FreeBSD 7.1-PRERELEASE, but nothing changed. After a
> reboot the volumes are down. In dmesg I found
>   g_vfs_done():gvinum/dump_raid5[READ(offset=65536, length=8192)]error = 6
> but I think that occurred during an attempt to mount a volume.
Well, this can happen if there were errors reading from or writing to the
volumes previously. When volumes are in the down state, it is not possible
to use them. You have a few options.

If you currently have data on the volumes and would like to recover it
without reinitializing them, you can try to force the subdisk states to up:

1. Run 'gvinum setstate -f up <subdisk>' on each subdisk. The plexes should
   then go into the up state once all their subdisks are up.
2. Run fsck on the volumes to make sure they are OK. If so, you are ready
   to go again. Note that you might have to pass -t ufs to fsck, as vinum
   volumes previously set their own disklabels and other weird stuff.

If you don't have any valuable data yet, you can run 'gvinum start <volume>'
on each volume, which should reinitialize the plexes, or you can just
recreate the entire config. Recreating the entire config might also work if
you have data, but I'd try the tip above first.

In any case, I can't guarantee that any of these methods will work, but
forcing the state of the subdisks should do the trick. Preferably, try the
method on the subdisks of one of the volumes first and see if it works.
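For your config, a recovery attempt could look roughly like this (an
untested sketch; the subdisk and volume names are taken from your gvinum
list output above, so adjust them if your setup differs):

  # Force each stale subdisk of homes_raid5 up:
  gvinum setstate -f up homes_raid5.p0.s0
  gvinum setstate -f up homes_raid5.p0.s1
  gvinum setstate -f up homes_raid5.p0.s2

  # Check that the plex and volume now show as up:
  gvinum list

  # Verify the filesystem (-t ufs may be needed for the reason above):
  fsck -t ufs /dev/gvinum/homes_raid5

If that works out, repeat the same for the dump_raid5 subdisks before
mounting it.

-- 
Ulf Lilleengen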