From owner-freebsd-geom@FreeBSD.ORG Mon Oct 22 10:57:51 2007
From: Felipe Neuwald <felipe@neuwald.biz>
To: Ulf Lilleengen
Cc: freebsd-geom@freebsd.org
Date: Mon, 22 Oct 2007 08:57:35 -0200
Subject: Re: gvinum - problem on hard disk
Message-ID: <471C821F.2090101@neuwald.biz>
In-Reply-To: <20071019200041.GA16812@stud.ntnu.no>
References: <4718ECB2.9050207@neuwald.biz> <20071019200041.GA16812@stud.ntnu.no>

Hi Ulf,

Thank you for your information. As you can see, it worked:

[root@fileserver ~]# gvinum list
4 drives:
D a                     State: up       /dev/ad4        A: 0/238474 MB (0%)
D b                     State: up       /dev/ad5        A: 0/238475 MB (0%)
D c                     State: up       /dev/ad6        A: 0/238475 MB (0%)
D d                     State: up       /dev/ad7        A: 0/238475 MB (0%)

1 volume:
V data                  State: up       Plexes:       1 Size:        931 GB

1 plex:
P data.p0             S State: up       Subdisks:     4 Size:        931 GB

4 subdisks:
S data.p0.s3            State: up       D: d            Size:        232 GB
S data.p0.s2            State: up       D: c            Size:        232 GB
S data.p0.s1            State: up       D: b            Size:        232 GB
S data.p0.s0            State: up       D: a            Size:        232 GB

[root@fileserver ~]# fsck -t ufs -y /dev/gvinum/data
** /dev/gvinum/data
** Last Mounted on /data
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
258700 files, 419044280 used, 53985031 free (39599 frags, 6743179 blocks, 0.0% fragmentation)

***** FILE SYSTEM MARKED CLEAN *****

[root@fileserver ~]# mount -t ufs /dev/gvinum/data /data
[root@fileserver ~]# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1d on /tmp (ufs, local, soft-updates)
/dev/ad0s1e on /usr (ufs, local, soft-updates)
/dev/ad0s1f on /var (ufs, local, soft-updates)
/dev/gvinum/data on /data (ufs, local)
[root@fileserver ~]#

Now, I have to advise the customer again to set up a backup file server.

Thank you very much,
Felipe Neuwald.

Ulf Lilleengen wrote:
> On Fri, Oct 19, 2007 at 03:43:14 -0200, Felipe Neuwald wrote:
>> Hi folks,
>>
>> I have one gvinum raid on a FreeBSD 6.1-RELEASE machine.
>> There are 4 disks running, as you can see:
>>
>> [root@fileserver ~]# gvinum list
>> 4 drives:
>> D a                     State: up       /dev/ad4        A: 0/238474 MB (0%)
>> D b                     State: up       /dev/ad5        A: 0/238475 MB (0%)
>> D c                     State: up       /dev/ad6        A: 0/238475 MB (0%)
>> D d                     State: up       /dev/ad7        A: 0/238475 MB (0%)
>>
>> 1 volume:
>> V data                  State: down     Plexes:       1 Size:        931 GB
>>
>> 1 plex:
>> P data.p0             S State: down     Subdisks:     4 Size:        931 GB
>>
>> 4 subdisks:
>> S data.p0.s3            State: stale    D: d            Size:        232 GB
>> S data.p0.s2            State: up       D: c            Size:        232 GB
>> S data.p0.s1            State: up       D: b            Size:        232 GB
>> S data.p0.s0            State: up       D: a            Size:        232 GB
>>
>> But, as you can see, data.p0.s3 is "stale". What should I do to try to
>> recover this and get the raid up again (and recover the information)?
>
> Hello,
>
> Since your plex organization is RAID0 (striping), recovering after a drive
> failure is a problem since you don't have any redundancy, but if you didn't
> replace any drives etc., this could just be gvinum fooling around. In that
> case, doing a 'gvinum setstate -f up data.p0.s3' should get the volume up
> again.
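
[Archive note: for readers hitting the same symptom, a minimal sketch of the
recovery sequence described in this thread, using the volume and subdisk names
from the messages above (data, data.p0.s3); adjust them to your own
configuration. It assumes, as Ulf points out, that no drive was actually
replaced and the "stale" flag is spurious.]

# Force the stale subdisk back up (only sensible when the disk itself is fine):
gvinum setstate -f up data.p0.s3

# Confirm that the subdisks, plex and volume all report "up" again:
gvinum list

# Check the file system before mounting, then mount the volume:
fsck -t ufs -y /dev/gvinum/data
mount -t ufs /dev/gvinum/data /data

[On a striped (RAID0) plex this only clears a bogus state flag; if a member
disk really failed, there is no redundancy to rebuild from and the data on the
volume is lost.]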