From owner-freebsd-geom@FreeBSD.ORG Fri Oct 19 20:18:36 2007
Date: Fri, 19 Oct 2007 22:00:41 +0200
From: Ulf Lilleengen <lulf@stud.ntnu.no>
To: Felipe Neuwald
Cc: freebsd-geom@freebsd.org
Subject: Re: gvinum - problem on hard disk
Message-ID: <20071019200041.GA16812@stud.ntnu.no>
In-Reply-To: <4718ECB2.9050207@neuwald.biz>
User-Agent: Mutt/1.5.9i
List-Id: GEOM-specific discussions and implementations

On Fri, Oct 19, 2007 at 03:43:14 -0200, Felipe Neuwald wrote:
> Hi folks,
>
> I have one gvinum RAID on a FreeBSD 6.1-RELEASE machine. There are 4
> disks running, as you can see:
>
> [root@fileserver ~]# gvinum list
> 4 drives:
> D a           State: up    /dev/ad4    A: 0/238474 MB (0%)
> D b           State: up    /dev/ad5    A: 0/238475 MB (0%)
> D c           State: up    /dev/ad6    A: 0/238475 MB (0%)
> D d           State: up    /dev/ad7    A: 0/238475 MB (0%)
>
> 1 volume:
> V data        State: down  Plexes:   1  Size: 931 GB
>
> 1 plex:
> P data.p0   S State: down  Subdisks: 4  Size: 931 GB
>
> 4 subdisks:
> S data.p0.s3  State: stale D: d         Size: 232 GB
> S data.p0.s2  State: up    D: c         Size: 232 GB
> S data.p0.s1  State: up    D: b         Size: 232 GB
> S data.p0.s0  State: up    D: a         Size: 232 GB
>
> But, as you can see, data.p0.s3 is "stale". What should I do to try to
> recover this and get the RAID up again (and recover the information)?

Hello,

Since your plex organization is RAID0 (striping), recovering after a real
drive failure is a problem because you have no redundancy. However, if you
did not replace any drives or similar, this could just be gvinum getting
confused about the subdisk state. In that case, running
'gvinum setstate -f up data.p0.s3' should bring the volume up again.

-- 
Ulf Lilleengen
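
For reference, a possible recovery sequence after forcing the subdisk up,
assuming the volume carries a UFS filesystem and is normally mounted on
/data (both of these are assumptions; adjust the device and mount point to
the actual setup):

  # Force the stale subdisk back up, then confirm the plex and volume state.
  gvinum setstate -f up data.p0.s3
  gvinum list

  # Check the filesystem on the volume before mounting it read-write.
  # /dev/gvinum/data is the device node gvinum creates for the volume "data".
  fsck -t ufs /dev/gvinum/data

  # The mount point /data is an assumption; use whatever /etc/fstab says.
  mount /dev/gvinum/data /data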