Date: Fri, 19 Oct 2007 22:00:41 +0200
From: Ulf Lilleengen <lulf@stud.ntnu.no>
To: Felipe Neuwald <felipe@neuwald.biz>
Cc: freebsd-geom@freebsd.org
Subject: Re: gvinum - problem on hard disk
Message-ID: <20071019200041.GA16812@stud.ntnu.no>
In-Reply-To: <4718ECB2.9050207@neuwald.biz>
References: <4718ECB2.9050207@neuwald.biz>
On Fri, Oct 19, 2007 at 03:43:14 -0200, Felipe Neuwald wrote:
> Hi folks,
>
> I have one gvinum raid on a FreeBSD 6.1-RELEASE machine. There are 4
> disks running, as you can see:
>
> [root@fileserver ~]# gvinum list
> 4 drives:
> D a             State: up     /dev/ad4    A: 0/238474 MB (0%)
> D b             State: up     /dev/ad5    A: 0/238475 MB (0%)
> D c             State: up     /dev/ad6    A: 0/238475 MB (0%)
> D d             State: up     /dev/ad7    A: 0/238475 MB (0%)
>
> 1 volume:
> V data          State: down   Plexes:   1 Size:    931 GB
>
> 1 plex:
> P data.p0     S State: down   Subdisks: 4 Size:    931 GB
>
> 4 subdisks:
> S data.p0.s3    State: stale  D: d        Size:    232 GB
> S data.p0.s2    State: up     D: c        Size:    232 GB
> S data.p0.s1    State: up     D: b        Size:    232 GB
> S data.p0.s0    State: up     D: a        Size:    232 GB
>
> But, as you can see, data.p0.s3 is "stale". What should I do to try to
> recover this and get the raid up again (and recover the information)?

Hello,

Since your plex organization is RAID0 (striping), recovering after a drive
failure is a problem, because you don't have any redundancy. But if you
didn't replace any drives etc., this could just be gvinum fooling around.
In that case, doing a 'gvinum setstate -f up data.p0.s3' should get the
volume up again.
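For reference, a minimal recovery session along those lines could look
like the sketch below. The setstate command is the one from above; the
fsck/mount steps and the /data mount point are my assumptions, so adjust
them to your setup:

  # Force the stale subdisk back up. Only do this if the drive was NOT
  # replaced: a striped plex has no redundancy to rebuild from, so
  # forcing a fresh disk 'up' would expose garbage data.
  gvinum setstate -f up data.p0.s3

  # Verify that the subdisk, plex and volume all show 'up' again.
  gvinum list

  # Check the filesystem before mounting (assuming UFS on the volume;
  # gvinum volumes get device nodes under /dev/gvinum/).
  fsck -t ufs /dev/gvinum/data
  mount /dev/gvinum/data /data

--
Ulf Lilleengen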