Date:      Mon, 22 Oct 2007 08:57:35 -0200
From:      Felipe Neuwald <felipe@neuwald.biz>
To:        Ulf Lilleengen <lulf@stud.ntnu.no>
Cc:        freebsd-geom@freebsd.org
Subject:   Re: gvinum - problem on hard disk
Message-ID:  <471C821F.2090101@neuwald.biz>
In-Reply-To: <20071019200041.GA16812@stud.ntnu.no>
References:  <4718ECB2.9050207@neuwald.biz> <20071019200041.GA16812@stud.ntnu.no>


   Hi Ulf,
   Thank you for the information. As you can see, it worked:
   [root@fileserver ~]# gvinum list
   4 drives:
   D a                     State: up       /dev/ad4        A: 0/238474 MB (0%)
   D b                     State: up       /dev/ad5        A: 0/238475 MB (0%)
   D c                     State: up       /dev/ad6        A: 0/238475 MB (0%)
   D d                     State: up       /dev/ad7        A: 0/238475 MB (0%)
   1 volume:
   V data                  State: up       Plexes:       1 Size:        931 GB
   1 plex:
   P data.p0             S State: up       Subdisks:     4 Size:        931 GB
   4 subdisks:
   S data.p0.s3            State: up       D: d            Size:        232 GB
   S data.p0.s2            State: up       D: c            Size:        232 GB
   S data.p0.s1            State: up       D: b            Size:        232 GB
   S data.p0.s0            State: up       D: a            Size:        232 GB
   [root@fileserver ~]# fsck -t ufs -y /dev/gvinum/data
   ** /dev/gvinum/data
   ** Last Mounted on /data
   ** Phase 1 - Check Blocks and Sizes
   ** Phase 2 - Check Pathnames
   ** Phase 3 - Check Connectivity
   ** Phase 4 - Check Reference Counts
   ** Phase 5 - Check Cyl groups
   258700 files, 419044280 used, 53985031 free (39599 frags, 6743179 blocks, 0.0% fragmentation)
   ***** FILE SYSTEM MARKED CLEAN *****
   [root@fileserver ~]# mount -t ufs /dev/gvinum/data /data
   [root@fileserver ~]# mount
   /dev/ad0s1a on / (ufs, local)
   devfs on /dev (devfs, local)
   /dev/ad0s1d on /tmp (ufs, local, soft-updates)
   /dev/ad0s1e on /usr (ufs, local, soft-updates)
   /dev/ad0s1f on /var (ufs, local, soft-updates)
   /dev/gvinum/data on /data (ufs, local)
   [root@fileserver ~]#
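   To have the volume come back by itself after a reboot, I will probably also
   add something like the following (just a sketch of what I assume is the
   usual setup for gvinum; the exact entries below are my assumption and not
   yet tested on this box):

   # /boot/loader.conf - load the gvinum module at boot
   geom_vinum_load="YES"

   # /etc/fstab - mount the volume automatically at boot
   /dev/gvinum/data    /data    ufs    rw    2    2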
   Now I have to advise the customer again to set up a backup file server.
   Thank you very much,
   Felipe Neuwald.
   Ulf Lilleengen wrote:

On Fri, Oct 19, 2007 at 03:43:14 -0200, Felipe Neuwald wrote:
  

Hi folks,

I have a gvinum RAID on a FreeBSD 6.1-RELEASE machine. There are 4
disks running, as you can see:

[root@fileserver ~]# gvinum list
4 drives:
D a                     State: up       /dev/ad4        A: 0/238474 MB (0%)
D b                     State: up       /dev/ad5        A: 0/238475 MB (0%)
D c                     State: up       /dev/ad6        A: 0/238475 MB (0%)
D d                     State: up       /dev/ad7        A: 0/238475 MB (0%)

1 volume:
V data                  State: down     Plexes:       1 Size:        931 GB

1 plex:
P data.p0             S State: down     Subdisks:     4 Size:        931 GB

4 subdisks:
S data.p0.s3            State: stale    D: d            Size:        232 GB
S data.p0.s2            State: up       D: c            Size:        232 GB
S data.p0.s1            State: up       D: b            Size:        232 GB
S data.p0.s0            State: up       D: a            Size:        232 GB


But, as you can see, data.p0.s3 is "stale". What should I do to try to
recover it and get the RAID up again (and recover the data)?

    

Hello,

Since your plex organization is RAID0 (striping), recovering after a real drive
failure is a problem because you don't have any redundancy. But if you didn't
replace any drives, this could just be gvinum fooling around. In that case,
running 'gvinum setstate -f up data.p0.s3' should get the volume up again.
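
For reference, the full recovery sequence would look roughly like this (the
device, subdisk and mount point names are the ones from your listing, and the
fsck/mount steps assume the filesystem on the volume is UFS):

# force the stale subdisk back up (only reasonable if the drive was not replaced)
gvinum setstate -f up data.p0.s3

# check the filesystem on the volume before mounting it
fsck -t ufs -y /dev/gvinum/data

# remount it on its usual mount point
mount -t ufs /dev/gvinum/data /data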
  
  


