Date:      Wed, 7 Mar 2007 12:16:51 -0700
From:      Clayton F <clayton@bitheaven.net>
To:        freebsd-geom@freebsd.org
Subject:   Problems simulating gvinum raid5 rebuild
Message-ID:  <0B1A704D-A455-4741-BC11-A2019BFB4B22@bitheaven.net>

Howdy GEOM group,

I've been using gvinum raid5 for the past year, but have never had
to replace a drive. One just failed, and I backed up my files from
the degraded raid5 array to an external drive before attempting a
rebuild with a replacement drive. I botched the rebuild attempt, so
I am now trying to better understand how to go about restoring a
raid5 under gvinum. Unfortunately, it is behaving strangely, and I
haven't been able to find anything in the man pages or the GEOM
archives that addresses my problem.

My new configuration has 7 drives instead of the original 5, all
partitioned in dangerously dedicated mode. I'm running 6.2-STABLE.
The gvinum config is as follows:

     [root@alcor /export]# gvinum l
     7 drives:
     D disk6                 State: up       /dev/ad14       A: 0/194480 MB (0%)
     D disk5                 State: up       /dev/ad12       A: 0/194480 MB (0%)
     D disk4                 State: up       /dev/ad10       A: 0/194480 MB (0%)
     D disk3                 State: up       /dev/ad8        A: 0/194480 MB (0%)
     D disk2                 State: up       /dev/ad6        A: 0/194480 MB (0%)
     D disk1                 State: up       /dev/ad4        A: 0/194480 MB (0%)
     D disk0                 State: up       /dev/ad2        A: 0/194480 MB (0%)

     1 volume:
     V raid                  State: up       Plexes:       1 Size:       1139 GB

     1 plex:
     P raid.p0            R5 State: up       Subdisks:     7 Size:       1139 GB

     7 subdisks:
     S raid.p0.s0            State: up       D: disk0        Size:        189 GB
     S raid.p0.s1            State: up       D: disk1        Size:        189 GB
     S raid.p0.s2            State: up       D: disk2        Size:        189 GB
     S raid.p0.s3            State: up       D: disk3        Size:        189 GB
     S raid.p0.s4            State: up       D: disk4        Size:        189 GB
     S raid.p0.s5            State: up       D: disk5        Size:        189 GB
     S raid.p0.s6            State: up       D: disk6        Size:        189 GB
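
For reference, the description file I fed to "gvinum create" was along
these lines, reconstructed from memory (the 512k stripe size and the
"length 0" shorthand are just examples and may not be exactly what I
used):

     # drives, one per disk, using the devices shown in the listing above
     drive disk0 device /dev/ad2
     drive disk1 device /dev/ad4
     drive disk2 device /dev/ad6
     drive disk3 device /dev/ad8
     drive disk4 device /dev/ad10
     drive disk5 device /dev/ad12
     drive disk6 device /dev/ad14
     # one raid5 plex across all seven drives; 512k stripe is a guess
     volume raid
       plex org raid5 512k
         sd length 0 drive disk0
         sd length 0 drive disk1
         sd length 0 drive disk2
         sd length 0 drive disk3
         sd length 0 drive disk4
         sd length 0 drive disk5
         sd length 0 drive disk6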

I am able to create a filesystem on the array, mount it, and
read/write without problems. Next, I attempt to simulate a hardware
drive failure by rebooting with one drive in the array unplugged,
expecting to see that the array is available but degraded due to
the loss of a drive. Instead, I get the following report, which
shows the subdisk of the missing drive (disk4, or subdisk
raid.p0.s4) as 'up,' but reports one fewer drive and a volume size
189 GB smaller than the full 7-drive array. The array will mount,
but has no data. Listing the mounted filesystems with df shows the
original 1.1-terabyte array size, not the smaller value reported by
gvinum. The output of gvinum l and df -h is below:

     [root@alcor /export]# shutdown -h now

     (power down and unplug disk4)

     [root@alcor ~]# gvinum l
     6 drives:
     D disk6                 State: up       /dev/ad14       A: 0/194480 MB (0%)
     D disk5                 State: up       /dev/ad12       A: 0/194480 MB (0%)
     D disk3                 State: up       /dev/ad8        A: 0/194480 MB (0%)
     D disk2                 State: up       /dev/ad6        A: 0/194480 MB (0%)
     D disk1                 State: up       /dev/ad4        A: 0/194480 MB (0%)
     D disk0                 State: up       /dev/ad2        A: 0/194480 MB (0%)

     1 volume:
     V raid                  State: up       Plexes:       1 Size:        949 GB

     1 plex:
     P raid.p0            R5 State: up       Subdisks:     6 Size:        949 GB

     7 subdisks:
     S raid.p0.s0            State: up       D: disk0        Size:        189 GB
     S raid.p0.s1            State: up       D: disk1        Size:        189 GB
     S raid.p0.s2            State: up       D: disk2        Size:        189 GB
     S raid.p0.s3            State: up       D: disk3        Size:        189 GB
     S raid.p0.s4            State: up       D: disk4        Size:        189 GB
     S raid.p0.s5            State: up       D: disk5        Size:        189 GB
     S raid.p0.s6            State: up       D: disk6        Size:        189 GB

     [root@alcor ~]# df -h
     Filesystem          Size    Used   Avail Capacity  Mounted on
     /dev/ad0s1a         496M     60M    396M    13%    /
     devfs               1.0K    1.0K      0B   100%    /dev
     /dev/ad0s1g          95G    1.7G     86G     2%    /jail
     /dev/ad0s1e         496M     18K    456M     0%    /tmp
     /dev/ad0s1f         9.7G    2.6G    6.3G    30%    /usr
     /dev/ad0s1d         1.4G     64M    1.3G     5%    /var
     /dev/gvinum/raid    1.1T    3.5G    1.0T     0%    /export
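
What I expected to see after rebooting without disk4 was the drive
marked as down and raid.p0.s4 marked stale or down, with the volume
still up in degraded mode. As an alternative to physically unplugging a
disk, I wondered whether the failure could be simulated in software
instead, something like the following (assuming I'm reading the
setstate description in gvinum(8) correctly; I haven't tried this yet):

     [root@alcor ~]# gvinum setstate down raid.p0.s4   # untested guess
     [root@alcor ~]# gvinum l                          # hoping for: s4 down, volume degraded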

If I plug the drive back in (powering down first - I don't have
hot-swappable hardware), the array comes up normally with its data
still intact. It is obvious to me that, as things stand, I could
never rebuild the array after losing a drive, which was my whole
reason for configuring raid5 in the first place. The behavior seems
more like a JBOD configuration than a raid5.
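
For what it's worth, here is the drive-replacement procedure I thought
was supposed to work, based on my reading of gvinum(8) and the
handbook. The file name is made up, and I'm assuming the replacement
disk appears on the same device (/dev/ad10); please correct me if the
steps themselves are wrong:

     [root@alcor ~]# cat /tmp/newdisk4.conf
     drive disk4 device /dev/ad10
     [root@alcor ~]# gvinum create /tmp/newdisk4.conf
     [root@alcor ~]# gvinum start raid.p0.s4   # revive the stale subdisk onto the new drive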

Any suggestions? Does using 7 drives exceed the number that gvinum
raid5 allows? Should I be labeling the drives differently (see below
for what I mean)? Is my method for simulating a drive
failure/replacement flawed? Any help would be most appreciated!
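
On the labeling question, what I have in mind is giving each disk a
regular bsdlabel with a partition whose fstype is "vinum" and pointing
the gvinum drive definitions at that partition instead of at the bare
device, roughly like this for one disk (partition letter hypothetical):

     [root@alcor ~]# bsdlabel -w ad4    # write a standard label
     [root@alcor ~]# bsdlabel -e ad4    # edit it, set a partition's fstype to "vinum"

and then using e.g. /dev/ad4e in the "drive ... device" lines. I don't
know whether that matters here, which is partly why I'm asking.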

Thanks,
Clayton