Date:      Sun, 12 Mar 2006 10:42:31 -0700
From:      "Andrew" <lightmoogle@hotmail.com>
To:        <freebsd-geom@freebsd.org>
Subject:   Gvinum RAID5 volume doesn't work after a simple reboot
Message-ID:  <BAY114-DAV13B3ABB49B9F1D650C6A47B2E30@phx.gbl>

I ran into the problem of initializing a RAID5 array after a reboot as well.
It seems that when you create the RAID5 array, everything shows as up and OK.
Indeed, you can newfs it, mount it, and put data on it. When you reboot,
though, it'll be in the "down" state. If you do start <RAID5>, it'll go
through the initialization process.
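
(For reference, the kind of setup I'm describing looks roughly like this --
the config-file name, the volume name "raid5vol", the stripe size, and the
ad0/ad1/ad2 devices are just placeholders for my own layout, not anything
canonical:)

    # cat raid5.conf
    drive d0 device /dev/ad0
    drive d1 device /dev/ad1
    drive d2 device /dev/ad2
    volume raid5vol
      plex org raid5 512k
        sd drive d0
        sd drive d1
        sd drive d2

    # gvinum create raid5.conf
    # gvinum list                        (everything reports "up" right away)
    # newfs /dev/gvinum/raid5vol
    # mount /dev/gvinum/raid5vol /mnt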

Let me add one thing: using dd to zero out the drives beforehand is a waste
of time, since the same work gets done again anyway once you reboot and the
initialization runs.

The fix I determined is to create the array, reboot, then start the array
and let it initialize. That initialization is persistent from then on -- it
only has to be done once. You mentioned needing to run newfs twice... this
seems like the reason why.
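
In command form, the order that worked for me was roughly this (again,
"raid5vol" and the config-file name are just my placeholders):

    # gvinum create raid5.conf
    # shutdown -r now                    (after the reboot the plex is "down")
    # gvinum start raid5vol              (kicks off the one-time initialization)
    # gvinum list                        (wait until the plex reports "up")
    # newfs /dev/gvinum/raid5vol
    # mount /dev/gvinum/raid5vol /mnt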

My problem, though, is when a disk is removed. If a disk is removed, the
array fails entirely -- there is no degraded state. If I reboot with the
drive re-added, things work flawlessly again. If I reboot with a _new_
drive, I can't replace the drive in the array and I can't rebuild parity...
I'm stuck. So, rather than RAID5's "one drive fails and you're still OK",
I'm getting, "Well, we're using that one drive for parity, but if _any_ of
your drives fail you're screwed."
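
For what it's worth, what I _expected_ to be able to do with the new drive
in place was something along these lines (object names are the autogenerated
volume.p0.s0 style, and I'm not claiming this is the blessed procedure --
it's just what the man page led me to try):

    # gvinum list                        (the subdisk on the lost drive shows as down/stale)
    # gvinum start raid5vol.p0.s2        (hoping it revives onto the new drive)
    # gvinum checkparity raid5vol.p0     (or rebuildparity, once the subdisk is back)

None of that gets me a rebuilt array here.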

The only thought I even have on that last part is that I'm using the entire
disk (/dev/ad0) as opposed to a labeled partition (/dev/ad0s1a)... but I
haven't tried messing with that yet.
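
If I do get around to trying it, I figure setting up a labeled partition
would go something like this (untested on my end, and whether the "vinum"
fstype actually matters to gvinum is a guess on my part):

    # fdisk -BI ad0          (one slice covering the whole disk)
    # bsdlabel -w ad0s1      (write a default label with an 'a' partition)
    # bsdlabel -e ad0s1      (edit the 'a' partition's fstype to "vinum")

...and then point the drive definitions at /dev/ad0s1a instead of /dev/ad0.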

-DrkShadow


