Date: Mon, 25 May 2009 19:37:59 +0300
From: Valentin Bud <valentin.bud@gmail.com>
To: Graeme Dargie <arab@tangerine-army.co.uk>
Cc: Howard Jones <howard.jones@network-i.net>, freebsd-questions@freebsd.org
Subject: Re: FreeBSD & Software RAID
Message-ID: <139b44430905250937u3410ac24g1f0b9f89a0d51f22@mail.gmail.com>
In-Reply-To: <01FB8F39BAD0BD49A6D0DA8F7897392956C7@Mercury.galaxy.lan.lcl>
References: <4A1AA3DC.5020300@network-i.net> <01FB8F39BAD0BD49A6D0DA8F7897392956C7@Mercury.galaxy.lan.lcl>
On Mon, May 25, 2009 at 7:30 PM, Graeme Dargie <arab@tangerine-army.co.uk> wrote:
>
> -----Original Message-----
> From: Howard Jones [mailto:howard.jones@network-i.net]
> Sent: 25 May 2009 14:58
> To: freebsd-questions@freebsd.org
> Subject: FreeBSD & Software RAID
>
> Hi,
>
> Can anyone with experience of software RAID point me in the right
> direction, please? I've used gmirror before with no trouble, but
> nothing fancier.
>
> I have a set of brand-new 1TB drives, a Sil3124 SATA card and a
> FreeBSD 7.1-p4 system.
>
> I created a RAID 5 set with gvinum:
>
>     drive d0 device /dev/ad4s1a
>     drive d1 device /dev/ad6s1a
>     drive d2 device /dev/ad8s1a
>     drive d3 device /dev/ad10s1a
>     volume jumbo
>       plex org raid5 256k
>         sd drive d0
>         sd drive d1
>         sd drive d2
>         sd drive d3
>
> and it shows as up and happy. If I reboot, all the subdisks show as
> stale, and so the plex is down. It then seems to be doing a rebuild,
> although it wasn't before, and it would newfs, mount and accept data
> onto the new plex before the reboot.
>
> Is there any way to avoid having to wait while gvinum apparently
> calculates the parity on all those zeroes?
>
> Am I missing some step to 'liven up' the plex before the first
> reboot? (loader.conf has the correct line to load gvinum at boot.) I
> tried again with 'gvinum start jumbo' before rebooting, and that made
> no difference.
>
> Also, is the configuration file format actually documented anywhere?
> I got that example from someone's blog, but the gvinum manpage
> doesn't mention the format at all! It *does* have a few pages
> dedicated to things that don't work, which was handy... :-) The
> handbook is still talking about ccd and vinum, and mostly covers the
> complications of booting off such a device.
>
> On the subject of documentation, I'm also assuming that this:
>
>     S jumbo.p0.s2        State: I 1%    D: d2    Size: 931 GB
>
> means it's 1% through initialising, because neither the states nor
> the output of 'list' are described in the manual either.
>
> I was half-considering switching to ZFS, but the most positive thing
> I could find written about it (as implemented on FreeBSD) is that it
> "doesn't crash that much", so perhaps not. That was from a while ago,
> though.
>
> Does anyone use software RAID5 (or RAIDZ) for data they care about?
>
> Cheers,
>
> Howie
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to
> "freebsd-questions-unsubscribe@freebsd.org"
>
>
> I have been running ZFS RAIDZ for 5 months on a 7.1 amd64 install,
> and I have to say my experience has been mostly good. Initially I had
> an issue with a PCI SATA card causing drives to disconnect, but after
> investing in a new motherboard with 6 SATA ports everything has been
> smooth. I did have to replace a disk last week as it was showing
> checksum, read and write errors. ZFS rebuilt 2TB of data in around
> 5 hours and did not lose any files at all.
>
> Regards
>
> Graeme
>
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to
> "freebsd-questions-unsubscribe@freebsd.org"

I have been using ZFS for about half a year. I just have mirroring with
2 drives. Never had a problem with it. I would go with ZFS in the
future too. And yes, the server is in production and it holds all sorts
of important data.

a great day,
v

--
network warrior since 2005
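For anyone hitting the same stale-after-reboot problem, a minimal
sketch of one way to coax the plex back up, assuming the subdisks are
merely flagged stale and the volume is still empty. The setstate
syntax below is from memory, so check gvinum(8) before forcing state
on a volume that holds real data:

    # Show objects and their states (V = volume, P = plex, S = subdisk).
    gvinum list

    # A plain start kicks off the parity rebuild Howie is seeing.
    gvinum start jumbo

    # On a freshly created, still-empty plex it may be acceptable to
    # skip the rebuild by forcing the subdisks up. Never do this once
    # real data is on the volume.
    gvinum setstate -f up jumbo.p0.s0
    gvinum setstate -f up jumbo.p0.s1
    gvinum setstate -f up jumbo.p0.s2
    gvinum setstate -f up jumbo.p0.s3

    # Write the now-clean configuration back to the drives.
    gvinum saveconfig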
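And since the thread leans toward ZFS anyway, a minimal RAIDZ sketch
for 7.x, reusing the pool name "jumbo" from above and assuming the
four whole disks (rather than the s1a slices) are handed to the pool:

    # Load ZFS at boot and enable it at startup.
    echo 'zfs_load="YES"' >> /boot/loader.conf
    echo 'zfs_enable="YES"' >> /etc/rc.conf

    # Create a single-parity RAIDZ pool from the four drives; the pool
    # is created and its root dataset mounted at /jumbo in one step,
    # no newfs needed.
    zpool create jumbo raidz ad4 ad6 ad8 ad10

    # Health and resilver/scrub progress show up here.
    zpool status jumbo

    # Datasets mount themselves; no fstab entry needed.
    zfs create jumbo/data
    zfs list

On 7.x it was also common to tune vm.kmem_size and vfs.zfs.arc_max in
loader.conf (the FreeBSD wiki's ZFS tuning page covers this); amd64
boxes with plenty of RAM need less of that.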