Date: Wed, 07 Nov 2007 08:08:42 -0500
From: 韓家標 Bill Hacker <askbill@conducive.net>
To: freebsd-current@freebsd.org
Subject: Re: geom_raid5 inclusion in HEAD?
Message-ID: <4731B8DA.8010201@conducive.net>
In-Reply-To: <9bbcef730711070450x308129b4rb18577c317eee197@mail.gmail.com>
References: <fgs516$mj8$1@ger.gmane.org> <487375.1457.qm@web30309.mail.mud.yahoo.com> <9bbcef730711070450x308129b4rb18577c317eee197@mail.gmail.com>
Ivan Voras wrote:
> On 07/11/2007, Arne Wörner <arne_woerner@yahoo.com> wrote:
>
>> graid5 puts write requests for about kern.geom.raid5.wdt seconds (but not
>> less than 1-2 seconds) into the write cache (if there is enough space left
>> in graid5's write cache)... I would guess that this behaviour is pretty
>> incompatible with soft-updates across a power outage...
>
> Can this cache be disabled?

Probably - but recent info shows it to be the prime mover in providing decent
performance (when things are NOT broken).

>
>> Then there still is the write cache of the hard discs (I don't know how
>> long it waits, but that time would come in addition to graid5's delay)...
>>
>> Maybe gjournal could help, because graid5 honors BIO_FLUSH, but that is
>> untested...
>
> Yes, AFAIK this would work.
>

RAID5 is one of the harder RAID levels to do both fast and well in software
only.

The better hardware ($$$) controllers have fast hardware XOR engines as well
as CPU-as-state-machines and battery-backed cache, and THEY have to work hard.

Further, a hardware controller sits in the right place to do the job well; the
'GP' CPU(s) - no matter that they have spare cycles to burn - do not.

I don't think even GEOM magic can get around that w/o user willingness to take
on some unavoidable compromises.

Given decent hardware & any UPS that costs less than the hardware controller,
these are 'choices' - not really show-stoppers.

Bill
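
As a concrete illustration of the "can the cache be disabled" question: the
delay is exposed through the kern.geom.raid5.wdt sysctl that Arne mentions, so
it could in principle be read or shortened from userland. A minimal sketch
follows; only the sysctl name comes from the thread - the value's type (an int,
in seconds) and the idea that writing 0 effectively disables the delay are
assumptions, not confirmed here.

/*
 * Read and optionally lower kern.geom.raid5.wdt, the write-cache delay
 * discussed above.  The value type (int, seconds) and the effect of
 * setting it to 0 are assumptions; only the sysctl name is from the thread.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char *argv[])
{
	int wdt;
	size_t len = sizeof(wdt);

	if (sysctlbyname("kern.geom.raid5.wdt", &wdt, &len, NULL, 0) == -1) {
		fprintf(stderr, "sysctlbyname: %s\n", strerror(errno));
		return (1);
	}
	printf("current write-cache delay: %d s\n", wdt);

	if (argc > 1) {
		int newwdt = atoi(argv[1]);

		/*
		 * Lowering the delay trades throughput for a smaller window
		 * of un-flushed writes on power loss.  Needs root.
		 */
		if (sysctlbyname("kern.geom.raid5.wdt", NULL, NULL,
		    &newwdt, sizeof(newwdt)) == -1) {
			fprintf(stderr, "sysctlbyname: %s\n", strerror(errno));
			return (1);
		}
	}
	return (0);
}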
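
To make the XOR argument concrete: for every full-stripe write, a software
RAID5 has to fold all the data blocks of the stripe together to produce the
parity block, byte by byte, on the general-purpose CPU, where a hardware
controller hands it to a dedicated engine. A rough sketch of that inner loop
follows - illustrative only, not graid5's actual code.

/*
 * Illustrative RAID5 parity computation: parity = XOR of the N-1 data
 * blocks in a stripe.  This is the work a dedicated XOR engine offloads
 * from the host CPU; the code is NOT taken from graid5.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static void
raid5_parity(uint8_t *parity, uint8_t *const data[], size_t ndisks,
    size_t blocksize)
{
	size_t d, i;

	for (i = 0; i < blocksize; i++)
		parity[i] = 0;

	/* XOR every data block of the stripe into the parity block. */
	for (d = 0; d < ndisks - 1; d++)
		for (i = 0; i < blocksize; i++)
			parity[i] ^= data[d][i];
}

int
main(void)
{
	uint8_t d0[4] = { 0x0f, 0x0f, 0x0f, 0x0f };
	uint8_t d1[4] = { 0xf0, 0x0f, 0x00, 0xff };
	uint8_t d2[4] = { 0x01, 0x02, 0x03, 0x04 };
	uint8_t *const data[] = { d0, d1, d2 };
	uint8_t parity[4];

	/* 3 data disks + 1 parity disk, 4-byte "blocks" for the demo. */
	raid5_parity(parity, data, 4, sizeof(parity));
	printf("parity: %02x %02x %02x %02x\n",
	    parity[0], parity[1], parity[2], parity[3]);
	return (0);
}

For a partial-stripe write the old data and old parity also have to be read
back first (the RAID5 read-modify-write cycle), which is presumably where
graid5's write cache earns its keep, by gathering writes into full stripes
before computing parity.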