Date: Fri, 12 Sep 2008 12:04:27 -0400
From: "Zaphod Beeblebrox" <zbeeble@gmail.com>
To: freebsd-hackers@freebsd.org, kpielorz_lst@tdx.co.uk
Subject: Re: ZFS w/failing drives - any equivalent of Solaris FMA?
Message-ID: <5f67a8c40809120904o49b6e410l5b65a20f5216202@mail.gmail.com>
In-Reply-To: <200809121544.m8CFiRHQ099725@lurza.secnetix.de>
References: <C984A6E7B1C6657CD8C4F79E@Slim64.dmpriest.net.uk> <200809121544.m8CFiRHQ099725@lurza.secnetix.de>
On Fri, Sep 12, 2008 at 11:44 AM, Oliver Fromme <olli@lurza.secnetix.de> wrote:

> Did you try "atacontrol detach" to remove the disk from
> the bus?  I haven't tried that with ZFS, but gmirror
> automatically detects when a disk has gone away, and
> doesn't try to do anything with it anymore.  It certainly
> should not hang the machine.  After all, what's the
> purpose of a RAID when you have to reboot upon drive
> failure.  ;-)

To be fair, many "home" users run RAID without the expectation of being
able to hot swap the drives.  While RAID can provide high availability,
it can also provide simple data security.

In my home environment, I have a number of machines running.  I have a
few things on non-redundant disks --- mostly operating systems or local
archives of internet data (like a cvsup server, for instance).  Those
disks can be lost, and while it's a nuisance, it's not catastrophic.
Other things (from family photos to mp3s to other media) I keep on home
RAID arrays.  They're not hot swap... but I've had quite a few disks go
bad over the years.

I actually welcome ZFS for this --- the idea that checksums are kept
makes me feel a lot more secure about my data.  I have observed some
bitrot over time on some data.

To your point... I suppose you have to reboot at some point after the
drive failure, but my experience has been that the reboot has been under
my control some time after the failure (usually when I have the
replacement drive).

For the home user, this can be quite inexpensive, too.  I've found a
case that can take 19 drives internally (and has good cooling) for
about $125.  If you used some of the 5-in-3 drive bays, that number
would increase to 25.  About the only real improvement I'd like to see
in this setup is the ability to spin down idle drives.  That would be
an ideal setup for the home RAID array.
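For anyone wanting to try this at home, here's a rough sketch of the
detach-and-replace workflow we're describing.  The pool name ("tank")
and the device/channel names (ad6 on channel ata3) are made up for
illustration --- check "zpool status" and "atacontrol list" for the
real names on your own box:

    # take the dying disk out of service so ZFS stops touching it
    zpool offline tank ad6

    # drop it from its ATA channel (here ad6 happens to live on ata3;
    # channel numbering is system-specific, see "atacontrol list")
    atacontrol detach ata3

    # ...later, at my convenience: power down, swap the drive, boot...

    # resilver onto the replacement, which came back at the same name
    zpool replace tank ad6

    # watch the resilver finish, then let the checksums sweep for bitrot
    zpool status tank
    zpool scrub tank

As for spinning down idle drives: I believe newer FreeBSD grew an
"atacontrol spindown" subcommand (something like "atacontrol spindown
ad4 3600" to idle a disk after an hour), though I haven't tried it
myself against a pool that sees periodic writes.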