Date:      Tue, 22 Jan 2019 11:15:59 +0000 (GMT)
From:      andy thomas <andy@time-domain.co.uk>
To:        Ireneusz Pluta <ipluta@wp.pl>
Cc:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: ZFS on Hardware RAID
Message-ID:  <alpine.BSF.2.21.1901211548570.24493@mail0.time-domain.co.uk>
In-Reply-To: <ee6353dc-161f-407e-d976-71ca652970a0@wp.pl>
References:  <1180280695.63420.1547910313494.JavaMail.zimbra@gausus.net> <92646202.63422.1547910433715.JavaMail.zimbra@gausus.net> <CAOeNLurgn-ep1e=Lq9kgxXK+y5xqq4ULnudKZAbye59Ys7q96Q@mail.gmail.com> <alpine.BSF.2.21.1901200834470.12592@mail0.time-domain.co.uk> <ee6353dc-161f-407e-d976-71ca652970a0@wp.pl>

On Sun, 20 Jan 2019, Ireneusz Pluta wrote:

> On 2019-01-20 at 09:45, andy thomas wrote:
>> I run a number of very busy webservers (Dell PowerEdge 2950 with LSI 
>> MegaRAID SAS 1078 controllers) with the first two disks in RAID 1 as the 
>> FreeBSD system disk and the remaining 4 disks configured as RAID 0 virtual 
>> disks making up a ZFS RAIDz1 pool with 3 disks plus one hot spare. 
> In this configuration, have you ever made a test of causing a drive failure, 
> to see the hot spare activated?

Yesterday I set up a spare Dell 2950 with a PERC 5/i integrated controller 
and six 73 GB SAS disks, with the first two disks configured as a RAID 1 
system disk (/dev/mfid0) and the remaining four disks as individual RAID 0 
virtual disks (mfid1-mfid4). After adding a freebsd-zfs GPT partition to 
each of these four disks, I created a RAIDz1 pool using mfid1p1, mfid2p1 
and mfid3p1, with mfid4p1 as a spare, and then created a simple ZFS 
filesystem.
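For reference, the setup commands were along these lines (a sketch, not a 
transcript - the pool name 'tank' and the filesystem name are my own 
placeholders):

  # one GPT scheme plus a freebsd-zfs partition per data disk
  # (repeat for mfid2 through mfid4)
  gpart create -s gpt mfid1
  gpart add -t freebsd-zfs mfid1

  # raidz1 pool over three disks, with the fourth as a hot spare
  zpool create tank raidz1 /dev/mfid1p1 /dev/mfid2p1 /dev/mfid3p1 \
      spare /dev/mfid4p1

  # a simple filesystem on the pool
  zfs create tank/data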

After copying a few hundred MB of files to the ZFS filesystem, I yanked 
/dev/mfid3 out to simulate a disk failure. I was then able to manually 
detach the failed disk and replace it with the spare. Later, after pushing 
/dev/mfid3 back in, rebooting and scrubbing the pool, mfid4 had 
automatically and permanently taken the place of the pulled mfid3, and 
mfid3 became the new spare.
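The manual replacement was essentially the standard hot-spare workflow 
(again assuming the pool is named 'tank'):

  # check the pool; the pulled disk shows up as REMOVED/UNAVAIL
  zpool status tank

  # bring the hot spare in for the failed disk and resilver
  zpool replace tank /dev/mfid3p1 /dev/mfid4p1

  # once resilvering completes, detach the failed disk so the
  # spare becomes a permanent member of the raidz1 vdev
  zpool detach tank /dev/mfid3p1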

So a spare disk replacing a failed disk seems to be semi-automatic in 
FreeBSD (this was version 10.3), although I have seen fully automatic 
replacement on a Solaris SPARC platform.
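Whether replacement happens by itself also depends on the pool's 
autoreplace property, which is off by default and worth checking when 
comparing platforms:

  # with autoreplace=on, a new device found in the same physical
  # location as a removed device is formatted and brought into the
  # pool without a manual zpool replace
  zpool set autoreplace=on tank
  zpool get autoreplace tank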

Andy


