Date:      Tue, 22 Jan 2019 13:04:45 +0100
From:      Borja Marcos <borjam@sarenet.es>
To:        andy thomas <andy@time-domain.co.uk>
Cc:        Ireneusz Pluta <ipluta@wp.pl>, freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: ZFS on Hardware RAID
Message-ID:  <B4991FA9-7E94-4994-BDDB-EC59AF6DB960@sarenet.es>
In-Reply-To: <alpine.BSF.2.21.1901211548570.24493@mail0.time-domain.co.uk>
References:  <1180280695.63420.1547910313494.JavaMail.zimbra@gausus.net> <92646202.63422.1547910433715.JavaMail.zimbra@gausus.net> <CAOeNLurgn-ep1e=Lq9kgxXK%2By5xqq4ULnudKZAbye59Ys7q96Q@mail.gmail.com> <alpine.BSF.2.21.1901200834470.12592@mail0.time-domain.co.uk> <ee6353dc-161f-407e-d976-71ca652970a0@wp.pl> <alpine.BSF.2.21.1901211548570.24493@mail0.time-domain.co.uk>



> On 22 Jan 2019, at 12:15, andy thomas <andy@time-domain.co.uk> wrote:
>
> Yesterday I set up a spare Dell 2950 with Perc 5/i Integrated HBA and
> six 73 GB SAS disks, with the first two disks configured as a RAID 1
> system disk (/dev/mfid0) and the remaining 4 disks as RAID 0
> (mfid1-mfid4). After adding a freebsd-zfs GPT partition to each of
> these 4 disks I then created a RAIDz1 pool using mfid1p1, mfid2p1 and
> mfid3p1 with mfid4p1 as a spare, followed by creating a simple ZFS
> filesystem.
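
For reference, that sequence would look roughly like the following (the
pool name "tank" and the alignment are my own choices, not taken from
your exact commands):

  # gpart create -s gpt mfid1            (repeat for mfid2..mfid4)
  # gpart add -t freebsd-zfs -a 1m mfid1
  # zpool create tank raidz1 mfid1p1 mfid2p1 mfid3p1 spare mfid4p1
  # zfs create tank/test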
>
> After copying a few hundred MB of files to the ZFS filesystem, I
> yanked /dev/mfid3 out to simulate a disk failure. I was then able to
> manually detach the failed disk and replace it with the spare. Later,
> after pushing /dev/mfid3 back in followed by a reboot and scrubbing
> the pool, mfid4 automatically replaced the former mfid3 that was
> pulled out and mfid3 became the new spare.
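
The manual part of that is, roughly (again assuming a pool called
"tank" and the device names from your test):

  # zpool replace tank mfid3p1 mfid4p1   (put the hot spare into service)
  # zpool detach tank mfid3p1            (drop the failed disk; the spare stays in)
  # zpool status tank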

You shouldn't require a reboot. If the actual targets are exposed to
the CAM layer your disks will appear as "da" devices (SAS backplane)
and you can offline a device, hot plug a new one, at most do a
"camcontrol rescan" to detect it, and run a zfs replace (or whatever)
without stopping the system.
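
On a CAM-visible setup the whole exchange is something like this
(device and pool names are placeholders):

  # zpool offline tank da3
  (pull the failed disk, insert the replacement)
  # camcontrol rescan all
  # zpool replace tank da3         (append the new device name if it differs)
  # zpool status tank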

If your drives are "mfid" devices you may need either a reboot or some
magic rituals using "sas2ircu" or "sas3ircu" to have the controller
recognize the new drive and accept it as a valid volume.
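
With those utilities the usual starting point is something like this
(controller number 0 is an assumption; the volume creation itself goes
through the CREATE verb, whose exact syntax I would double-check in the
sas2ircu manual):

  # sas2ircu list             (find the controller number)
  # sas2ircu 0 display        (show enclosures, bays and existing volumes)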

> So a spare disk replacing a failed disk seems to be semi-automatic in
> FreeBSD (this was version 10.3) although I have seen fully automatic
> replacement on a Solaris SPARC platform.

There are several stages at play here:

1- Starting up and recognizing a SAS or SATA drive.

2- Having it recognized as a volume by a RAID card. With LSI cards and
single disk RAID0 volumes it may require a reboot or using the
sas2ircu/sas3ircu utility.

3- ZFS replacement, which on Solaris can be automatic and on FreeBSD is
done manually (I haven't tried zfsd yet; see the sketch below).
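
For the record, the zfsd route on FreeBSD 11 and later would look
roughly like this (pool name assumed; I haven't run it myself):

  # zpool set autoreplace=on tank
  # sysrc zfsd_enable="YES"
  # service zfsd start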






Borja.


