Date:      Fri, 7 Jul 2017 19:21:55 -0700
From:      David Christensen <dpchrist@holgerdanske.com>
To:        freebsd-questions@freebsd.org
Subject:   Re: Drive labelling with ZFS
Message-ID:  <771917ae-7e07-95d0-5cee-4bda8578a646@holgerdanske.com>
In-Reply-To: <ce4e20c0-e9fc-be20-7e88-114bd61f6bdd@fjl.co.uk>
References:  <03643051-38e8-87ef-64ee-5284e2567cb8@fjl.co.uk> <b99a9f4e-f00d-c6fa-e709-d19e07ccb98e@holgerdanske.com> <7fa67076-3ec8-4c25-67b9-a1b8a0aa5afc@holgerdanske.com> <5940EE63.2080904@fjl.co.uk> <cd863f47-037f-15a6-573d-9d59efff7f43@holgerdanske.com> <ce4e20c0-e9fc-be20-7e88-114bd61f6bdd@fjl.co.uk>

On 07/07/17 03:47, Frank Leonhardt wrote:
> I'm afraid the Lucas book has a lot of stuff in it that may have been
> true once. I've had a fun time experimenting with "big hardware" full
> time for a few weeks, and have some differing views on some of it.
>
> With big hardware you can flash the light on any drive you like (using
> FreeBSD sesutil), so the label problem goes away anyhow. With a small
> SATA array I really don't think there's a solution. Basically ZFS will
> cope with having its drives installed anywhere and stitch them together
> wherever it finds them. If you accidentally swap a disk around, its
> internal label will be wrong. More to the point, if you have to migrate
> drives to another machine, ZFS will be cool but your labels won't be.
>
> The most useful thing I can think of is to label the caddies with the
> GUID (first or last 2-3 digits). If you have only one shelf you should
> be able to find the one you want quickly enough.

As I understand it, ZFS goes by the UUID/GUID.  So, using UUIDs in 
software and applying matching physical labels to each drive/caddy makes 
sense.
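
For example (untested here; the pool name "tank" and the device names 
are just placeholders), something along these lines should list the 
member GUIDs and flash a drive's locate LED on reasonably recent 
FreeBSD, assuming a SES-capable enclosure:

  # Show the pool members by GUID instead of device name:
  zpool status -g tank

  # Show the same members by full device path, to map GUID -> disk:
  zpool status -L -P tank

  # Blink the locate LED on the slot holding da4, then turn it off:
  sesutil locate da4 on
  sesutil locate da4 off

The first or last few digits of the GUIDs reported by the -g form are 
what you would write on the caddy labels.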


> Incidentally, the Lucas book says you should configure your raidz arrays
> with 2, 4, 8, 16... data drives plus extras depending on the level of
> redundancy. I couldn't see why, so did some digging. The only reason I
> found relates to the "parity" data fitting exactly into a block,
> assuming specific (small) block sizes to start with. Even if you hit
> this magic combo, compression is A Good Thing with ZFS, so your
> logical:physical mapping is never going to work out anyway. So do what
> you like with raidz. With four drives I'd go for raidz2, because I like
> to have more than one spare drive. With 2x2 mirrors you run the risk of
> killing the remaining drive of a pair when the first one dies. It
> happens more often than you'd think, because resilvering stresses the
> remaining drive, and if it's going to go, that's when (a scientific
> explanation for sod's law). That said, mirrors are useful if the drives
> are separated on different shelves. It depends on your level of
> paranoia, but in a SOHO environment there's a tendency to use an array
> as its own backup.
>
> If you could get a fifth drive, raidz2 would be even better. raidz1
> with four drives is statistically safer than two mirrors as long as you
> swap the failed drive quickly. And on that subject, it's good to have a
> spare slot in the array for the replacement drive. Unless the failed
> drive has died completely, this is much kinder to the remaining drives
> during the resilver.
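
That spare-slot approach would look roughly like this (a sketch only; 
"tank" and the da* device names are made up):

  # Four-disk raidz2 pool with a hot spare sitting in the extra slot:
  zpool create tank raidz2 da0 da1 da2 da3
  zpool add tank spare da4

  # Replace a failing member while it is still attached; the old disk
  # stays in the pool until the resilver onto da4 completes:
  zpool replace tank da2 da4

Keeping the ailing drive online during the resilver is what spares the 
remaining members the extra read load.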

Thanks for the information.  :-)


David



