From owner-freebsd-questions@freebsd.org Sat Jul 8 02:21:58 2017
Subject: Re: Drive labelling with ZFS
To: freebsd-questions@freebsd.org
From: David Christensen <dpchrist@holgerdanske.com>
Date: Fri, 7 Jul 2017 19:21:55 -0700
Message-ID: <771917ae-7e07-95d0-5cee-4bda8578a646@holgerdanske.com>
List-Id: User questions

On 07/07/17 03:47, Frank Leonhardt wrote:
> I'm afraid the Lucas book has a lot of stuff in it that may have been
> true once.
> I've had a fun time with the chance to experiment with "big hardware"
> full time for a few weeks, and have some differing views on some of it.
>
> With big hardware you can flash the light on any drive you like (using
> FreeBSD sesutil), so the label problem goes away anyhow. With a small
> SATA array I really don't think there's a solution. Basically, ZFS will
> cope with having its drives installed anywhere and stitch them together
> where it finds them. If you accidentally swap a disk around, its
> internal label will be wrong. More to the point, if you have to migrate
> drives to another machine, ZFS will be cool but your labels won't be.
>
> The most useful thing I can think of is to label the caddies with the
> GUID (first or last 2-3 digits). If you have only one shelf, you should
> be able to find the one you want quickly enough.

As I understand it, ZFS goes by the UUID/GUID. So, using GUIDs for
software and applying matching physical labels to each drive/caddy makes
sense.

> Incidentally, the Lucas book says you should configure your raidz
> arrays with 2, 4, 8, 16... data drives plus extras depending on the
> level of redundancy. I couldn't see why, so did some digging. The only
> reason I found relates to the "parity" data fitting exactly into a
> block, assuming specific (small) block sizes to start with. Even if you
> hit this magic combo, using compression is A Good Thing with ZFS, so
> your logical:physical mapping is never going to hold anyway. So do what
> you like with raidz. With four drives I'd go for raidz2, because I like
> to have more than one spare drive. With 2x2 mirrors you run the risk of
> killing the remaining drive of a pair when the first one dies. It
> happens more often than you'd think, because resilvering stresses the
> remaining drive, and if it's going to go, that's when it will (a
> scientific explanation for sod's law). That said, mirrors are useful if
> the drives are separated on different shelves.
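For the archives, here is roughly how that looks on FreeBSD. The pool
name "tank" and the da* device names below are made up for illustration;
substitute your own, and note that sesutil only works on drives behind an
SES-capable enclosure:

```shell
# Show vdev GUIDs instead of device names (zpool status -g) --
# these are the numbers to copy onto the caddy labels:
zpool status -g tank

# Or dump one drive's on-disk label and pick out its GUID:
zdb -l /dev/da4p1 | grep -w guid

# Flash the locate LED on an enclosure slot, then turn it off:
sesutil locate da4 on
sesutil locate da4 off

# Four-drive raidz2, as discussed above:
zpool create tank raidz2 da0 da1 da2 da3
```

These need root and a live pool/enclosure, so treat them as a sketch
rather than something to paste blindly.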
> It depends on your level of paranoia, but in a SOHO environment there's
> a tendency to use an array as its own backup.
>
> If you could get a fifth drive, raidz2 would be even better. raidz1
> with four drives is statistically safer than two mirrors, as long as
> you swap the failed drive fast. And on that subject, it's good to have
> a spare slot in the array for the replacement drive. Unless the failed
> drive has completely failed, this is much kinder to the remaining
> drives during the resilver.

Thanks for the information. :-)

David