Subject: Re: Drive labelling with ZFS
From: Frank Leonhardt <frank2@fjl.co.uk>
To: freebsd-questions@freebsd.org
Date: Fri, 7 Jul 2017 11:47:27 +0100

On 06/14/2017 07:22 AM, Frank Leonhardt wrote:
>> Hi David,
>>
>> It turns out that these options were set anyway. The problem turned
>> out to be that I was assuming that geom label played nicely with GPT.
>> It doesn't! Well, it does display labels set on GPT partitions, but it
>> doesn't change them. I took a look at the GPT blocks to confirm this.
>> It does, however, sometimes mask the GPT version with its own, leading
>> to much monkeyhouse.
>>
>> So ignore glabel completely and set the labels using gpart instead.
>>
>> Having got this sorted out, it turns out that it's really not as
>> useful as it sounds. On a new array you can find a broken drive this
>> way, but when it comes to moving a drive around (e.g. from the spare
>> slot to its correct location) life isn't so simple. First off, ZFS
>> does a good job of locating pool components wherever in the array you
>> move them, using the GUID. However, if you change the GPT label and
>> move the drive, ZFS will refer to it by the device name instead.
>> Nothing I have tried will persuade it otherwise. If you leave the
>> label intact it's now pointing to the wrong slot, which ZFS really
>> doesn't mind about, but this could really ruin your day if you don't
>> know.
>>
>> Now that FreeBSD 11.0 can flash the ident light on any drive you
>> choose, by device name (as used by ZFS), I'm seriously wondering if
>> labels are worth the bother if they can't be relied on. Consider what
>> happens if a tech pulls two drives and puts them back in the wrong
>> order. ZFS will carry on regardless, but the label will now identify
>> the wrong slot. Dangerous!
>>
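(For reference, "set the labels using gpart" above boils down to something like the following -- the device name, partition index and label are just examples, not anything from my actual array:

  # show the partition table with its current GPT labels
  gpart show -l da3

  # set or change the GPT label on partition 1 of da3
  gpart modify -i 1 -l bay07 da3

  # the label then turns up under /dev/gpt/
  ls /dev/gpt/

The /dev/gpt/ name is what you'd hand to zpool create if you want the pool to show label names rather than device names.)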
> I'm glad I was able to provide you with one useful clue.
>
> The Lucas books assume a fair amount of reader knowledge and
> follow-up, but they gave me a nice boost up the learning curve and
> were worth every penny. I probably would not have understood glabel
> vs. gpart without them.
>
> The /boot/loader.conf settings are also present on my FreeBSD 11.0
> system. The installer must have set them for me.
>
> I agree with the idea of having some kind of identifier other than
> the automatically generated interface-based device node (e.g.
> /dev/ada0s1) for devices/virtual devices. It sounds like FreeBSD
> provides multiple choices and the various subsystems are not well
> coordinated on their usage (?).
>
> I am a SOHO user who has only built a few JBOD and RAID0 arrays. But
> now I have four 1.5 TB drives and would like to put them to use with
> FreeBSD ZFS raidz1 or striped mirrors. If you figure out a "one label
> to rule them all" solution, please post it. (My preference at this
> point would be whitespace-free strings set by the administrator based
> on drive function -- e.g. "zraid1a", "zraid1b", "zraid1c", and
> "zraid1d", or "zmirror0a", "zmirror0b", "zmirror1a", and "zmirror1b"
> in my case; I plan to attach matching physical labels to the drives
> themselves. Failing free-form strings, I prefer make/model/serial
> number.)

I'm afraid the Lucas book has a lot of stuff in it that may have been true once. I've had a fun few weeks with the chance to experiment with "big hardware" full time, and I have some differing views on some of it.

With big hardware you can flash the light on any drive you like (using FreeBSD's sesutil), so the label problem goes away anyhow. With a small SATA array I really don't think there's a solution. Basically, ZFS will cope with having its drives installed anywhere and stitch them together wherever it finds them. If you accidentally swap a disk around, its internal label will be wrong. More to the point, if you have to migrate drives to another machine, ZFS will be cool but your labels won't be.

The most useful thing I can think of is to label the caddies with the GUID (the first or last 2-3 digits). If you have only one shelf you should be able to find the one you want quickly enough.

Incidentally, the Lucas book says you should configure your raidz arrays with 2, 4, 8, 16... data drives plus extras depending on the level of redundancy. I couldn't see why, so did some digging. The only reason I found relates to the "parity" data fitting exactly into a block, assuming specific (small) block sizes to start with. Even if you hit this magic combo, using compression is A Good Thing with ZFS, so your logical:physical mapping is never going to hold anyway. So do what you like with raidz.

With four drives I'd go for raidz2, because I like to have more than one drive's worth of redundancy. With 2x2 mirrors you run the risk of killing the remaining drive of a pair when the first one dies. It happens more often than you'd think, because resilvering stresses the remaining drive, and if it's going to go, that's when it will (a scientific explanation for sod's law). That said, mirrors are useful if the drives are separated on different shelves. It depends on your level of paranoia, but in a SOHO environment there's a tendency to use an array as its own backup.

If you could get a fifth drive, raidz2 would be even better. raidz1 with four drives is statistically safer than two mirrors as long as you swap the failed drive quickly. And on that subject, it's good to have a spare slot in the array for the replacement drive.
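By way of illustration only -- the pool and device names here are invented (a pool called "tank", a dying disk at da3, and the replacement sitting in the spare bay as da8) -- that replacement looks something like:

  # the old drive stays attached in its own bay while the new one resilvers
  zpool replace tank da3 da8

  # keep an eye on the resilver
  zpool status -v tank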
Unless the old drive has died completely, doing it this way is much kinder to the remaining drives during the resilver, because ZFS can still read most of the data straight off the old drive instead of hammering the survivors to reconstruct it.

Regards, Frank.
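P.S. For anyone who wants to try the ident-light trick mentioned above, it's sesutil(8) in the 11.0 base system. The device name below is just an example:

  # see which enclosure slot each device sits in
  sesutil map

  # blink the locate LED for the bay holding da5, then turn it off again
  sesutil locate da5 on
  sesutil locate da5 off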