Date: Mon, 04 Mar 2013 10:28:27 +0100
From: Peter Maloney <peter.maloney@brockmann-consult.de>
To: freebsd-fs@freebsd.org
Subject: Re: benefit of GEOM labels for ZFS, was Hard drive device names... serial numbers
Message-ID: <5134693B.30408@brockmann-consult.de>
In-Reply-To: <512FE773.3060903@physics.umn.edu>
References: <512FE773.3060903@physics.umn.edu>
I just use zpool offline, then dd to read the disk, and pull the one that
blinks. :) Once the disk is offlined, it won't ever blink without me
causing it, so there's no confusion. (A rough sketch of the procedure
follows at the end of this message.) I would only use a labelling system
if I could easily label the disks on the front too, but I don't have small
enough labels... the disks have too much vent space, so I assume the
labels would just fall off, block airflow, and be a hassle. And the
servers are local, so dd isn't a problem.

On 2013-03-01 00:25, Graham Allan wrote:
> Sorry to come in late on this thread but I've been struggling with
> thinking about the same issue, from a different perspective.
>
> Several months ago we created our first "large" ZFS storage system,
> using 42 drives plus a few SSDs in one of the oft-used Supermicro
> 45-drive chassis. It has been working really nicely but has led to
> some puzzling over the best way to do some things when we build more.
>
> We made our pool using geom drive labels. Ever since, I've been
> wondering if this really gives any advantage - at least for this type
> of system. If you need to replace a drive, you don't really know which
> enclosure slot any given da device is in, and so our answer has been to
> dig around using sg3_utils commands wrapped in a bit of perl, to try
> and correlate the da device to the slot via the drive serial number.
>
> At this point, having a geom label just seems like an extra bit of
> indirection to increase my confusion :-) Although setting the geom
> label to the drive serial number might be a serious improvement...
>
> We're about to add a couple more of these shelves to the system,
> giving a total of 135 drives (although each shelf would be a separate
> pool), and given that they will be standard consumer-grade drives,
> some frequency of replacement is a given.
>
> Does anyone have any good tips on how to manage a large number of
> drives in a zfs pool like this?
>
> Thanks,
>
> Graham

--
--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------
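
For concreteness, here is a minimal sketch of the offline-and-blink
procedure, assuming a pool named "tank" and a suspect disk at da12 (both
names are hypothetical placeholders):

    # Take the suspect disk offline so ZFS stops issuing I/O to it.
    # From now on, its activity LED only blinks when we make it blink.
    zpool offline tank da12

    # Read the raw device continuously; the sustained reads make the
    # drive's activity LED blink so it can be spotted in the chassis.
    dd if=/dev/da12 of=/dev/null bs=1m

    # Interrupt dd (Ctrl-C) once the disk is found, pull and replace
    # it, then resilver onto the new drive in the same slot:
    zpool replace tank da12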
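As for Graham's question about correlating da devices with physical
drives: instead of sg3_utils wrapped in perl, the serial numbers can also
be read with camcontrol. A sketch, assuming SATA disks behind a
CAM-attached HBA:

    # Print the serial number each da device reports. "camcontrol
    # identify" shows it for ATA/SATA disks; for SAS drives,
    # "camcontrol inquiry" would be the one to parse instead.
    for d in $(sysctl -n kern.disks); do
        case "$d" in
        da*)
            printf '%s: ' "$d"
            camcontrol identify "$d" | grep 'serial number'
            ;;
        esac
    done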
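And setting the geom label to the drive serial number, as Graham
suggests, might look like this (the serial shown is made up), so that
zpool status reports the physical drive's serial directly:

    # Write a glabel carrying the drive's serial number, then give the
    # label provider to the pool; the pool then shows
    # /dev/label/<serial> instead of a bare daN name.
    glabel label WD-WMC1T1234567 /dev/da12
    zpool replace tank da12 label/WD-WMC1T1234567

One caveat: glabel stores its metadata in the provider's last sector, so
the label should be written before the disk is handed to ZFS, not after.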