From: Peter Maloney <peter.maloney@brockmann-consult.de>
To: freebsd-fs@freebsd.org
Date: Mon, 04 Mar 2013 10:28:27 +0100
Subject: Re: benefit of GEOM labels for ZFS, was Hard drive device names... serial numbers

I just use zpool offline, then dd to read the disk, and pull the one
that blinks. :) Offlining it means the disk won't ever blink without me
causing it, so there's no confusion.

I would only use a labelling system if I could easily label the disks
on the front too, but I don't have small enough labels... the disks
have too much vent space, so I assume the labels would just fall off,
block airflow, and be a hassle. And the servers are local, so dd isn't
a problem.
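Concretely, the locate-by-blink routine is just something like this (a
sketch; the pool name "tank" and the disk "da5" are placeholders for
whatever zpool status shows):

    # offline the disk so ZFS stops touching it and its LED goes quiet
    zpool offline tank da5

    # read the raw device in the background to make the LED blink
    dd if=/dev/da5 of=/dev/null bs=1m &

    # go pull the drive that's blinking, then stop the read
    kill %1

And if you did want to tie geom labels to serial numbers, as suggested
below, something like this should do it (an untested sketch: camcontrol
inquiry -S prints just the serial number, and the label has to be
written before the disk joins the pool, so that ZFS sees the slightly
smaller label/ provider instead of the raw disk):

    # replacement disk shows up as da9 here (a placeholder); label it
    # with its own serial number, then swap it in for the failed da5
    serial=$(camcontrol inquiry da9 -S)
    glabel label "$serial" /dev/da9
    zpool replace tank da5 "label/$serial"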
On 2013-03-01 00:25, Graham Allan wrote:
> Sorry to come in late on this thread, but I've been struggling with
> the same issue, from a different perspective.
>
> Several months ago we created our first "large" ZFS storage system,
> using 42 drives plus a few SSDs in one of the oft-used Supermicro
> 45-drive chassis. It has been working really nicely, but it has led
> to some puzzling over the best way to do some things when we build
> more.
>
> We made our pool using geom drive labels. Ever since, I've been
> wondering if this really gives any advantage - at least for this type
> of system. If you need to replace a drive, you don't really know
> which enclosure slot any given da device is in, so our answer has
> been to dig around using sg3_utils commands wrapped in a bit of perl,
> to try to correlate the da device to the slot via the drive serial
> number.
>
> At this point, having a geom label just seems like an extra bit of
> indirection to increase my confusion :-) Although setting the geom
> label to the drive serial number might be a serious improvement...
>
> We're about to add a couple more of these shelves to the system,
> giving a total of 135 drives (although each shelf would be a separate
> pool), and given that they will be standard consumer-grade drives,
> some frequency of replacement is a given.
>
> Does anyone have any good tips on how to manage a large number of
> drives in a zfs pool like this?
>
> Thanks,
>
> Graham

-- 
--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.maloney@brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------