From owner-freebsd-fs@FreeBSD.ORG Thu Feb 28 23:49:27 2013
Message-ID: <512FE773.3060903@physics.umn.edu>
Date: Thu, 28 Feb 2013 17:25:39 -0600
From: Graham Allan <allan@physics.umn.edu>
To: freebsd-fs@freebsd.org
Subject: benefit of GEOM labels for ZFS, was Hard drive device names... serial numbers

Sorry to come in late on this thread, but I've been struggling with the same
issue from a different perspective.

Several months ago we built our first "large" ZFS storage system, using 42
drives plus a few SSDs in one of the oft-used Supermicro 45-drive chassis. It
has been working really nicely, but it has left us puzzling over the best way
to do some things when we build more.

We created our pool using GEOM drive labels. Ever since, I've been wondering
whether that really gives any advantage, at least for this type of system. If
you need to replace a drive, you don't know which enclosure slot any given da
device sits in, so our answer has been to dig around with sg3_utils commands
wrapped in a bit of perl to correlate the da device to the slot via the drive
serial number. At this point, a GEOM label just seems like an extra layer of
indirection to add to my confusion :-) Although setting the GEOM label to the
drive serial number might be a serious improvement...

We're about to add a couple more of these shelves to the system, for a total
of 135 drives (although each shelf would be a separate pool), and given that
they will be standard consumer-grade drives, some frequency of replacement is
a given.

Does anyone have good tips on how to manage a large number of drives in a ZFS
pool like this?

Thanks,

Graham

-- 
-------------------------------------------------------------------------
Graham Allan
School of Physics and Astronomy - University of Minnesota
-------------------------------------------------------------------------
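
P.S. For reference, what our perl wrapper boils down to is roughly the sketch
below. This is illustrative only: the device glob, the partition-skipping
case, and the commented-out glabel step (the "label = serial number" idea
from above) are assumptions for the sake of the example, not what we
actually run.

  #!/bin/sh
  # Sketch only: print each whole-disk da device alongside its serial
  # number, and (commented out) write a GEOM label named after the
  # serial so the pool would show label/<serial> instead of daNN.

  for disk in /dev/da[0-9]*; do
      case "$disk" in
          *p[0-9]*|*s[0-9]*) continue ;;   # skip partitions/slices
      esac
      dev=${disk#/dev/}
      # camcontrol's -S flag prints just the unit serial number
      serial=$(camcontrol inquiry "$dev" -S 2>/dev/null)
      [ -n "$serial" ] || continue
      printf '%s\t%s\n' "$dev" "$serial"
      # glabel label "$serial" "$disk"    # the "label = serial" idea
  done

It still doesn't tell you which physical slot the drive is in, of course -
for that we're back to sg_ses and the enclosure's element descriptors - but
at least the serial on the sled would match what zpool status shows.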