Date:      Fri, 1 Mar 2013 17:52:20 -0500 (EST)
From:      "Lawrence K. Chen, P.Eng." <lkchen@ksu.edu>
To:        freebsd-fs@freebsd.org
Subject:   Re: benefit of GEOM labels for ZFS, was Hard drive device names... serial numbers
Message-ID:  <1602333081.21816316.1362178340105.JavaMail.root@k-state.edu>
In-Reply-To: <51310CAA.1020701@entel.upc.edu>



----- Original Message -----
> On 01/03/2013 00:25, Graham Allan wrote:
> > Sorry to come in late on this thread, but I've been struggling with
> > thinking about the same issue, from a different perspective.
> >
> > Several months ago we created our first "large" ZFS storage system,
> > using 42 drives plus a few SSDs in one of the oft-used Supermicro
> > 45-drive chassis. It has been working really nicely, but has led to
> > some puzzling over the best way to do some things when we build more.
> >
> > We made our pool using geom drive labels. Ever since, I've been
> > wondering if this really gives any advantage - at least for this type
> > of system. If you need to replace a drive, you don't really know which
> > enclosure slot any given da device is in, so our answer has been to
> > dig around using sg3_utils commands wrapped in a bit of perl, to try
> > and correlate the da device to the slot via the drive serial number.
> >
> > At this point, having a geom label just seems like an extra bit of
> > indirection to increase my confusion :-) Although setting the geom
> > label to the drive serial number might be a serious improvement...
> >
> > We're about to add a couple more of these shelves to the system,
> > giving a total of 135 drives (although each shelf would be a separate
> > pool), and given that they will be standard consumer-grade drives,
> > some frequency of replacement is a given.
> >
> > Does anyone have any good tips on how to manage a large number of
> > drives in a zfs pool like this?
> >
> >
> 
>     I don't have such a large array (about 8 or 10 drives at most),
> but I'd go with Freddie's convention. I'd also go with GPT labels
> instead of geom labels, because the former are universal.
> 
>     I'd also make sure that you can easily identify drives with LEDs,
> either by issuing commands to the disk controller (I use mfiutil to
> visually identify them) or by using ses - but you have probably
> thought of that already.
> 
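
Something like the following is roughly what that serial-number correlation looks like. This is an untested sketch: it assumes sg3_utils is installed from ports, and the awk pattern matching sg_vpd's "Unit serial number" output may need tweaking for your drives and sg3_utils version.

```shell
#!/bin/sh
# Sketch: print each da device alongside its serial number, so the
# list can be matched against enclosure-slot output from sg_ses.
for disk in /dev/da[0-9]*; do
    # sg_vpd -p sn reads the Unit Serial Number VPD page (0x80).
    serial=$(sg_vpd -p sn "$disk" 2>/dev/null \
        | awk '/serial number:/ {print $NF}')
    echo "${disk}: ${serial:-unknown}"
done
```

From there, mapping serial -> slot via `sg_ses` on the enclosure device is the fiddly part Graham's perl wrapper handles.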

I only have 15 drives (12 HDDs and 3 SSDs), but the ordering of the drives seemed to randomize on every boot. (I wonder now if the controller was doing some kind of staggered spin-up, and whether their other drivers cope with it; they provide a v1.1 driver for FreeBSD 7.2, or source to the v1.0 driver.) And then everything moved around again when I changed controllers a few times.

At one point I had resorted to putting entries in device.hints to force all the drives to keep their mappings, which caused problems elsewhere and made a mess when I added another controller. But then I changed to more meaningful GPT labels and exported and re-imported my zpools with '-d /dev/gpt', and now things are OK.
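
For anyone wanting to do the same, this is roughly the procedure; the pool name, label text, and partition/device numbers here are made up for illustration, so adjust them to your layout:

```shell
# Give the ZFS partition a meaningful GPT label. gpart modify -l
# relabels an existing partition in place; -i selects the partition
# index on the given provider. "tank" and the label are illustrative.
gpart modify -l bay03-WD1234ABCD -i 1 da3

# Export the pool, then re-import it looking only under /dev/gpt,
# so the vdevs are tracked by label regardless of da renumbering.
zpool export tank
zpool import -d /dev/gpt tank
```

After that, `zpool status` shows the vdevs as gpt/bay03-WD1234ABCD and so on, which survives controller changes and boot-order shuffles.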

L


