From: Michael DeMan <freebsd@deman.com>
Subject: Re: RFC: Suggesting ZFS "best practices" in FreeBSD - mapping logical to physical drives
Date: Tue, 22 Jan 2013 18:06:27 -0800
To: Freddie Cash
Cc: FreeBSD Filesystems <freebsd-fs@freebsd.org>, Scott Long
Message-Id: <16E9D784-D2F2-4C55-9138-907BF3957CE8@deman.com>
In-Reply-To: <314B600D-E8E6-4300-B60F-33D5FA5A39CF@sarenet.es>

Hi,

We have been able to mitigate this problem effectively, and we have tested the fix rigorously.
I am fussy: when a disk drive dies, I want to be sure the data-center
technician removes and replaces exactly the right disk. And if the
machine reboots with a disk removed or added, everything should still
look normal. I think this is another case where standard solutions
exist but are simply not documented.

What we did was wire down the device names in /boot/device.hints. On
the machine where we tested this rigorously, /boot/device.hints
contains the following. It is written for the particular controllers
noted in the comments, but I believe the same approach works for any
SATA or SAS controller:

# OAIMFD 2011.04.13 adding this to force ordering on adaX disks
# dev.mvs.0.%desc: Marvell 88SX6081 SATA controller
# dev.mvs.1.%desc: Marvell 88SX6081 SATA controller
hint.scbus.0.at="mvsch0"
hint.ada.0.at="scbus0"
hint.scbus.1.at="mvsch1"
hint.ada.1.at="scbus1"
hint.scbus.2.at="mvsch2"
hint.ada.2.at="scbus2"
hint.scbus.3.at="mvsch3"
hint.ada.3.at="scbus3"
...and so on up to ada14...

Inserting disks into bays that were previously empty and rebooting, or
removing disks that did exist and rebooting - it all 'just works'.

On Jan 22, 2013, at 3:02 PM, Freddie Cash wrote:

> On Jan 22, 2013 7:04 AM, "Warren Block" wrote:
>>
>> On Tue, 22 Jan 2013, Borja Marcos wrote:
>>
>>> 1- Dynamic disk naming -> We should use static naming (GPT labels,
>>> for instance)
>>>
>>> ZFS was born on a system with static device naming (Solaris). When
>>> you plug in a disk, it gets a fixed name. As far as I know, at
>>> least from my experience with Sun boxes, c1t3d12 is always c1t3d12.
>>> FreeBSD's dynamic naming can be very problematic.
>>>
>>> For example, imagine that I have 16 disks, da0 to da15. One of
>>> them, say da5, dies. When I reboot the machine, all the devices
>>> from da6 to da15 will be renamed to the device number minus one.
>>> That is potential for trouble, as a minimum.
>>>
>>> After several different installations, I have come to prefer
>>> static naming.
>>> Doing it with some care can really help to make pools portable
>>> from one system to another. I create a GPT partition on each drive
>>> and label it with a readable name. For example, imagine I label
>>> each big partition (which takes the whole available space) as
>>> pool-vdev-disk, e.g. pool-raidz1-disk1.
>>
>> I'm a proponent of using various types of labels, but my impression
>> after a recent experience was that the ZFS metadata was enough to
>> identify the drives even if they were moved around. That is, bare
>> ZFS metadata on a drive with no other partitioning or labels.
>>
>> Is that incorrect?
>
> The ZFS metadata on disk allows you to move disks around in a system
> and still import the pool, correct.
>
> But the ZFS metadata will not help you figure out which disk, in
> which bay, of which drive shelf just died and needs to be replaced.
>
> That's where glabel, GPT labels, and the like come in handy. They
> are for the sysadmin, not for the system itself.
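For anyone who wants to confirm that the wiring in /boot/device.hints
actually took effect after a reboot, something like the following
should do it. This is only a sketch: the exact controller names and
output differ per machine, and the grep pattern assumes an mvs(4)
controller as in the example above.

```shell
# List every CAM device with the scbus it is attached to; with the
# hints in place, ada0 should always sit on scbus0, ada1 on scbus1, etc.
camcontrol devlist -v

# Cross-check which physical controller owns each mvs channel
# (the %desc lines are what the comments in device.hints refer to).
sysctl dev.mvs | grep '%desc'
```

Running this once before and once after editing device.hints makes it
easy to see whether a hot-swapped or missing disk would have shuffled
the adaX numbering.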
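Borja's per-disk labeling scheme can be set up roughly as follows.
This is a sketch, not a recipe: "tank", "da5", and the label
"pool-raidz1-disk1" are placeholder names for illustration, and the
old-device argument to zpool replace must be whatever name or GUID
"zpool status" reports for the failed member.

```shell
# Give the (replacement) drive a GPT scheme and a single whole-disk
# freebsd-zfs partition, labeled so the name encodes pool/vdev/slot.
gpart create -s gpt da5
gpart add -t freebsd-zfs -l pool-raidz1-disk1 da5

# Swap the failed member in by its stable GPT label rather than the
# dynamic daX name, so future reboots cannot shuffle it.
zpool replace tank <old-device> gpt/pool-raidz1-disk1
```

The payoff is exactly what was described upthread: "zpool status"
then shows gpt/pool-raidz1-disk1 instead of a bare daX, which tells
the technician which bay to pull.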