Date: Sun, 15 Nov 2009 01:14:36 +0100
From: Stefan Bethke <stb@lassitu.de>
To: "Larry Rosenman" <ler@lerctr.org>
Cc: freebsd-stable@freebsd.org
Subject: Re: what's best practice for ZFS on a whole disc these days?
Message-ID: <1A7FA148-8F7D-43DB-B28B-5346004C4F45@lassitu.de>
In-Reply-To: <2aed0fc0af06c5fb17495e8925214ac7.squirrel@webmail.lerctr.org>
References: <E1N2NcA-0004c4-CE@dilbert.ticketswitch.com> <200910271902.19618.doconnor@gsoft.com.au> <20091027104316.dsp7kikkoogo80gw@www.goldsword.com> <200910281112.06300.doconnor@gsoft.com.au> <493EE416-62CE-4EA4-81A7-8F802789D5DD@lassitu.de> <4AFF40B1.3040705@gsoft.com.au> <2aed0fc0af06c5fb17495e8925214ac7.squirrel@webmail.lerctr.org>
Am 15.11.2009 um 00:58 schrieb Larry Rosenman:

>>> On Wed Jul 15 at 16:22, Freddie Cash <fjwcash at gmail.com> wrote:
>>> Yep. It's as simple as:
>>>
>>> * label all the drives using glabel, while they're still attached to
>>>   the pool
>>> * use "zpool replace pool ad4 label/disk01" to replace 1 drive
>>> * wait for it to resilver
>>> * use "zpool replace pool ad6 label/disk02" to replace the next drive
>>> * repeat the resilver and replace until all the devices are replaced
>>>
>>> This is what I did to one of our servers. Works quite nicely.
>>>
>>> There's no need to detach anything.
>>
>> I'll try it when I get home and see how it goes.
>
> When I try that, I get:
> # glabel label disk01 /dev/ada1
> glabel: Can't store metadata on /dev/ada1: Operation not permitted.

There are some caveats you need to consider before attempting this. Most importantly, glabel re-uses the last block of the disk/partition to store the label. In many cases, the filesystem (UFS, ZFS) allocates blocks in larger chunks (8K or larger), so the last few blocks are unused and can be repurposed. But there's no guarantee, so you might damage the filesystem by labeling the device. I don't understand enough to say definitively how to determine whether the last block is available or not, so make sure you have a backup before trying.

Secondly, my limited experience shows that both GEOM and ZFS can get confused about devices/partitions/geoms that start on the same block as others. How these are picked up by GEOM and/or ZFS in their probing depends on the order, and it wasn't always obvious to me how that worked. In one case, I couldn't get GEOM to pick up the /dev/label entry, since it removed the label entry as soon as the physical device node was probed.
I've since come to the conclusion that labelled GPT partitions are the way forward, and now that booting off ZRAID pools on GPT partitions works, there's little to be said against them, IMO.

Finally, if you want to label the existing disks, you probably need to take the pool offline for the labelling step, using zpool export, so the devices are no longer mounted.

Stefan

--
Stefan Bethke <stb@lassitu.de>   Fon +49 151 14070811
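[Editorial sketch of the GPT-label approach described above, with assumed device and label names (ada1, disk01, pool); these commands modify disks and are only meaningful on real FreeBSD hardware, so treat this as an outline, not a recipe:]

```shell
# Assumed names; adjust for your system.
disk=ada1      # hypothetical replacement disk
label=disk01   # hypothetical partition label

# Create a GPT scheme and a labelled freebsd-zfs partition
# spanning the disk:
gpart create -s gpt "$disk"
gpart add -t freebsd-zfs -l "$label" "$disk"

# The label appears as a stable device node under /dev/gpt/,
# which the pool can reference instead of the raw disk:
zpool replace pool "$disk" "gpt/$label"
```

Unlike glabel, the GPT label lives in the partition table itself, so it does not steal a sector from the filesystem's space.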
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?1A7FA148-8F7D-43DB-B28B-5346004C4F45>