From owner-freebsd-questions@FreeBSD.ORG Fri Jan 8 18:04:48 2010
Message-ID: <4B47739D.1090206@ibctech.ca>
Date: Fri, 08 Jan 2010 13:04:13 -0500
From: Steve Bertrand <steve@ibctech.ca>
To: krad
Cc: Wes Morgan, freebsd-questions@freebsd.org
Subject: Re: Replacing disks in a ZFS pool
List-Id: User questions

Steve Bertrand wrote:
> krad wrote:
>
>>>> the idea of using this type of label instead of the disk names
>>>> themselves.
>>>
>>> I personally haven't run into any bad problems using the full device, but
>>> I suppose it could be a problem. (Side note - geom should learn how to
>>> parse zfs labels so it could create something like /dev/zfs/ for
>>> device nodes instead of using other trickery)
>>>
>>>> How should I proceed?
>>>> I'm assuming something like this:
>>>>
>>>> - add the new 1.5TB drives into the existing, running system
>>>> - GPT label them
>>>> - use 'zpool replace' to replace one drive at a time, allowing the pool
>>>>   to rebuild after each drive is replaced
>>>> - once all four drives are complete, shut down the system, remove the
>>>>   four original drives, and connect the four new ones where the old ones
>>>>   were
>>>
>>> If you have enough ports to bring all eight drives online at once, I would
>>> recommend using 'zfs send' rather than the replacement. That way you'll
>>> get something like a "burn-in" on your new drives, and I believe it will
>>> probably be faster than the replacement process. Even on an active system,
>>> you can use a couple of incremental snapshots and reduce the downtime to a
>>> bare minimum.
>>
>> Surely it would be better to attach the drives either individually or as a
>> matching vdev (assuming they can all run at once), then break the mirror
>> after it has resilvered. Far less work, and far less likely to miss
>> something.
>>
>> What I have done with my system is label the drives with a coloured
>> sticker, then create a glabel for each device. I then add the glabels to
>> the zpool. That makes it very easy to identify the drives.
>
> Ok. Unfortunately, the box only has four SATA ports.
>
> Can I:
>
> - shut down
> - replace a single existing drive with a new one (breaking the RAID)
> - boot back up
> - gpt label the new disk
> - import the new gpt labelled disk
> - rebuild the array
> - rinse, repeat three more times

This seems to work ok:

# zpool offline storage ad6
# (halt, replace the disk, and start the machine back up)
# zpool online storage ad6
# zpool replace storage ad6

I don't know enough about gpt/gpart to be able to work that into the mix. I
would much prefer to have gpt labels as opposed to disk names, but alas.

fwiw, can I label an entire disk (such as ad6) with gpt, without having to
install boot blocks etc?
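[Editor's sketch] On systems where gpart(8) sizes the partition automatically when -s is omitted (an assumption; some releases require an explicit size), the whole-disk labelling asked about above could look like the following dry run. The device ad6, the label disk1, and the pool storage come from this thread; the run function only prints each command instead of executing it, so nothing is touched:

```shell
#!/bin/sh
# Dry-run sketch: print the commands rather than run them.
run() { echo "+ $*"; }

DISK=ad6        # the disk being labelled (from the thread)
LABEL=disk1     # the desired gpt label (from the thread)

run gpart create -s gpt "${DISK}"                   # write a fresh GPT to the whole disk
run gpart add -t freebsd-zfs -l "${LABEL}" "${DISK}" # one partition spanning the free space
run zpool replace storage "${DISK}" "gpt/${LABEL}"   # resilver onto the labelled device
```

Dropping the run prefix would execute the commands for real; no boot blocks are needed if the disk is purely a pool member.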
I was hoping it would be as easy as:

# gpt create -f ad6
# gpt label -l disk1 ad6

...but it doesn't work. Neither does:

# gpart create -s gpt ad6
# gpart add -t freebsd-zfs -l disk1 ad6

I'd like to do this so I don't have to manually specify a size to use. I just
want the system to Do The Right Thing, which in this case would be to use the
entire disk.

Steve

> If so, is there anything I should do prior to the initial drive
> replacement, or will simulating the drive failure be ok?
>
> Steve
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"
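[Editor's sketch] Putting the thread's pieces together, the whole rinse-and-repeat cycle for a four-disk pool could be scripted as a dry run. The device names ad4/ad6/ad8/ad10 and the gpt/diskN labels are assumptions (only ad6 and the pool name storage appear in the thread), and the run function echoes each command rather than executing it:

```shell
#!/bin/sh
# Dry-run of the per-disk replacement cycle: offline the old disk, swap it
# physically, GPT-label the new one, then resilver before moving on.
run() { echo "+ $*"; }

i=0
for dev in ad4 ad6 ad8 ad10; do          # assumed device names, one per SATA port
    run zpool offline storage "${dev}"   # take the old disk out of the pool
    run shutdown -p now                  # power off and swap the physical drive here
    run gpart create -s gpt "${dev}"     # put a GPT on the new disk
    run gpart add -t freebsd-zfs -l "disk${i}" "${dev}"
    run zpool replace storage "${dev}" "gpt/disk${i}"
    # wait for 'zpool status' to show the resilver finished before the next disk
    i=$((i + 1))
done
```

Each pass must fully resilver before the next disk is pulled, otherwise the pool loses redundancy on two members at once.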