From: krad <kraduk@googlemail.com>
To: Steve Bertrand
Cc: Wes Morgan, freebsd-questions@freebsd.org
Date: Sat, 9 Jan 2010 18:21:27 +0000
Subject: Re: Replacing disks in a ZFS pool
In-Reply-To: <4B47739D.1090206@ibctech.ca>
References: <4B451FE9.6040501@ibctech.ca> <4B4761E6.3000904@ibctech.ca> <4B47739D.1090206@ibctech.ca>
2010/1/8 Steve Bertrand:

> Steve Bertrand wrote:
> > krad wrote:
> >
> >>>> the idea of using this type of label instead of the disk names
> >>>> themselves.
> >>>
> >>> I personally haven't run into any bad problems using the full
> >>> device, but I suppose it could be a problem. (Side note - geom
> >>> should learn how to parse zfs labels so it could create something
> >>> like /dev/zfs/ for device nodes instead of using other trickery)
> >>>
> >>>> How should I proceed? I'm assuming something like this:
> >>>>
> >>>> - add the new 1.5TB drives into the existing, running system
> >>>> - GPT label them
> >>>> - use 'zpool replace' to replace one drive at a time, allowing
> >>>>   the pool to rebuild after each drive is replaced
> >>>> - once all four drives are complete, shut down the system, remove
> >>>>   the four original drives, and connect the four new ones where
> >>>>   the old ones were
> >>>
> >>> If you have enough ports to bring all eight drives online at once,
> >>> I would recommend using 'zfs send' rather than the replacement.
> >>> That way you'll get something like a "burn-in" on your new drives,
> >>> and I believe it will probably be faster than the replacement
> >>> process. Even on an active system, you can use a couple of
> >>> incremental snapshots and reduce the downtime to a bare minimum.
> >>>
> >> Surely it would be better to attach the drives either individually
> >> or as a matching vdev (assuming they can all run at once), then
> >> break the mirror after it's resilvered. Far less work and far less
> >> likely to miss something.
> >>
> >> What I have done with my system is label the drives up with a
> >> coloured sticker, then create a glabel for the device. I then add
> >> the glabels to the zpool. Makes it very easy to identify the drives.
> >
> > Ok.
> > Unfortunately, the box only has four SATA ports.
> >
> > Can I:
> >
> > - shut down
> > - replace a single existing drive with a new one (breaking the RAID)
> > - boot back up
> > - gpt label the new disk
> > - import the new gpt labelled disk
> > - rebuild array
> > - rinse, repeat three more times
>
> This seems to work ok:
>
> # zpool offline storage ad6
> # halt & replace disk, and start machine
> # zpool online storage ad6
> # zpool replace storage ad6
>
> I don't know enough about gpt/gpart to be able to work that into the
> mix. I would much prefer to have gpt labels as opposed to disk names,
> but alas.
>
> fwiw, can I label an entire disk (such as ad6) with gpt, without
> having to install boot blocks etc?
>
> I was hoping it would be as easy as:
>
> # gpt create -f ad6
> # gpt label -l disk1 ad6
>
> ...but it doesn't work.
>
> Neither does:
>
> # gpart create -s gpt ad6
> # gpart add -t freebsd-zfs -l disk1 ad6
>
> I'd like to do this so I don't have to manually specify a size to
> use. I just want the system to Do The Right Thing, which in this
> case, would be to just use the entire disk.
>
> Steve
>
> > If so, is there anything I should do prior to the initial drive
> > replacement, or will simulating the drive failure be ok?
> >
> > Steve
> > _______________________________________________
> > freebsd-questions@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> > To unsubscribe, send any mail to
> > "freebsd-questions-unsubscribe@freebsd.org"

glabel label red ad6

The device will appear under /dev/label. eg from my machine:

  pool: zdump
 state: ONLINE
 scrub: scrub completed after 0h31m with 0 errors on Mon Jan  4 01:54:56 2010
config:

        NAME            STATE     READ WRITE CKSUM
        zdump           ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            label/blue  ONLINE       0     0     0
            label/red   ONLINE       0     0     0
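[Editor's note: krad's glabel workflow can be sketched end to end. The device names (ad4, ad6) and label names are illustrative, chosen to match the coloured stickers he describes; this is a sketch of the approach, not the exact commands either poster ran.]

```shell
# Write a glabel to each whole disk. The label name can match the
# coloured sticker on the physical drive (hypothetical devices ad4/ad6):
glabel label blue ad4
glabel label red  ad6

# The labelled devices then appear under /dev/label/, so the pool can
# be built from stable label names instead of raw device nodes:
zpool create zdump mirror label/blue label/red
```

If a drive later moves to a different port, the label follows the disk, so the pool still finds its members by name.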
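[Editor's note: Steve's second gpart attempt is close to working. A hedged sketch, assuming a release where `gpart add` defaults to the largest free region when no explicit bounds are given (older releases may require -b/-s taken from `gpart show`); the label name disk1 and pool name storage are taken from the thread:]

```shell
# Create a GPT scheme on the bare disk; no boot blocks are required
# if the disk will only hold ZFS data:
gpart create -s gpt ad6

# Add one freebsd-zfs partition with a GPT label. Without -b/-s,
# newer gpart versions consume all free space on the disk:
gpart add -t freebsd-zfs -l disk1 ad6

# The labelled partition appears as /dev/gpt/disk1 and can be used
# in place of the raw device name:
zpool replace storage ad6 gpt/disk1
```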
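[Editor's note: Wes Morgan's incremental-snapshot suggestion earlier in the thread would look roughly like this. The pool names oldpool/newpool and snapshot names xfer1/xfer2 are hypothetical; this is a sketch of the technique, not commands from the thread.]

```shell
# Full copy first, while the system stays in service:
zfs snapshot -r oldpool@xfer1
zfs send -R oldpool@xfer1 | zfs receive -Fd newpool

# Later, during a brief quiet window with writers stopped, send only
# the changes accumulated since xfer1, keeping downtime to a minimum:
zfs snapshot -r oldpool@xfer2
zfs send -R -i xfer1 oldpool@xfer2 | zfs receive -Fd newpool
```

The second send carries only the delta between the two snapshots, so the window where the pool must be quiesced is short.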