Date: Wed, 21 Jul 2010 02:43:11 -0400 (EDT)
From: Charles Sprickman <spork@bway.net>
To: alan bryan <alanbryan1234@yahoo.com>
Cc: freebsd-stable <freebsd-stable@freebsd.org>, Dan Langille <dan@langille.org>
Subject: Re: Problems replacing failing drive in ZFS pool
Message-ID: <alpine.OSX.2.00.1007210227100.33454@hotlap.local>
In-Reply-To: <578438.38753.qm@web50502.mail.re2.yahoo.com>
References: <578438.38753.qm@web50502.mail.re2.yahoo.com>
On Tue, 20 Jul 2010, alan bryan wrote:

>
> --- On Mon, 7/19/10, Dan Langille <dan@langille.org> wrote:
>
>> From: Dan Langille <dan@langille.org>
>> Subject: Re: Problems replacing failing drive in ZFS pool
>> To: "Freddie Cash" <fjwcash@gmail.com>
>> Cc: "freebsd-stable" <freebsd-stable@freebsd.org>
>> Date: Monday, July 19, 2010, 7:07 PM
>> On 7/19/2010 12:15 PM, Freddie Cash wrote:
>>> On Mon, Jul 19, 2010 at 8:56 AM, Garrett Moore <garrettmoore@gmail.com> wrote:
>>>> So you think it's because when I switch from the old disk to the new
>>>> disk, ZFS doesn't realize the disk has changed, and thinks the data
>>>> is just corrupt now?  Even if that happens, shouldn't the pool still
>>>> be available, since it's RAIDZ1 and only one disk has gone away?
>>>
>>> I think it's because you pull the old drive, boot with the new drive,
>>> the controller re-numbers all the devices (ie da3 is now da2, da2 is
>>> now da1, da1 is now da0, da0 is now da6, etc), and ZFS thinks that all
>>> the drives have changed, thus corrupting the pool.  I've had this
>>> happen on our storage servers a couple of times before I started using
>>> glabel(8) on all our drives (dead drive on RAID controller, remove
>>> drive, reboot for whatever reason, all device nodes are renumbered,
>>> everything goes kablooey).
>>
>> Can you explain a bit about how you use glabel(8) in conjunction with
>> ZFS?  If I can retrofit this into an existing ZFS array to make things
>> easier in the future...
>>
>> 8.0-STABLE #0: Fri Mar 5 00:46:11 EST 2010
>>
>> ]# zpool status
>>   pool: storage
>>  state: ONLINE
>>  scrub: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         storage     ONLINE       0     0     0
>>           raidz1    ONLINE       0     0     0
>>             ad8     ONLINE       0     0     0
>>             ad10    ONLINE       0     0     0
>>             ad12    ONLINE       0     0     0
>>             ad14    ONLINE       0     0     0
>>             ad16    ONLINE       0     0     0
>>
>>> Of course, always have good backups. ;)
>>
>> In my case, this ZFS array is the backup. ;)
>>
>> But I'm setting up a tape library, real soon now....
>>
>> --
>> Dan Langille - http://langille.org/
>> _______________________________________________
>> freebsd-stable@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
>> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
>
> Dan,
>
> Here's how to do it after the fact:
>
> http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2009-07/msg00623.html

Two things:

- What's the preferred labelling method for disks that will be used with
  ZFS these days?  geom_label or GPT labels?  I've been using the latter
  and I find them a little simpler.

- I think that if you are already using GPT partitioning, you can add a
  GPT label after the fact (ie: "gpart modify -i index# -l your_label
  adaX").  "gpart list" will give you the index numbers.

Charles

> --Alan Bryan
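Both labelling approaches discussed above can be sketched in shell. This is a hypothetical dry run, not taken from the thread: the pool name, device names, partition index, and labels are illustrative assumptions, and the commands are printed rather than executed (drop the "echo"s to run them for real on a FreeBSD box). It shows glabel(8) labelling when building a fresh raidz1 pool, and the after-the-fact GPT labelling via gpart(8) mentioned above.

```shell
#!/bin/sh
# Dry-run sketch only: prints each command instead of running it.
# Pool name, devices, index, and labels below are assumptions.

# Approach 1: glabel(8) labels for a fresh raidz1 pool.  The label is
# stored on the disk itself, so the pool survives device renumbering.
glabel_plan() {
    pool=$1; shift
    i=0; vdevs=""
    for disk in "$@"; do
        echo "glabel label disk$i /dev/$disk"
        vdevs="$vdevs label/disk$i"
        i=$((i + 1))
    done
    echo "zpool create $pool raidz1$vdevs"
}

# Approach 2: retrofit a GPT label onto an existing partition with
# gpart(8), then re-import the pool so it picks up the /dev/gpt/* name.
gpt_retrofit_plan() {
    pool=$1; dev=$2; index=$3; label=$4
    echo "gpart list $dev"                        # find the partition index
    echo "gpart modify -i $index -l $label $dev"  # attach label; data untouched
    echo "zpool export $pool"
    echo "zpool import -d /dev/gpt $pool"         # vdev now shows as gpt/$label
}

glabel_plan storage ad8 ad10 ad12 ad14 ad16
gpt_retrofit_plan storage ada3 2 disk3
```

Either way, the point from the thread stands: once the vdevs are named by label rather than by daX/adX number, pulling a drive and letting the controller renumber everything no longer confuses ZFS.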