Date:      Fri, 30 Apr 2010 15:51:10 -0500 (CDT)
From:      Wes Morgan <morganw@chemikals.org>
To:        Freddie Cash <fjwcash@gmail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS: "Cannot replace a replacing drive"
Message-ID:  <alpine.BSF.2.00.1004301550160.32311@ibyngvyr>
In-Reply-To: <p2ib269bc571004291840m85acbbafna617db0825766fd3@mail.gmail.com>
References:  <y2pb269bc571004280847od80c334cmc3a073cd7d20a927@mail.gmail.com> <alpine.BSF.2.00.1004292005410.58938@ibyngvyr> <p2ib269bc571004291840m85acbbafna617db0825766fd3@mail.gmail.com>

On Thu, 29 Apr 2010, Freddie Cash wrote:

> On Thu, Apr 29, 2010 at 6:06 PM, Wes Morgan <morganw@chemikals.org> wrote:
>
> > On Wed, 28 Apr 2010, Freddie Cash wrote:
> >
> > > Going through the archives, I see that others have run into this
> > > issue, and managed to solve it via "zpool detach".  However, looking
> > > closely at the archived messages, all the successful tests had one
> > > thing in common:  1 drive ONLINE, 1 drive FAULTED.  If a drive is
> > > online, obviously it can be detached.  In all the cases where people
> > > have been unsuccessful at fixing this situation, 1 drive is OFFLINE,
> > > and 1 drive is FAULTED.  As is our case:
> > >
> >
> > What happened to the drive to fault it?
> >
> Am in the process of replacing 500 GB drives with 1.5 TB drives, to
> increase the available storage space in the pool (the process went
> flawlessly on the other storage server).  First 3 disks in the vdev
> replaced without issues.
>
> 4th disk turned out to be a dud.  Nothing but timeouts and read/write errors
> during the replace.  So I popped it out, put in a different 1.5 TB drive,
> glabel'd it with the same name ... and the pool went "boom".
>
> Now I'm stuck with a "label/disk04" device that can't be replaced, can't be
> offlined, can't be detached.
>
> Tried exporting the pool, importing the pool, with and without the disk in
> the system.  All kinds of variations on detach, online, offline, replace on
> the old device, the new device, the UUIDs.

Can you send the output of zpool history?
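For reference, something like the following should show every command the
pool has seen, which would help reconstruct the replace sequence ("tank" is
a placeholder; substitute your pool name):

```shell
# Long-format history including internal events, with the date, user and
# host that issued each command ("tank" is a placeholder pool name):
zpool history -il tank
```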

> I'm really hoping there's a way to recover from this, but it doesn't look
> like it.  Will probably have to destroy/recreate the pool next week, using
> the 1.5 TB drives from the get-go.

I'm sure you can still recover it. Just have some patience.
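If the usual syntax keeps failing on the label, one thing worth trying is
detaching by vdev GUID instead of device name -- zpool accepts a GUID
anywhere it accepts a device. A rough sketch (pool name and GUID below are
placeholders; read the real guid fields out of the zdb output first):

```shell
# Dump the pool's cached on-disk configuration, which lists the guid of
# every vdev, including the stuck "replacing" children
# ("tank" is a placeholder pool name):
zdb -C tank

# Then try detaching the stuck half of the replacing vdev by its GUID
# (1234567890123456789 is a placeholder -- use the guid from zdb):
zpool detach tank 1234567890123456789
```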


