Date:      Mon, 20 Jun 2022 17:32:13 -0600
From:      "John Doherty" <bsdlists@jld3.net>
To:        "Alan Somers" <asomers@freebsd.org>
Cc:        freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: "spare-X" device remains after resilvering
Message-ID:  <7B7E61B8-0508-4FFD-B697-FBFB881E2B09@jld3.net>
In-Reply-To: <CAOtMX2gPZk3kW5_P_-MGZROrURqr+f3rqEDM6XooLO2v=DbgDA@mail.gmail.com>
References:  <34A91D31-1883-40AE-82F3-57B783532ED7@jld3.net> <CAOtMX2iv3g-pA=XciiFCoH-6y+=RKeJ61TnOvJm2bPNoc_WwEg@mail.gmail.com> <768F3745-D7FF-48C8-BA28-ABEB49BAFAA8@jld3.net> <CAOtMX2gPZk3kW5_P_-MGZROrURqr+f3rqEDM6XooLO2v=DbgDA@mail.gmail.com>

On Mon 2022-06-20 05:08 PM MDT -0600, <asomers@freebsd.org> wrote:

>> I don't think I can detach anything because this is all raidz2 and
>> detach only works with components of mirrors.
...

> Ahh, but you can detach in this case, because the "spare-9" vdev is
> itself a type of mirror.  Try that command.  I think it will do what
> you want, with no extra resilvering required.

Oh OK, my misunderstanding again. Not resilvering again would be great. 
For reference or if anyone else is following along, the current state 
(abbreviated) is this:

   pool: zp1
  state: DEGRADED
  ...
  config:
     NAME                       STATE     READ WRITE CKSUM
     zp1                        DEGRADED     0     0     0
       raidz2-0                 ONLINE       0     0     0
       ...
       raidz2-1                 ONLINE       0     0     0
       ...
       raidz2-2                 ONLINE       0     0     0
       ...
       raidz2-3                 DEGRADED     0     0     0
       ...
         spare-9                DEGRADED     0     0     0
           6960108738988598438  OFFLINE      0     0     0  was /dev/gpt/disk39
           gpt/disk41           ONLINE       0     0     0

And you're right, "zpool detach zp1 6960108738988598438" worked fine, so 
I now have this:

   pool: zp1
  state: DEGRADED
  ...
   NAME                     STATE     READ WRITE CKSUM
   zp1                      DEGRADED     0     0     0
     raidz2-0               ONLINE       0     0     0
       ...
     raidz2-1               ONLINE       0     0     0
       ...
     raidz2-2               ONLINE       0     0     0
       ...
     raidz2-3               DEGRADED     0     0     0
       gpt/disk30           ONLINE       0     0     0
       3343132967577870793  OFFLINE      0     0     0  was /dev/gpt/disk31
       gpt/disk32           ONLINE       0     0     0
       gpt/disk33           ONLINE       0     0     0
       gpt/disk34           ONLINE       0     0     0
       gpt/disk35           ONLINE       0     0     0
       gpt/disk36           ONLINE       0     0     0
       gpt/disk37           ONLINE       0     0     0
       gpt/disk38           ONLINE       0     0     0
       gpt/disk41           ONLINE       0     0     0
     spares
       gpt/disk42             AVAIL
       gpt/disk43             AVAIL
       gpt/disk44             AVAIL

I thought I had tried that, but obviously not. There is now no remnant 
of what was gpt/disk39, and gpt/disk41 is a normal, permanent member of 
the raidz2-3 vdev, no longer listed as a configured spare. Perfect.

The pool is still degraded because there is still another offline device 
that needs to be replaced. To fix that, I can do this:

# zpool replace zp1 3343132967577870793 gpt/disk42

Wait for the resilver to finish, and then do:

# zpool detach zp1 3343132967577870793

After that, all the vdevs and the pool itself should be back in ONLINE 
status (and gpt/disk42 will no longer be listed as an available spare).
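For anyone following along, the replace-then-detach sequence above could be scripted roughly like this. This is just a sketch based on the commands in this thread, not something I've run end to end; the polling loop assumes "resilver in progress" appears in the zpool status scan line while a resilver is running, and the pool name, GUID, and spare name are the ones from my setup.

```shell
#!/bin/sh
# Sketch: replace an offline device with a hot spare, wait for the
# resilver to complete, then detach the old device so the spare
# becomes a permanent member of the vdev.
# Values below are from this thread; adjust for your own pool.
POOL=zp1
OLD_GUID=3343132967577870793   # GUID of the offline device ("was /dev/gpt/disk31")
NEW_DEV=gpt/disk42             # configured spare to promote

zpool replace "$POOL" "$OLD_GUID" "$NEW_DEV" || exit 1

# Poll until the resilver finishes before detaching the old device.
while zpool status "$POOL" | grep -q 'resilver in progress'; do
    sleep 60
done

zpool detach "$POOL" "$OLD_GUID"
```

Running the two zpool commands by hand and just watching "zpool status" works equally well; the loop only saves you from detaching too early.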

This is great. You've been a big help; I can't thank you enough.



