Date:      Fri, 6 Sep 2024 16:02:31 -0600
From:      Alan Somers <asomers@freebsd.org>
To:        Chris Ross <cross+freebsd@distal.com>
Cc:        Wes Morgan <morganw@gmail.com>, FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   Re: Unable to replace drive in raidz1
Message-ID:  <CAOtMX2ihPqe9w+hbZ=GqOcmmN+8y9-+qyew9CfZV9ajpGHZmXA@mail.gmail.com>
In-Reply-To: <E50559CA-CC3D-45AE-82D7-172270BF4851@distal.com>
References:  <5ED5CB56-2E2A-4D83-8CDA-6D6A0719ED19@distal.com> <AC67D073-D476-41F5-AC53-F671430BB493@distal.com> <CAOtMX2h52d0vtceuwcDk2dzkH-fZW32inhk-dfjLMJxetVXKYg@mail.gmail.com> <CB79EC2B-E793-4561-95E7-D1CEEEFC1D72@distal.com> <CAOtMX2i_zFYuOnEK_aVkpO_M8uJCvGYW+SzLn3OED4n5fKFoEA@mail.gmail.com> <6A20ABDA-9BEA-4526-94C1-5768AA564C13@distal.com> <CAOtMX2jfcd43sBpHraWA=5e_Ka=hMD654m-5=boguPPbYXE4yw@mail.gmail.com> <0CF1E2D7-6C82-4A8B-82C3-A5BF1ED939CF@distal.com> <CAOtMX2hRJvt9uhctKvXO4R2tUNq9zeCEx6NZmc7Vk7fH=HO8eA@mail.gmail.com> <29003A7C-745D-4A06-8558-AE64310813EA@distal.com> <42346193-AD06-4D26-B0C6-4392953D21A3@gmail.com> <E6C615C1-E9D2-4F0D-8DC2-710BAAF10954@distal.com> <E85B00B1-7205-486D-800C-E6837780E819@gmail.com> <E93A9CA8-6705-4C26-9F33-B620A365F4BD@distal.com> <50B791D8-F0CC-431E-93B8-834D57AB3C14@gmail.com> <E50559CA-CC3D-45AE-82D7-172270BF4851@distal.com>

On Fri, Sep 6, 2024 at 3:49 PM Chris Ross <cross+freebsd@distal.com> wrote:
>
>
>
> > On Sep 6, 2024, at 17:22, Wes Morgan <morganw@gmail.com> wrote:
> >
> > The labels are helpful for fstab, but zfs doesn't need fstab. In the
> > early days of zfs on freebsd the unpartitioned device was recommended;
> > maybe that's not accurate any longer, but I still follow it for a pool
> > that contains vdevs with multiple devices (raidz).
> >
> > If you use, e.g., da0 in a pool, you cannot later replace it with a
> > labeled device of the same size; it won't have enough sectors.
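
(You can see that size cost directly with diskinfo; a minimal sketch,
with da0/da0p1 used only as hypothetical names:

  % diskinfo -v /dev/da0 /dev/da0p1

The "mediasize in bytes" reported for the partition comes out smaller
than for the raw device, because the GPT metadata and any partition
alignment are carved out of the same sectors ZFS would otherwise use.)
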
>
> The problem is shown here.  da3 was in a pool.  Then, when the system
> rebooted, da3 was the kernel’s name for a different device in a
> different pool.  Had I known then how to interact with the guid
> (status -g), I likely would’ve been fine.
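
(For the record, the guid-based form would have looked like this; a
minimal sketch, where 9127016430593660128 is the guid that shows up as
FAULTED in the "zpool status -g tank" output further down:

  % zpool status -g tank
  % zpool offline tank 9127016430593660128
  % zpool replace tank 9127016430593660128 da10

The guid names the pool member itself, so it stays valid even after a
reboot hands the daN name to a different disk.)
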
>
> >> So, I offline’d the disk-to-be-replaced at 09:40 yesterday, then I
> >> shut the system down, removed that physical device replacing it with
> >> a larger disk, and rebooted.  I suspect the “offline”s after that are
> >> me experimenting when it was telling me it couldn’t start the replace
> >> action I was asking for.
> >
> > This is probably where you made your mistake. Rebooting shifted
> > another device into da3. When you tried to offline it, you were
> > probably either targeting a device in a different raidz or one that
> > wasn't in the pool. The output of those original offline commands
> > would have been informative. You could also check dmesg and map the
> > serial numbers to device assignments to figure out what device moved
> > to da3.
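
(A sketch of that mapping step, assuming CAM da* disks and the default
boot dmesg buffer:

  % grep 'Serial Number' /var/run/dmesg.boot
  % diskinfo -s /dev/da3

diskinfo -s prints the disk ident, normally the serial number, of
whatever device currently owns the da3 name; that can then be matched
against the diskid/DISK-* labels in the glabel status output below.)
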
>
> I offline’d “da3” before I rebooted.  After rebooting, I tried the
> obvious and correct (I thought) “zpool replace da3 da10” only to get
> the error I’ve been getting since.  Again, had I known how to use the
> guid for the device that used to be da3 but now isn’t, that might’ve
> worked.  I can’t know now.
>
> Then, while trying to fix the problem, I likely made it worse by
> trying to interact with da3, which in the pool’s brain was a missing
> disk in raidz1-0, but the kernel also knew /dev/da3 to be a working
> disk (that happened to be one of the drives in raidz1-1).  I feel that
> zfs did something wrong somewhere if it _ever_ tried to talk to
> /dev/da3 when I said “da3” after I rebooted and it found that device
> to be part of raidz1-1, but.
>
>
> > Sounds about right. In another message it seemed like the pool had
> > started an autoreplace. So I assume you have zfsd enabled? That is
> > what issues the replace command. Strange that it is not anywhere in
> > the pool history. There should be syslog entries for any actions it
> > took.
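
(Two places to look for that, assuming the default syslog setup:

  % zpool history -i tank | tail
  % grep -i zfsd /var/log/messages

The -i flag includes internally logged events, so a zfsd-initiated
spare attach or replace should appear in one of the two.)
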
>
> I don’t think so.  That message about some “already in replacing/spare
> config” came up before anything else.  At that point, I’d never had a
> spare in this pool, and there was no replace shown in zpool status.
>
> > In your case, it appears that you had two missing devices - the
> > original "da3" that was physically removed, and the new da3 that you
> > forced offline. You added da10 as a spare, when what you needed to do
> > was a replace. Spare devices do not auto-replace without zfsd running
> > and autoreplace set to on.
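
(A quick way to confirm both halves of that:

  % service zfsd status
  % zpool get autoreplace tank

Hot spares are only attached automatically when zfsd is running, and
the autoreplace property additionally allows a new disk inserted into
the same physical slot to be used without any zpool command at all.)
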
>
> I did offline “da3” a couple of times, again thinking I was working
> with what zpool showed as “da3”.  If it did anything with /dev/da3
> there, then I think that may be a bug.  Or, at least, something that
> should be made more clear.  It _didn’t_ offline the
> diskid/DISK-K1GMBN9D from raidz1-1, which is what the kernel has at
> da3.  So.
>
> > This should all be reported in zpool status. In your original
> > message, there is no sign of a replacement in progress or a spare
> > device, assuming you didn't omit something. If the pool is only
> > showing that a single device is missing, and that device is to be
> > replaced by da10, zero out the first and last sectors (I think a zfs
> > label is 128k?) to wipe out any labels and use the replace command,
> > not spare, e.g. "zpool replace tank da3 da10", or use the missing
> > guid as suggested elsewhere. This should work based on the
> > information provided.
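
(On the label-wiping step: each ZFS label is actually 256K, with two
copies at the front of the device and two more at the end, so zpool's
own labelclear is the easier way to wipe them.  A sketch, assuming da10
is not attached to any imported pool:

  % sudo zpool labelclear -f /dev/da10
  % sudo zpool replace tank da3 da10

That replace form matches what is attempted below.)
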
>
> I’ve never seen a replacement going on, and I have had the new disk
> “da10” as a spare a couple of times while testing.  But it wasn’t left
> there after I determined that that also didn’t let me get it replaced
> into the raidz.
>
> And, that attempt to replace is what I’ve tried many times, with
> multiple IDs.  I have cleared the label on da10 multiple times.  That
> replace doesn’t work, giving this error message in all cases.
>
>             - Chris
>
>
> % glabel status
>                                       Name  Status  Components
>             diskid/DISK-BTWL503503TW480QGN     N/A  ada0
>                                  gpt/l2arc     N/A  ada0p1
> gptid/9d00849e-0b82-11ec-a143-84b2612f2c38     N/A  ada0p1
>                       diskid/DISK-K1GMBN9D     N/A  da3
>                       diskid/DISK-3WJDHJ2J     N/A  da6
>                       diskid/DISK-3WK3G1KJ     N/A  da7
>                       diskid/DISK-3WJ7ZMMJ     N/A  da8
>                       diskid/DISK-K1GMEDMD     N/A  da4
>                       diskid/DISK-K1GMAX1D     N/A  da5
>                                ufs/drive12     N/A  da9
>                       diskid/DISK-ZGG0A2PA     N/A  da10
>
> % zpool status tank
>   pool: tank
>  state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
>         Sufficient replicas exist for the pool to continue functioning in a
>         degraded state.
> action: Replace the faulted device, or use 'zpool clear' to mark the device
>         repaired.
>   scan: scrub repaired 0B in 17:14:03 with 0 errors on Fri Sep  6 09:08:34 2024
> config:
>
>         NAME                      STATE     READ WRITE CKSUM
>         tank                      DEGRADED     0     0     0
>           raidz1-0                DEGRADED     0     0     0
>             da3                   FAULTED      0     0     0  external device fault
>             da1                   ONLINE       0     0     0
>             da2                   ONLINE       0     0     0
>           raidz1-1                ONLINE       0     0     0
>             diskid/DISK-K1GMBN9D  ONLINE       0     0     0
>             diskid/DISK-K1GMEDMD  ONLINE       0     0     0
>             diskid/DISK-K1GMAX1D  ONLINE       0     0     0
>           raidz1-2                ONLINE       0     0     0
>             diskid/DISK-3WJDHJ2J  ONLINE       0     0     0
>             diskid/DISK-3WK3G1KJ  ONLINE       0     0     0
>             diskid/DISK-3WJ7ZMMJ  ONLINE       0     0     0
>
> errors: No known data errors
>
> % sudo zpool replace tank da3 da10
> Password:
> cannot replace da3 with da10: already in replacing/spare config; wait for completion or use 'zpool detach'
>
> % zpool status -g tank
>   pool: tank
>  state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
>         Sufficient replicas exist for the pool to continue functioning in a
>         degraded state.
> action: Replace the faulted device, or use 'zpool clear' to mark the device
>         repaired.
>   scan: scrub repaired 0B in 17:14:03 with 0 errors on Fri Sep  6 09:08:34 2024
> config:
>
>         NAME                      STATE     READ WRITE CKSUM
>         tank                      DEGRADED     0     0     0
>           16506780107187041124    DEGRADED     0     0     0
>             9127016430593660128   FAULTED      0     0     0  external device fault
>             4094297345166589692   ONLINE       0     0     0
>             17850258180603290288  ONLINE       0     0     0
>           5104119975785735782     ONLINE       0     0     0
>             6752552549817423876   ONLINE       0     0     0
>             9072227575611698625   ONLINE       0     0     0
>             13778609510621402511  ONLINE       0     0     0
>           11410204456339324959    ONLINE       0     0     0
>             1083322824660576293   ONLINE       0     0     0
>             12505496659970146740  ONLINE       0     0     0
>             11847701970749615606  ONLINE       0     0     0
>
> errors: No known data errors
>
> % sudo zpool replace tank 9127016430593660128 da10
> cannot replace 9127016430593660128 with da10: already in replacing/spare config; wait for completion or use 'zpool detach'
>
> % sudo zpool replace tank 9127016430593660128 diskid/DISK-ZGG0A2PA
> cannot replace 9127016430593660128 with diskid/DISK-ZGG0A2PA: already in replacing/spare config; wait for completion or use 'zpool detach'

Another user reports the same error message.  In their case, it's an
inappropriate error message from /sbin/zpool.  Can you try a "zpool
status -v" and "diskinfo -f /dev/da10"?  That will show you if you
have the same problem.  If your pool has a 512B block size but the new
disk is 4kn, then you cannot use it as a replacement.

https://github.com/openzfs/zfs/issues/14730
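
For reference, the fields to look at would be something like this
(hypothetical device, output trimmed):

  % diskinfo -v /dev/da10
  /dev/da10
          512             # sectorsize
          ...
          4096            # stripesize

A drive reporting a 4096-byte sectorsize is 4Kn and cannot go into a
vdev created with 512-byte sectors (ashift=9); one reporting 512 with a
4096 stripesize is 512e and should still be accepted, just slowly.
Something like "zdb -C tank | grep ashift" will show what the existing
vdevs expect.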


