Date:      Wed, 14 Jul 2021 15:21:34 -0600
From:      Alan Somers <asomers@freebsd.org>
To:        Dave Baukus <daveb@spectralogic.com>
Cc:        "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: ZFS: zpool status on degraded pools (FreeBSD12 vs FreeBSD13)
Message-ID:  <CAOtMX2iojM+jJc3vcdP8qKpvKNwhU9m9EmZpr2V3OYr-U9-Kpw@mail.gmail.com>
In-Reply-To: <deb112a1-02d6-098b-347e-448db6714a52@spectralogic.com>
References:  <deb112a1-02d6-098b-347e-448db6714a52@spectralogic.com>


On Wed, Jul 14, 2021 at 3:10 PM Dave Baukus <daveb@spectralogic.com> wrote:

> I'm seeking comments on the following 2 differences in the behavior of
> ZFS. The first I consider a bug; the second could be a bug or a
> conscious choice:
>
> 1) Given a pool of 2 disks and one extra disk exactly the same as the 2
> pool members (no ZFS labels on the extra disk): power the box off,
> replace one pool disk with the extra disk in the same location, and
> power the box back on.
>
> The pool's state on FreeBSD 13 is ONLINE vs DEGRADED on FreeBSD 12:
>

I agree, the FreeBSD 13 behavior seems like a bug.
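
For reference, a minimal reproduction would look something like this
(pool and device names are hypothetical):

    # build a 2-disk pool; da2 is the identical, label-free extra disk
    zpool create tank mirror da0 da1
    # power off, swap da1 for da2 in the same bay, power back on
    zpool status tank
    # FreeBSD 12 reports the pool DEGRADED (da1's labels are gone);
    # FreeBSD 13 reports it ONLINE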


> 2) Add a spare to a degraded pool and issue a zpool replace to activate
> the spare. On FreeBSD 13, after the resilver is complete, the pool
> remains DEGRADED until the degraded disk is removed via zpool detach;
> on FreeBSD 12, the pool becomes ONLINE when the resilver is complete:
>

I agree.  I think I prefer the FreeBSD 13 behavior, but either way is
sensible.
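
To spell out the sequence you describe (again with hypothetical names:
pool "tank", failed disk da1, spare da3):

    zpool add tank spare da3      # attach a hot spare to the pool
    zpool replace tank da1 da3    # activate the spare; resilver begins
    zpool status tank             # after resilver: ONLINE on 12, DEGRADED on 13
    zpool detach tank da1         # on 13, needed before the pool goes ONLINE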

The change is no doubt due to the OpenZFS import in FreeBSD 13.  Have you
tried to determine the responsible commits?  They could be regressions in
OpenZFS, or they could be bugs that we fixed in FreeBSD but never
upstreamed.
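
If you want to chase it down, one approach (sketched against an assumed
checkout of the OpenZFS repository, https://github.com/openzfs/zfs)
would be to walk or bisect the history of the label/import code:

    # candidate files only; the actual change may live elsewhere
    git log --oneline -- module/zfs/vdev_label.c module/zfs/spa.c
    # or bisect between a known-good and a known-bad revision
    git bisect start <bad-commit> <good-commit>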
-Alan



