Date:      Thu, 22 Dec 2016 09:26:37 -0700
From:      Alan Somers <asomers@freebsd.org>
To:        "Eugene M. Zheganin" <emz@norma.perm.ru>
Cc:        freebsd-stable <freebsd-stable@freebsd.org>
Subject:   Re: cannot detach vdev from zfs pool
Message-ID:  <CAOtMX2gZ8XrDTdO5-V9=B030f=uNtJAd=gDQR7Kgg=MawpT0fw@mail.gmail.com>
In-Reply-To: <585B98C9.4070607@norma.perm.ru>
References:  <585B98C9.4070607@norma.perm.ru>

On Thu, Dec 22, 2016 at 2:11 AM, Eugene M. Zheganin <emz@norma.perm.ru> wrote:
> Hi,
>
> Recently I decided to remove the bogus zfs-inside-geli-inside-zvol pool,
> since it's now officially unsupported. To do that I needed to reslice my
> disk, hence detach one of the disks from a mirrored pool. I issued 'zpool
> detach zroot gpt/zroot1' and my system livelocked almost immediately, so
> I pressed reset. Now I get this:
>
> # zpool status zroot
>   pool: zroot
>  state: DEGRADED
> status: One or more devices has been taken offline by the administrator.
>         Sufficient replicas exist for the pool to continue functioning in a
>         degraded state.
> action: Online the device using 'zpool online' or replace the device with
>         'zpool replace'.
>   scan: resilvered 687G in 5h26m with 0 errors on Sat Oct 17 19:41:49 2015
> config:
>
>         NAME                     STATE     READ WRITE CKSUM
>         zroot                    DEGRADED     0     0     0
>           mirror-0               DEGRADED     0     0     0
>             gpt/zroot0           ONLINE       0     0     0
>             1151243332124505229  OFFLINE      0     0     0  was
> /dev/gpt/zroot1
>
> errors: No known data errors
>
> This isn't a big deal by itself, since I was able to create a second zfs
> pool and I'm now relocating my data to it, although I should say that
> this is a very disturbing sequence of events, because I'm now unable to
> even delete the UNAVAIL vdev from the pool. I tried to boot from a
> FreeBSD USB stick and detach it there, but all I discovered was that the
> zfs subsystem locks up on the command 'zpool detach zroot
> 1151243332124505229'. I waited several minutes but nothing happened;
> furthermore, subsequent zpool/zfs commands hang too.
>
> Is this worth submitting a PR, or does it need additional
> investigation? In general I intend to destroy this pool after
> relocating the data, but I'm afraid someone (or even I, again) could
> step on this later. Both disks are healthy, and I don't see any
> complaints in dmesg. I'm running FreeBSD 11.0-RELEASE-p5. The pool was
> initially created somewhere under 9.0, I guess.
>
> Thanks.
> Eugene.

I'm not surprised to see this kind of error in a ZFS on GELI on Zvol
pool.  ZFS on Zvols has known deadlocks, even without involving GELI.
GELI only makes it worse, because it foils the recursion detection in
zvol_open.  I wouldn't bother opening a PR if I were you, because it
probably wouldn't add any new information.
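For what it's worth, the failure mode can be illustrated with a toy model. This is not the actual zvol_open code; the thread-local marker, the function names, and the worker-thread layering are assumptions for illustration only. The idea: a per-thread recursion check catches a nested open issued on the same thread, but an intermediate layer that re-issues the open from its own worker thread (as GELI's kernel threads do) presents a clean thread and slips past the check, leaving the real code free to deadlock.

```python
# Toy model (NOT actual OpenZFS code) of per-thread recursion detection
# in an open path, and how an intermediate layer's worker thread defeats it.
import threading

_in_open = threading.local()  # per-thread record of zvols being opened


def zvol_open(name, nested_open=None):
    """Stand-in for an open routine with thread-local recursion detection."""
    names = getattr(_in_open, "names", set())
    if name in names:
        # Direct re-entry on the same thread: the check trips.
        return "EINVAL: recursion detected"
    names.add(name)
    _in_open.names = names
    try:
        # Opening a self-referential zvol triggers a nested open.
        return nested_open(name) if nested_open else "opened"
    finally:
        names.discard(name)


def direct_reentry(name):
    # ZFS on a zvol without GELI: the nested open runs on the same
    # thread, so the thread-local marker catches it.
    return zvol_open(name)


def via_worker_thread(name):
    # With a layer like GELI in between, the nested open arrives from a
    # separate worker thread whose thread-local state is empty, so the
    # recursion check passes -- and the real code can deadlock.
    result = []
    t = threading.Thread(target=lambda: result.append(zvol_open(name)))
    t.start()
    t.join()
    return result[0]


print(zvol_open("zvol0", direct_reentry))     # EINVAL: recursion detected
print(zvol_open("zvol0", via_worker_thread))  # opened -- detection fooled
```

The same structural problem applies however the real detection is implemented, as long as it keys on the opening thread or call stack rather than on the underlying storage dependency.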

Sorry it didn't meet your expectations,
-Alan
