Date: Thu, 26 Apr 2018 12:28:09 +0500
From: "Eugene M. Zheganin" <eugene@zhegan.in>
To: freebsd-stable@freebsd.org
Cc: freebsd-fs@freebsd.org
Subject: clear old pools remains from active vdevs
Message-ID: <b9e405dc-d3b3-af81-64b3-9e310988fa7c@zhegan.in>
Hello,
I have some disks that are active vdev members but that used to belong
to pools which clearly were not destroyed properly, so "zpool import"
shows output like:
# zpool import
   pool: zroot
     id: 14767697319309030904
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        zroot                    UNAVAIL  insufficient replicas
          mirror-0               UNAVAIL  insufficient replicas
            5291726022575795110  UNAVAIL  cannot open
            2933754417879630350  UNAVAIL  cannot open
   pool: esx
     id: 8314148521324214892
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        esx                       UNAVAIL  insufficient replicas
          mirror-0                UNAVAIL  insufficient replicas
            10170732803757341731  UNAVAIL  cannot open
            9207269511643803468   UNAVAIL  cannot open
Is there any _safe_ way to get rid of these stale labels? I'm asking
because the gptzfsboot loader in recent -STABLE stumbles over them and
refuses to boot the system
(https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=227772). The
workaround is to use the 11.1 loader, but I'm afraid the new behavior
is the intended one.
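[Editorial note: one commonly suggested approach is sketched below. The
device path is hypothetical, and "zpool labelclear" is destructive: it
will wipe the label of whatever device you point it at, so it must only
be run against a device or partition that is NOT part of any currently
imported pool. Inspect the labels first and verify the pool name and
GUID match the stale pool before clearing anything.]

```shell
# Inspect the ZFS labels on the suspect device first; the 'name' and
# 'pool_guid' fields in the output show which pool each label belongs to.
# /dev/gpt/olddisk0 is a hypothetical example path.
zdb -l /dev/gpt/olddisk0

# If (and only if) the labels belong to the stale pool and the device is
# not a member of any imported pool, clear them. -f forces clearing even
# if the label claims the pool is potentially active.
zpool labelclear -f /dev/gpt/olddisk0
```

If the stale label sits on a partition that is also a member of the
current pool, labelclear cannot be used safely there, and the problem
reduces to the loader bug referenced above.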
Eugene.
