Date:      Thu, 16 Sep 2021 15:02:48 -0700
From:      Mark Millard via freebsd-current <freebsd-current@freebsd.org>
To:        Alan Somers <asomers@freebsd.org>
Cc:        freebsd-current <freebsd-current@freebsd.org>
Subject:   Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Message-ID:  <D165B6EB-F0B6-41D8-8679-D07B70F62B09@yahoo.com>
In-Reply-To: <CAOtMX2gYXYArkU+o5M-j1CYSws7mqmrbwbJLF7=JiOEnd65wzg@mail.gmail.com>
References:  <C312D693-8EAD-4398-B0CD-B134EB278F80.ref@yahoo.com> <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com> <CAOtMX2gYXYArkU+o5M-j1CYSws7mqmrbwbJLF7=JiOEnd65wzg@mail.gmail.com>



On 2021-Sep-16, at 13:39, Alan Somers <asomers at freebsd.org> wrote:

> On Thu, Sep 16, 2021 at 2:04 PM Mark Millard via freebsd-current <freebsd-current@freebsd.org> wrote:
> What do I do about:
>
> QUOTE
> # zpool import
>    pool: zopt0
>      id: 18166787938870325966
>   state: FAULTED
> status: One or more devices contains corrupted data.
>  action: The pool cannot be imported due to damaged devices or data.
>    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>  config:
>
>         zopt0       FAULTED  corrupted data
>           nda0p2    UNAVAIL  corrupted data
>
> # zpool status -x
> all pools are healthy
>
> # zpool destroy zopt0
> cannot open 'zopt0': no such pool
> END QUOTE
>
> (I had attempted to clean out the old zfs context on
> the media and delete/replace the 2 freebsd swap
> partitions and 1 freebsd-zfs partition, leaving the
> efi partition in place. Clearly I did not do everything
> required [or something is very wrong]. zopt0 had been
> a root-on-ZFS context and would be again. I have a
> backup of the context to send/receive once the pool
> in the partition is established.)
>
> For reference, as things now are:
>=20
> # gpart show
> =>       40  937703008  nda0  GPT  (447G)
>          40     532480     1  efi  (260M)
>      532520       2008        - free -  (1.0M)
>      534528  937166848     2  freebsd-zfs  (447G)
>   937701376       1672        - free -  (836K)
> . . .
>
> (That is not how it looked before I started.)
>=20
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1300139 1300139
>
> I have also tried under:
>=20
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1400032 1400032
>
> after reaching this state. It behaves the same.
>=20
> The text presented by:
>=20
> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>
> does not deal with what is happening overall.
>
> So you just want to clean nda0p2 in order to reuse it?  Do "zpool labelclear -f /dev/nda0p2"
>

I did not extract and show everything that I'd tried,
but there were examples of:

# zpool labelclear -f /dev/nda0p2
failed to clear label for /dev/nda0p2

from when I tried such. So far I've not
identified any official command that deals
with the issue.
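
As a read-only diagnostic (my suggestion here, not
something from the exchange above), zdb can dump
whatever vdev labels are still present on the
partition, which should confirm that stale labels
are what zpool import keeps seeing:

# zdb -l /dev/nda0p2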

Ultimately I zeroed out areas of the media that
happened to span the ZFS-related labels. After
that, things returned to normal. I'd still like
to know a supported way of dealing with the
issue.
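
For reference, the zeroing amounted to something
like the following. This is a reconstruction rather
than a saved transcript, and the offsets assume the
gpart layout shown above: ZFS keeps four 256 KiB
vdev labels per device, two at the front and two at
the back, so zeroing the first and last MiB of the
partition covers all of them:

# dd if=/dev/zero of=/dev/nda0p2 bs=1m count=1
# dd if=/dev/zero of=/dev/nda0p2 bs=1m oseek=457600

(937166848 512-byte sectors is 457601 MiB, so
oseek=457600 targets the final MiB. Both commands
are destructive; double-check the target device
before running anything like this.)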

The page at the URL it listed just says:

QUOTE
The pool must be destroyed and recreated from an appropriate backup source
END QUOTE

But the official destroy commands did not work:
they reported the same sort of issue, with nothing
appropriate found to destroy and no way to
import the problematical pool.
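
For anyone retracing this, the stronger variants of
those commands (a sketch of the obvious candidates,
not a claim about exactly which invocations I ran)
would be:

# zpool import -f zopt0
# zpool import -D
# zpool destroy -f zopt0

where -f forces the operation and import -D lists
pools marked as destroyed. In the state described
above they hit the same wall: nothing found to
destroy, nothing importable.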


Note: I use ZFS because I want to use bectl, not
for redundancy or the like. So the configuration is
very simple.


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)



