Date:      Thu, 16 Sep 2021 13:26:16 -0700
From:      joe mcguckin <joe@via.net>
To:        marklmi@yahoo.com
Cc:        freebsd-current <freebsd-current@freebsd.org>
Subject:   Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Message-ID:  <88799C4C-2371-42C1-A41C-392969A1C1E0@via.net>
In-Reply-To: <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com>
References:  <C312D693-8EAD-4398-B0CD-B134EB278F80.ref@yahoo.com> <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com>

I experienced the same yesterday. I grabbed an old disk that was
previously part of a pool, stuck it in the chassis, and did 'zpool
import', and got the same output you did. Since the other drives of
the pool were missing, the pool could not be imported.

zpool status reports 'everything ok' because all the imported pools
are ok. zpool destroy can't destroy the pool because it has not been
imported.
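
If you just want to wipe the stale label without building a new pool
right away, zpool labelclear on the old partition should also do it
(the device path below is just an example, not your actual one):

  # zpool labelclear -f /dev/da0p2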

I simply created a new pool specifying the drive address of the disk -
ZFS happily overwrote the old incomplete pool info.
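
Roughly like this (pool name and device path are examples, not what I
actually typed):

  # zpool create newpool /dev/da0

If create complains that the disk is or was part of another pool, add
-f to force it to overwrite the leftover label.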

joe


Joe McGuckin
ViaNet Communications

joe@via.net
650-207-0372 cell
650-213-1302 office
650-969-2124 fax



> On Sep 16, 2021, at 1:01 PM, Mark Millard via freebsd-current
> <freebsd-current@freebsd.org> wrote:
>
> What do I do about:
>
> QUOTE
> # zpool import
>   pool: zopt0
>     id: 18166787938870325966
>  state: FAULTED
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devices or data.
>   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> config:
>
>        zopt0       FAULTED  corrupted data
>          nda0p2    UNAVAIL  corrupted data
>
> # zpool status -x
> all pools are healthy
>
> # zpool destroy zopt0
> cannot open 'zopt0': no such pool
> END QUOTE
>
> (I had attempted to clean out the old ZFS context on
> the media and delete/replace the 2 freebsd-swap
> partitions and 1 freebsd-zfs partition, leaving the
> efi partition in place. Clearly I did not do everything
> required [or something is very wrong]. zopt0 had been
> a root-on-ZFS context and would be again. I have a
> backup of the context to send/receive once the pool
> in the partition is established.)
>
> For reference, as things now are:
>
> # gpart show
> =>       40  937703008  nda0  GPT  (447G)
>         40     532480     1  efi  (260M)
>     532520       2008        - free -  (1.0M)
>     534528  937166848     2  freebsd-zfs  (447G)
>  937701376       1672        - free -  (836K)
> . . .
>
> (That is not how it looked before I started.)
>
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1300139 1300139
>
> I have also tried under:
>
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1400032 1400032
>
> after reaching this state. It behaves the same.
>
> The text presented by:
>
> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>
> does not deal with what is happening overall.
>
> ===
> Mark Millard
> marklmi at yahoo.com
> ( dsl-only.net went
> away in early 2018-Mar)
>
>

