Date:      Thu, 16 Sep 2021 15:19:40 -0700
From:      Mark Millard via freebsd-current <freebsd-current@freebsd.org>
To:        joe mcguckin <joe@via.net>
Cc:        freebsd-current <freebsd-current@freebsd.org>
Subject:   Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Message-ID:  <6B748E71-0E70-4FFA-9AB5-639465E91275@yahoo.com>
In-Reply-To: <88799C4C-2371-42C1-A41C-392969A1C1E0@via.net>
References:  <C312D693-8EAD-4398-B0CD-B134EB278F80.ref@yahoo.com> <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com> <88799C4C-2371-42C1-A41C-392969A1C1E0@via.net>



On 2021-Sep-16, at 13:26, joe mcguckin <joe at via.net> wrote:

> I experienced the same yesterday. I grabbed an old disk that was
> previously part of a pool. Stuck it in the chassis and did 'zpool
> import' and got the same output you did.

Mine was a single-disk pool. I use zfs just in order to
use bectl, not for redundancy or the like, so my
configuration is very simple.

> Since the other drives of the pool were missing, the pool could not be
> imported.
>
> zpool status reports 'everything ok' because all the existing
> pools are ok. zpool destroy can't destroy the pool because it
> has not been imported.

Yea, but the material at the URL it listed just says:

QUOTE
The pool must be destroyed and recreated from an appropriate backup
source
END QUOTE

so it says to do something that in my context could not
be done via the normal zfs-related commands as far as I
can tell.

> I simply created a new pool specifying the drive address of the disk -
> zfs happily overwrote the old incomplete pool info.
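
(For reference, that presumably amounts to something like the
following, with "da0" standing in for whatever device name the old
disk shows up as; -f forces creation over the leftover label:

# zpool create -f newpool /dev/da0

I did not go that route in my context.)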

Ultimately, I zeroed out an area of the media that
had the zfs related labels and after that things
operated normally and I could recreate the pool in
the partition, send/receive to it the backup, and
use the restored state. I did not find a way to
use the zpool/zfs related commands to deal with
fixing the messed-up status. (I did not report
everything that I'd tried.)
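
Roughly, the kind of sequence that got things back to normal looks
like the following. (The partition name is from my gpart output; the
pool, snapshot, and backup dataset names are illustrative, not a
transcript of the exact commands I used.)

# dd if=/dev/zero of=/dev/nda0p2 bs=1m count=1
# dd if=/dev/zero of=/dev/nda0p2 bs=1m count=1 oseek=457600

ZFS keeps two 256 KiB vdev labels at the front of a vdev and two at
the end, so zeroing the first and last MiB of the partition covers
all four. (937166848 sectors * 512 bytes is 457601 MiB, so
oseek=457600 targets the last MiB.) After that the stale zopt0 no
longer showed up and the normal commands worked:

# zpool create zopt0 /dev/nda0p2
# zfs send -R backup/zopt0@latest | zfs recv -F zopt0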

> joe
>
>
> Joe McGuckin
> ViaNet Communications
>
> joe@via.net
> 650-207-0372 cell
> 650-213-1302 office
> 650-969-2124 fax
>
>
>
>> On Sep 16, 2021, at 1:01 PM, Mark Millard via freebsd-current
>> <freebsd-current@freebsd.org> wrote:
>>
>> What do I do about:
>>
>> QUOTE
>> # zpool import
>>   pool: zopt0
>>     id: 18166787938870325966
>>  state: FAULTED
>> status: One or more devices contains corrupted data.
>> action: The pool cannot be imported due to damaged devices or data.
>>   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>> config:
>>
>>        zopt0       FAULTED  corrupted data
>>          nda0p2    UNAVAIL  corrupted data
>>
>> # zpool status -x
>> all pools are healthy
>>
>> # zpool destroy zopt0
>> cannot open 'zopt0': no such pool
>> END QUOTE
>>
>> (I had attempted to clean out the old zfs context on
>> the media and delete/replace the 2 freebsd swap
>> partitions and 1 freebsd-zfs partition, leaving the
>> efi partition in place. Clearly I did not do everything
>> required [or something is very wrong]. zopt0 had been
>> a root-on-ZFS context and would be again. I have a
>> backup of the context to send/receive once the pool
>> in the partition is established.)
>>
>> For reference, as things now are:
>>=20
>> # gpart show
>> =>       40  937703008  nda0  GPT  (447G)
>>         40     532480     1  efi  (260M)
>>     532520       2008        - free -  (1.0M)
>>     534528  937166848     2  freebsd-zfs  (447G)
>>  937701376       1672        - free -  (836K)
>> . . .
>>
>> (That is not how it looked before I started.)
>>=20
>> # uname -apKU
>> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1300139 1300139
>>
>> I have also tried under:
>>=20
>> # uname -apKU
>> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1400032 1400032
>>
>> after reaching this state. It behaves the same.
>>=20
>> The text presented by:
>>=20
>> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>>=20
>> does not deal with what is happening overall.
>>=20
>


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)



