Date: Thu, 16 Sep 2021 13:56:16 -0700
From: Mark Millard via freebsd-current <freebsd-current@freebsd.org>
To: freebsd-current <freebsd-current@freebsd.org>
Subject: Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Message-ID: <D97F13C3-C29A-4326-93DD-E7BAA6101AB1@yahoo.com>
In-Reply-To: <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com>
References: <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com>
On 2021-Sep-16, at 13:01, Mark Millard <marklmi at yahoo.com> wrote:

> What do I do about:
>
> QUOTE
> # zpool import
>    pool: zopt0
>      id: 18166787938870325966
>   state: FAULTED
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devices or data.
>    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>  config:
>
>         zopt0     FAULTED  corrupted data
>           nda0p2  UNAVAIL  corrupted data
>
> # zpool status -x
> all pools are healthy
>
> # zpool destroy zopt0
> cannot open 'zopt0': no such pool
> END QUOTE
>
> (I had attempted to clean out the old zfs context on
> the media and delete/replace the 2 freebsd-swap
> partitions and 1 freebsd-zfs partition, leaving the
> efi partition in place. Clearly I did not do everything
> required [or something is very wrong]. zopt0 had been
> a root-on-ZFS context and would be again. I have a
> backup of the context to send/receive once the pool
> in the partition is established.)
>
> For reference, as things now are:
>
> # gpart show
> =>        40  937703008  nda0  GPT  (447G)
>           40     532480     1  efi  (260M)
>       532520       2008        - free -  (1.0M)
>       534528  937166848     2  freebsd-zfs  (447G)
>    937701376       1672        - free -  (836K)
> . . .
>
> (That is not how it looked before I started.)
>
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1300139 1300139
>
> I have also tried under:
>
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021 root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72 arm64 aarch64 1400032 1400032
>
> after reaching this state. It behaves the same.
> The text presented by:
>
> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>
> does not deal with what is happening overall.

I finally seem to have stomped on enough to have gotten
past the issue (last actions):

# gpart add -tfreebsd-swap -s440g /dev/nda0
nda0p2 added
# gpart add -tfreebsd-swap /dev/nda0
nda0p3 added
7384907776 bytes transferred in 5.326024 secs (1386570546 bytes/sec)
# dd if=/dev/zero of=/dev/nda0p3 bs=4k conv=sync status=progress
dd: /dev/nda0p3: end of device
1802957+0 records in
1802956+0 records out
7384907776 bytes transferred in 55.559644 secs (132918559 bytes/sec)
# gpart delete -i3 /dev/nda0
nda0p3 deleted
# gpart delete -i2 /dev/nda0
nda0p2 deleted
# gpart add -tfreebsd-zfs -a1m /dev/nda0
nda0p2 added
# zpool import
no pools available to import
# gpart show
. . .
=>        40  937703008  nda0  GPT  (447G)
          40     532480     1  efi  (260M)
      532520       2008        - free -  (1.0M)
      534528  937166848     2  freebsd-zfs  (447G)
   937701376       1672        - free -  (836K)

# zpool create -O compress=lz4 -O atime=off -f -tzpopt0 zopt0 /dev/nda0p2
# zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zpopt0   444G   420K   444G        -         -     0%     0%  1.00x  ONLINE  -
zroot    824G   105G   719G        -         -     1%    12%  1.00x  ONLINE  -

I've no clue what made my original zpool labelclear -f
attempt leave material behind before repartitioning.
Still could have been operator error of some kind.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)
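[A plausible explanation, based on the OpenZFS on-disk format rather than anything established in this thread: ZFS keeps four 256 KiB vdev labels per device, two at the front of the partition and two at its very end, so a labelclear or repartition that only disturbs the front of the device can leave the trailing pair for a later "zpool import" scan to find. A minimal sh sketch against a scratch file (never a real device); the file name and 16 MiB size are made up for illustration:]

```shell
# Sketch only: a 16 MiB scratch file stands in for a partition like nda0p2.
# Offsets follow the OpenZFS vdev label layout (an assumption, not taken
# from this thread).
DEV=/tmp/fake_vdev
LABEL=$((256 * 1024))               # each vdev label is 256 KiB
SIZE=$((16 * 1024 * 1024))          # stand-in "device" size
truncate -s "$SIZE" "$DEV"

# The four label offsets: L0/L1 at the front, L2/L3 at the very end.
# If a new partition ends where the old one did, L2/L3 can survive.
echo "L0=0 L1=$LABEL L2=$((SIZE - 2 * LABEL)) L3=$((SIZE - LABEL))"

# Roughly what clearing the labels must zero - both ends, not just the front:
dd if=/dev/zero of="$DEV" bs="$LABEL" count=2 conv=notrunc status=none
dd if=/dev/zero of="$DEV" bs="$LABEL" count=2 conv=notrunc status=none \
    seek=$(( SIZE / LABEL - 2 ))
```

[On the real hardware, "zdb -l /dev/nda0p2" before and after clearing would show whether any of the four labels still validate, but that needs root and the actual device, so the sketch stays with a file.]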