Date:      Fri, 17 Sep 2021 07:42:09 +0900
From:      Tomoaki AOKI <junchoon@dec.sakura.ne.jp>
To:        freebsd-current@freebsd.org
Cc:        marklmi@yahoo.com
Subject:   Re: zpool import: "The pool cannot be imported due to damaged devices or data" but zpool status -x: "all pools are healthy" and zpool destroy: "no such pool"
Message-ID:  <20210917074209.7b3b665188af8cdd22b98eef@dec.sakura.ne.jp>
In-Reply-To: <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com>
References:  <C312D693-8EAD-4398-B0CD-B134EB278F80.ref@yahoo.com> <C312D693-8EAD-4398-B0CD-B134EB278F80@yahoo.com>

On Thu, 16 Sep 2021 13:01:16 -0700
Mark Millard via freebsd-current <freebsd-current@freebsd.org> wrote:

> What do I do about:
> 
> QUOTE
> # zpool import
>    pool: zopt0
>      id: 18166787938870325966
>   state: FAULTED
> status: One or more devices contains corrupted data.
>  action: The pool cannot be imported due to damaged devices or data.
>    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
>  config:
> 
>         zopt0       FAULTED  corrupted data
>           nda0p2    UNAVAIL  corrupted data
> 
> # zpool status -x
> all pools are healthy
> 
> # zpool destroy zopt0
> cannot open 'zopt0': no such pool
> END QUOTE
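
If old ZFS labels survived on nda0p2, `zpool import` (which scans
device labels directly) will keep reporting the pool even though
nothing is imported. A quick check, using the device path from the
quoted output, is to dump the labels:

  # zdb -l /dev/nda0p2

zdb prints up to four label copies (two near the front and two near
the end of the vdev); any that still decode are leftover pool
metadata.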
> 
> (I had attempted to clean out the old zfs context on
> the media and delete/replace the 2 freebsd swap
> partitions and 1 freebsd-zfs partition, leaving the
> efi partition in place. Clearly I did not do everything
> required [or something is very wrong]. zopt0 had been
> a root-on-ZFS context and would be again. I have a
> backup of the context to send/receive once the pool
> in the partition is established.)
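
If stale labels are the cause, clearing them directly should make
the phantom pool disappear. A minimal sketch, assuming nothing on
nda0p2 is still wanted (labelclear erases the remaining pool
metadata on the device):

  # zpool labelclear -f /dev/nda0p2
  # zpool import

After the labelclear, `zpool import` should no longer list zopt0,
and the partition can be reused for the new root-on-ZFS pool.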
> 
> For reference, as things now are:
> 
> # gpart show
> =>       40  937703008  nda0  GPT  (447G)
>          40     532480     1  efi  (260M)
>      532520       2008        - free -  (1.0M)
>      534528  937166848     2  freebsd-zfs  (447G)
>   937701376       1672        - free -  (836K)
> . . .
> 
> (That is not how it looked before I started.)
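
(For comparison, a gpart sequence along these lines would produce
the layout above from one that also had the two swap partitions; the
index numbers here are assumptions, since the earlier layout was not
shown:

  # gpart delete -i 2 nda0
  # gpart delete -i 3 nda0
  # gpart delete -i 4 nda0
  # gpart add -t freebsd-zfs -a 1m nda0

The -a 1m alignment would account for the 1.0M free gap after the
efi partition.)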
> 
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 13.0-RELEASE-p4 FreeBSD 13.0-RELEASE-p4 #4 releng/13.0-n244760-940681634ee1-dirty: Mon Aug 30 11:35:45 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1300139 1300139
> 
> I have also tried under:
> 
> # uname -apKU
> FreeBSD CA72_4c8G_ZFS 14.0-CURRENT FreeBSD 14.0-CURRENT #12 main-n249019-0637070b5bca-dirty: Tue Aug 31 02:24:20 PDT 2021     root@CA72_16Gp_ZFS:/usr/obj/BUILDs/main-CA72-nodbg-clang/usr/main-src/arm64.aarch64/sys/GENERIC-NODBG-CA72  arm64 aarch64 1400032 1400032
> 
> after reaching this state. It behaves the same.
> 
> The text presented by:
> 
> https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
> 
> does not address what is happening here overall.
> 
> ===
> Mark Millard
> marklmi at yahoo.com
> ( dsl-only.net went
> away in early 2018-Mar)

IIRC, zpool subcommands (except zpool import) only operate on
already-imported pool(s).

So IIUC, `zpool status` and `zpool destroy` for faulted pool(s) would
only work properly if the pool(s) faulted after a graceful import.

 *I have only root pools (on different physical drives), so non-root
  pools may behave differently.
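
In other words, the graceful path for getting rid of an unwanted but
still-importable pool would be to import it first and then destroy
it (names from the quoted output; this only works while the pool can
actually be imported):

  # zpool import -f zopt0
  # zpool destroy zopt0

When the import itself fails, as here, clearing the labels on the
vdev (zpool labelclear) is what remains.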

-- 
Tomoaki AOKI    <junchoon@dec.sakura.ne.jp>


