Date:      Fri, 25 Feb 2022 19:04:14 +0500
From:      "Eugene M. Zheganin" <eugene@zhegan.in>
To:        stable@freebsd.org
Subject:   Re: zfs mirrored pool dead after a disk death and reset
Message-ID:  <0a6d8a88-30a5-c043-a071-3bc8b59875b8@zhegan.in>
In-Reply-To: <10384e62-b643-95d9-1e1e-9ffa52a07c03@zhegan.in>
References:  <d959873f-3a0d-8f81-193d-f1f70c48eaa7@zhegan.in> <CAHEMsqYUt9EFFkLqw1fecfcBC0ts6WkkK2i4EqVDSN1ELJiERw@mail.gmail.com> <10384e62-b643-95d9-1e1e-9ffa52a07c03@zhegan.in>

Hello,

Jeez, my bad, looks like I created a non-mirrored pool. :)
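
For context, the layout depends entirely on whether the "mirror" keyword 
was given at creation time; without it the disks become independent 
top-level vdevs with no redundancy. A rough sketch of the difference - 
the commands below are illustrative, not necessarily the exact ones run 
back when the pool was created:

# striped pool, no redundancy - apparently what was actually created
zpool create data nvd0 nvd1

# mirrored pool - what was intended
zpool create data mirror nvd0 nvd1

A real mirror shows up in "zpool status data" as a mirror-0 vdev grouping 
both disks; a stripe lists the disks directly under the pool name, which 
matches the import output quoted below.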

On 25.02.2022 18:53, Eugene M. Zheganin wrote:
> Hello,
>
> On 25.02.2022 18:30, Steven Hartland wrote:
>> Have you tried removing the dead disk physically? I've seen in the 
>> past a bad disk causing bad data to be sent to the controller, 
>> causing knock-on issues.
>
> Yup, I did. I've even built 13.0 and tried to import it there. 13.0 
> complains differently, but still refuses to import:
>
>
> # zpool import
> pool: data
> id: 15967028801499953224
> state: ONLINE
> status: One or more devices contains corrupted data.
> action: The pool can be imported using its name or numeric identifier.
> see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
> config:
> data          ONLINE
>   nvd0        UNAVAIL  corrupted data
>   nvd1        ONLINE
>
>
> And while importing:
>
> # zpool import -FX data
> cannot import 'data': one or more devices is currently unavailable
>
> and I see the following in dmesg:
>
> Feb 25 16:44:41 db0 ZFS[4857]: failed to load zpool data
> Feb 25 16:44:41 db0 ZFS[4873]: failed to load zpool data
> Feb 25 16:44:41 db0 ZFS[4889]: failed to load zpool data
> Feb 25 16:44:41 db0 ZFS[4909]: failed to load zpool data
> Feb 25 16:45:13 db0 ZFS[4940]: pool log replay failure, zpool=data
> Feb 25 16:45:13 db0 ZFS[4952]: pool log replay failure, zpool=data
> Feb 25 16:45:13 db0 ZFS[4964]: pool log replay failure, zpool=data
> Feb 25 16:45:13 db0 ZFS[4976]: pool log replay failure, zpool=data
>
>>
>> Also the output doesn't show multiple devices, only nvd0. I'm hoping 
>> you didn't use nv raid to create the mirror, as that means there's no 
>> ZFS protection?
> Nope, I'm aware of that. Actually, the redundant drive is still there, 
> but already dead; it's the FAULTED device 9566965891719887395 in my 
> quotes below.
>
>>
>>     [root@db0:~]# zpool import
>>     pool: data
>>     id: 15967028801499953224
>>     state: FAULTED
>>     status: One or more devices contains corrupted data.
>>     action: The pool cannot be imported due to damaged devices or data.
>>     The pool may be active on another system, but can be imported using
>>     the '-f' flag.
>>     see: http://illumos.org/msg/ZFS-8000-5E
>>     config:
>>     data                    FAULTED  corrupted data
>>       9566965891719887395   FAULTED  corrupted data
>>       nvd0                  ONLINE
>>
>
> Thanks.
>
> Eugene.
>
>
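
Regarding the failed "zpool import -FX data" above: the usual last-resort 
sequence here would be a dry-run rewind followed by a read-only import. 
Sketched from memory, so treat it as untested on this particular pool:

# dry run: only report whether discarding the last transactions would
# make the pool importable again, without touching anything
zpool import -F -n data

# a read-only import skips ZIL replay, which is what the "pool log
# replay failure" messages above point at; the goal is just to copy
# the data off
zpool import -o readonly=on -f data

If even the read-only import fails because a top-level vdev is missing, 
there is probably not much left to do on a non-redundant pool short of 
restoring from backup.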

