Date:      Mon, 3 Feb 2025 19:24:10 +0100
From:      A FreeBSD User <freebsd@walstatt-de.de>
To:        freebsd-current@freebsd.org
Subject:   Re: ZFS: Rescue FAULTED Pool
Message-ID:  <20250203192437.36135323@thor.sb211.local>
In-Reply-To: <62da6831-fbc8-4bab-9a4c-6b0ec9dd3585@blastwave.org>
References:  <20250129112701.0c4a3236@freyja> <Z5oU1dLX4eQaN8Yq@albert.catwhisker.org> <20250130123354.2d767c7c@thor.sb211.local> <980401eb-f8f6-44c7-8ee1-5ff0c9e1c35c@freebsd.org> <20250201095656.1bdfbe5f@thor.sb211.local> <62da6831-fbc8-4bab-9a4c-6b0ec9dd3585@blastwave.org>


On Sat, 1 Feb 2025 09:10:25 -0500,
Dennis Clarke <dclarke@blastwave.org> wrote:

> >>
> >> The most useful thing to share right now would be the output of `zpool
> >> import` (with no pool name) on the rebooted system.
> >>
> >> That will show where the issues are, and suggest how they might be solved.
> >>
> >
> > Hello, this is exactly what happens when trying to import the pool. Prior
> > to the loss, device da1p1 had been shown as FAULTED, with numbers in the
> > "corrupted data" column; that is no longer seen now.
> >
> >
> >   ~# zpool import
> >     pool: BUNKER00
> >       id: XXXXXXXXXXXXXXXXXXXX
> >    state: FAULTED
> > status: The pool metadata is corrupted.
> >   action: The pool cannot be imported due to damaged devices or data.
> >          The pool may be active on another system, but can be imported using
> >          the '-f' flag.
> >     see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
> >   config:
> >
> >          BUNKER00    FAULTED  corrupted data
> >            raidz1-0  ONLINE
> >              da2p1   ONLINE
> >              da3p1   ONLINE
> >              da4p1   ONLINE
> >              da7p1   ONLINE
> >              da6p1   ONLINE
> >              da1p1   ONLINE
> >              da5p1   ONLINE
> >
> >
> >   ~# zpool import -f BUNKER00
> > cannot import 'BUNKER00': I/O error
> >          Destroy and re-create the pool from
> >          a backup source.
> >
> >
> > ~# zpool import -F BUNKER00
> > cannot import 'BUNKER00': one or more devices is currently unavailable
> >
>
>      This is indeed a sad situation. You have a raidz1 pool with one or
> MORE devices that seem to have left the stage. I suspect more than one.
>
>      I can only guess what you see from "camcontrol devlist" as well as
> data from "gpart show -l", where we would see the partition data along
> with any GPT labels, if in fact you used a GPT scheme. You have a list of
> devices that all say "p1" there and so I guess you made some sort of a
> partition table. ZFS does not need that but it can be nice to have. In
> any case, it really does look like you have _more_ than one failure in
> there somewhere and only dmesg and some separate tests on each device
> would reveal the truth.
>
>
> --
> Dennis Clarke
> RISC-V/SPARC/PPC/ARM/CISC
> UNIX and Linux spoken
>
>


Hello all!

Thank you for your tips!
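
For reference, the per-device checks suggested above would look roughly like
this (only the commands, I am not pasting my output here; da1 is just an
example device name):

  ~# camcontrol devlist      # list the attached disks and controllers
  ~# gpart show -l da1       # partition layout plus GPT labels of one disk
  ~# dmesg | grep da1        # kernel messages mentioning a suspect device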

Luckily, "zpool import -FX" as suggested herein did after a while (60-80 mi=
nutes) the trick!
There might be some data losses - but compared to the alternative bareable.
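
For anyone who finds this thread later, roughly the sequence that applies in
such a case (the read-only dry run first and the scrub afterwards are the
cautious way to do it, not necessarily exactly what I typed):

  ~# zpool import -Fn BUNKER00   # dry run: report what a recovery import would discard
  ~# zpool import -FX BUNKER00   # recovery import with extreme rewind; may roll back
                                 # the last transactions and can run for a long time
  ~# zpool status -v BUNKER00    # check the state of the imported pool
  ~# zpool scrub BUNKER00        # verify the remaining data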

Thank you very much!

Kind regards,

Oliver


-- 

A FreeBSD user



