Date: Sat, 1 Feb 2025 09:10:25 -0500 From: Dennis Clarke <dclarke@blastwave.org> To: freebsd-current@freebsd.org Subject: Re: ZFS: Rescue FAULTED Pool Message-ID: <62da6831-fbc8-4bab-9a4c-6b0ec9dd3585@blastwave.org> In-Reply-To: <20250201095656.1bdfbe5f@thor.sb211.local> References: <20250129112701.0c4a3236@freyja> <Z5oU1dLX4eQaN8Yq@albert.catwhisker.org> <20250130123354.2d767c7c@thor.sb211.local> <980401eb-f8f6-44c7-8ee1-5ff0c9e1c35c@freebsd.org> <20250201095656.1bdfbe5f@thor.sb211.local>
>> The most useful thing to share right now would be the output of
>> `zpool import` (with no pool name) on the rebooted system.
>>
>> That will show where the issues are, and suggest how they might be
>> solved.
>
> Hello, this is exactly what happens when trying to import the pool.
> Prior to the loss, device da1p1 had been faulted, with numbers in the
> "corrupted data" column that are no longer seen now.
>
> ~# zpool import
>    pool: BUNKER00
>      id: XXXXXXXXXXXXXXXXXXXX
>   state: FAULTED
>  status: The pool metadata is corrupted.
>  action: The pool cannot be imported due to damaged devices or data.
>          The pool may be active on another system, but can be imported
>          using the '-f' flag.
>     see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
>  config:
>
>          BUNKER00      FAULTED  corrupted data
>            raidz1-0    ONLINE
>              da2p1     ONLINE
>              da3p1     ONLINE
>              da4p1     ONLINE
>              da7p1     ONLINE
>              da6p1     ONLINE
>              da1p1     ONLINE
>              da5p1     ONLINE
>
> ~# zpool import -f BUNKER00
> cannot import 'BUNKER00': I/O error
>         Destroy and re-create the pool from
>         a backup source.
>
> ~# zpool import -F BUNKER00
> cannot import 'BUNKER00': one or more devices is currently unavailable

This is indeed a sad situation. You have a raidz1 pool with one or MORE
devices that seem to have left the stage. I suspect more than one.

I can only guess what you would see from "camcontrol devlist", as well as
from "gpart show -l", which would show the partition data along with any
GPT labels, if in fact you used the GPT scheme. All of your devices are
listed as "p1" there, so I assume you created some sort of partition
table. ZFS does not need one, but it can be nice to have.

In any case, it really does look like you have _more_ than one failure in
there somewhere, and only dmesg and some separate tests on each device
would reveal the truth.

-- 
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
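[Archive note: the per-device triage Dennis suggests (camcontrol devlist, gpart show -l, then raw reads on each member while watching dmesg) might be sketched roughly as the fragment below. The device names da1..da7 are taken from the zpool output above; everything else, including the read size, is an assumption, and camcontrol/gpart are FreeBSD-specific tools.]

```shell
#!/bin/sh
# Rough triage sketch for the pool members named in the zpool output.
# Guard the FreeBSD-only tools so the script is harmless elsewhere.
command -v camcontrol >/dev/null && camcontrol devlist
command -v gpart      >/dev/null && gpart show -l

for d in da1 da2 da3 da4 da5 da6 da7; do
    echo "=== /dev/${d} ==="
    # Read a few MB raw from each member; an I/O error here (and the
    # matching noise in dmesg) points at the genuinely bad devices.
    if dd if="/dev/${d}" of=/dev/null bs=1m count=8 2>/dev/null; then
        result="${d}: readable"
    else
        result="${d}: read FAILED"
    fi
    echo "${result}"
done
echo "triage complete"
```

A device that reads cleanly here but still shows "corrupted data" in the pool config would suggest metadata damage rather than a dead disk.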
