Date:      Tue, 20 Aug 2024 18:25:11 -0000 (UTC)
From:      "Peter 'PMc' Much" <pmc@citylink.dinoex.sub.org>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS: Suspended Pool due to allegedly uncorrectable I/O error
Message-ID:  <slrnvc9ns7.ap9.pmc@disp.intra.daemon.contact>
References:  <CAESeg0wm9iZN=8tpo_rtC1hgRMzyr4dhJrQyxgSuiS183_9y8A@mail.gmail.com>

On 2024-08-19, Pamela Ballantyne <boyvalue@gmail.com> wrote:
>
> Hi,
>
> This is long, so here's the TL;DR: ZFS suspended a pool for presumably
> good reasons, but after a reboot there didn't seem to be any good
> reason for it.
>
> As background, I'm an early adopter of ZFS. I have a remote server that
> has been running ZFS continuously, 24x7, since late 2010, and I also
> use ZFS on my home machines. While I do not claim to be a ZFS expert,
> I've managed to handle the various issues that have come up over the
> years and haven't had to ask the experts for help.
>
> But now I am completely baffled and would appreciate any help, advice,
> pointers, links, whatever.
>
> On Sunday morning, 08/11, I upgraded the server from 12.4-RELEASE-p9 to
> 13.3-RELEASE-p5. The upgrade went smoothly, and the server worked
> flawlessly post-upgrade.
>
> On Thursday evening, 8/15, the server became unreachable. It would still
> respond to pings via
> the IP address, but that was it.  I used to be able to access the server
> via IPMI, but that ability disappeared
> several company mergers ago. The current NOC staff sent me a screenshot of
> the server output,
> which showed repeated messages saying:
>
> "Solaris: WARNING: Pool 'zroot' has encountered an uncorrectable I/O
> failure and has been suspended."
>
> There had been no warnings in the log files, nothing. There was no sign
> from the S.M.A.R.T. monitoring system, nothing.
>
> It's a simple mirrored setup with just two drives, so I expected a
> catastrophic hardware failure. Maybe the HBA had failed (this is on a
> SuperMicro Blade server), or both drives had managed to die at the same
> time.
>
> Without any way to log in remotely, I requested a reboot. The server
> rebooted without errors, and I could ssh into my account and poke
> around. Everything was normal. There were no log entries related to the
> crash. I realize that post-crash there would be no filesystem to write
> to, but there was still nothing leading up to it - no hardware or
> disk-related messages of any kind. The only sign of any problem I could
> find was two checksum errors listed on only one of the drives in the
> mirror when I did zpool status.
>
> I ran a scrub, which completed without any problem or error. About 30
> minutes after the scrub, the two checksum errors disappeared without my
> clearing them manually. I've run some drive tests, and both drives pass
> with flying colors. It's now been a few days, and the system has been
> performing flawlessly.
>
> So, I am completely flummoxed. I am trying to understand why the pool was
> suspended when it looks like
> something ZFS should have easily handled. I've had complete drive failures,
> and ZFS just kept on going.
> Is there any bug or incompatibility in 13.3-p5?  Is this something that
> will recur on each full moon?

Well, in fact it is a bit late for Lughnasadh.

But yes, these things can happen. Some disk gets into a bad mood on the
controller and is detached. On rare occasions another disk gets into a
bad mood on the controller, and, well...
And after a reboot everything is fine again. (Because, if the disk
doesn't like the environment for some reason, it goes offline as far as
the controller is concerned, and comes back only after a reset or a
power cycle.)
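
If you can still get a shell when it happens, something along these
lines (only a sketch; 'zroot' is from your mail and the rest is generic)
sometimes talks a sulking disk back into service without a power cycle:

  # what does the controller still know about?
  camcontrol devlist

  # ask CAM to re-probe the buses for a vanished disk
  camcontrol rescan all

  # heavier hammer, short of a power cycle
  camcontrol reset all

  # if the devices are visible again, try to resume the suspended pool
  zpool clear zroot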

It would be interesting to read the kernel messages, as they would
tell what the controller believed was going on. (That is one reason I
do not use root-on-zfs. My /var/log, however, is on zfs.)

In my case, the main reason for these problems was thermal drift
(plus ageing/oxidation) on the connectors, combined with a power supply
working at its limits (and, obviously, me smoking). But then, mine is
not a stock machine: I'm running a Xeon-EP put together from scrap
parts. Such failures *SHOULD* not happen on a state-of-the-art machine.
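
If it happens again, those kernel messages are what I would try to
catch. A rough sketch (loghost and the driver names are only examples):

  # check whatever made it to disk before the pool was suspended
  grep -iE 'cam|ahcich|mps|da[0-9]' /var/log/messages

  # for next time: ship kernel messages to another machine as they
  # happen, so a suspended root pool cannot swallow them.
  # add to /etc/syslog.conf (loghost is a placeholder):
  #   kern.*    @loghost
  service syslogd restart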

How old is it, and how much temperature walk does it do?
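
For the temperature question, assuming smartmontools is installed from
ports/packages (the device name is just an example), a quick check
would be:

  # spot-check the drive's reported temperature
  smartctl -A /dev/da0 | grep -i temperature

  # or have smartd warn on temperature swings; in /etc/smartd.conf e.g.:
  #   /dev/da0 -a -W 4,45,55
  service smartd restart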

> So thanks in advance for any advice, shared experiences, or whatever you
> can offer.

cheerio,
PMc


