Date:      Tue, 31 Mar 2020 21:16:34 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 245186] zfs panic despite failmode=continue
Message-ID:  <bug-245186-227-jVsOPLtCoR@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-245186-227@https.bugs.freebsd.org/bugzilla/>
References:  <bug-245186-227@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=245186

--- Comment #2 from John F. Carr <jfc@mit.edu> ---
I understand it's a different path internally, but I asked for disk errors
not to crash the system, and that's what I expect to happen.

The code in spa_misc.c appears to allow 1,000 seconds.  I've seen sync take
a significant fraction of that time with working disks.  I/O on a failing
disk can be orders of magnitude slower than usual.  It might seem to take
forever to work through the queue, but the driver is still processing I/O
requests.
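For reference, the 1,000-second limit should be tunable at runtime; on FreeBSD's ZFS of this era it corresponds, if I remember the sysctl names correctly, to the deadman tunables (the exact names may differ across versions):

```shell
# Inspect the deadman timeout (default 1000000 ms = 1,000 s) and whether
# the deadman is enabled at all.  Names assumed, verify on your system.
sysctl vfs.zfs.deadman_synctime_ms
sysctl vfs.zfs.deadman_enabled
```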

Unfortunately, judging by the comments, the deadman timer is based on the
oldest pending I/O.  If the kernel used a per-disk timer that counted time
with a non-empty queue and no requests completing, it would be able to
distinguish a very slow disk from a hung driver.  Or it could maintain a
counter of failed I/O and mark the disk dead when the rate got too high.

I think the drive should be kicked out of the pool and its I/O queue flushed
in this situation.  When my drive first started failing, that's what
happened.  I'd run zpool status and find one of the drives removed.  I could
run geli attach and a zpool command to bring it back in until the next time
it got kicked out.  More recently the system started crashing instead.

--
You are receiving this mail because:
You are the assignee for the bug.
