Date:      Wed, 27 Jan 2021 21:59:38 +0100
From:      Peter Eriksson <pen@lysator.liu.se>
To:        Steven Schlansker <stevenschlansker@gmail.com>
Cc:        FreeBSD FS <freebsd-fs@freebsd.org>
Subject:   Re: persistent integer divide fault panic in zfs_rmnode
Message-ID:  <7B373FB5-E907-4880-A24E-DE9F2A173A49@lysator.liu.se>
In-Reply-To: <CAHjY6CV6viiZ-EbVUnzt94zwSVXF=kCgYePeAKHrv1w_mJWPMA@mail.gmail.com>
References:  <CAHjY6CVPTfkgzZ2kwwkKxRemmRyn5DpVu4SY=4GCvmo62sircQ@mail.gmail.com> <CAHjY6CV6viiZ-EbVUnzt94zwSVXF=kCgYePeAKHrv1w_mJWPMA@mail.gmail.com>

Have you tried the OpenZFS port instead, in case the problem is already
solved there?

(It might be easiest to just boot a FreeBSD 13 kernel on top of your
existing 12.2 userland.)
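
If you go that route, nextboot(8) makes the experiment fairly safe.
Something like this should work (from memory, and the kernel directory
name is just whatever you install the 13 kernel as):

  # install the 13 kernel alongside the current one, e.g. as
  # /boot/kernel.13test, then arrange to boot it exactly once
  nextboot -k kernel.13test
  shutdown -r now

Since the nextboot configuration only applies to the next boot, a panic
at boot just means the following reboot comes back on your old kernel.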

- Peter


> On 27 Jan 2021, at 19:15, Steven Schlansker <stevenschlansker@gmail.com> wrote:
>
> Does anybody have any suggestions as to what I can try next regarding
> this panic?
>
> At this point the only path forward I see is to declare the zpool
> corrupt, move all the data off, destroy it, migrate the data back,
> and hope the recreated pool does not tickle this bug.
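>
> For anyone following along, the migration I have in mind is the usual
> send/receive dance, roughly (the scratch pool name is a placeholder):
>
>   zfs snapshot -r universe@migrate
>   zfs send -R universe@migrate | zfs receive -F scratch/universe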
>
> That would be a pretty disappointing end to a long, fatal-problem-free
> run with ZFS.
>
> Thanks,
> Steven
>
> On Fri, Jan 8, 2021 at 3:41 PM Steven Schlansker <stevenschlansker@gmail.com> wrote:
>
>> Hi freebsd-fs,
>>
>> I have an 8-way raidz2 system running FreeBSD 12.2-RELEASE-p1 GENERIC.
>> Approximately since upgrading to FreeBSD 12.2-RELEASE, I have been
>> receiving a nasty panic when trying to unlink any of a large number
>> of files.
>>
>> Fatal trap 18: integer divide fault while in kernel mode
>>
>> The pool reports as healthy:
>>
>>   pool: universe
>>  state: ONLINE
>> status: One or more devices are configured to use a non-native block size.
>>         Expect reduced performance.
>> action: Replace affected devices with devices that support the
>>         configured block size, or migrate data to a properly configured
>>         pool.
>>   scan: resilvered 416M in 0 days 00:08:35 with 0 errors on Thu Jan  7 02:16:03 2021
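>>
>> (The non-native block size warning concerns the vdev ashift; the
>> configured value can be checked with something like
>> "zdb -C universe | grep ashift", though the exact invocation is from
>> memory. I mention it only for completeness.)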
>>
>> When some files are unlinked, the system panics with a partial
>> backtrace of:
>>
>> #6 0xffffffff82a148ce at zfs_rmnode+0x5e
>> #7 0xffffffff82a35612 at zfs_freebsd_reclaim+0x42
>> #8 0xffffffff812482db at VOP_RECLAIM_APV+0x7b
>> #9 0xffffffff80c8e376 at vgonel+0x216
>> #10 0xffffffff80c8e9c5 at vrecycle+0x45
>>
>> I captured a dump, extracted a full backtrace using kgdb, and filed
>> it as https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=250784
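>>
>> For reference, the session below came from roughly the following
>> invocation (the vmcore index is whatever savecore(8) assigned; 0 is
>> a guess):
>>
>>   kgdb /boot/kernel/kernel /var/crash/vmcore.0
>>   (kgdb) bt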
>>
>> #8  0xffffffff82963725 in get_next_chunk (dn=0xfffff804325045c0,
>>     start=<optimized out>, minimum=0, l1blks=<optimized out>)
>>     at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu.c:721
>> warning: Source file is more recent than executable.
>> 721                 (roundup(*start, iblkrange) - (minimum / iblkrange * iblkrange)) /
>> (kgdb) list
>> 716              * L1 blocks in this range have data. If we can, we use this
>> 717              * worst case value as an estimate so we can avoid having to look
>> 718              * at the object's actual data.
>> 719              */
>> 720             uint64_t total_l1blks =
>> 721                 (roundup(*start, iblkrange) - (minimum / iblkrange * iblkrange)) /
>> 722                 iblkrange;
>> 723             if (total_l1blks <= maxblks) {
>> 724                     *l1blks = total_l1blks;
>> 725                     *start = minimum;
>> (kgdb) print iblkrange
>> $1 = 0
>> (kgdb) print minimum
>> $2 = 0
>>
>> It looks like it is attempting to compute 0 / 0, causing the panic.
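>>
>> For anyone who wants to poke at the arithmetic outside the kernel,
>> here is a minimal stand-alone C sketch of the failing expression.
>> The idea that iblkrange is derived from the dnode's data block size
>> (and so goes to zero when that size is zero) is my reading of the
>> surrounding dmu.c code, not something I have verified:
>>
>> #include <stdint.h>
>> #include <stdio.h>
>>
>> /* same rounding helper the kernel uses (sys/param.h) */
>> #define roundup(x, y)   ((((x) + ((y) - 1)) / (y)) * (y))
>>
>> int
>> main(void)
>> {
>>         uint64_t start = 0, minimum = 0;
>>         /* volatile so the compiler cannot fold the zero divide away */
>>         volatile uint64_t iblkrange = 0;
>>
>>         /*
>>          * The expression from dmu.c:720-722.  With iblkrange == 0 the
>>          * first division already faults: userland gets SIGFPE, while
>>          * the kernel takes trap 18, the integer divide fault from the
>>          * panic message.
>>          */
>>         uint64_t total_l1blks =
>>             (roundup(start, iblkrange) - (minimum / iblkrange * iblkrange)) /
>>             iblkrange;
>>
>>         printf("%ju\n", (uintmax_t)total_l1blks);
>>         return (0);
>> }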
>>
>> How can I restore my zpool to a working state?  Thank you for any
>> assistance.
>>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"