Date:      Thu, 22 Jun 2017 09:13:41 +0000
From:      bugzilla-noreply@freebsd.org
To:        freebsd-bugs@FreeBSD.org
Subject:   [Bug 220203] [zfs] [panic] in dmu_objset_do_userquota_updates on mount
Message-ID:  <bug-220203-8@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=220203

            Bug ID: 220203
           Summary: [zfs] [panic] in dmu_objset_do_userquota_updates on
                    mount
           Product: Base System
           Version: 11.0-RELEASE
          Hardware: amd64
                OS: Any
            Status: New
          Severity: Affects Some People
          Priority: ---
         Component: kern
          Assignee: freebsd-bugs@FreeBSD.org
          Reporter: neovortex@gmail.com

I've got a system that crashed due to an assert 0 == zap_increment_int in
dmu_objset_do_userquota_updates; now, whenever that filesystem is mounted, the
same panic occurs.

panic: solaris assert: 0 == zap_increment_int(os, (-2ULL), user, delta, tx)
(0x0 == 0x7a), file:
/usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c, line: 1203
cpuid = 2
KDB: stack backtrace:
#0 0xffffffff80xxxxxx at kdb_backtrace+0x67
#1 0xffffffff80xxxxxx at panic+0x182
#2 0xffffffff80xxxxxx at assfail3+0x2c
#3 0xffffffff80xxxxxx at do_userquota_update+0xad
#4 0xffffffff80xxxxxx at dmu_objset_do_userquota_updates+0x111f
#5 0xffffffff80xxxxxx at dsl_pool_sync+0x18f
#6 0xffffffff80xxxxxx at spa_sync+0x7ce
#7 0xffffffff80xxxxxx at txg_sync_thread+0x389
#8 0xffffffff80xxxxxx at fork_exit+0x85
#9 0xffffffff80xxxxxx at fork_trampoline+0xe

Booting from a USB livecd and importing the pool also triggers the same crash,
although if the pool is imported with its filesystems unmounted, the crash does
not occur. Only one filesystem causes the panic when mounted.
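For reference, the import-without-mounting step above can be done with zpool's
-N flag and the dataset then mounted individually; "tank" and the dataset name
here are placeholders for the actual pool and the suspect filesystem:

```shell
#!/bin/sh
# Import the pool but do not mount any of its filesystems (-N),
# so the pool can be inspected without triggering the panic.
zpool import -N tank

# List the datasets, then mount them one at a time to isolate
# the single filesystem whose mount causes the crash.
zfs list -r tank
zfs mount tank/suspect-dataset
```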

The stack trace is the same as mentioned here:
https://lists.freebsd.org/pipermail/freebsd-stable/2012-July/068938.html

The system is a dual-socket machine; as suggested in that thread, I have tried
removing one of the CPUs, but it hasn't helped. The machine has ECC memory and
reports no memory errors.

If the affected filesystem is destroyed, the system will boot, but after a few
days the issue reappears with another filesystem. The pool has also been
destroyed and recreated, with the files migrated via zfs send/recv. The pool
scrubs fine without any errors.
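The recreate-and-migrate procedure above is roughly the following; "tank" and
"newpool" are placeholder pool names and the snapshot name is arbitrary:

```shell
#!/bin/sh
# Take a recursive snapshot of every dataset in the old pool.
zfs snapshot -r tank@migrate

# Send the whole pool as a replication stream (-R preserves the
# dataset hierarchy, properties and snapshots) into the new pool.
zfs send -R tank@migrate | zfs recv -F newpool/tank

# Scrub the new pool to verify all data checksums.
zpool scrub newpool
```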

Mainboard: X8DTi-F
CPU: Intel X5680
RAM: 96GB ECC

-- 
You are receiving this mail because:
You are the assignee for the bug.
