Date: Wed, 26 Oct 2022 00:18:08 +0000
From: bugzilla-noreply@freebsd.org
To: fs@FreeBSD.org
Subject: [Bug 266014] panic: corrupted zfs dataset (zfs issue)
Message-ID: <bug-266014-3630-uwO9x1Uj2s@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-266014-3630@https.bugs.freebsd.org/bugzilla/>
References: <bug-266014-3630@https.bugs.freebsd.org/bugzilla/>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=266014

--- Comment #7 from Duncan <dpy@pobox.com> ---
(In reply to Graham Perrin from comment #6)

The replication, I believe, works fine as long as one doesn't then try to
mount the dataset. I will check this properly and perhaps try setting up
another machine to run the panics on; it is a bit of a pain to keep knocking
over my main server. I should get back to this within the week.

I did get a different type of crash dump, I believe from the mount, and it is
different, i.e.:

Unread portion of the kernel message buffer:
panic: VERIFY3(sa.sa_magic == SA_MAGIC) failed (1122422741 == 3100762)
cpuid = 5
time = 1666406111
KDB: stack backtrace:
#0 0xffffffff80c694a5 at kdb_backtrace+0x65
#1 0xffffffff80c1bb5f at vpanic+0x17f
#2 0xffffffff84ff4f4a at spl_panic+0x3a
#3 0xffffffff851948f8 at zpl_get_file_info+0x1d8
#4 0xffffffff85060388 at dmu_objset_userquota_get_ids+0x298
#5 0xffffffff85073f24 at dnode_setdirty+0x34
#6 0xffffffff8504bd49 at dbuf_dirty+0x9d9
#7 0xffffffff85061fc0 at dmu_objset_space_upgrade+0x40
#8 0xffffffff85060a5f at dmu_objset_id_quota_upgrade_cb+0x14f
#9 0xffffffff85061eaf at dmu_objset_upgrade_task_cb+0x7f
#10 0xffffffff84ff6a0f at taskq_run+0x1f
#11 0xffffffff80c7da81 at taskqueue_run_locked+0x181
#12 0xffffffff80c7ed92 at taskqueue_thread_loop+0xc2
#13 0xffffffff80bd8a9e at fork_exit+0x7e
#14 0xffffffff810885ee at fork_trampoline+0xe
Uptime: 13m13s
(ada0:ahcich1:0:0:0): spin-down
(ada1:ahcich2:0:0:0): spin-down
(ada2:ahcich3:0:0:0): spin-down
(ada3:ahcich4:0:0:0): spin-down
Dumping 13911 out of 130858 MB:..1%..11%..21%..31%..41%..51%..61%..71%..81%..91%

__curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
55              __asm("movq %%gs:%P1,%0" : "=r" (td) : "n" (offsetof(struct pcpu,
(kgdb) #0  __curthread () at /usr/src/sys/amd64/include/pcpu_aux.h:55
#1  doadump (textdump=<optimized out>)
    at /usr/src/sys/kern/kern_shutdown.c:399
#2  0xffffffff80c1b75c in kern_reboot (howto=260)
    at /usr/src/sys/kern/kern_shutdown.c:487
#3  0xffffffff80c1bbce in vpanic (
    fmt=0xffffffff85250fe8 "VERIFY3(sa.sa_magic == SA_MAGIC) failed (%llu == %llu)\n",
    ap=<optimized out>) at /usr/src/sys/kern/kern_shutdown.c:920
#4  0xffffffff84ff4f4a in spl_panic (file=<optimized out>,
    func=<optimized out>, line=<unavailable>, fmt=<unavailable>)
    at /usr/src/sys/contrib/openzfs/module/os/freebsd/spl/spl_misc.c:107
#5  0xffffffff851948f8 in zpl_get_file_info (bonustype=<optimized out>,
    data=0xfffffe035db250c0, zoi=0xfffffe027e72bc50)
    at /usr/src/sys/contrib/openzfs/module/zfs/zfs_quota.c:89
#6  0xffffffff85060388 in dmu_objset_userquota_get_ids (
    dn=0xfffff8160ebcf660, before=before@entry=1, tx=<optimized out>,
    tx@entry=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:2215
#7  0xffffffff85073f24 in dnode_setdirty (dn=0xfffff8160ebcf660,
    tx=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dnode.c:1691
#8  0xffffffff8504bd49 in dbuf_dirty (db=0xfffff8160ebd3b90, db@entry=0x0,
    tx=tx@entry=0xfffff8160ebd3b90)
    at /usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:2367
#9  0xffffffff8504c074 in dmu_buf_will_dirty_impl (db_fake=<optimized out>,
    flags=<optimized out>, flags@entry=9, tx=0xfffff8160ebd3b90,
    tx@entry=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:2517
#10 0xffffffff8504aea2 in dmu_buf_will_dirty (db_fake=<unavailable>,
    tx=<unavailable>, tx@entry=0xfffff80ec760a100)
    at /usr/src/sys/contrib/openzfs/module/zfs/dbuf.c:2523
#11 0xffffffff85061fc0 in dmu_objset_space_upgrade (
    os=os@entry=0xfffff80408629800)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:2328
#12 0xffffffff85060a5f in dmu_objset_id_quota_upgrade_cb (
    os=0xfffff80408629800)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:2385
#13 0xffffffff85061eaf in dmu_objset_upgrade_task_cb (data=0xfffff80408629800)
    at /usr/src/sys/contrib/openzfs/module/zfs/dmu_objset.c:1447
#14 0xffffffff84ff6a0f in taskq_run (arg=0xfffff801e5ab5300,
    pending=<unavailable>)
    at /usr/src/sys/contrib/openzfs/module/os/freebsd/spl/spl_taskq.c:315
#15 0xffffffff80c7da81 in taskqueue_run_locked (
    queue=queue@entry=0xfffff80116004300)
    at /usr/src/sys/kern/subr_taskqueue.c:477
#16 0xffffffff80c7ed92 in taskqueue_thread_loop (arg=<optimized out>,
    arg@entry=0xfffff801dfb570d0) at /usr/src/sys/kern/subr_taskqueue.c:794
#17 0xffffffff80bd8a9e in fork_exit (
    callout=0xffffffff80c7ecd0 <taskqueue_thread_loop>,
    arg=0xfffff801dfb570d0, frame=0xfffffe027e72bf40)
    at /usr/src/sys/kern/kern_fork.c:1093
#18 <signal handler called>
#19 mi_startup () at /usr/src/sys/kern/init_main.c:322
#20 0xffffffff80f791d9 in swapper () at /usr/src/sys/vm/vm_swapout.c:755
#21 0xffffffff80385022 in btext () at /usr/src/sys/amd64/amd64/locore.S:80
----------------------

I would say this is a similar but different problem. I had months of
replicated copies on two different pools. Because I copied (send/receive)
them encrypted and unmounted on the destination, nothing showed up. As soon
as I tried to mount them, panic.

For now I have renamed the original dataset (currently unmounted), but I
deleted the backups (they wouldn't mount, but I'm sure I can re-create them).
I will do more experimentation when I have a couple of hours spare (within
the week).
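For reference, the replication workflow I used was roughly the following.
Pool and dataset names here are placeholders and the exact flags are from
memory, so take this as a sketch rather than the verbatim commands:

    # raw (encrypted) send to the backup pool, received without mounting
    zfs snapshot tank/data@backup
    zfs send -w tank/data@backup | zfs receive -u backup/data

    # everything looks fine until the received copy is actually mounted
    zfs load-key backup/data
    zfs mount backup/data        # <- this is where the panic hits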
--
You are receiving this mail because:
You are the assignee for the bug.