Date:      Sun, 22 Feb 2026 18:53:54 +0000
From:      bugzilla-noreply@freebsd.org
To:        bugs@FreeBSD.org
Subject:   [Bug 288345] poudriere run, hanging umounts, system fails to reboot due to hanging processes
Message-ID:  <bug-288345-227-d5pllW34FW@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-288345-227@https.bugs.freebsd.org/bugzilla/>


https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=288345

--- Comment #42 from Mark Johnston <markj@FreeBSD.org> ---
(In reply to Craig Leres from comment #41)
Now we're getting somewhere... we panicked in nullfs_unlink_lowervp() because a
nullfs vnode with usecount == 0 is not doomed:

#4  0xffffffff80b16b23 in panic (fmt=<unavailable>) at
../../../kern/kern_shutdown.c:891
#5  0xffffffff834a00a1 in nullfs_unlink_lowervp (mp=<unavailable>,
lowervp=<unavailable>) at /usr/src/sys/fs/nullfs/null_vfsops.c:464
#6  0xffffffff80c161a9 in vfs_notify_upper (vp=vp@entry=0xfffff8132cfc2540,
event=event@entry=VFS_NOTIFY_UPPER_UNLINK) at ../../../kern/vfs_subr.c:4282
#7  0xffffffff80c177df in vop_remove_pre (ap=ap@entry=0xfffffe0431a83d68) at
../../../kern/vfs_subr.c:6174
#8  0xffffffff8110d643 in VOP_REMOVE_APV (vop=0xffffffff8251de50,
vop@entry=<error reading variable: value is not available>,
a=0xfffffe0431a83d68, a@entry=<error reading variable: value is not available>)
at vnode_if.c:1528
#9  0xffffffff834a01bd in null_bypass (ap=0xfffffe0431a83d68) at
/usr/src/sys/fs/nullfs/null_vnops.c:293
#10 0xffffffff834a081f in null_remove (ap=<optimized out>) at
/usr/src/sys/fs/nullfs/null_vnops.c:642
#11 0xffffffff8110d65a in VOP_REMOVE_APV (vop=0xffffffff834a3528
<sysctl___debug_nullfs_bug_bypass+80>, a=a@entry=0xfffffe0431a83d68) at
vnode_if.c:1534
#12 0xffffffff80c24110 in VOP_REMOVE (dvp=<unavailable>, vp=0xfffff82ab594c1c0,
cnp=0xfffffe0431a83d20) at ./vnode_if.h:789
#13 kern_funlinkat (td=0xfffff804d4347000, dfd=dfd@entry=-100,
path=0x2cac0741ee00 <error: Cannot access memory at address 0x2cac0741ee00>,
fd=fd@entry=-200, pathseg=pathseg@entry=UIO_USERSPACE, flag=flag@entry=0,
oldinum=0) at ../../../kern/vfs_syscalls.c:1999
#14 0xffffffff80c23c98 in sys_unlink (td=<unavailable>, uap=<optimized out>) at
../../../kern/vfs_syscalls.c:1880
#15 0xffffffff81046cfa in syscallenter (td=0xfffff804d4347000) at
../../../amd64/amd64/../../kern/subr_syscall.c:193
#16 amd64_syscall (td=0xfffff804d4347000, traced=0) at
../../../amd64/amd64/trap.c:1241
#17 <signal handler called>
(kgdb) frame 5
#5  0xffffffff834a00a1 in nullfs_unlink_lowervp (mp=<unavailable>,
lowervp=<unavailable>) at /usr/src/sys/fs/nullfs/null_vfsops.c:464
464             if (vp->v_usecount == 0) {
(kgdb) p vp
$1 = (struct vnode *) 0xfffff81fba553540
(kgdb) p &vp->v_lock
$2 = (struct lock *) 0xfffff81fba5535b0
(kgdb) p vp->v_vnlock
$3 = (struct lock *) 0xfffff8132cfc25b0
(kgdb) p/x vp->v_irflag
$4 = 0x0

Indeed, it's not doomed and the vnode lock pointer hasn't been reset yet.

Meanwhile, another thread is trying to reclaim the vnode but is blocked on the
vnode lock:

#3  <signal handler called>                                                     
#4  lock_delay (la=la@entry=0xfffffe0685061970) at
../../../kern/subr_lock.c:124                                                   
#5  0xffffffff80ae26fe in lockmgr_xlock_adaptive (lda=<optimized out>,
lk=<optimized out>, xp=<optimized out>) at ../../../kern/kern_lock.c:761
#6  lockmgr_xlock_hard (lk=0xfffff8132cfc25b0, flags=540672, ilk=0x0,
file=<optimized out>, line=778, lwa=0x0) at ../../../kern/kern_lock.c:852       
#7  0xffffffff834a0d38 in VOP_LOCK1 (vp=0xfffff8132cfc2540, flags=524288,
line=778, file=<optimized out>) at ./vnode_if.h:1118
#8  null_lock (ap=0xfffffe0685061a78) at
/usr/src/sys/fs/nullfs/null_vnops.c:778                     
#9  0xffffffff80c2c9b3 in VOP_LOCK1 (vp=0xfffff81fba553540, flags=524544,
file=0xffffffff811fd760 "../../../kern/vfs_subr.c", line=3556) at
./vnode_if.h:1118  
#10 _vn_lock (vp=vp@entry=0xfffff81fba553540, flags=flags@entry=524544,
file=0xffffffff834a2410 "/usr/src/sys/fs/nullfs/null_vnops.c", line=61,
line@entry=3556) at ../../../kern/vfs_vnops.c:1857
#11 0xffffffff80c143b9 in vput_final (vp=0xfffff81fba553540,
func=func@entry=VRELE) at ../../../kern/vfs_subr.c:3556                         
#12 0xffffffff80c13b93 in vrele (vp=0xfffffe0685061970,
vp@entry=0xfffff81fba553540) at ../../../kern/vfs_subr.c:3631                   
#13 0xffffffff834a0a07 in null_rename (ap=0xfffffe0685061be0) at
/usr/src/sys/fs/nullfs/null_vnops.c:711            
#14 0xffffffff8110d93a in VOP_RENAME_APV (vop=0xffffffff834a3528
<sysctl___debug_nullfs_bug_bypass+80>, a=a@entry=0xfffffe0685061be0) at
vnode_if.c:1672       
#15 0xffffffff80c280ef in VOP_RENAME (fdvp=0x3d, fvp=<optimized out>,
tdvp=<optimized out>, tvp=0xffffffff834a2410, fcnp=<optimized out>,
tcnp=<optimized out>) at ./vnode_if.h:850
#16 kern_renameat (td=0xfffff801b2d7d740, oldfd=-100, old=0x71139f752c0 <error:
Cannot access memory at address 0x71139f752c0>, newfd=-100, new=0x71139e1da00
<error: Cannot access memory at address 0x71139e1da00>, pathseg=UIO_USERSPACE)
at ../../../kern/vfs_syscalls.c:3776
#17 0xffffffff81046cfa in syscallenter (td=0xfffff801b2d7d740) at
../../../amd64/amd64/../../kern/subr_syscall.c:193
#18 amd64_syscall (td=0xfffff801b2d7d740, traced=0) at
../../../amd64/amd64/trap.c:1241

That vnode in vput_final() is the very same vnode that triggered the panic.
One thread is renaming a file via the nullfs mount, while the first thread is
unlinking the backing file directly through the lower filesystem.

The second thread has dropped the nullfs vnode's usecount to 0 but is still
blocked on the vnode lock, so it hasn't reclaimed the vnode yet.  The first
thread assumes that usecount == 0 implies the vnode has already been reclaimed,
but that isn't true.  Then, because it believes the nullfs and lower vnode
locks have already been split, nullfs_unlink_lowervp() unlocks the nullfs
vnode, which is definitely wrong.

-- 
You are receiving this mail because:
You are the assignee for the bug.

