Date:      Thu, 08 Jan 2004 08:12:09 +0900
From:      Jun Kuriyama <kuriyama@imgsrc.co.jp>
To:        Robert Watson <rwatson@FreeBSD.org>
Cc:        Current <freebsd-current@FreeBSD.org>
Subject:   Re: -current lockup (how to diagnose?)
Message-ID:  <7mn08ztkty.wl@black.imgsrc.co.jp>
In-Reply-To: <Pine.NEB.3.96L.1031202013204.57038S-100000@fledge.watson.org>
References:  <7mad6bbul4.wl@black.imgsrc.co.jp> <Pine.NEB.3.96L.1031202013204.57038S-100000@fledge.watson.org>


Hi Robert,

At Tue, 2 Dec 2003 06:49:38 +0000 (UTC),
Robert Watson wrote:
> Could you try compiling in DEBUG_LOCKS into your kernel and doing "show
> lockedvnods" with that?  Unfortunately, someone removed the pid from the
> output of that command, but didn't add the thread pointer to the DDB ps
> output, so you'll probably need to modify the lockmgr_printinfo() function
> in vfs_subr.c to print out lkp->lk_lockholder->td_proc->p_pid as well for
> exclusive locks.  It looks like maybe something isn't releasing a vnode
> lock before returning to userspace.  I have some patches to assert that no
> lockmgr locks are held on the return to userspace, but I'll have to dig
> them up tomorrow and send them to you.  Basically, it adds a per-thread
> lockmgr lock count in a thread-local variable, incrementing for each lock,
> and decrementing for each release, and then KASSERT()'s in userret that
> the variable is 0.

Can I use these patches?  I still get this lockup almost every day
(during the nightly dump with snapshot).  I'd like to try any patches,
even if they are half-baked.


-- 
Jun Kuriyama <kuriyama@imgsrc.co.jp> // IMG SRC, Inc.
             <kuriyama@FreeBSD.org> // FreeBSD Project


