Date:      Tue, 28 May 2019 13:55:29 +0300
From:      Konstantin Belousov <kib@freebsd.org>
To:        Alexander Lochmann <alexander.lochmann@tu-dortmund.de>
Cc:        freebsd-fs@freebsd.org, Horst Schirmeier <horst.schirmeier@tu-dortmund.de>
Subject:   Re: RFC: LockDoc - Detecting Locking Bugs in the FreeBSD Kernel
Message-ID:  <20190528105529.GH2748@kib.kiev.ua>
In-Reply-To: <67482bf7-be1a-8f8d-ca80-2087bfc8cf99@tu-dortmund.de>
References:  <67482bf7-be1a-8f8d-ca80-2087bfc8cf99@tu-dortmund.de>

On Tue, May 28, 2019 at 11:34:36AM +0200, Alexander Lochmann wrote:
> Hi folks,
> 
> during the past months we've been developing LockDoc, a trace-based
> approach to lock analysis in the FreeBSD kernel.
> LockDoc uses execution traces of an instrumented FreeBSD kernel to
> automatically deduce locking rules for all members of arbitrary kernel
> data structures.
> The traces are gathered by running a manually selected fs-specific subset
> of the Linux Test Project in a virtual machine.
> We use the Linux Test Project because we weren't able to find such a
> benchmark suite for FreeBSD and because we initially applied our approach to Linux.
> These locking rules can be used to generate a comprehensive locking
> documentation and to reveal potential bugs.
> 
> LockDoc generates rules for each tuple of (data structure, member, {r,w}).
> It completely ignores any atomic_*() function.
> Accesses during initialization and destruction of objects are also ignored.
> The output of LockDoc looks like this:
> cdev_priv member: cdp_c.si_usecount [r] (5 lock combinations)
>   hypotheses: 24
>      34.8% (24 out of 69 mem accesses): EMBOTHER(vnode.v_lock[r]) -> EMBOTHER(vnode.v_interlock[w]) -> devmtx:89(sleep mutex[w])
>      34.8% (24 out of 69 mem accesses): EMBOTHER(vnode.v_lock[r]) -> devmtx:89(sleep mutex[w])
>      34.8% (24 out of 69 mem accesses): EMBOTHER(vnode.v_lock[r]) -> EMBOTHER(vnode.v_interlock[w])
>      34.8% (24 out of 69 mem accesses): EMBOTHER(vnode.v_lock[r])
>      63.8% (44 out of 69 mem accesses): EMBOTHER(vnode.v_lock[w]) -> EMBOTHER(vnode.v_interlock[w]) -> devmtx:89(sleep mutex[w])
>      63.8% (44 out of 69 mem accesses): EMBOTHER(vnode.v_lock[w]) -> EMBOTHER(vnode.v_interlock[w])
>      65.2% (45 out of 69 mem accesses): EMBOTHER(vnode.v_lock[w]) -> devmtx:89(sleep mutex[w])
>      65.2% (45 out of 69 mem accesses): EMBOTHER(vnode.v_lock[w])
>      98.6% (68 out of 69 mem accesses): EMBOTHER(vnode.v_interlock[w]) -> devmtx:89(sleep mutex[w])
>      98.6% (68 out of 69 mem accesses): EMBOTHER(vnode.v_interlock[w])
> !     100% (69 out of 69 mem accesses): devmtx:89(sleep mutex[w])
>       100% (69 out of 69 mem accesses): (no locks held)
> 
> In this example LockDoc concludes that the lock
> "devmtx:89(sleep mutex[w])" is necessary for reading
> cdev_priv.cdp_c.si_usecount.
> In this case, devmtx is a global lock of type sleep mutex.
> To be more precise, the write lock (--> "[w]") of devmtx is needed.
> Btw, EMBSAME stands for the lock embedded in the data type being accessed,
> while EMBOTHER stands for a lock embedded in another object, i.e. one
> that is not the object currently being accessed.
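> 
> For illustration, a rough sketch of the structures and the lock the rule
> refers to (names loosely taken from sys/vnode.h, sys/conf.h and
> fs/devfs/devfs_int.h; types, layout and the helper below are only
> illustrative, so treat this as approximate):
> 
>     /* Global "devmtx" sleep mutex, taken via dev_lock()/dev_unlock(). */
>     extern struct mtx devmtx;
> 
>     struct vnode {
>         struct mtx  v_interlock;   /* EMBSAME/EMBOTHER(vnode.v_interlock) */
>         struct lock v_lock;        /* EMBOTHER(vnode.v_lock) */
>         /* ... */
>     };
> 
>     struct cdev_priv {
>         struct cdev cdp_c;         /* embeds si_usecount */
>         /* ... */
>     };
> 
>     /* The "!" hypothesis: hold devmtx (as writer) to read si_usecount,
>      * much like count_dev() in kern/kern_conf.c does. */
>     static u_long
>     read_si_usecount(struct cdev *dev)
>     {
>         u_long count;
> 
>         dev_lock();
>         count = dev->si_usecount;
>         dev_unlock();
>         return (count);
>     }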
> 
> Based on this methodology, we can determine code locations that do not
> adhere to the deduced locking rules.
> The reports on rule-violating code include the stack trace and the
> actual locks held.
> 
> We've now created a series of bug reports for the following data types:
> struct devfs_dirent, devfs_mount, mount, namecache, pipepair, and vnode.
> We present the counterexamples for each tuple of (data structure,
> member, {r,w}).
> Depending on the complexity of the callgraph, the counterexamples are
> either embedded in the callgraph or the callgraph is shown below them.
> In the latter case, zooming can be enabled via a button in the heading.
> 
> We kindly ask you to have a look at our findings and send us some
> comments back:
> https://ess.cs.tu-dortmund.de/lockdoc-bugs/freebsd/
> 
> If you are interested in the derived locking rules, have a look at:
> https://ess.cs.tu-dortmund.de/lockdoc-bugs/freebsd/all-txns-members-locks-hypo-nostack-nosubclasses.txt
> 
> It is possible that LockDoc considers a certain access a bug
> due to an incomplete blacklist of init and destroy functions.
> Hence, we appreciate any feedback that refines our blacklist.
It is worse than that.  I did a quick look, and all five of the issues I
looked at were reported because the tool does not understand vnode
visibility.  It is safe to access a vnode without holding any locks as
long as the vnode cannot be found by other threads.

For instance, while mount (VFS_MOUNT) initializes the filesystem, VFS
ensures that no thread can reach into it, e.g. by lookups.  One way of
ensuring this is by locking the covered vnode.  So a report that the
devfs mount operates on some insufficiently locked vnode is a false
positive.
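
As a simplified sketch (from memory; the real flow in kern/vfs_mount.c
differs in details):

    /* The covered vnode vp is held locked exclusively across VFS_MOUNT(),
     * so lookups cannot descend into the half-initialized filesystem. */
    vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
    /* ... */
    error = VFS_MOUNT(mp);          /* e.g. devfs_mount() runs here */
    /* ... */
    vp->v_mountedhere = mp;         /* publish the new mount */
    VOP_UNLOCK(vp, 0);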

The same holds for a freshly created vnode on the mounted filesystem,
while the vnode is not yet inserted into the vfs hash or the mount
point's vnode list.  In fact, we typically lock the new vnode exclusively
right before adding it to the hash and calling insmntque(), exactly to
prevent other threads from operating on a partially initialized vnode.
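
For reference, the typical pattern (condensed from e.g. ffs_vget();
error handling and fs-specific initialization elided, so only a sketch):

    error = getnewvnode("ufs", mp, &ffs_vnodeops, &vp);
    if (error != 0)
        return (error);
    /* vp is private to this thread; initialize v_data etc. lock-free. */
    /* Lock exclusively _before_ the vnode becomes findable ... */
    lockmgr(vp->v_vnlock, LK_EXCLUSIVE, NULL);
    error = insmntque(vp, mp);      /* ... via the mount's vnode list */
    if (error != 0)
        return (error);
    error = vfs_hash_insert(vp, ino, flags, curthread, vpp, NULL, NULL);
    /* From here on other threads may find vp, but the exclusive lock
     * keeps them from using it before initialization is complete. */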

There could be valid reports, but digging through the present amount of
reports with false positives is mostly equivalent to reviewing all
mount/unmount and vnode creation/reclamation paths.  That is too much
time to spend usefully, IMO.


