Date: Mon, 16 Jul 2018 21:10:00 +0300
From: Konstantin Belousov <kostikbel@gmail.com>
To: Jack Humphries <jack@chillysky.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: tmpfs questions
Message-ID: <20180716181000.GA1876@kib.kiev.ua>
In-Reply-To: <CANxg70FowxKisR2sTFu4bx7n9g2VHbvGEep=TjseYHACqe+PXA@mail.gmail.com>
References: <CANxg70HnN0qtb7sp7w30_-Z7pSw=8y7cV9ChWkH18XJtDTPCXA@mail.gmail.com>
 <20180714074829.GR5562@kib.kiev.ua>
 <CANxg70FowxKisR2sTFu4bx7n9g2VHbvGEep=TjseYHACqe+PXA@mail.gmail.com>
On Mon, Jul 16, 2018 at 10:44:17AM -0700, Jack Humphries wrote:
> Thanks a lot, Konstantin. I also noticed that the tmpfs_mount
> structure is not necessarily protected by a lock when an access is
> made to the tm_nodes_inuse member in the if statement at the beginning
> of the tmpfs_alloc_node function in tmpfs_subr.c (though this member
> is protected by the allnode_lock when it is modified). Thus it is
> conceivably possible to create more nodes than the maximum number
> allowed if multiple threads try to do so at the same time. Do you
> know how this situation is handled? What am I looking at
> incorrectly? Thanks again!

Well, the problem is not in the lock-less check at the start of the
tmpfs_alloc_node() function.  Even if we checked under the lock, we
would still drop it immediately after the check, and that is where the
problem lies.

Imagine that several threads allocate nodes for the same mount point
while the current count is at max - 1.  All of them would pass the
check, but all of them would also insert a new node into the mount.
To handle this case correctly, the check must be done under
TMPFS_LOCK() at the end of the function, before the counter is
incremented.  I am not sure that rolling back the fully allocated node
would be worth avoiding a possible minor overflow.

>
> Jack
>
> On Sat, Jul 14, 2018 at 12:48 AM, Konstantin Belousov
> <kostikbel@gmail.com> wrote:
> > On Fri, Jul 13, 2018 at 05:42:59PM -0700, Jack Humphries wrote:
> >> Hi everyone,
> >>
> >> I'm trying to study the FreeBSD tmpfs implementation as a personal
> >> project, and I had a couple of questions. I've been looking through
> >> the code for a week and modifying various parts. I appreciate any
> >> help!
> >>
> >> 1. It seems that vnodes are locked before being passed to the
> >> various VOP functions in tmpfs (because there is a call to
> >> MPASS(VOP_ISLOCKED(vp)) near the beginning of each function).
> >> Therefore, is the implicit assumption that a thread that holds the
> >> vnode lock has exclusive access to the corresponding tmpfs_node
> >> struct? In other words, is this why the tmpfs node variables are
> >> accessed even though the tmpfs node is not locked? Note: I see
> >> tn_interlock, but based on a comment above it in the source, it
> >> only protects tn_vpstate and tn_status.
> > tmpfs nodes are protected by the vnode locks.  Note that a vnode
> > lock can be held exclusive or shared.  Typically, only the exclusive
> > lock allows the code to modify the node; owning the shared vnode
> > lock only means that the node can be read safely.
> >
> > The interlock exists because the node state must sometimes be
> > examined without owning the vnode lock, in a non-sleepable context.
> >
> >>
> >> 2. What is the duplicate node list for (tn_dupindex)? If I had to
> >> guess, it seems to have something to do with the case where one
> >> thread calls readdir on a directory while another is modifying the
> >> directory, but I'm not sure. Can someone explain this further?
> > As an optimization, the children of a directory node are organized
> > into a red/black tree, which cannot hold two entries with the same
> > key value.  If a second entry with an existing key is created in
> > the directory, it is added to the dup list instead.
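
Two small userland sketches may help readers studying the code along with
this thread.  First, the pattern described above for the node limit: the
check against the maximum is done under the same lock that protects the
counter, immediately before the increment.  This is not the tmpfs_subr.c
code; every name below is invented for the example, with the tmpfs names
(tm_nodes_inuse, TMPFS_LOCK()) only echoed in comments.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

/*
 * Toy analogue of struct tmpfs_mount: a node counter, its limit, and
 * the lock that protects the counter (TMPFS_LOCK() in the real code).
 */
struct mount_state {
	pthread_mutex_t	lock;
	unsigned	nodes_inuse;	/* cf. tm_nodes_inuse */
	unsigned	nodes_max;
};

/*
 * The limit check is done under the lock, immediately before the
 * counter is incremented, so concurrent callers cannot all pass a
 * stale check and push the count past nodes_max.
 */
static int
alloc_node(struct mount_state *mp)
{
	pthread_mutex_lock(&mp->lock);
	if (mp->nodes_inuse >= mp->nodes_max) {
		pthread_mutex_unlock(&mp->lock);
		return (ENOSPC);	/* caller rolls back its allocation */
	}
	mp->nodes_inuse++;
	pthread_mutex_unlock(&mp->lock);
	return (0);
}

static void *
worker(void *arg)
{
	struct mount_state *mp = arg;

	for (int i = 0; i < 1000; i++)
		(void)alloc_node(mp);
	return (NULL);
}

int
main(void)
{
	struct mount_state mp = { .nodes_inuse = 0, .nodes_max = 100 };
	pthread_t tid[4];

	pthread_mutex_init(&mp.lock, NULL);
	for (int i = 0; i < 4; i++)
		pthread_create(&tid[i], NULL, worker, &mp);
	for (int i = 0; i < 4; i++)
		pthread_join(tid[i], NULL);

	/* Never prints more than nodes_max, however the threads race. */
	printf("nodes in use: %u (max %u)\n", mp.nodes_inuse, mp.nodes_max);
	pthread_mutex_destroy(&mp.lock);
	return (0);
}

Compile with -lpthread.  The trade-off mentioned above shows up in the
ENOSPC path: if the real node is allocated before the lock is taken, a
failed check means the caller must free (roll back) what it just
allocated.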
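
Second, the duplicate-list arrangement from the last answer can be
sketched with the BSD <sys/tree.h> and <sys/queue.h> macros.  The
structures and names here (struct dent, dir_insert, and so on) are
hypothetical and only mirror the idea: a red/black tree keyed by a hash
cannot hold two entries with the same key, so a colliding entry is
linked onto the existing entry's duplicate list.

#include <sys/tree.h>
#include <sys/queue.h>
#include <stdio.h>

/*
 * Toy analogue of a tmpfs directory: entries are keyed in a red/black
 * tree, and an entry whose key collides with an existing one is
 * linked onto that entry's duplicate list instead.
 */
struct dent {
	RB_ENTRY(dent)		tree_entry;
	LIST_HEAD(, dent)	dups;		/* entries sharing this key */
	LIST_ENTRY(dent)	dup_entry;
	unsigned		key;
	const char		*name;
};

static int
dent_cmp(struct dent *a, struct dent *b)
{
	return (a->key < b->key ? -1 : a->key > b->key);
}

RB_HEAD(dent_tree, dent);
RB_GENERATE_STATIC(dent_tree, dent, tree_entry, dent_cmp);

static void
dir_insert(struct dent_tree *dir, struct dent *de)
{
	struct dent *clash;

	LIST_INIT(&de->dups);
	clash = RB_INSERT(dent_tree, dir, de);
	if (clash != NULL) {
		/* The key is already in the tree: use the dup list. */
		LIST_INSERT_HEAD(&clash->dups, de, dup_entry);
	}
}

int
main(void)
{
	struct dent_tree dir = RB_INITIALIZER(&dir);
	struct dent a = { .key = 7, .name = "a" };
	struct dent b = { .key = 7, .name = "b" };	/* key collision */
	struct dent c = { .key = 9, .name = "c" };
	struct dent *de, *dup;

	dir_insert(&dir, &a);
	dir_insert(&dir, &b);
	dir_insert(&dir, &c);

	RB_FOREACH(de, dent_tree, &dir) {
		printf("tree: %s (key %u)\n", de->name, de->key);
		LIST_FOREACH(dup, &de->dups, dup_entry)
			printf("  dup: %s\n", dup->name);
	}
	return (0);
}

RB_INSERT() returns the already-present element when the key collides,
which is what makes the fall-back to the duplicate list a one-line
decision; how readdir cookies are assigned to the duplicates
(tn_dupindex) is not modeled here.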