Date: Fri, 11 Apr 2008 12:45:00 +0100
From: David Malone <dwmalone@maths.tcd.ie>
To: Jeff Roberson <jeff@FreeBSD.org>
Cc: cvs-src@FreeBSD.org, src-committers@FreeBSD.org, cvs-all@FreeBSD.org
Subject: Re: cvs commit: src/sys/ufs/ufs dirhash.h ufs_dirhash.c
Message-ID: <20080411114500.GA79162@walton.maths.tcd.ie>
In-Reply-To: <200804110948.m3B9mCjB091438@repoman.freebsd.org>
References: <200804110948.m3B9mCjB091438@repoman.freebsd.org>
On Fri, Apr 11, 2008 at 09:48:12AM +0000, Jeff Roberson wrote:
>  - Use a lockmgr lock rather than a mtx to protect dirhash.  This lock
>    may be held for the duration of the various dirhash operations which
>    avoids many complex unlock/lock/revalidate sequences.
>  - Permit shared locks on lookup.  To protect the ip->i_dirhash pointer we
>    use the vnode interlock in the shared case.  Callers holding the
>    exclusive vnode lock can run without fear of concurrent modification to
>    i_dirhash.
>  - Hold an exclusive dirhash lock when creating the dirhash structure for
>    the first time or when re-creating a dirhash structure which has been
>    recycled.

Hi Jeff,

I've been reading through this patch to understand it. I've a few
questions:

1) You now initialise a chunk of the dirhash struct earlier, though it
was previously left until we knew the memory allocation was successful.
Is there a reason for that?

2) Is ufsdirhash_create() trying harder than before to allocate a
dirhash? It looks like it might be, but maybe that's just a feature of
the new locking.

3) You've added a dh_memreq member to the structure to remember a value
that's cheap to calculate from the other structure members and is only
required at allocation and free time. I can understand why you'd want
to avoid repeating the code, but wouldn't it be better to make it a
macro rather than storing it for the lifetime of the object?

4) You replaced an unlocked read with a locked read in
ufsdirhash_lookup. I think this is because you are now protecting
dh_score with the list mutex and are taking advantage of the fact that
the score is updated just below? Won't this mean that, for a sequence
of operations on one directory, we'll now need to take the list mutex
for each lookup as well as the lockmanager lock on the dirhash? At face
value, this would seem to be worse than the old situation of only
taking the dh mutex for each operation.
In the case of multiple directories, I guess we'll have two mutex locks
replaced by one mutex and one lockmanager (shared) lock? This is
probably the usual case, which isn't too bad.

5) It looks like the recycle policy (and the locking thereof) is now
slightly different, and I think it might do something funny in some
cases. If we get the list lock in ufsdirhash_recycle, but someone holds
an exclusive lock on some of the dirhashes, then we will walk over the
locked ones until we find an unlocked one and free it. We then jump
back to the start of the list and have to walk over the locked ones
again. I wonder if, in a low-memory situation, the locked nodes could
actually be waiting for the list lock in ufsdirhash_list_locked() and
there could be some sort of cascade where you end up freeing dirhashes
that should actually be kept. Actually, this probably isn't possible
because the list lock is dropped while we're doing the free?

	David.