From owner-freebsd-fs Fri Feb 27 10:02:04 1998
Return-Path:
Received: (from majordom@localhost) by hub.freebsd.org (8.8.8/8.8.8)
	id KAA28136 for freebsd-fs-outgoing; Fri, 27 Feb 1998 10:02:04 -0800 (PST)
	(envelope-from owner-freebsd-fs@FreeBSD.ORG)
Received: from ns1.yes.no (ns1.yes.no [195.119.24.10])
	by hub.freebsd.org (8.8.8/8.8.8) with ESMTP id KAA28084
	for ; Fri, 27 Feb 1998 10:01:59 -0800 (PST)
	(envelope-from eivind@bitbox.follo.net)
Received: from bitbox.follo.net (bitbox.follo.net [194.198.43.36])
	by ns1.yes.no (8.8.7/8.8.7) with ESMTP id SAA06032;
	Fri, 27 Feb 1998 18:01:56 GMT
Received: (from eivind@localhost) by bitbox.follo.net (8.8.6/8.8.6)
	id TAA00976; Fri, 27 Feb 1998 19:01:33 +0100 (MET)
Message-ID: <19980227190132.53798@follo.net>
Date: Fri, 27 Feb 1998 19:01:32 +0100
From: Eivind Eklund
To: Jonathan Lemon, Eivind Eklund
Cc: fs@FreeBSD.ORG
Subject: Re: syncer / SMP question
References: <19980227164859.25557@follo.net> <19980227102555.05064@right.PCS>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Mailer: Mutt 0.89.1i
In-Reply-To: <19980227102555.05064@right.PCS>; from Jonathan Lemon on Fri, Feb 27, 1998 at 10:25:55AM -0600
Sender: owner-freebsd-fs@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.org

On Fri, Feb 27, 1998 at 10:25:55AM -0600, Jonathan Lemon wrote:
> On Feb 02, 1998 at 04:48:59PM +0100, Eivind Eklund wrote:
> > I was looking at implementing an incremental syncer for UFS, as a sort
> > of "let's get to know the FS-code" project, and noticed something that
> > looked like really strange code in the sync() syscall.  Is there any
> > reason why the below wouldn't be a benign change?  The extra
> > simplelock-call looks especially weird - it looks like either the lock
> > is released somewhere else, or we'll have the mountlist locked many
> > times.
>
> It appears to be released.  The vfs_busy() routine makes a call to
> lockmgr(), which sets a lock on the mountpoint and releases the lock on
> the mountlist.

But...  But...
What about the fetch of the next-pointer?  Looks like a potential
race-condition to me, but it involves some lock calls whose purpose I
don't really understand (interlocks?  What are they?).

> I agree that the lock calls look weird.  Perhaps someone could explain
> the purpose of the various locks?  It appears that you need to get a lock
> on the mountlist in order to iterate the mountpoints.  But why does
> vfs_busy() (which operates on a mountpoint) release your mountlist lock?
> Convenience?

Looks likely.  The functionality is actually only used in five places:
sync(), getfsstat(), printlockedvnodes(), sysctl_vnode(), and
nqnfs_lease_updatetime(), all in equal-looking loops.  By comparison,
vfs_busy() itself is used in 15 places.  Perhaps a WALK_MOUNTLIST macro
would be better?  I use

#define LIST_WALK_FORWARD(list, node, extra_node)		\
	for ((node) = (list)->pHead,				\
	     (extra_node) = (node) ? (node)->pNext : NULL;	\
	     (node) != NULL;					\
	     (node) = (extra_node),				\
	     (extra_node) = (node) ? (node)->pNext : NULL)

for my list walking; the next-pointer is fetched before the loop body
runs, so the body is free to unlink the current node.  This could be done
for mountlists, too, and abstract away all knowledge of the mountlist/lock
relation from the actual consumer.
The below could handle everything in a simple macro

#define WALK_MOUNT_LIST(ml, mp, p)				\
	for ((mp) = _vfs_getfirstmountp((ml), (p));		\
	     (mp) != NULL;					\
	     (mp) = _vfs_getnextmountp((ml), (mp), (p)))

and two functions

struct mount *
_vfs_getfirstmountp(ml, p)
	struct mntlist *ml;
	struct proc *p;
{
	struct mount *mp;

	simple_lock(&mountlist_slock);
	mp = ml->cqh_first;
	while ((mp != (void *)ml) &&
	       vfs_busy(mp, LK_NOWAIT, &mountlist_slock, p)) {
		mp = mp->mnt_list.cqe_next;
	}
	if (mp == (void *)ml) {
		simple_unlock(&mountlist_slock);
		return NULL;
	}
	/* A successful vfs_busy() released mountlist_slock for us. */
	return mp;
}

struct mount *
_vfs_getnextmountp(ml, mp, p)
	struct mntlist *ml;
	struct mount *mp;
	struct proc *p;
{
	struct mount *nmp;

	simple_lock(&mountlist_slock);
	nmp = mp->mnt_list.cqe_next;
	vfs_unbusy(mp, p);
	mp = nmp;
	while ((mp != (void *)ml) &&
	       vfs_busy(mp, LK_NOWAIT, &mountlist_slock, p)) {
		mp = mp->mnt_list.cqe_next;
	}
	if (mp == (void *)ml) {
		simple_unlock(&mountlist_slock);
		return NULL;
	}
	/* A successful vfs_busy() released mountlist_slock for us. */
	return mp;
}

Yes, it is probably more code than at present (at least if you count it
as lines of C code), but it abstracts away all the complexity and makes
the interfaces clean.  Locking and unlocking at different layers is IMHO
not clean; it 'smells of trouble'.

> (Yes, I know that simple_lock is only for SMP, but still...)
>
> As for hoisting up the calculation of nmp, I don't think you can do that,
> since the inner portion releases its lock on the mountlist, meaning that
> another processor can change the mountlist out from underneath you,
> invalidating your (saved) nmp.

Yes, I later noticed.  What do you think of the above?

Eivind.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-fs" in the body of the message