Date:      Thu, 5 Dec 2002 13:18:58 +1100
From:      Tim Robbins <tjr@freebsd.org>
To:        freebsd-fs@freebsd.org
Subject:   vflush() and dependencies between vnodes
Message-ID:  <20021205131858.A54625@dilbert.robbins.dropbear.id.au>

I've spent the past couple of days tracking down the bug that caused
umount -f on smbfs to panic, and umount without -f to often return
EBUSY when it should not have.

smbfs stores a pointer to each file or directory's parent directory in
the smbnode struct and vref()'s the parent directory's vnode. This means
that an smbfs directory vnode cannot be removed before all of its
children have vrele()'d it. However, vflush() iterates through the
mount's vnode list from start to end, so if a directory vnode appears
in the list before its children, it will not be removed. This causes a
panic when umount -f is used because smbfs_reclaim() calls vrele() on the
parent vnode after it has already been freed. This also causes umount
without -f to erroneously return EBUSY.

Can anyone think of a better way to solve this problem than to keep
rescanning the mount's vnode list until no more vnodes can be freed,
like the patch below does?

The output is typically like this:

vflush: trying again, 202 busy vnodes left
vflush: trying again, 64 busy vnodes left
vflush: trying again, 9 busy vnodes left
vflush: trying again, 5 busy vnodes left
vflush: trying again, 3 busy vnodes left
vflush: trying again, 2 busy vnodes left
vflush: trying again, 1 busy vnodes left
vflush: forcibly closing 1 vnodes

It seems to take approximately lg N passes, where N is the number of
busy vnodes in the list, so its performance is not terribly bad;
it's just not very elegant.


Tim


Index: vfs_subr.c
===================================================================
RCS file: /x/freebsd/src/sys/kern/vfs_subr.c,v
retrieving revision 1.420
diff -u -r1.420 vfs_subr.c
--- vfs_subr.c	27 Nov 2002 16:45:54 -0000	1.420
+++ vfs_subr.c	5 Dec 2002 00:49:39 -0000
@@ -73,6 +73,8 @@
 #include <vm/vm_page.h>
 #include <vm/uma.h>
 
+#include <machine/limits.h>
+
 static MALLOC_DEFINE(M_NETADDR, "Export Host", "Export host address structure");
 
 static void	addalias(struct vnode *vp, dev_t nvp_rdev);
@@ -2322,7 +2324,7 @@
 	struct thread *td = curthread;	/* XXX */
 	struct vnode *vp, *nvp, *rootvp = NULL;
 	struct vattr vattr;
-	int busy = 0, error;
+	int busy = 0, error, lastpass = 0, prevbusy = INT_MAX;
 
 	if (rootrefs > 0) {
 		KASSERT((flags & (SKIPSYSTEM | WRITECLOSE)) == 0,
@@ -2394,7 +2396,7 @@
 		 * or character devices, revert to an anonymous device. For
 		 * all other files, just kill them.
 		 */
-		if (flags & FORCECLOSE) {
+		if (lastpass && flags & FORCECLOSE) {
 			if (vp->v_type != VCHR) {
 				vgonel(vp, td);
 			} else {
@@ -2413,6 +2415,38 @@
 		VI_UNLOCK(vp);
 		mtx_lock(&mntvnode_mtx);
 		busy++;
+	}
+	if (busy != 0 && !lastpass) {
+		if (busy < prevbusy) {
+			/*
+			 * Some vnodes were removed but some are still active.
+			 * Rescan the list trying to close them again in case
+			 * some vnodes are holding references to others.
+			 *
+			 * This happens on smbfs because file vnodes hold
+			 * references to their parent directories.
+			 */
+#ifdef DIAGNOSTIC
+			printf("vflush: trying again, %d busy vnodes left\n",
+			    busy);
+#endif
+			prevbusy = busy;
+			busy = 0;
+			goto loop;
+		}
+		if (flags & FORCECLOSE) {
+			/*
+			 * There are no more inactive vnodes to remove. Make
+			 * a final pass over the list to forcibly close any
+			 * remaining active vnodes.
+			 */
+#ifdef DIAGNOSTIC
+			printf("vflush: forcibly closing %d vnodes\n", busy);
+#endif
+			lastpass = 1;
+			busy = 0;
+			goto loop;
+		}
 	}
 	mtx_unlock(&mntvnode_mtx);
 	if (rootrefs > 0 && (flags & FORCECLOSE) == 0) {
