Date: 16 Oct 2001 20:44:57 +0200
From: Dag-Erling Smorgrav <des@ofug.org>
To: freebsd-hackers@FreeBSD.org
Cc: John Baldwin <jhb@FreeBSD.org>, cvs-all@FreeBSD.org, cvs-committers@FreeBSD.org, Bruce Evans <bde@zeta.org.au>
Subject: Re: cvs commit: src/sys/vm vnode_pager.c
Message-ID: <xzpadyrxw46.fsf@flood.ping.uio.no>
In-Reply-To: <200110161800.f9GI0QP33797@apollo.backplane.com>
References: <XFMail.011016103432.jhb@FreeBSD.org> <200110161800.f9GI0QP33797@apollo.backplane.com>
Matthew Dillon <dillon@apollo.backplane.com> writes:
> Just calling a mutex function hundreds of thousands of times is
> going to be enough to cause the problem.  I see a way we can fix the
> problem... the mount point lock is locking the list, so if the
> information we need to determine whether we have work to do is stored
> in the vnode itself we can check and loop without having to mess
> around with any more mutexes.  If the flag says that there may be
> something to do, *then* we do the hard work.

That sounds reasonable.

One stray thought, by the way - could it be that vnodes aren't being
reclaimed as fast as they should?  What's the policy - do vnodes only
get reclaimed when we start running out?  Should we re-evaluate the
cost of having them slow down ffs_sync() vs. what we save by keeping
them around so we don't need to reallocate them?

Attached are a patch that adds counters to ffs_sync() (to see how many
vnodes are traversed each time, and how many of those actually needed
syncing), and a script I'm using to roughly measure the performance of
ffs_sync().
DES
-- 
Dag-Erling Smorgrav - des@ofug.org

Content-Type: text/x-patch
Content-Disposition: attachment; filename=ffs_sync.diff

Index: sys/ufs/ffs/ffs_vfsops.c
===================================================================
RCS file: /home/ncvs/src/sys/ufs/ffs/ffs_vfsops.c,v
retrieving revision 1.161
diff -u -r1.161 ffs_vfsops.c
--- sys/ufs/ffs/ffs_vfsops.c	2 Oct 2001 14:34:22 -0000	1.161
+++ sys/ufs/ffs/ffs_vfsops.c	16 Oct 2001 17:21:34 -0000
@@ -972,6 +972,9 @@
 	return (0);
 }
 
+static int count_synced_vnodes = 0;
+SYSCTL_INT(_debug, OID_AUTO, count_synced_vnodes, CTLFLAG_RW, &count_synced_vnodes, 0, "");
+
 /*
  * Go through the disk queues to initiate sandbagged IO;
  * go through the inodes to write those that have been modified;
@@ -991,6 +994,7 @@
 	struct ufsmount *ump = VFSTOUFS(mp);
 	struct fs *fs;
 	int error, count, wait, lockreq, allerror = 0;
+	int looped = 0, vn_traversed = 0, vn_synced = 0;
 
 	fs = ump->um_fs;
 	if (fs->fs_fmod != 0 && fs->fs_ronly != 0) {		/* XXX */
@@ -1008,6 +1012,7 @@
 	}
 	mtx_lock(&mntvnode_mtx);
 loop:
+	++looped;
 	for (vp = LIST_FIRST(&mp->mnt_vnodelist); vp != NULL; vp = nvp) {
 		/*
 		 * If the vnode that we are about to sync is no longer
@@ -1017,6 +1022,7 @@
 			goto loop;
 		nvp = LIST_NEXT(vp, v_mntvnodes);
+		++vn_traversed;
 
 		mtx_unlock(&mntvnode_mtx);
 		mtx_lock(&vp->v_interlock);
 		ip = VTOI(vp);
@@ -1027,6 +1033,7 @@
 			mtx_lock(&mntvnode_mtx);
 			continue;
 		}
+		++vn_synced;
 		if (vp->v_type != VCHR) {
 			if ((error = vget(vp, lockreq, td)) != 0) {
 				mtx_lock(&mntvnode_mtx);
@@ -1045,6 +1052,12 @@
 		mtx_lock(&mntvnode_mtx);
 	}
 	mtx_unlock(&mntvnode_mtx);
+
+	if (count_synced_vnodes)
+		printf(__FUNCTION__
+		    "(): %d loops, %d vnodes traversed, %d vnodes synced\n",
+		    looped, vn_traversed, vn_synced);
+
 	/*
 	 * Force stale file system control information to be flushed.
	 */

Content-Type: application/x-sh
Content-Disposition: attachment; filename=measure.sh

#!/bin/sh
vmstat -m | grep FFS
sysctl debug.count_synced_vnodes=1
time sync
sysctl debug.count_synced_vnodes=0
dmesg | grep ffs_sync | tail -5

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message