From owner-freebsd-current@FreeBSD.ORG Sat Jul 17 23:01:02 2004
Return-Path:
Delivered-To: freebsd-current@freebsd.org
Received: from mx1.FreeBSD.org (mx1.freebsd.org [216.136.204.125])
	by hub.freebsd.org (Postfix) with ESMTP id C07C216A4CE;
	Sat, 17 Jul 2004 23:01:02 +0000 (GMT)
Received: from fledge.watson.org (fledge.watson.org [204.156.12.50])
	by mx1.FreeBSD.org (Postfix) with ESMTP id 35F4A43D45;
	Sat, 17 Jul 2004 23:01:02 +0000 (GMT)
	(envelope-from robert@fledge.watson.org)
Received: from fledge.watson.org (localhost [127.0.0.1])
	by fledge.watson.org (8.12.11/8.12.11) with ESMTP id i6HN0YYk037265;
	Sat, 17 Jul 2004 19:00:34 -0400 (EDT)
	(envelope-from robert@fledge.watson.org)
Received: from localhost (robert@localhost) i6HN0XPx037262;
	Sat, 17 Jul 2004 19:00:33 -0400 (EDT)
	(envelope-from robert@fledge.watson.org)
Date: Sat, 17 Jul 2004 19:00:33 -0400 (EDT)
From: Robert Watson
X-Sender: robert@fledge.watson.org
To: Norikatsu Shigemura
In-Reply-To: <200407172147.i6HLlbPL035974@sakura.ninth-nine.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
cc: Alex Vasylenko
cc: freebsd-current@FreeBSD.org
cc: julian@elischer.org
Subject: Re: Call for PRs: nullfs
X-BeenThere: freebsd-current@freebsd.org
X-Mailman-Version: 2.1.1
Precedence: list
List-Id: Discussions about the use of FreeBSD-current
X-List-Received-Date: Sat, 17 Jul 2004 23:01:02 -0000

On Sun, 18 Jul 2004, Norikatsu Shigemura wrote:

> On Sat, 17 Jul 2004 15:59:31 -0400
> Alex Vasylenko wrote:
> > I find the performance of nullfs somewhat lacking as measured in the
> > test described below (a config with nullfs performs worse (~2x slower)
> > than the same config with vnodefs). For simplicity the test was done
> > in chroot; doing it in a jail has no significant impact on performance.
>
> 	Wow, I confirmed this behavior with 'make buildworld' on
> 	5-current (2004/7/2, SMP).
> 	nullfs mounted /usr/src, /usr/obj: about 5000 sec
> 	ln -s'ed /usr/src, /usr/obj: about 3000 sec

There are a number of potential causes for this, and working out which one
it is would be useful. One is the direct overhead associated with stacking
-- extra computation, locking, function calls, etc. Another is the indirect
overhead of allocating twice as many vnodes for every file system object
(one for the original location, one for the new location). This shows up
both as actual memory overhead and as earlier pressure on the maxvnodes
bound, which causes vnodes to be recycled. It could be that you're hitting
the bound and, as a result, useful vnodes are leaving the vnode cache.

You might try looking at the values of vfs.numvnodes, vfs.wantfreevnodes,
vfs.freevnodes, and kern.maxvnodes at intervals through the benchmark --
maybe running a script that pulls down the sysctl values every 10 or 20
seconds. On some systems, "memory is no object" -- on other systems it is
-- so it would be interesting to know how much memory your system has.

Finally, it would be interesting to know the page fault rate and disk I/O
transaction rate during the benchmark. These might point at the additional
memory consumption creating pressure on needed memory.

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert@fledge.watson.org      Principal Research Scientist, McAfee Research
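
A minimal sketch of the polling script suggested above -- the function name,
arguments, and log-file name are illustrative assumptions, not from the
mail; the sysctl OIDs are the four Robert lists:

```shell
#!/bin/sh
# sample_vnodes N INTERVAL: print a timestamp plus the vnode-related
# sysctls N times, INTERVAL seconds apart. (Hypothetical helper; run it
# in the background alongside the buildworld benchmark.)
sample_vnodes() {
    n=$1
    interval=$2
    i=0
    while [ "$i" -lt "$n" ]; do
        date
        sysctl vfs.numvnodes vfs.wantfreevnodes vfs.freevnodes kern.maxvnodes
        sleep "$interval"
        i=$((i + 1))
    done
}

# Example: 360 samples, one every 10 seconds (roughly an hour of data):
#   sample_vnodes 360 10 > vnodes.log &
```

For the page fault and disk I/O rates, periodic `vmstat -w 10` and
`iostat -w 10` output over the same run would give comparable numbers.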