From owner-freebsd-hackers Wed Feb 15 15:50:52 1995
Return-Path: hackers-owner
Received: (from root@localhost) by freefall.cdrom.com (8.6.9/8.6.6) id PAA13026 for hackers-outgoing; Wed, 15 Feb 1995 15:50:52 -0800
Received: from Root.COM (implode.Root.COM [198.145.90.1]) by freefall.cdrom.com (8.6.9/8.6.6) with ESMTP id PAA13015 for ; Wed, 15 Feb 1995 15:50:48 -0800
Received: from corbin.Root.COM (corbin.Root.COM [198.145.90.18]) by Root.COM (8.6.8/8.6.5) with ESMTP id PAA23242; Wed, 15 Feb 1995 15:50:42 -0800
Received: from localhost (localhost [127.0.0.1]) by corbin.Root.COM (8.6.9/8.6.5) with SMTP id PAA00559; Wed, 15 Feb 1995 15:50:41 -0800
Message-Id: <199502152350.PAA00559@corbin.Root.COM>
X-Authentication-Warning: corbin.Root.COM: Host localhost didn't use HELO protocol
To: Ed Hudson
cc: hackers@FreeBSD.org
Subject: Re: 950210-SNAP, VM Free
In-reply-to: Your message of "Wed, 15 Feb 95 14:48:13 GMT." <199502151448.OAA17257@p5.spnet.com>
From: David Greenman
Reply-To: davidg@Root.COM
Date: Wed, 15 Feb 1995 15:50:41 -0800
Sender: hackers-owner@FreeBSD.org
Precedence: bulk

> i think that the csh time command's 'io' field correlates
> with both the sound that the disks make and the loss of
> performance (with loss of performance measured as wall
> clock time).
>
> other than the number of io transactions reported by csh,
> and the loss in performance, the only macroscopic parameters
> that i can glean from the system show a huge drop in
> free memory.
>
> a freshly booted system:
>
> time /bin/ls -LFC : 0.0u 0.0s 0:00.06 66.6% 231+399k 0+0io 0pf+0w
> medium make : 112.8u 24.3s 2:45.41 82.9% 869+1047k 336+1564io 8pf+0w
>
> after a big compile:
>
> time /bin/ls -LFC : 0.0u 0.0s 0:00.57 14.0% 205+352k 24+0io 0pf+0w
> medium make : 113.0u 25.9s 4:07.75 56.0% 862+1040k 5571+1564io 14pf+0w
>
> (the '/bin/ls' time is actually the second (or third, etc.) one - the
> very first always takes a long time).

   The thing to compare this to would be a 2.0 system. I think you'll find
that the SNAP is always better. I believe the non-optimal performance you're
seeing is caused by our algorithm for deciding how much file data to cache.
It tries very hard (too hard) to not thrash the VM system when large amounts
of file I/O are done. We will likely change the balance in the future, but at
the moment this is very difficult to do without unusual side effects. One
thing we can do right away, however, is increase the minimum size of the
cache - it currently can shrink to less than 10% of memory (and only half of
this for file data - the other half is for meta/directory data). This should
probably be increased to 15% or 20%. Try out the attached patch, which
changes it to 20%.

-DG

Index: machdep.c
===================================================================
RCS file: /home/ncvs/src/sys/i386/i386/machdep.c,v
retrieving revision 1.110
diff -c -r1.110 machdep.c
*** 1.110	1995/02/14 19:20:26
--- machdep.c	1995/02/15 23:49:29
***************
*** 257,263 ****
  	if (nbuf == 0) {
  		nbuf = 30;
  		if( physmem > 1024)
! 			nbuf += min((physmem - 1024) / 20, 1024);
  	}

  	nswbuf = min(nbuf, 128);
--- 257,263 ----
  	if (nbuf == 0) {
  		nbuf = 30;
  		if( physmem > 1024)
! 			nbuf += min((physmem - 1024) / 10, 1024);
  	}

  	nswbuf = min(nbuf, 128);
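
For a concrete sense of what the divisor change does, the nbuf arithmetic
can be pulled out into a small standalone C program. This is a minimal
sketch, not code from machdep.c: the 16 MB machine, the 4 KB i386 page size
(physmem is counted in pages), and the figure of roughly 8 KB of cached file
data per buffer are illustrative assumptions only - BUF_KB in particular is
a made-up constant used just to turn a buffer count into an approximate
fraction of RAM.

	/*
	 * Sketch of the nbuf sizing arithmetic from the patch above,
	 * computed for both the old (/20) and new (/10) divisors.
	 */
	#include <stdio.h>

	#define PAGE_SIZE	4096	/* i386 page size; physmem is in pages */
	#define BUF_KB		8	/* assumed KB of cached data per buffer */
	#define MEM_MB		16	/* assumed machine size in MB */

	static int
	min(int a, int b)
	{
		return (a < b ? a : b);
	}

	static int
	compute_nbuf(int physmem, int divisor)
	{
		int nbuf = 30;		/* base value, as in machdep.c */

		if (physmem > 1024)
			nbuf += min((physmem - 1024) / divisor, 1024);
		return (nbuf);
	}

	int
	main(void)
	{
		int physmem = (MEM_MB * 1024 * 1024) / PAGE_SIZE;
		int old = compute_nbuf(physmem, 20);	/* pre-patch */
		int new = compute_nbuf(physmem, 10);	/* post-patch */

		printf("old: nbuf = %d (~%d%% of RAM)\n", old,
		    old * BUF_KB * 100 / (MEM_MB * 1024));
		printf("new: nbuf = %d (~%d%% of RAM)\n", new,
		    new * BUF_KB * 100 / (MEM_MB * 1024));
		return (0);
	}

With those assumptions the old divisor gives nbuf = 183 (about 8% of RAM on
the 16 MB example machine) and the new one gives nbuf = 337 (about 16%),
which tracks the move from under 10% toward 20% described above.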