Date: Mon, 19 Apr 1999 21:51:09 +0200 (CEST)
From: Arjan de Vet <Arjan.deVet@adv.iae.nl>
To: hackers@freebsd.org
Subject: Re: Directories not VMIO cached at all!
Message-ID: <199904191951.VAA05105@adv.iae.nl>
In-Reply-To: <199904171844.LAA75452@apollo.backplane.com>
In article <199904171844.LAA75452@apollo.backplane.com> Matt Dillon
writes:
> I've been playing with my new large-memory configured box and,
> especially, looking at disk I/O.
I've been doing this with a 640MB Squid server too for the last few weeks.
> When I scan enough directories ( e.g. a megabyte worth of directories
> on a 1 GB machine), then scan again, the data is re-fetched from disk.
I've been tuning my machine to be able to cache at least 32MB worth of
directories, about which I already mailed on April 6.
> [...]
>
> Right now, the buffer cache appears to limit itself to 8 MBytes or so,
> and the maximum malloc space limits itself to only 400K! Rather absurdly
> small for a directory cache, I think, yet I also believe that increasing
> the size of the buffer cache may be detrimental due to the amount of I/O
> it can bind up.
I did some tests as explained below and my results (on a 3.1-stable
machine; I don't know how much different -current already is) seem to
indicate that directories are not limited to the malloc space only. I
reran the test after applying your patch and the results are indeed
different, and as expected.
I'm still doing some performance testing with Squid and I'll try to
report on it later; Squid's usage of the filesystem and its I/O behavior
under heavy load are quite absurd and a good test case for these kinds
of things, I think.
Arjan
-----------------------------------------------------------------------------
- 3.1-stable as of April 6 + big-KVM patches, 640MB RAM
- /cache contains 4096 Squid directories (64 dirs with 64 subdirs each)
with 384 files per directory (maximum, so dirs stay <8K). Total size
of the directories is 22MB (that's 5632 bytes/directory on average).
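  Quick arithmetic behind those numbers (nothing measured, and 22MB is
  taken as 22*1024*1024 bytes):

    expr 64 \* 64                  # 4096 directories
    expr 23068672 / 4096           # 5632 bytes/directory on average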
- Buffer cache size limited to 80MB instead of 8MB by patching
machdep.c:
diff -u -w -r1.322.2.4 machdep.c
--- machdep.c 1999/02/17 13:08:41 1.322.2.4
+++ machdep.c 1999/04/09 08:23:31
@@ -369,7 +369,7 @@
 	if (nbuf == 0) {
 		nbuf = 30;
 		if( physmem > 1024)
-			nbuf += min((physmem - 1024) / 8, 2048);
+			nbuf += min((physmem - 1024) / 8, 20480);
 	}
 	nswbuf = max(min(nbuf/4, 64), 16);
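  For what it's worth, a back-of-the-envelope calculation of what that
  change means on this box; the page count is a rough guess (physmem is
  counted in 4K pages, 640MB minus what the kernel keeps for itself),
  and I'm assuming maxbufspace ends up at roughly nbuf * 4096, which
  fits the sysctl output below:

    pages=163384                   # rough guess for a 640MB box after the
                                   # kernel has taken its share of memory
    tmp=`expr $pages - 1024`
    nbuf=`expr $tmp / 8 + 30`
    echo $nbuf                     # ~20325, the new cap of 20480 not hit
    expr $nbuf \* 4096             # ~83251200, the vfs.maxbufspace below
    expr 2078 \* 4096              # ~8.5MB with the old cap of 2048
                                   # (30 + 2048 buffers), i.e. the
                                   # "8 MBytes or so" Matt saw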
- in rc.local:
tmp=`sysctl -n vfs.maxvmiobufspace`
sysctl -w vfs.maxvmiobufspace=`expr $tmp / 4`
to favor metadata in the buffer cache as suggested by John Dyson some
time ago.
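  Plugging in the numbers from this machine (just shell arithmetic, not
  measured output): the untouched default seen further below is two
  thirds of maxbufspace, and the hack above quarters it:

    expr 83251200 \* 2 / 3         # 55500800, the vfs.maxvmiobufspace
                                   # seen below after removing this hack
    expr 55500800 / 4              # 13875200, the value used in the
                                   # first two test runs below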
- After a fresh reboot:
25544 wire
3740 act
3552 inact
0 cache
610048 free
7702 buf
vfs.maxbufspace: 83251200
vfs.bufspace: 7875584
vfs.maxvmiobufspace: 13875200
vfs.vmiospace: 7613440
vfs.maxmallocbufspace: 4162560
vfs.bufmallocspace: 57344
- Read all directories:
[/cache] > time ls -R > /dev/null
5.622u 2.398s 0:46.45 17.2% 210+400k 4975+0io 6pf+0w
57756 wire
3768 act
3688 inact
0 cache
577672 free
37838 buf
vfs.maxbufspace: 83251200
vfs.bufspace: 38746112
vfs.maxvmiobufspace: 13875200
vfs.vmiospace: 14290944
vfs.maxmallocbufspace: 4162560
vfs.bufmallocspace: 1472512
bufspace has increased by 30MB, vmiospace has increased by 7MB (I'm
wondering what data is in it...) and bufmallocspace has increased by
1.4MB. There was no other activity on the system so all this should be
due to reading the directories I guess. Note that 22MB (size of all
directories) plus that strange 7MB vmio data is close to the 30MB
increase in bufspace...
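  Spelling out the deltas (just re-deriving them from the sysctl values
  above):

    expr 38746112 - 7875584        # 30870528: ~30MB more bufspace
    expr 14290944 - 7613440        # 6677504:  ~7MB more vmiospace
    expr 1472512 - 57344           # 1415168:  ~1.4MB more bufmallocspace
    expr 23068672 + 6677504        # 29746176: 22MB of directories plus
                                   # the vmio growth, indeed close to the
                                   # ~30MB bufspace growth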
- Check whether they're really cached now:
[/cache] > time ls -R > /dev/null
5.658u 1.147s 0:06.94 97.8% 208+396k 0+0io 0pf+0w
OK, zero I/O.
- Now add the -l option so all files need to be stat()-ed too:
[/cache] > time ls -lR > /dev/null
40.124u 54.509s 2:46.99 56.6% 209+464k 12370+0io 0pf+0w
99140 wire
3948 act
61376 inact
0 cache
478420 free
81302 buf
vfs.maxbufspace: 83251200
vfs.bufspace: 83252224
vfs.maxvmiobufspace: 13875200
vfs.vmiospace: 58797056
vfs.maxmallocbufspace: 4162560
vfs.bufmallocspace: 1472512
bufspace has reached its maximum value and 58MB of pages have been
moved from the buffer cache to the VM inact queue. vmiospace has
increased by 44MB, bufmallocspace stayed the same. The amount of
non-vmio and non-malloc space in the buffer cache is now 22MB... which
is the size of all directories. Coincidence or not? If not, John
Dyson's suggestion about vfs.maxvmiobufspace seems to work.
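  The same kind of arithmetic for this run (again just re-deriving the
  numbers quoted above):

    expr 58797056 - 14290944       # 44506112: ~44MB more vmiospace
    expr 83252224 - 58797056 - 1472512
                                   # 22982656: non-VMIO, non-malloc
                                   # buffer space, i.e. roughly the 22MB
                                   # of directories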
- Check whether everything is really cached:
[/cache] > time ls -lR > /dev/null
40.045u 54.867s 1:37.09 97.7% 208+463k 0+0io 0pf+0w
OK, zero I/O again.
- Applied Matt's patches, installed the new kernel, removed the
vfs.maxvmiobufspace hack from rc.local, and rebooted:
26168 wire
3964 act
3696 inact
0 cache
609048 free
8411 buf
vfs.maxbufspace: 83251200
vfs.bufspace: 8621056
vfs.maxvmiobufspace: 55500800
vfs.vmiospace: 8376320
vfs.maxmallocbufspace: 4162560
vfs.bufmallocspace: 44032
- [/cache] > time ls -R > /dev/null
5.572u 2.600s 0:46.82 17.4% 213+405k 4975+0io 6pf+0w
63076 wire
3852 act
3832 inact
0 cache
572112 free
38513 buf
vfs.maxbufspace: 83251200
vfs.bufspace: 39437312
vfs.maxvmiobufspace: 55500800
vfs.vmiospace: 39192576
vfs.maxmallocbufspace: 4162560
vfs.bufmallocspace: 44032
Indeed, bufmallocspace did not increase, and because of this bufspace
has ended up slightly higher than in the old case. vmiospace clearly
differs: 39MB instead of 14MB. So all directory data does indeed seem
to be VMIO'ed now.
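  In fact the bufspace growth and the vmiospace growth are now identical
  (shell arithmetic on the values above):

    expr 39437312 - 8621056        # 30816256: bufspace growth
    expr 39192576 - 8376320        # 30816256: vmiospace growth, exactly
                                   # the same, so the whole increase is
                                   # VMIO space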
- [/cache] > time ls -R > /dev/null
5.535u 1.203s 0:07.02 95.8% 211+402k 0+0io 0pf+0w
- [/cache] > time ls -lR > /dev/null
40.492u 57.054s 2:51.16 56.9% 207+460k 12370+0io 0pf+0w
103744 wire
3976 act
62256 inact
0 cache
472900 free
81304 buf
vfs.maxbufspace: 83251200
vfs.bufspace: 83246080
vfs.maxvmiobufspace: 55500800
vfs.vmiospace: 83000320
vfs.maxmallocbufspace: 4162560
vfs.bufmallocspace: 45056
