Date: Mon, 5 Feb 1996 15:26:18 +1100
From: Bruce Evans <bde@zeta.org.au>
To: bde@zeta.org.au, gibbs@freefall.freebsd.org
Cc: curt@emergent.com, freebsd-hackers@freebsd.org, jgreco@brasil.moneng.mei.com
Subject: Re: Watchdog timers (was: Re: Multi-Port Async Cards)
Message-ID: <199602050426.PAA27801@godzilla.zeta.org.au>
>>I thought -current would be much better with -o async. However:
>>
>>FreeBSD-current - 486DX2/66 - 16MB RAM (BT445C Quantum Grand Prix drive,
>>1/3 full 1G fs)
>>(too-small buffer cache (nbuf = 247) + vm buffers that I don't understand)
>>
>>10000 files in 468 seconds - first run
>> 467.76 real 0.87 user 17.19 sys (time /tmp/prog)
>> 0.33 real 0.01 user 0.03 sys (time sync)
>>10000 files in 295 seconds - second run
>> 295.34 real 0.87 user 12.86 sys (time /tmp/prog)
>> 0.33 real 0.02 user 0.02 sys (time sync)
>Yes, but this is not exactly fair. Your SCSI controller has only 5 times
>the command overhead of your IDE card (because you have a very poor SCSI
>controller).
[For comparison, the Minix-1.6.25++ times - 486DX/33 - 8MB RAM (slow IDE
controller, slow IDE drive, fresh 137MB fs) - are about 17 seconds for the
first run and 12 seconds for the second run.]
Should I upgrade to an ST01 controller? ;-) The controller overhead was
almost irrelevant for the Minix-IDE setup because the disk activity LED
only blinked on for a second or two out of the 17 seconds. For FreeBSD
the disk LED was on for about 468 seconds out of 468. Joe's time of
"only" a couple of hundred seconds for the first run may be due to
lower command overhead.
>I'd like to see the performance difference on the same
>hardware.
FreeBSD-current - 486DX/33 - 8MB RAM (slow IDE controller, slow IDE drive,
fresh 137MB fs in same partition as for Minix):
-o async:
10000 files in 662 seconds - first run
662.13 real 1.60 user 24.56 sys
0.56 real 0.01 user 0.05 sys
10000 files in 407 seconds - second run
406.91 real 1.38 user 17.39 sys
0.26 real 0.01 user 0.03 sys
$ time find . >/dev/null
18.07 real 0.86 user 1.81 sys
$ time find . >/dev/null
19.62 real 0.87 user 1.85 sys
$ time du >/dev/null
39.35 real 1.22 user 6.50 sys
$ time du >/dev/null
38.16 real 1.11 user 6.48 sys
$ time rm -rf *
45.27 real 0.81 user 10.41 sys
sync:
10000 files in 939 seconds - first run
939.37 real 1.50 user 33.07 sys
0.39 real 0.02 user 0.03 sys
`find .' and `du': similar to above.
$ time rm -rf *
256.36 real 0.93 user 16.43 sys
It's surprising how many sync writes there are for async mounted file
systems. I zapped the one that seemed to be the most common:
---
*** /sys/ufs/ufs/ufs_readwrite.c~ Sat Jan 20 06:57:41 1996
--- /sys/ufs/ufs/ufs_readwrite.c Mon Feb 5 14:10:03 1996
***************
*** 284,288 ****
if (ioflag & IO_SYNC) {
! (void)bwrite(bp);
} else if (xfersize + blkoffset == fs->fs_bsize) {
if (doclusterwrite) {
--- 287,295 ----
if (ioflag & IO_SYNC) {
! if (vp->v_mount->mnt_flag & MNT_ASYNC)
! bdwrite(bp);
! else
! (void)bwrite(bp);
! /* XXX write-through when the block fills up is sloooow. */
} else if (xfersize + blkoffset == fs->fs_bsize) {
if (doclusterwrite) {
---
but this only improved the time from 662 seconds to 633 seconds.
Don't try this change if you value your async-mounted file systems - it's
too late to check the MNT_ASYNC flag here, since I/O to async-mounted file
systems must still be synchronous for at least the fsync() system call.
Bruce
