From owner-freebsd-current  Wed Dec 20 05:54:51 1995
Return-Path: owner-current
Received: (from root@localhost) by freefall.freebsd.org (8.7.3/8.7.3) id FAA19955 for current-outgoing; Wed, 20 Dec 1995 05:54:51 -0800 (PST)
Received: from godzilla.zeta.org.au (godzilla.zeta.org.au [203.2.228.19]) by freefall.freebsd.org (8.7.3/8.7.3) with SMTP id FAA19946 for ; Wed, 20 Dec 1995 05:54:37 -0800 (PST)
Received: (from bde@localhost) by godzilla.zeta.org.au (8.6.9/8.6.9) id AAA02773; Thu, 21 Dec 1995 00:49:00 +1100
Date: Thu, 21 Dec 1995 00:49:00 +1100
From: Bruce Evans
Message-Id: <199512201349.AAA02773@godzilla.zeta.org.au>
To: faulkner@mpd.tandem.com, se@zpr.uni-koeln.de
Subject: Re: iozone and mount -o async
Cc: current@freebsd.org
Sender: owner-current@freebsd.org
Precedence: bulk

I'm reopening this old topic.  Stefan's mail dated 9 Nov is quoted in
full at the end.

I happened to try `iozone 900 65536' on a fresh 1GB partition (on the
3rd quarter of a 4G Grand Prix drive with a BT445C controller) mounted
with -o async, and noticed that writing is much slower:

        3206878 bytes/second for writing the file
        5106987 bytes/second for reading the file

The sync results are:

        4880600 bytes/second for writing the file
        5105829 bytes/second for reading the file

In old mail, I said that the slowdown is because -o async unavoidably
writes in a bad order, and that this didn't matter a lot because it
mainly affects stupid benchmarks such as iozone.  Now I think that the
slowdown is because of concurrency problems, not because of a bad order.

With -o async, almost all writes are delayed a relatively long time.
Huge sequential writes result in the buffer cache becoming full of
dirty buffers.  These are written one at a time in LRU order in
getblk().  LRU order is ideal for huge sequential writes, but getblk()
has to wait a lot for free buffers.

For non-sequential writes, a cache full of dirty buffers causes bad
ordering too.  Writing dirty buffers in blkno order when the cache
becomes too dirty should work better (a rough sketch of this idea
appears after the quoted mail below).  sync() should do something
similar when a buffer has been dirty for too long.  It's important to
write nearby dirty buffers if writing them has a low cost (if
clustering is done at a lower level and the drive[r] doesn't combine
commands, then writing an extra buffer might reduce the number of
writes and have a negative cost!).  Because of this, sorting shouldn't
be left to the hardware - the hardware can't optimize for buffers that
will be written 30 seconds later.

Bruce

>On Nov 8, 16:12, Boyd Faulkner wrote:
>} Subject: iozone and mount -o async
>} With my Maxtor 235M SCSI 1 drive mounted normally
>}
>} iozone 32 gives
>} IOZONE performance measurements:
>}         804903 bytes/second for writing the file
>}         1112974 bytes/second for reading the file
>}
>} mounted with -o async
>} IOZONE performance measurements:
>}         714042 bytes/second for writing the file
>}         1115575 bytes/second for reading the file
>
>I'm seeing slower writes, too, and surprisingly
>by about the same absolute amount (some 100KB/s).
>My "iozone 32" results are: >se@x14> tail -3 /tmp/sync.io >IOZONE performance measurements: > 2529427 bytes/second for writing the file > 4446135 bytes/second for reading the file >se@x14> tail -3 /tmp/async.io >IOZONE performance measurements: > 2422429 bytes/second for writing the file > 4423241 bytes/second for reading the file >Bonnie gives significantly different results, too: > -------Sequential Output-------- ---Sequential Input-- --Random-- > -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks--- >Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU >sync 100 1505 98.6 3267 31.2 2167 42.3 1656 98.5 5929 72.1 78.4 9.2 >async 100 1425 96.4 2661 24.3 2077 41.7 1644 98.0 5923 72.1 78.9 9.1 >All tests on the half ful inner (slower) half of my >2GB Atlas driven by an ASUS SP3G, 486DX2/66, NCR SCSI. >Regards, STefan >-- > Stefan Esser, Zentrum fuer Paralleles Rechnen Tel: +49 221 4706021 > Universitaet zu Koeln, Weyertal 80, 50931 Koeln FAX: +49 221 4705160 > ============================================================================== > http://www.zpr.uni-koeln.de/~se