From owner-freebsd-current Sat Sep 6 12:09:50 1997
Return-Path: <owner-freebsd-current@FreeBSD.ORG>
Received: (from root@localhost)
	by hub.freebsd.org (8.8.7/8.8.7) id MAA27234
	for current-outgoing; Sat, 6 Sep 1997 12:09:50 -0700 (PDT)
Received: from smtp.algonet.se (angel.algonet.se [194.213.74.112])
	by hub.freebsd.org (8.8.7/8.8.7) with SMTP id MAA27228
	for <current@FreeBSD.ORG>; Sat, 6 Sep 1997 12:09:45 -0700 (PDT)
Received: (qmail 7759 invoked from network); 6 Sep 1997 19:09:42 -0000
Received: from kairos.algonet.se (HELO kairos) (mal@194.213.74.18)
	by angel.algonet.se with SMTP; 6 Sep 1997 19:09:42 -0000
Received: (mal@localhost) by kairos (SMI-8.6/8.6.12) id VAA25891;
	Sat, 6 Sep 1997 21:09:41 +0200
Date: Sat, 6 Sep 1997 21:09:41 +0200
Message-Id: <199709061909.VAA25891@kairos>
From: Mats Lofkvist <mal@algonet.se>
To: current@FreeBSD.ORG
Subject: lousy disk perf. under cpu load (was IDE vs SCSI)
Sender: owner-freebsd-current@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

For fun I tested the dd benchmark on my new machine (Intel Providence
board with a PP200, 64M, Quantum Viking 4.5G UW connected to the
aic7880).  As long as I run the dd with the machine idle, I get over
10MB/s.  But with a single cpu-bound process running, the throughput
drops to less than 1.5MB/s with 64k blocks (and it gets even worse
with smaller blocks).  I tested with a few different block sizes, and
it seems that dd never reads more than about 20 records per second.

Isn't the scheduler run when an I/O request returns?  If not,
shouldn't it ?-)

This behaviour kills any advantage of SCSI over EIDE, since as soon as
you start using all the cpu cycles not needed to talk to the disk,
disk performance goes down the drain :-(

I'm running 2.2.2 so I know this is the wrong list, but is -current
any different?

      _
Mats Lofkvist
mal@algonet.se

Details
-------

Idle machine:

bash# dd if=/dev/rsd0 of=/dev/null count=800 bs=128k
800+0 records in
800+0 records out
104857600 bytes transferred in 9.752697 secs (10751652 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=800 bs=64k
800+0 records in
800+0 records out
52428800 bytes transferred in 4.878517 secs (10746873 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=800 bs=32k
800+0 records in
800+0 records out
26214400 bytes transferred in 2.442630 secs (10732039 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=800 bs=16k
800+0 records in
800+0 records out
13107200 bytes transferred in 1.223625 secs (10711778 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=1600 bs=8k
1600+0 records in
1600+0 records out
13107200 bytes transferred in 1.414880 secs (9263824 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=1600 bs=4k
1600+0 records in
1600+0 records out
6553600 bytes transferred in 0.617373 secs (10615301 bytes/sec)

With a single "nice -19 loop" running in the background:

bash# dd if=/dev/rsd0 of=/dev/null count=200 bs=128k
200+0 records in
200+0 records out
26214400 bytes transferred in 19.079705 secs (1373942 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=200 bs=64k
200+0 records in
200+0 records out
13107200 bytes transferred in 9.549171 secs (1372601 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=200 bs=32k
200+0 records in
200+0 records out
6553600 bytes transferred in 9.599674 secs (682690 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=200 bs=16k
200+0 records in
200+0 records out
3276800 bytes transferred in 9.499696 secs (344937 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=200 bs=8k
200+0 records in
200+0 records out
1638400 bytes transferred in 9.519665 secs (172107 bytes/sec)
bash# dd if=/dev/rsd0 of=/dev/null count=200 bs=4k
200+0 records in
200+0 records out
819200 bytes transferred in 9.419678 secs (86967 bytes/sec)
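The "loop" in the loaded runs is just a cpu hog.  The exact program
isn't included here, so as a stand-in (a sketch only, assuming it does
nothing more than busy-wait):

    #!/bin/sh
    # loop: burn cpu forever so dd has to compete for the processor.
    while :; do
            :
    done

It was started at low priority before repeating the dd runs:

    bash# nice -19 ./loop &
    bash# dd if=/dev/rsd0 of=/dev/null count=200 bs=64k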
The only things running on the machine that are not in the default
configuration are mfs, named, sshd, cfsd, ppp and the xig X server.
None of them used any noticeable amount of cpu.
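One more note on the loaded numbers: at every block size from 64k down
to 4k, dd completes about 200 / 9.5 ~= 21 records per second, which is
where the "never more than about 20 records per second" observation
above comes from.  A short script to pull that figure straight out of
dd's summary line (a sketch; the awk field position assumes the
"N bytes transferred in S secs" format shown above):

    #!/bin/sh
    # Print records per second for each block size.  Run with the
    # background loop active; dd writes its summary line to stderr.
    for bs in 64k 32k 16k 8k 4k; do
        dd if=/dev/rsd0 of=/dev/null count=200 bs=$bs 2>&1 |
            awk -v bs=$bs '/transferred/ {
                printf "%s: %.1f records/sec\n", bs, 200 / $5
            }'
    done

(The 128k runs come out at roughly half that record rate but the same
bytes/sec as 64k, which would fit each 128k record being split into
two transfers -- though that is only a guess from the numbers.)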