From owner-freebsd-hackers Tue Apr 30 12:33:13 1996
Return-Path: owner-hackers
Received: (from root@localhost) by freefall.freebsd.org (8.7.3/8.7.3)
	id MAA29961 for hackers-outgoing; Tue, 30 Apr 1996 12:33:13 -0700 (PDT)
Received: from godzilla.zeta.org.au (godzilla.zeta.org.au [203.2.228.19])
	by freefall.freebsd.org (8.7.3/8.7.3) with SMTP id MAA29956;
	Tue, 30 Apr 1996 12:33:07 -0700 (PDT)
Received: (from bde@localhost) by godzilla.zeta.org.au (8.6.12/8.6.9)
	id FAA19369; Wed, 1 May 1996 05:26:44 +1000
Date: Wed, 1 May 1996 05:26:44 +1000
From: Bruce Evans
Message-Id: <199604301926.FAA19369@godzilla.zeta.org.au>
To: koshy@india.hp.com, msmith@atrad.adelaide.edu.au
Subject: Re: lmbench IDE anomaly
Cc: current@freebsd.org, hackers@freebsd.org
Sender: owner-hackers@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

>> >> two simultaneous runs on the scsi disk
>...
>> The performance degradation per process is around 2x which is to be expected.
>> The overall throughput is around the same as the single benchmark case.
>>
>> However when the same exercise is repeated with the IDE disk:
>...
>> Here we see a 8x degradation per process; 4x in terms of total throughput.

Using dd instead of lmdd, I get a 26x degradation per process for both SCSI
and IDE (SCSI: P133, ncr'810, Quantum Grand Prix; speed reduced from 2048K/s
to 78K/s; IDE: 486/33, slow Samsung drive; speed reduced from 682K/s to
26K/s).

>That's about right.  The SCSI disk gets the chance to sort the I/O to suit
>itself, optimising its performance.  The IDE disk only gets to look at one
>transaction at a time, so it's at the mercy of the disksorting code in
>the operating system.  I don't know that FreeBSD's disksort stuff is
>terribly wonderful, but I'd happily stand corrected.

disksort() is a no-op for this test because the queue length is always 1.
Neither disk gets much chance to sort the i/o.  The speed depends on the
caching strategy and the size of the cache.  The i/o pattern apparently
completely defeats read-ahead and/or track buffering for both my drives.

On another of my drives (SCSI, 486DX2/66, slow Toshiba drive) the degradation
was only 2x (from 208K/sec to 104K/sec).  This doesn't mean much, since the
drive is so slow for the small 1K block size to begin with (it needs a block
size of 16K to approach the platter speed).  OTOH, a larger block size would
be more likely to defeat the drive's caching.

Bruce
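
For readers who want to try the two-reader test without lmdd, a minimal C
sketch of the access pattern being discussed is below: two processes read
the same disk sequentially in 1K blocks at the same time and report their
own throughput.  This is only an illustration; the device path and the
amount read are placeholders, not the values from the measurements above.

	/*
	 * Sketch of two concurrent 1K-block sequential readers.
	 * The device path and transfer size are placeholders.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/time.h>
	#include <sys/wait.h>
	#include <unistd.h>

	#define BLKSIZE	1024			/* 1K blocks, as in the lmdd/dd runs */
	#define NBLOCKS	(8 * 1024)		/* 8MB per reader; adjust to taste */

	static void
	reader(const char *path, int id)
	{
		char buf[BLKSIZE];
		struct timeval t0, t1;
		double secs;
		long i;
		int fd;

		if ((fd = open(path, O_RDONLY)) < 0) {
			perror(path);
			_exit(1);
		}
		gettimeofday(&t0, NULL);
		for (i = 0; i < NBLOCKS; i++)
			if (read(fd, buf, BLKSIZE) != BLKSIZE)
				break;
		gettimeofday(&t1, NULL);
		secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
		printf("reader %d: %ld KB in %.2f s = %.0f KB/s\n",
		    id, i * BLKSIZE / 1024, secs, i * BLKSIZE / 1024 / secs);
		close(fd);
		_exit(0);
	}

	int
	main(int argc, char **argv)
	{
		const char *path = argc > 1 ? argv[1] : "/dev/rwd0"; /* placeholder */
		int i;

		/* Two concurrent readers, as in the "two simultaneous runs" case. */
		for (i = 0; i < 2; i++)
			if (fork() == 0)
				reader(path, i);
		while (wait(NULL) > 0)
			;
		return (0);
	}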
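
As background for the disksort() remark above, here is a simplified sketch of
a disksort-style one-way elevator insertion.  It is not the actual FreeBSD
kernel routine (the real code works on struct buf and also handles the wrap
back to low block numbers for a second sweep); it only shows why a queue
whose length never exceeds 1 gives the sort nothing to do.

	/*
	 * Simplified one-way elevator insert, keyed on block number.
	 * With at most one request ever queued, the new request simply
	 * becomes the whole queue, so no sorting takes place.
	 */
	#include <stddef.h>

	struct req {
		long		blkno;		/* starting block of the transfer */
		struct req	*next;
	};

	struct queue {
		struct req	*head;		/* request the drive is working on */
	};

	void
	disksort_sketch(struct queue *q, struct req *r)
	{
		struct req *p;

		r->next = NULL;
		if (q->head == NULL) {		/* empty queue: the only case this */
			q->head = r;		/* benchmark ever hits */
			return;
		}

		/*
		 * Otherwise insert r after the last queued request whose block
		 * number is still below r's, keeping one ascending sweep.
		 */
		for (p = q->head; p->next != NULL && p->next->blkno < r->blkno;
		    p = p->next)
			;
		r->next = p->next;
		p->next = r;
	}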