From owner-freebsd-current Sat Apr 8 03:28:53 1995
Return-Path: current-owner
Received: (from majordom@localhost) by freefall.cdrom.com (8.6.10/8.6.6)
	id DAA09662 for current-outgoing; Sat, 8 Apr 1995 03:28:53 -0700
Received: from godzilla.zeta.org.au (godzilla.zeta.org.au [203.2.228.34])
	by freefall.cdrom.com (8.6.10/8.6.6) with ESMTP id DAA09650
	for ; Sat, 8 Apr 1995 03:27:36 -0700
Received: (from bde@localhost) by godzilla.zeta.org.au (8.6.9/8.6.9)
	id UAA24575; Sat, 8 Apr 1995 20:21:11 +1000
Date: Sat, 8 Apr 1995 20:21:11 +1000
From: Bruce Evans
Message-Id: <199504081021.UAA24575@godzilla.zeta.org.au>
To: freebsd-current@FreeBSD.org, taob@gate.sinica.edu.tw
Subject: Re: Disk performance
Sender: current-owner@FreeBSD.org
Precedence: bulk

> My feeling is that it should have been lower and not anywhere
> close to 100% usage. Sending out 366 I/O requests to a SCSI device
> and waiting for them to return did not seem to warrant a 50% busy
> state with a 100-MHz processor on a 33-MHz bus. I gather this is
> where IDE drives fare much worse?

Actually only 366/8 I/O requests are sent to SCSI devices.  Iozone
does huge sequential I/Os on which clustering works perfectly, so
file data is always read and written 64K at a time, not 8K at a time,
even on a file system with a block size of 8K.  Normal file accesses
aren't as sequential as iozone's, so clustering doesn't work as well;
they are often 8 times as slow as for iozone for this and other
reasons (seeking...) :-(.

For IDE drives, sending out the 366/8 I/O requests is much faster,
but "waiting" for them to return actually requires handling up to
366*16 interrupts (one for each 512-byte sector) and copying 512
bytes or more per interrupt.  Interrupt overhead is about 5
usec/interrupt on a P90, and copying overhead is about 155
usec/sector for the old IDE interface on all systems.

Bruce
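
The arithmetic above can be checked with a short C sketch.  It assumes
iozone's 366 requests are 8K each (the file system block size mentioned
in the message), that clustering merges them into 64K transfers, and it
uses the 5 usec/interrupt and 155 usec/sector figures quoted above; the
variable names are illustrative only, not from any FreeBSD source.

	/*
	 * Back-of-the-envelope overhead arithmetic for the iozone
	 * numbers discussed above.
	 */
	#include <stdio.h>

	int
	main(void)
	{
		int requests = 366;            /* 8K requests from iozone */
		int block_size = 8 * 1024;     /* file system block size */
		int cluster_size = 64 * 1024;  /* clustered transfer size */
		int sector_size = 512;         /* IDE moves one sector per interrupt */
		double intr_us = 5.0;          /* interrupt overhead on a P90 */
		double copy_us = 155.0;        /* copy overhead per sector, old IDE */

		/* SCSI: clustering merges 8 blocks into each 64K command. */
		int scsi_cmds = requests * block_size / cluster_size;

		/* IDE: one interrupt and one 512-byte copy per sector. */
		int interrupts = requests * (block_size / sector_size);
		double ide_sec = interrupts * (intr_us + copy_us) / 1e6;

		printf("SCSI commands after clustering: %d\n", scsi_cmds);
		                                      /* 45 (366/8, truncated) */
		printf("IDE interrupts: %d\n", interrupts);        /* 5856 */
		printf("IDE CPU overhead: %.2f s\n", ide_sec);     /* 0.94 */
		return 0;
	}

This makes the comparison concrete: a few dozen SCSI commands versus
nearly a second of pure interrupt and copy overhead in the IDE case,
which is consistent with the original poster's guess that this is
where IDE drives fare much worse.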