From owner-freebsd-hardware Wed Feb 19 23:41:40 1997
Return-Path:
Received: (from root@localhost)
	by freefall.freebsd.org (8.8.5/8.8.5) id XAA26494
	for hardware-outgoing; Wed, 19 Feb 1997 23:41:40 -0800 (PST)
Received: from genesis.atrad.adelaide.edu.au (genesis.atrad.adelaide.edu.au [129.127.96.120])
	by freefall.freebsd.org (8.8.5/8.8.5) with ESMTP id XAA26489;
	Wed, 19 Feb 1997 23:41:23 -0800 (PST)
Received: (from msmith@localhost)
	by genesis.atrad.adelaide.edu.au (8.8.5/8.7.3) id SAA17125;
	Thu, 20 Feb 1997 18:11:10 +1030 (CST)
From: Michael Smith
Message-Id: <199702200741.SAA17125@genesis.atrad.adelaide.edu.au>
Subject: Re: _big_ IDE disks?
In-Reply-To: <199702200437.PAA19813@godzilla.zeta.org.au> from Bruce Evans
	at "Feb 20, 97 03:37:15 pm"
To: bde@zeta.org.au (Bruce Evans)
Date: Thu, 20 Feb 1997 18:11:08 +1030 (CST)
Cc: bde@zeta.org.au, msmith@atrad.adelaide.edu.au, hardware@freebsd.org,
	se@freebsd.org
X-Mailer: ELM [version 2.4ME+ PL28 (25)]
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-hardware@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

Bruce Evans stands accused of saying:
>
>
> >Yup, this was a basic criterion for it being used instead of a SCSI
> >disk; by the time you compensate for the extra CPU overhead, I figured
> >that it would make a reasonable alternative to a ~3MB/sec SCSI disk
> >(which was not available 8( )
>
> The DORS-32160 is 5-6MB/sec, but you only need a FireballTM to compete
> with that on transfer speed :-).

I was contemplating the Fireballs, but the image wasn't really what I
wanted to pass on to people who expect the machine to be reliable 8)
Given your previous nasty remarks about IDE disks and CPU overhead (the
machine runs several concurrent compute-heavy jobs), and prior
experience with SCSI vs IDE in these systems, I wanted to go as far as
possible up the IDE performance curve.

> >How do I go about ascertaining this?
> >npx0 is enabled, and the kernel
> >was built with I586_CPU defined (obviously).  The kernel it's running
> >is built with npx.c v 1.31.2.5.
>
> I can't think of a better way than using `cvs log'.  It was reenabled
> in npx.c 1.31.2.6.  Other ways: run a debugger and look at the vectors.
> Run `dd if=/dev/zero of=/dev/null bs=1m count=1000' and complain if
> the throughput is much lower than 120MB/sec.

Hmm, 85MB/sec beforehand, 131MB/sec with a new kernel.  Thanks for
the pointer.

New test results:

wdc0: unit 0 (wd0): , 32-bit, multi-block-16
wd0: 4884MB (10003392 sectors), 9924 cyls, 16 heads, 63 S/T, 512 B/S

	Writing the 128 Megabyte file, 'iozone.tmp'...15.578125 seconds
	Reading the file...14.953125 seconds

	IOZONE performance measurements:
	8615781 bytes/second for writing the file
	8975898 bytes/second for reading the file

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          100  3939 72.4  8105 24.5  2026 10.7  4427 71.1  8175 20.3  93.7  3.5

Throughput's not up, but CPU overhead is down.  Perhaps we've hit the
drive's limit?

> Bruce

-- 
]] Mike Smith, Software Engineer        msmith@gsoft.com.au            [[
]] Genesis Software                     genesis@gsoft.com.au           [[
]] High-speed data acquisition and      (GSM mobile) 0411-222-496      [[
]] realtime instrument control.         (ph) +61-8-8267-3493           [[
]] Unix hardware collector.  "Where are your PEZ?" The Tick            [[
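As a sanity check, the IOZONE bytes/second figures quoted above are just the
file size divided by the elapsed time: the 128 megabyte test file is
128 * 1024 * 1024 = 134217728 bytes. A minimal sketch of that arithmetic
(an illustration, not something from the original exchange):

```shell
#!/bin/sh
# Reproduce the IOZONE throughput arithmetic: bytes written / elapsed
# seconds, truncated to a whole number of bytes per second.
awk 'BEGIN {
    size = 128 * 1024 * 1024          # 128MB test file = 134217728 bytes
    printf "write: %d bytes/sec\n", size / 15.578125
    printf "read:  %d bytes/sec\n", size / 14.953125
}'
# write: 8615781 bytes/sec
# read:  8975898 bytes/sec
```

Both values agree exactly with the 8615781 and 8975898 bytes/second that
IOZONE reports, so the tool is doing nothing more exotic than this.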