From owner-freebsd-questions Tue Nov  9 06:47:36 1999
Delivered-To: freebsd-questions@freebsd.org
Received: from mojave.sitaranetworks.com (mojave.sitaranetworks.com [199.103.141.157])
	by hub.freebsd.org (Postfix) with ESMTP id DA35B14D7F
	for ; Tue, 9 Nov 1999 06:47:27 -0800 (PST)
	(envelope-from grog@mojave.sitaranetworks.com)
Message-ID: <19991108204749.33806@mojave.sitaranetworks.com>
Date: Mon, 8 Nov 1999 20:47:49 -0500
From: Greg Lehey
To: sthaug@nethelp.no, rsnow@lgc.com
Cc: gurney_j@resnet.uoregon.edu, freebsd-questions@FreeBSD.ORG
Subject: rawio bug (was: writing much slower than reading...)
Reply-To: Greg Lehey
References: <38240CC0.8099D19D@lgc.com> <89944.941888974@verdi.nethelp.no>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
In-Reply-To: <89944.941888974@verdi.nethelp.no>; from sthaug@nethelp.no on Sat, Nov 06, 1999 at 12:49:34PM +0100
Sender: owner-freebsd-questions@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

On Saturday, 6 November 1999 at 12:49:34 +0100, sthaug@nethelp.no wrote:
>> Emm, I want your system.  Have you double checked your numbers?  They
>> look a bit high.  Here's what I get on a vinum stripe across two 'cudas
>> on an SMP box:
>>
>> rsnow@basil% time dd if=/dev/vinum/rstripe of=/dev/null bs=64k count=2048
>> 2048+0 records in
>> 2048+0 records out
>> 134217728 bytes transferred in 7.938773 secs (16906609 bytes/sec)
>> 0.007u 0.520s 0:07.98 6.5% 73+371k 2+0io 0pf+0w
>
> I can confirm the sequential read numbers for the DPTA-372730.  The disk
> rotates at 7200 RPM and has extremely high bit density.  Thus very high
> numbers for sequential read.  I consistently get more than 23 MByte/s
> (M = 1000000 here).
>
> Haven't tried the sequential write yet.  Right now I'm testing it with
> rawio, after having run some bonnie tests.

I found a bug in rawio's sequential measurements a couple of days ago
which results in exaggeratedly high numbers when you have more than
one process (by default, rawio starts 8).  As the number of processes
increases, the apparent performance increases, whereas it should
decrease.  The reason is that each transfer starts at the same place,
so the disk cache is used more than it should be.  I'll release a fix
Real Soon Now, but for this kind of measurement you should use -n 1
to get true sequential results.

Greg
--
When replying to this message, please copy the original recipients.
For more information, see http://www.lemis.com/questions.html
Finger grog@lemis.com for PGP public key
See complete headers for address and phone numbers

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message
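
(For anyone repeating the test: a minimal sketch of a single-process run along
the lines Greg suggests.  The device path below is only an example, and -n 1 is
the flag taken from his message; check rawio's own usage output for the exact
option set and argument order.)

    # one I/O process, so transfers are genuinely sequential and the drive
    # cache isn't hit repeatedly from the same starting offset
    rawio -n 1 /dev/rda0e

A single-stream dd read, as in the quoted transcript above, is a reasonable
cross-check, since one process reading sequentially never triggers the
multi-process caching effect.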