Date: Tue, 07 May 1996 19:07:11 +0530
From: A JOSEPH KOSHY <koshy@india.hp.com>
To: "Andrew V. Stesin" <stesin@elvisti.kiev.ua>
Cc: hackers@freebsd.org, current@freebsd.org
Subject: Re: lmbench IDE anomaly
Message-ID: <199605071337.AA138846231@fakir.india.hp.com>
In-Reply-To: Your message of "Sat, 04 May 1996 13:55:44 +0300." <199605041055.NAA23197@office.elvisti.kiev.ua>
Following Andrew Stesin's suggestion I enabled flags 0x80ff80ff on the
onboard IDE controller.  IDE disk transfer figures went up dramatically,
but the slowdown on simultaneous reads is still there.
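(For reference, a minimal sketch of how the flags can be set -- this
assumes the stock GENERIC wdc0 entry for 2.2-CURRENT, with only the
flags value added:

  controller  wdc0  at isa? port "IO_WD1" bio irq 14 flags 0x80ff80ff vector wdintr

The flags also show up in the wdc0 probe line in the machine config at
the end of this message.)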
Here are the figures:
(Machine configuration at end)
(The test cases involved running "lmdd" from LMBENCH on the
various disk devices and timing reads of 16MB of data, i.e.

  # ./lmdd if=/dev/DEVICE bs=BLOCKSIZE count=16MEG/BLOCKSIZE of=internal

Throughput for one lmdd reader process and for two simultaneous lmdd
readers is given below.)
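A rough sketch of how the two cases can be run -- the block size, count
and device here are just one combination from the tables below, and the
use of shell background jobs for the two-reader case is my assumption
about the procedure rather than anything prescribed by lmbench:

  # single reader: 16MB in 8K blocks from the raw SCSI disk
  ./lmdd if=/dev/rsd0a bs=8192 count=2048 of=internal

  # two simultaneous readers of the same raw device
  ./lmdd if=/dev/rsd0a bs=8192 count=2048 of=internal &
  ./lmdd if=/dev/rsd0a bs=8192 count=2048 of=internal &
  wait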
                         Per process              Per process
Device    blocksize         KB/s      blocksize       KB/s
~~~~~~    ~~~~~~~~~         ~~~~      ~~~~~~~~~       ~~~~

-- SCSI DISK --

--single reader--
rsd0a     bs=1024          653.74     bs=8192       1312.31
                           682.66                   1268.36
                           677.52                   1361.95

--two readers--
rsd0a     bs=1024          424.27     bs=8192        805.69
                           424.24                    807.64
                             --                      812.82
It looks like going from a 1K to an 8K block size roughly doubles
throughput.  Also, two simultaneous readers yield better aggregate
throughput than a single reader process (about 2 x 806 = 1612 KB/s
combined at bs=8192, versus roughly 1310-1360 KB/s for one reader).
So far so good.
-- IDE DISK --

--single reader--
rwd0a     bs=1024          839.05     bs=8192       2392.08  (!!)
                           841.53                   2402.42  (!!)
                           841.85                   2402.45  (!!)

--two readers--
rwd0a     bs=1024          199.38     bs=8192        251.83
                           218.38                    237.95
                           220.68                    238.50
The read rates for the single reader case are fantastic; however,
disaster seems to strike when two readers access the same device.
So I looked at the block device.
--single reader--
wd0a      bs=1024          199.80     bs=8192        796.07
                           200.04                    795.06

--two readers--
wd0a      bs=1024          200.04     bs=8192        795.60
                           200.33                    795.20
Hmm, block size still makes a huge difference.  Is this to be
expected?  Also, the two-reader case and the single-reader case show
about the same performance -- i.e. the buffer cache seems to be
working well.  Also note the 3x-4x slowdown when going through the
buffer cache (block device) compared to reading the raw device.
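To put numbers on that slowdown: at bs=1024 the raw IDE device delivers
~839 KB/s against ~200 KB/s through the block device (about 4.2x), and
at bs=8192 it is ~2392 KB/s against ~796 KB/s (about 3.0x).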
Machine config
~~~~~~~~~~~~~~
FreeBSD 2.2-CURRENT #0: Mon May 6 12:16:33 IST 1996
root@krill.india.hp.com:/usr/src/sys/compile/KRILL
...
CPU: Pentium (89.99-MHz 586-class CPU)
Origin = "GenuineIntel" Id = 0x525 Stepping=5
Features=0x1bf<FPU,VME,DE,PSE,TSC,MSR,MCE,CX8>
real memory = 16777216 (16384K bytes)
avail memory = 14737408 (14392K bytes)
...
wdc0 at 0x1f0-0x1f7 irq 14 flags 0x80ff80ff on isa
wdc0: unit 0 (wd0): <QUANTUM MAVERICK 540A>, multi-block-8
wd0: 516MB (1057392 sectors), 1049 cyls, 16 heads, 63 S/T, 512 B/S
aha0: Rev 41 (AHA-154x[AB]) V0.5, enabling residuals, target ops
aha0: reading board settings, dma=5 int=11 id=7 (bus speed defaulted)
aha0 at 0x330-0x333 irq 11 drq 5 on isa
(aha0:5:0): "QUANTUM LPS1080S 1220" type 0 fixed SCSI 2
sd0(aha0:5:0): Direct-Access 1001MB (2051460 512 byte sectors)
sd0(aha0:5:0): with 2874 cyls, 8 heads, and an average 89 sectors/track
...
Koshy
