Date:      Wed, 18 Sep 1996 18:12:07 -0400 (EDT)
From:      Brian Tao <taob@io.org>
To:        FREEBSD-SCSI-L <freebsd-scsi@freebsd.org>, FREEBSD-CURRENT-L <freebsd-current@freebsd.org>, FREEBSD-ISP-L <freebsd-isp@freebsd.org>
Subject:   Streamlogic RAID array benchmarks
Message-ID:  <Pine.NEB.3.92.960917173042.7033P-100000@zap.io.org>

    I now have a P166 equipped with an Adaptec AHA-2940UW controller
and a 3x4GB RAID 5 subsystem from Streamlogic (kindly lent to us by
Tenex Data Systems here in Toronto).

    The drive comes preformatted and preconfigured with two data
drives and one parity drive.  It has two 68-pin wide connectors on the
back and a nifty little pop-up LCD panel on the front to control
various aspects of the RAID.  There is also a 9-pin male serial port
on the back if you want to hook up a VT-100 terminal to it.

    The RAID worked right out of the box.  I have a narrow 4GB
Barracuda plugged into the 50-pin connector on the Adaptec, and the
RAID on the external 68-pin connector.  The RAID drives themselves are
connected via a 10MB/s narrow SCSI bus to the Streamlogic RAID
controller, which then interfaces with the host over a Fast/Wide bus.
The controller has four narrow busses, allowing for up to 28 drives
per RAID controller.
The GENERIC kernel recognizes this setup:

FreeBSD 2.2-960801-SNAP #0: Sat Aug  3 15:18:25  1996
    jkh@time.cdrom.com:/usr/src/sys/compile/GENERIC
Calibrating clock(s) relative to mc146818A clock...
i586 clock: 133663775 Hz, i8254 clock: 1193428 Hz
CPU: Pentium (133.63-MHz 586-class CPU)
  Origin = "GenuineIntel"  Id = 0x52c  Stepping=12
  Features=0x1bf<FPU,VME,DE,PSE,TSC,MSR,MCE,CX8>
real memory  = 67108864 (65536K bytes)
avail memory = 62947328 (61472K bytes)
Probing for devices on PCI bus 0:
chip0 <generic PCI bridge (vendor=8086 device=7030 subclass=0)> rev 2 on pci0:0
chip1 <generic PCI bridge (vendor=8086 device=7000 subclass=1)> rev 1 on pci0:7:0
pci0:7:1: Intel Corporation, device=0x7010, class=storage (ide) [no driver assigned]
ahc0 <Adaptec 2940 Ultra SCSI host adapter> rev 0 int a irq 11 on pci0:8
ahc0: aic7880 Wide Channel, SCSI Id=7, 16 SCBs
ahc0 waiting for scsi devices to settle
(ahc0:0:0): "SEAGATE ST15150N 0022" type 0 fixed SCSI 2
sd0(ahc0:0:0): Direct-Access 4095MB (8388315 512 byte sectors)
(ahc0:2:0): "MICROP LTX       011000 7t38" type 0 fixed SCSI 2
sd1(ahc0:2:0): Direct-Access 8189MB (16772017 512 byte sectors)
[...]


    All the usual tools under FreeBSD treat the RAID as a single
drive.  The default stripe size is 8K, RAID 5 (striping with parity).
This can be changed via the LCD keypad on the unit itself, or with a
VT-100 interface via the serial port.
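
    For what it's worth, building a filesystem on the array is no
different from doing it on any single disk: label it, newfs it,
mount it.  Roughly (partition letter and mount point as in the df
output below; the disklabel step is from memory, not a transcript):

# disklabel -e sd1
# newfs /dev/rsd1s1a
# mount /dev/sd1s1a /raid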

    The problem I have is that it isn't particularly fast on the raw
throughput tests.  Granted, there is some overhead in calculating the
parity, but I would expect it to go at least as fast as the single 4GB
drive.  Results from various benchmarks are included below.

    The tests were conducted on newly newfs'd filesystems.  sd0 is the
single 4GB drive and sd1 is the RAID.  A unit from CMD should be
arriving next week, so I'll have another product to benchmark.
Comments welcome, and please note the Reply-To.
--
Brian Tao (BT300, taob@io.org, taob@ican.net)
Senior Systems and Network Administrator, Internet Canada Corp.
"Though this be madness, yet there is method in't"


>>>>>

Filesystem  1K-blocks     Used    Avail Capacity  Mounted on
/dev/sd0s1d   2847603        4  2619791     0%    /single
/dev/sd1s1a   8135644       10  7484783     0%    /raid

    Bonnie 1.0 output is shown here.  The single drive easily outpaces
the RAID in both sequential and random access.

             -------Sequential Output-------- ---Sequential Input-- --Random--
             -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine   MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
raid     100  1354 25.1  1308  4.5  1056  7.3  3059 55.9  3190 11.5 139.0  5.1
single   100  3506 66.8  3429 12.0  1848 12.5  5367 99.1  6462 25.6 202.5  7.1
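
    For anyone who wants to reproduce these numbers: a Bonnie command
line that matches the table above would be something like the
following (-d is the scratch directory, -s the file size in MB, -m
the label in the "Machine" column):

# bonnie -d /raid -s 100 -m raid
# bonnie -d /single -s 100 -m single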


    Iozone 2.01 from the packages collection showed a dramatic
difference in favour of the non-RAID drive.  It was over twice as
fast as the RAID in both block reads and block writes.  I'm a little
suspicious of the read numbers for the single drive... I thought the
ST15150N maxed out at around 6.5MB/s (at least on narrow
controllers), yet I'm seeing over 8MB/s on a single drive.  Does the
2940UW make that much difference even on narrow drives?  The test
file size was 128MB.  Size is the record length in bytes; Write and
Read are in bytes per second.

    -------- RAID --------       ------- SINGLE -------
     Size    Write    Read        Size    Write    Read
      256  1334877 3420921         256  3551027 7683304
      512  1398101 3477002         512  3312101 8134407
     1024  1410614 3519022        1024  3372569 8458822
     2048  1413748 3575415        2048  3369923 8646134
     4096  1414563 3585862        4096  3360694 8729608
     8192  1421233 3568730        8192  3356754 8769713
    16384  1419941 3554700       16384  3374556 8347847
    32768  1419354 3469979       32768  3375219 8751843
    65536  1420176 3408028       65536  3367281 8774192
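
    The table took one iozone run per record size.  If I remember the
argument order correctly, the old iozone takes the file size in
megabytes followed by the record length in bytes, so on each
filesystem it comes down to a loop along the lines of (csh syntax):

cd /raid
foreach r (256 512 1024 2048 4096 8192 16384 32768 65536)
    iozone 128 $r
end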


    I then ran a simple test that should take advantage of the 4MB
write cache on the RAID controller: create 10000 files in an empty
directory, then retouch their inodes, then delete them all.  I ran
this on both synchronously and asynchronously mounted filesystems on
both types of drives.  The sync-mounted RAID is nearly as fast as an
async-mounted filesystem on the single drive.  Times are in min:sec.

          ----- RAID -----     ---- SINGLE ----
             Sync    Async        Sync    Async
touch     1:42.24  1:23.95     3:57.62  1:27.80
retouch   0:12.37  0:03.24     1:23.72  0:03.26
remove    0:17.58  0:08.55     1:25.80  0:10.19
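
    Switching between the two modes is just a remount of the
filesystem under test.  For the RAID it looks something like the
following (note that a default FFS mount already writes metadata
synchronously, so for this test an explicit "sync" option and the
default should behave much the same):

Synchronous run:
# umount /raid
# mount -o sync /dev/sd1s1a /raid

Asynchronous run:
# umount /raid
# mount -o async /dev/sd1s1a /raid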

Synchronous (RAID):
# time touch `jot 10000 1`
0.4u 84.6s 1:42.24 83.2% 10+170k 143+20315io 0pf+0w
# time touch `jot 10000 1`
0.3u 3.6s 0:12.37 32.8% 22+193k 0+10000io 0pf+0w
# time rm *
0.4u 10.0s 0:17.58 59.6% 184+252k 0+10000io 0pf+0w

Asynchronous (RAID):
# time touch `jot 10000 1`
0.4u 83.3s 1:23.95 99.7% 10+170k 159+315io 0pf+0w
# time touch `jot 10000 1`
0.3u 3.2s 0:03.24 110.8% 22+191k 0+0io 0pf+0w
# time rm *
0.2u 9.5s 0:08.55 115.2% 186+253k 0+305io 0pf+0w

Synchronous (single):
# time touch `jot 10000 1`
0.4u 86.9s 3:57.62 36.7% 10+170k 162+20314io 0pf+0w
# time touch `jot 10000 1`
0.4u 3.8s 1:23.72 5.1% 21+191k 0+10000io 0pf+0w
# time rm *
0.4u 10.4s 1:25.80 12.6% 186+251k 0+10000io 0pf+0w

Asynchronous (single):
# time touch `jot 10000 1`
0.4u 82.3s 1:27.80 94.3% 10+170k 159+315io 0pf+0w
# time touch `jot 10000 1`
0.3u 3.2s 0:03.26 111.0% 22+191k 0+0io 0pf+0w
# time rm *
0.4u 9.4s 0:10.19 96.2% 187+254k 0+305io 0pf+0w


    The last benchmark was to untar the contents of the /usr
filesystem (15330 files, 78469120 bytes).  The tar file was located on
the same filesystem to which it was untarred.  As in the previous
benchmark, the RAID is much faster with numerous small file
operations.
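
    For the record, the tarball can be prepared along these lines (the
-C keeps the stored paths relative, so extraction lands in ./usr on
the filesystem under test):

# cd /raid
# tar cf usr-test.tar -C / usr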

RAID:
# time tar xf usr-test.tar
1.7u 20.4s 5:02.25 7.3% 285+329k 2472+51582io 0pf+0w
# time rm -rf usr
0.5u 11.3s 2:57.98 6.6% 163+569k 2132+29479io 0pf+0w

Single:
# time tar xf usr-test.tar
1.6u 18.0s 8:49.25 3.7% 287+329k 1995+49819io 10pf+0w
# time rm -rf usr
0.5u 9.1s 4:44.14 3.4% 164+569k 2462+29479io 1pf+0w

<<<<<



