Date: Wed, 1 Aug 2001 12:05:29 +0300
From: Dmitry Alyabyev <dimitry@al.org.ua>
To: freebsd-fs@FreeBSD.ORG
Subject: Re: Adaptec 2100S RAID Performance
Message-ID: <198176393894.20010801120529@al.org.ua>
hi
AFAIK dd isn't a good tool for that. Just use iozone - it's in the ports collection.
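Something along these lines should produce the kind of report shown below (just a sketch: -s is the per-process file size, -r the record size, -t the number of processes in throughput mode, and -i 0 / -i 2 select the write/rewrite and random read/write tests):

    iozone -s 1048576k -r 4k -t 3 -i 0 -i 2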
On the other hand, I'd like to follow up on this question and ask the people who are using the 2100S. I have a 2100S with a RAID1 of two Ultra2 disks under FreeBSD, and a Mylex with RAID0+1 across several Ultra3 disks. The iozone results under FreeBSD are terrible in comparison with Linux (please see the figures below - I'm talking about random read/write). So I'd like to know WHAT IS THE BOTTLENECK: Ultra2 vs. Ultra3, or Adaptec 2100S vs. Mylex, or FreeBSD I/O vs. Linux I/O (soft updates are enabled).
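If anyone wants to double-check the soft updates side on their own box, it can be toggled with tunefs on an unmounted filesystem, something like this (the device name here is only an example):

    umount /raid
    tunefs -n enable /dev/da0s1e   # example device - use your own
    mount /raid                    # assuming /raid is listed in /etc/fstab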
For FBSD:
    Record Size 4 KB
    File size set to 1048576 KB
    Time Resolution = 0.000004 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 3 processes
    Each process writes a 1048576 Kbyte file in 4 Kbyte records

    Children see throughput for 3 initial writers = 20621.72 KB/sec
    Parent sees throughput for 3 initial writers  = 20134.93 KB/sec
        Min throughput per process = 6596.70 KB/sec
        Max throughput per process = 7042.41 KB/sec
        Avg throughput per process = 6873.91 KB/sec
        Min xfer                   = 983196.00 KB

    Children see throughput for 3 rewriters = 3043.30 KB/sec
    Parent sees throughput for 3 rewriters  = 3043.25 KB/sec
        Min throughput per process = 1006.05 KB/sec
        Max throughput per process = 1019.20 KB/sec
        Avg throughput per process = 1014.43 KB/sec
        Min xfer                   = 1035048.00 KB

    Children see throughput for 3 random readers = 964.08 KB/sec
    Parent sees throughput for 3 random readers  = 964.07 KB/sec
        Min throughput per process = 321.22 KB/sec
        Max throughput per process = 321.44 KB/sec
        Avg throughput per process = 321.36 KB/sec
        Min xfer                   = 1047860.00 KB

    Children see throughput for 3 random writers = 440.61 KB/sec
    Parent sees throughput for 3 random writers  = 438.70 KB/sec
        Min throughput per process = 146.49 KB/sec
        Max throughput per process = 147.42 KB/sec
        Avg throughput per process = 146.87 KB/sec
        Min xfer                   = 1041952.00 KB
For Linux:
    Record Size 4 KB
    File size set to 1048576 KB
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 3 processes
    Each process writes a 1048576 Kbyte file in 4 Kbyte records

    Children see throughput for 3 initial writers = 34334.45 KB/sec
    Parent sees throughput for 3 initial writers  = 20504.82 KB/sec
        Min throughput per process = 10804.76 KB/sec
        Max throughput per process = 12669.23 KB/sec
        Avg throughput per process = 11444.82 KB/sec
        Min xfer                   = 896260.00 KB

    Children see throughput for 3 rewriters = 21032.06 KB/sec
    Parent sees throughput for 3 rewriters  = 14973.13 KB/sec
        Min throughput per process = 7010.16 KB/sec
        Max throughput per process = 7011.58 KB/sec
        Avg throughput per process = 7010.69 KB/sec
        Min xfer                   = 1048576.00 KB

    Children see throughput for 3 random readers = 1637.39 KB/sec
    Parent sees throughput for 3 random readers  = 1637.35 KB/sec
        Min throughput per process = 528.07 KB/sec
        Max throughput per process = 560.51 KB/sec
        Avg throughput per process = 545.80 KB/sec
        Min xfer                   = 987920.00 KB

    Children see throughput for 3 random writers = 5057.66 KB/sec
    Parent sees throughput for 3 random writers  = 3441.19 KB/sec
        Min throughput per process = 1613.77 KB/sec
        Max throughput per process = 1754.04 KB/sec
        Avg throughput per process = 1685.89 KB/sec
        Min xfer                   = 964792.00 KB
--
Dimitry
Wednesday, July 18, 2001, 10:00:04 PM, Adrian Gonzalez wrote:
> Hello everyone
> Sorry if this is slightly off topic, but I couldn't find anything similar
> on the archives. Here goes...
> I recently got an Adaptec 2100S single channel RAID controller (Ultra 160)
> and 4 Seagate Cheetah 18G 15K RPM drives.
> Basically, I mounted the 4 drives in a very nice but somewhat pricey
> enclosure from Storcase (http://www.storcase.com) and connected the array
> to the Adaptec card using a 3 ft Ultra-160 cable. The array was configured
> as RAID 1+0 (two pairs of two-drive RAID1 arrays) to get the best performance.
> FreeBSD 4.3 happily detected the controller and the disk array. I created
> a single partition and mounted it under /raid.
> Now for the question: What kind of performance should I expect from the
> array? I did simple tests like:
> dd if=/dev/zero of=test.file bs=1024k count=1000
> and wasn't terribly impressed with the performance. dd reported about
> 44Meg/sec reads and 18Meg/sec writes on average. I know this isn't a
> terribly reliable way to test the performance, and I'm hoping the
> advantages of using RAID will show themselves once this array is in a
> production server under a multiuser environment, but I can't help feeling
> it's somewhat on the slow side.
> Anyone have a similar setup or some suggestions for better ways to
> benchmark this array?
> Since this is somewhat off-topic, please reply directly to me. I will post
> any interesting results/observations to the list.
> Thank you
> -Adrian
