Date: Thu, 15 Jan 2004 17:00:33 -0500
From: "Mike Jakubik" <mikej@rogers.com>
To: "'Paul Mather'" <paul@gromit.dlib.vt.edu>
Cc: freebsd-stable@freebsd.org
Subject: RE: Adaptect raid performance with FreeBSD
Message-ID: <20040115215754.SZEW23685.fep01-mail.bloor.is.net.cable.rogers.com@win2000>
In-Reply-To: <20040115143923.GC6678@gromit.dlib.vt.edu>
> -----Original Message-----
> From: owner-freebsd-stable@freebsd.org
> [mailto:owner-freebsd-stable@freebsd.org] On Behalf Of Paul Mather
> Sent: Thursday, January 15, 2004 9:39 AM
> To: Mike Jakubik
> Cc: freebsd-stable@freebsd.org
> Subject: Re: Adaptect raid performance with FreeBSD
>
> On Wed, Jan 14, 2004 at 05:52:50PM -0500, Mike Jakubik wrote:
>
> => This sounds pretty poor for SCSI raid. Here are my results on a
> => single Maxtor ATA drive.
> =>
> => CPU: AMD Athlon(tm) Processor (1410.21-MHz 686-class CPU)
> => ad0: 76345MB <MAXTOR 6L080L4> [155114/16/63] at ata0-master UDMA100
> =>
> => # dd if=/dev/rad0s1a of=/dev/null bs=1m count=100
> => 100+0 records in
> => 100+0 records out
> => 104857600 bytes transferred in 2.484640 secs (42202333 bytes/sec)
> =>
> => 5 dd's running simultaneously show the following in iostat.
>
> What about 5 dd's running simultaneously but with slightly
> staggered start times, so that four of them aren't hitting the
> drive's cache and hence only really testing its interface speed?
:-) Here are the results with a .3 second delay between each dd start:

104857600 bytes transferred in 9.572284 secs (10954293 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.261223 secs (11322220 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.262631 secs (11320499 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.263857 secs (11319000 bytes/sec)
100+0 records in
100+0 records out
104857600 bytes transferred in 9.265230 secs (11317323 bytes/sec)

I'm not sure if this was done properly; here is the command I used:

# dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100 & sleep .3 && \
  dd if=/dev/rad0s1a of=/dev/null bs=1m count=100

iostat -w 1:

      tty             ad0              ad2              ad4             cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0    2   0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   1  119 128.00  37  4.62   0.00   0  0.00   0.00   0  0.00   0  0  1  0  99
   0   77 128.00 398 49.75   0.00   0  0.00   0.00   0  0.00   0  0  2  2  97
   0   77 128.00 422 52.72   0.00   0  0.00   0.00   0  0.00   0  0  2  1  98
   0   77 128.00 421 52.60   0.00   0  0.00   0.00   0  0.00   0  0  2  2  96
   0   77 128.00 422 52.72   0.00   0  0.00   0.00   0  0.00   0  0  2  1  97
   0   77 128.00 421 52.60   0.00   0  0.00   0.00   0  0.00   0  0  1  1  98
   0   77 128.00 421 52.60   0.00   0  0.00   0.00   0  0.00   0  0  2  2  97
   0   77 128.00 422 52.72   0.00   0  0.00   0.00   0  0.00   0  0  2  1  98
   0   77 128.00 421 52.60   0.00   0  0.00   0.00   0  0.00   0  0  1  2  98
   0   77 128.00 422 52.72   0.00   0  0.00   0.00   0  0.00   0  0  1  1  98
   0  689 128.00 155 19.43   0.00   0  0.00   0.00   0  0.00   1  0  1  1  98
   0   77  16.00   8  0.12   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   77   0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

> Long seeks are the major time consumer in disk I/O (and
> multiple-spindle parallelism is one of the things in RAID
> that helps minimise this penalty).
> The above dd test is not a good test of performance in that regard.
> What it will give you is a best-case performance, not an expected
> real-world performance (which is more valuable to know, right?).
>
> Cheers,
>
> Paul.
>
> PS: Maybe you'll get faster transfers if you do the dd from
> single-user mode, with no background system processes
> interfering with the disk. :-)

Yes, I agree. Thanks.
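[For reference, the staggered-start test discussed above can be scripted rather than typed as one long pipeline. The sketch below is illustrative only: it reads from a scratch file created by mktemp instead of the raw device (/dev/rad0s1a), and the reader count and delay are assumptions matching the thread, not the original machine's setup.]

```shell
#!/bin/sh
# Staggered sequential-read test: start each dd slightly after the
# previous one so later readers are not just re-reading data the first
# reader already pulled into the drive's cache.
SCRATCH=$(mktemp)   # stand-in for the raw disk device
NREADERS=5          # number of concurrent dd readers (as in the thread)
DELAY=0.3           # stagger between starts, in seconds

# Create a 10 MB scratch file to read back (bs given numerically,
# since BSD dd spells it "1m" and GNU dd spells it "1M").
dd if=/dev/zero of="$SCRATCH" bs=1048576 count=10 2>/dev/null

i=1
while [ "$i" -le "$NREADERS" ]; do
    dd if="$SCRATCH" of=/dev/null bs=1048576 2>/dev/null &
    sleep "$DELAY"
    i=$((i + 1))
done

wait                # let all background readers finish
rm -f "$SCRATCH"    # clean up the scratch file
echo "all $NREADERS readers done"
```

[To test a real device instead, point SCRATCH at it and drop the file-creation and cleanup steps; watching `iostat -w 1` in another terminal while this runs shows the aggregate throughput, as in the output above.]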