From: "Nikolas Britton" <nikolas.britton@gmail.com>
Date: Tue, 7 Mar 2006 17:49:19 -0600
To: Beastie
Cc: Liste FreeBSD, Alex Zbyslaw
Subject: Re: SATA Raid (stress test..)
In-Reply-To: <440BA5CA.2070202@mra.co.id>

On 3/5/06, Beastie wrote:
> Nikolas Britton wrote:
> On 3/3/06, Alex Zbyslaw wrote:
> Nikolas Britton wrote:
>
> Please can you be careful when you attribute your comments. You've sent this
> email "to" me, and left only my name in the attributions as if I were
> someone suggesting either dd or diskinfo as accurate benchmarks, when in
> fact my contribution was to suggest unixbench and sandra-lite. Maybe you
> hate those too, in which case you can quote what I said in context and
> rubbish it at your pleasure.
>
> Yes, I see your point; it does look like I'm replying to something you
> wrote. This was an oversight and I am sorry.
>
> OK.
>
> Remember that 105MB/s number I quoted above? That's just the sustained read
> transfer rate for a big ass file, and I don't need to work with big ass
> files. I need to work with 15MB files (+/- 5MB). After buying the right
> disks, controller, mainboard, etc., and doing lots of tuning with the help
> of iozone, I get 200 - 350MB/s overall (read, write, etc.) for files less
> than or equal to 64MB*. So anyways, that's what iozone can do for you.
> Google it and you'll find out more about it.
>
> Thanks for the info. I think I can only dream about numbers like yours.
> Iozone looks to be in the ports, so I see some of my weekend disappearing
> looking at it :-)
>
> It runs on over two dozen operating systems, including Windows. There are
> two primary reasons I can get such high transfer rates from simple SATA
> drives.
> The first one was the selection of a mainboard that had PCI-X slots; I
> built this system before PCI-Express mainboards and controllers hit the
> market. The plain PCI bus is severely restricted and obsolete. Here are the
> theoretical maximum throughputs for the various bus standards, using
> f(x, y) = x bits * y MHz / 8 = maximum theoretical throughput in MB/s:
>
>   PCI    (32 bits,  33 MHz) =  132 MB/s  (standard PCI bus found on every PC)
>   PCI    (32 bits,  66 MHz) =  264 MB/s  (cards are commonplace, mainboards aren't)
>   PCI-X  (64 bits,  33 MHz) =  264 MB/s  (obsolete, you won't find it on new boards)
>   PCI-X  (64 bits,  66 MHz) =  528 MB/s  (commonplace)
>   PCI-X  (64 bits, 100 MHz) =  800 MB/s
>   PCI-X  (64 bits, 133 MHz) = 1064 MB/s  (commonplace)
>   PCI-X  (64 bits, 266 MHz) = 2128 MB/s
>   PCI-X  (64 bits, 533 MHz) = 4264 MB/s  (very hard to find, even on high-end equipment)
>
> PCI-X version 1 covers 66 - 133 MHz and PCI-X version 2 covers 266 - 533
> MHz. PCI-X is backwards compatible with PCI and with slower versions of
> PCI-X: you can put a standard PCI card in a PCI-X 533 MHz slot and it will
> simply run at (32, 33); similarly, a 66 MHz PCI card will run at (32, 66),
> and so on. PCI-X is also forwards compatible in that you can run a 133 MHz
> PCI-X card in a standard (32, 33) PCI slot. Because of that backwards and
> forwards compatibility I feel PCI-X is superior to PCI-Express, *BUT* going
> forward PCI-Express is far superior to PCI and PCI-X because it does not
> have 13 years of legacy to remain compatible with, it's cheaper to produce,
> and it's already in lower-end desktop systems as a replacement for AGP,
> thanks to all the gamers. A few years from now PCI will end up where ISA /
> EISA are. I'm veering way off topic, so I won't go into any more detail
> about PCI, PCI-X, and PCI-Express; google around for the shortcomings of
> PCI / PCI-X and why PCI-Express is the future.
>
> PCI-Express: PCIe is not compatible with PCI or PCI-X (except via PCIe to
> PCI bridging); it's simply a different spec, so again, google the details.
> Its theoretical maximums are expressed in gigabits per second, but I'll
> convert them to MB/s for comparison with PCI and PCI-X:
>
>   x1:  2.5 Gbps = 312.5 MB/s    x2:   625 MB/s    x4:  1250 MB/s
>   x8:  2500 MB/s                x12: 3750 MB/s    x16: 5000 MB/s
>   x32: 10000 MB/s
>
> Anyways, back on topic. What was the topic? Oh yes: why you won't see
> 200MB/s - 350MB/s if you're using a standard PCI slot. As the table above
> shows, the standard PCI bus is a crap shoot, limited to a theoretical
> maximum of 132 MB/s. That means your RAID controller, the disks attached to
> it, and the cache buffers on those disks are all capped at that theoretical
> 132MB/s. Then you have to take into account that the PCI bus is shared with
> other devices such as the network card, video card, USB, etc. Your RAID
> controller has to fight with all these devices, and a 1Gbit NIC can eat up
> 125MB/s (12.5MB/s for a 100Mbit NIC).
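The figures above are just the f(x, y) formula evaluated at different widths and clocks. As a quick illustration (my own sketch, not part of the quoted mail, and ignoring protocol overhead exactly as the quoted numbers do), the same arithmetic in a few lines of Python:

    def bus_peak_mb_s(width_bits, clock_mhz):
        """Theoretical peak in MB/s: bits * MHz / 8, the f(x, y) above."""
        return width_bits * clock_mhz / 8

    # Parallel-bus figures quoted above.
    for name, bits, mhz in [("PCI", 32, 33), ("PCI 66MHz", 32, 66),
                            ("PCI-X 66", 64, 66), ("PCI-X 133", 64, 133),
                            ("PCI-X 533", 64, 533)]:
        print(f"{name:10s} {bus_peak_mb_s(bits, mhz):7.0f} MB/s")

    # PCIe is serial: 2.5 Gbit/s per lane = 312.5 MB/s, scaling with lane count.
    for lanes in (1, 4, 8, 16):
        print(f"PCIe x{lanes:<3} {lanes * 2500 / 8:7.1f} MB/s")

Running it reproduces the 132, 264, 528, 1064, and 4264 MB/s figures quoted above, plus the per-lane PCIe numbers.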
> The next reason for those high gains is that I picked drives with 16MB
> cache buffers, and that I'm insane enough to run a production server with
> the write-back cache policy enabled on the array controller and the write
> cache enabled on the disks. This is stupidly insane unless you've planned
> for the worst. The worst case scenario is that a power failure corrupts the
> array into an unrepairable state and you lose everything.
>
> --
> BSD Podcasts @ http://bsdtalk.blogspot.com/
>
> Attached are the iozone results for amrd0 with 4 Seagate Barracuda 300GB
> SATA II spindles (1 hot spare) on an Intel SRCS16 PCI-X controller.
>
> Is that fast or what? :)

I'll have to take a closer look, but the first thing I noticed in your test
report is that you are only using a 1MB test file. You should run a test that
will also max out the on-disk / controller buffers. I think the Barracudas
have 16MB buffers (16MB x 4 = 64MB), so try a 128MB test file. It would also
be nice to see more detailed hardware specs for the system and which version
of FreeBSD you are running. Thanks.

--
BSD Podcasts @ http://bsdtalk.blogspot.com/
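As a footnote to the 128MB suggestion, here is a minimal sketch of the arithmetic and the kind of iozone run being proposed. The per-drive cache size, record size, scratch-file path, and exact flags are my assumptions (recalled from the iozone manual), not from the thread, so check iozone -h on your own system before relying on them:

    import subprocess

    # The arithmetic behind the "try a 128MB test file" advice:
    data_drives = 4                                    # spindles in the amrd0 array, hot spare excluded
    cache_per_drive_mb = 16                            # assumed Barracuda on-disk buffer
    total_cache_mb = data_drives * cache_per_drive_mb  # 64MB the benchmark could hide inside
    test_file_mb = 2 * total_cache_mb                  # 128MB, big enough to spill past the caches

    # Hypothetical iozone run: -s file size, -r record size, -i 0 / -i 1 select
    # write/rewrite and read/reread, -f a scratch file on the array under test.
    cmd = ["iozone", "-s", f"{test_file_mb}m", "-r", "64k",
           "-i", "0", "-i", "1", "-f", "/mnt/amrd0/iozone.tmp"]
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)                  # uncomment to run it for real

The point of doubling the aggregate cache size is simply that the file can no longer fit in the drives' buffers, so the numbers reflect the platters and the bus rather than the caches.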