From: "Nikolas Britton" <nikolas.britton@gmail.com>
Date: Fri, 3 Mar 2006 11:51:12 -0600
To: "Alex Zbyslaw"
Cc: Liste FreeBSD <freebsd-questions@freebsd.org>
Subject: Re: SATA Raid (stress test..)
On 3/3/06, Alex Zbyslaw wrote:
> Nikolas Britton wrote:
>
> >>Please can you be careful when you attribute your comments. You've sent
> >>this email "to" me, and left only my name in the attributions as if I
> >>were someone suggesting either dd or diskinfo as accurate benchmarks,
> >>when in fact my contribution was to suggest unixbench and sandra-lite.
> >>Maybe you hate those too, in which case you can quote what I said
> >>in-context and rubbish that at your pleasure.
> >>
> >
> >Yes I see your point, it does look like I'm replying to something you
> >wrote. This was an oversight and I am sorry.
> >
> OK.
>
> >Remember that 105MB/s number I quoted above? That's just the
> >sustained read transfer rate for a big-ass file, and I don't need to work
> >with big-ass files. I need to work with 15MB files (+/- 5MB). After
> >buying the right disks, controller, mainboard etc. and lots of tuning
> >with the help of iozone I get: 200 - 350MB/s overall (read, write,
> >etc.) for files less than or equal to 64MB*.
> >
> >So anyways, that's what iozone can do for you. Google it and you'll
> >find out more about it.
> >
> Thanks for the info. I think I can only dream about numbers like
> yours. Iozone looks to be in the ports so I see some of my weekend
> disappearing looking at it :-)

It runs on over two dozen operating systems, including Windows.

There are two primary reasons I can get such high transfer rates from
simple SATA drives. The first was the selection of a mainboard with
PCI-X slots; I built this system before PCI-Express mainboards and
controllers hit the market.
The PCI bus is severely restricted and obsolete. I'm simply going to post
the theoretical maximum throughput in MB/s for the various bus standards:

f(x, y) = x bits * y MHz / 8 = maximum theoretical throughput in MB/s

PCI:   32 bits * 33 MHz / 8 = 132 MB/s (standard PCI bus found on every PC)
PCI:   (32, 66)  = 264 MB/s  (cards are commonplace, mainboards aren't)
PCI-X: (64, 33)  = 264 MB/s  (obsolete, won't find it on new boards)
PCI-X: (64, 66)  = 528 MB/s  (commonplace)
PCI-X: (64, 100) = 800 MB/s
PCI-X: (64, 133) = 1064 MB/s (commonplace)
PCI-X: (64, 266) = 2128 MB/s
PCI-X: (64, 533) = 4264 MB/s (very hard to find, even on high-end equipment)

PCI-X version 1 covers 66 - 133 MHz; PCI-X version 2 covers 266 - 533 MHz.

PCI-X is backward compatible with PCI and with slower versions of PCI-X.
For example, you can put a standard PCI card in a PCI-X 533 MHz slot and
it will simply run at (32, 33); similarly, a 66 MHz PCI card will run at
(32, 66), and so on. PCI-X is also forward compatible in that you can run
a 133 MHz PCI-X card in a standard (32, 33) PCI slot. Because of this
backward and forward compatibility I feel that PCI-X is superior to
PCI-Express, *BUT* PCI-Express, moving forward, is far superior to PCI
and PCI-X because it does not have 13 years of legacy to remain
compatible with, it's cheaper to produce, and it's already in lower-end
desktop systems as a replacement for AGP, thanks to all the gamers. A few
years from now PCI will end up where ISA / EISA are. I'm veering way off
topic, so I won't go into any more detail about PCI, PCI-X, and
PCI-Express; Google around for the shortcomings of PCI / PCI-X and why
PCI-Express is the future.

PCI-Express: PCIe is not compatible with PCI or PCI-X (except via
PCIe-to-PCI bridging), and it's just, well, totally different from the
PCI spec. I'm already way off topic, so again, just Google the details.
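The arithmetic above is simple enough to put in a few lines of code. This is just a sketch reproducing the email's own numbers: raw signalling rates, with PCIe's 8b/10b encoding overhead ignored to match the tables.

```python
def bus_mb_per_s(width_bits, clock_mhz):
    """Peak throughput of a parallel bus (PCI / PCI-X) in MB/s:
    f(x, y) = x bits * y MHz / 8."""
    return width_bits * clock_mhz / 8

def pcie_mb_per_s(lanes, gbps_per_lane=2.5):
    """Peak throughput of a PCIe link: 2.5 Gbit/s per lane,
    converted to MB/s (raw rate, encoding overhead ignored)."""
    return lanes * gbps_per_lane * 1000 / 8

print(bus_mb_per_s(32, 33))    # standard PCI   -> 132.0
print(bus_mb_per_s(64, 133))   # PCI-X 133 MHz  -> 1064.0
print(pcie_mb_per_s(1))        # PCIe x1        -> 312.5
print(pcie_mb_per_s(16))       # PCIe x16       -> 5000.0
```

Plugging in any of the (bits, MHz) pairs from the table reproduces the listed figures.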
Its theoretical maximums are expressed in gigabits per second, but I will
convert them to MB/s for comparison with PCI and PCI-X:

x1:  2.5 Gbps = 312.5 MB/s
x2:  625 MB/s
x4:  1250 MB/s
x8:  2500 MB/s
x12: 3750 MB/s
x16: 5000 MB/s
x32: 10000 MB/s

Anyway, back on topic. What was the topic? Oh yes: why you won't see
200MB/s - 350MB/s if you're using a standard PCI slot. If you look back
up at the top you will see that the standard PCI bus is a crap shoot,
limited to a theoretical maximum of 132 MB/s. What this means is that
your RAID controller, the disks attached to it, and the cache buffers on
those disks are all capped at that 132 MB/s. Then you have to take into
account that the PCI bus is shared with other devices such as the network
card, video card, USB, etc. Your RAID controller has to fight with all
these devices, and a 1 Gbit NIC alone can eat up 125 MB/s (12.5 MB/s for
a 100 Mbit NIC).

The next reason for those high numbers is that I picked drives with 16MB
cache buffers, and that I'm insane enough to run a production server with
the write-back cache policy enabled on the array controller and the write
cache enabled on the disks. This is stupidly insane unless you've planned
for the worst. The worst-case scenario is that a power failure corrupts
the array into an irreparable state and you lose everything.

--
BSD Podcasts @ http://bsdtalk.blogspot.com/