From owner-freebsd-stable  Mon May  1 17:39:10 2000
Delivered-To: freebsd-stable@freebsd.org
Received: from implode.root.com (root.com [209.102.106.178])
	by hub.freebsd.org (Postfix) with ESMTP id 5B8B737B9CB;
	Mon, 1 May 2000 17:39:01 -0700 (PDT)
	(envelope-from dg@implode.root.com)
Received: from implode.root.com (localhost [127.0.0.1])
	by implode.root.com (8.8.8/8.8.5) with ESMTP id RAA28815;
	Mon, 1 May 2000 17:35:57 -0700 (PDT)
Message-Id: <200005020035.RAA28815@implode.root.com>
To: Mike Smith
Cc: freebsd-stable@FreeBSD.ORG
Subject: Re: How good is AMI MegaRAID support?
In-reply-to: Your message of "Mon, 01 May 2000 17:20:57 PDT."
	<200005020020.RAA04531@mass.cdrom.com>
From: David Greenman
Reply-To: dg@root.com
Date: Mon, 01 May 2000 17:35:57 -0700
Sender: owner-freebsd-stable@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

>> TeraSolutions' RAID systems (TSR-2200) as used on ftp.freesoftware.com are
>> capable of >9,000 IOPS.
>
>Is this a controller capability, or does it represent a sustainable load
>over the entire array?  How do you measure this?  (I'd love to add to my
>benchmark/test suite).

   The controller itself is spec'd at 10,000 IOPS out of the cache, but
under FreeBSD a more practical limit is about 8,500 due to OS latency and
SCSI bus overhead issues.
   As for how it's measured: usually you write the same block over and over
again (to test write-to-cache performance), or read the same block over and
over again. You do this with 50 or so processes simultaneously to take
advantage of overlapped tagged operations.
   For reads from the disk drives, the performance is pretty much whatever
the drives are rated at. We use the fastest 10K RPM drives on the market,
but the actual number you get depends on both the speed and the number of
drives in the array. Write performance is hard to measure since the cache
defers the writes.
   On a software RAID-5, write performance can be expected to be totally
lousy due to the lack of a non-volatile write-back cache.
If it's not lousy, then your filesystem is in danger of being destroyed on a
power failure or system crash. I don't think anyone who goes to the trouble
of doing software RAID is willing to risk losing everything due to a power
failure.

>FWIW, most of the low-end PCI:SCSI RAID controllers claim throughput in
>the 3-5k IOPs, and 20k is not an uncommon claim for mid-high end
>controllers.  Simon Shapiro was pushing over 20k on the DPT Century
>adapters in "real" applications.  I've had a hard time generating more
>than 3k or so out of a FreeBSD box's I/O subsystem - we cluster so
>aggressively that I typically run out of I/O bandwidth before I hit an
>IOP limit.

   You need to use the raw (character) device for testing things like this.
   RAID controllers certainly vary greatly in performance. I've tested just
about all of the SCSI-SCSI controllers on the market and can tell you that
most of them really suck. The Mylex DAC960-SX, for example, tops out at
about 1500 IOPS; the Infortrend 3102U2G at about 3200 IOPS; the CMD
CRD-5440 at about 4400 IOPS; etc.
   Anyway, sorry, I didn't really want to get sucked into this discussion,
so I'll just jump out now as fast as I jumped in. :-)

-DG

David Greenman
Co-founder/Principal Architect, The FreeBSD Project - http://www.freebsd.org
Creator of high-performance Internet servers - http://www.terasolutions.com
Pave the road of life with opportunities.


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-stable" in the body of the message