From owner-freebsd-hardware@FreeBSD.ORG Fri Sep 19 03:26:47 2008
Date: Fri, 19 Sep 2008 05:29:06 +0200
From: "fluffles.net"
To: Lukas Razik
Cc: Jeremy Chadwick, freebsd-hardware@freebsd.org
Subject: Re: Test: HighPoint RocketRaid 3120 PCIex1 2xSATA controller under FreeBSD 7.1-PRERELEASE
Message-ID: <48D31C82.8030007@fluffles.net>
In-Reply-To: <200809051759.45729.freebsd@razik.name>

Lukas Razik wrote:
> Hello Jeremy!
>
> We wrote about Areca's and HighPoint's HW-RAID controllers some weeks ago:
> http://lists.freebsd.org/pipermail/freebsd-hardware/2008-August/005339.html
>
> Now I've tested the HighPoint RocketRAID 3120 controller with two Samsung
> 320GB SATA (HD322HJ) harddisks under FreeBSD 7.1-PRERELEASE with the bonnie++
> harddisk benchmark and the following modes:
>
> JBOD:
> http://net.razik.de/HPT_RR_3120/bonnie_JBOD.html
> RAID1:
> http://net.razik.de/HPT_RR_3120/bonnie_RAID1.html
> RAID0:
> http://net.razik.de/HPT_RR_3120/bonnie_RAID0.html
>

Hi Lukas,

Your scores are too low, especially for RAID0. Could you please try:

  dd if=/dev/zero of=/path/to/raid/zerofile bs=1m count=20000
  (unmount and re-mount the filesystem)
  dd if=/path/to/raid/zerofile of=/dev/null bs=1m

The unmount is necessary to clear the file cache; otherwise you will be (partly) benchmarking your RAM, since part of the data will come from RAM and not the disks, which is not what you want when testing disk performance. As a rule of thumb, test with a size at least 8 times bigger than the sum of all write-back caches, whether in hardware or software; the 20GB zerofile in the example above is a safe guess. Also make sure the filesystem is near-empty when you benchmark, else you are benchmarking a slower portion of your harddrives and can expect lower scores.

If you get about the same scores with dd, try a higher read-ahead (the vfs.read_max value; set it to 32, for example). Sometimes a larger block size is also required to reach full potential, so try:

  newfs -U -b 32768 /dev/

Warning: with a 64KiB block size you risk hanging the system under heavy load (like two bonnies running simultaneously). Also, *DO NOT* create partitions on the RAID device unless you have manually created them in a way that avoids "stripe misalignment", where one read request might hit two disks, causing lower performance.
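To illustrate what "aligned" means here (the numbers below are assumptions for the sake of the example, not taken from your setup): with a 64KiB stripe and 512-byte sectors, a partition is aligned when its starting sector is a multiple of 128 (64KiB / 512 bytes). A quick sh check could look like this:

  # hypothetical values: classic MBR partitions start at sector 63
  offset=63; stripe_sectors=128
  if [ $((offset % stripe_sectors)) -eq 0 ]; then echo aligned; else echo misaligned; fi

Sector 63 is not a multiple of 128, which is exactly the Windows misalignment problem I come back to further down.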
Ideally you want a single disk to handle one I/O request, not several, since the only real bottleneck is the seek time. So if your RAID device is /dev/da0, just pass that to newfs, after making sure your partitions are gone with:

  dd if=/dev/zero of=/dev/da0 bs=1m count=20

This will, of course, destroy all data on the RAID volume.

The last thing I can think of is the stripe size: do not set it lower than 64KiB, to avoid two disks processing one I/O request (the ata driver does 64KiB requests at most, I think; MAXPHYS is 128KiB). Keep in mind also that UFS2 begins at a 64KiB offset, to allow partitioning data to be preserved, so where you think the filesystem starts, for example as defined in a label, is not actually where the filesystem starts storing data.

All these factors can cause RAID performance to be low, or to be hidden by improper benchmarking. Many people using HDTune think their RAID does not perform well, while it's just HDTune, which was never meant to test RAID arrays: a RAID can only be faster when parallelisation is possible, i.e. processing 2 or more I/Os at once on different physical disks. HDTune sends only one request at a time, so RAID arrays that do not have internal read-ahead optimizations will score poorly in HDTune. An Areca, however, will perform well, due to its own optimizations. But normally the filesystem takes care of generating enough I/O, also on Windows. Note too that virtually all Windows systems using RAID suffer from stripe misalignment, since Windows requires the use of partitions and has neglected to take the misalignment issue into account, using a weird offset for the partitioning. It is possible to create an aligned partition using third-party tools, however.

> I don't know the controllers from Areca but I think the reached values are
> O.K. Anyhow, the performance is better than with my old 3ware 8006-2LP PCI
> controller.
> Tested filesystem was: UFS2 with enabled Soft Updates.
>
> ------
>
> Under Vista 64 (Benchmark: HD Tune):
> http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_JBOD.png
> http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID1.png
> http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID0.png
>

You should not use HDTune to test RAID arrays; see also:
http://www.fluffles.net/blogs/2.Why-HDTune-is-unsuitable-for-RAID-arrays.html

Test with ATTO-256 and you should get higher scores. Just make sure the filesystem starts at the beginning of the volume and that it's close to empty.

Hope it's useful :)

Regards,
Veronica
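P.S. In case it saves a round trip: the read-ahead knob I mentioned is an ordinary sysctl, so (assuming the tunable name is unchanged on your 7.1 box) something like

  sysctl vfs.read_max        # show the current read-ahead value
  sysctl vfs.read_max=32     # raise it for testing

should be enough to try it without a reboot; add vfs.read_max=32 to /etc/sysctl.conf if you want it to persist.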