Date:      Fri, 19 Sep 2008 05:29:06 +0200
From:      "fluffles.net" <bsd@fluffles.net>
To:        Lukas Razik <freebsd@razik.name>
Cc:        Jeremy Chadwick <koitsu@freebsd.org>, freebsd-hardware@freebsd.org
Subject:   Re: Test: HighPoint RocketRaid 3120 PCIex1 2xSATA controller under FreeBSD 7.1-PRERELEASE
Message-ID:  <48D31C82.8030007@fluffles.net>
In-Reply-To: <200809051759.45729.freebsd@razik.name>
References:  <200809051759.45729.freebsd@razik.name>

Lukas Razik wrote:
> Hello Jeremy!
>
> We wrote about Areca's and HighPoint's HW-RAID controllers some weeks ago:
> http://lists.freebsd.org/pipermail/freebsd-hardware/2008-August/005339.html
>
> Now I've tested the HighPoint RocketRAID 3120 controller with two Samsung 
> 320GB SATA (HD322HJ) harddisks under FreeBSD 7.1-PRERELEASE with the bonnie++ 
> harddisk benchmark and the following modes:
>
> JBOD:
> http://net.razik.de/HPT_RR_3120/bonnie_JBOD.html
> RAID1:
> http://net.razik.de/HPT_RR_3120/bonnie_RAID1.html
> RAID0:
> http://net.razik.de/HPT_RR_3120/bonnie_RAID0.html
>   

Hi Lukas,

Your scores are too low, especially for RAID0. Could you please try:

dd if=/dev/zero of=/path/to/raid/zerofile bs=1m count=20000
umount <raid mountpoint>
mount <raid device> <raid mountpoint>    (i.e. re-mount it)
dd if=/path/to/raid/zerofile of=/dev/null bs=1m

The unmount is necessary to clear the file cache; otherwise you will be
(partly) benchmarking your RAM, since part of the data will be served
from memory rather than the disks, which is not what you want when
testing disk performance. As a rule of thumb, test with a size at least
8 times larger than the sum of all write-back mechanisms, whether in
hardware or software. The 20GB zerofile in the example above is a safe
guess.
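For example (with made-up figures): 2GB of system RAM plus a 256MB
controller cache is roughly 2.25GB of caching in total, and 8 x 2.25GB
is about 18GB, so a 20GB test file clears that threshold.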

Also make sure the filesystem is near-empty when you benchmark;
otherwise you are benchmarking a slower portion of your hard drives and
can expect lower scores.

If you get about the same scores with dd, try a higher read-ahead (the
vfs.read_max sysctl; set it to 32, for example). Sometimes a larger
filesystem blocksize is also required to reach full potential; try:

newfs -U -b 32768 /dev/<raid device>

Warning: with a 64KiB blocksize you risk hanging the system under heavy
load (like 2 bonnie++ runs going simultaneously).
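
To raise the read-ahead suggested above, the standard sysctl interface
should do (32 is just the example value from above):

sysctl vfs.read_max=32

To keep the setting across reboots, add vfs.read_max=32 to
/etc/sysctl.conf.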

Also, *DO NOT* create partitions on the RAID device unless you have
created them manually so as to avoid "stripe misalignment", where one
read request hits two disks and performance drops. Ideally you want a
single disk to handle each I/O request, not several, since the only
real bottleneck is seek time.

So if your RAID device is /dev/da0, just pass that to newfs, after
making sure any old partitions are gone with:
dd if=/dev/zero of=/dev/da0 bs=1m count=20
This will, of course, destroy all data on the RAID volume.
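
Putting the steps together (just a sketch; it assumes the array shows
up as /dev/da0 and that /mnt/raid is your mount point, so adjust both
to your setup):

dd if=/dev/zero of=/dev/da0 bs=1m count=20   (wipe old partition data)
newfs -U -b 32768 /dev/da0                   (UFS2, soft updates, 32KiB blocks)
mkdir -p /mnt/raid
mount /dev/da0 /mnt/raid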

The last thing I can think of is stripe size; you should not set this
lower than 64KiB, to avoid two disks processing one I/O request (the
ata driver issues 64KiB requests at most, I think; MAXPHYS is 128KiB).
Another reason is that UFS2 begins at a 64KiB offset, to allow
partitioning data to be preserved. So where you think the filesystem
starts, for example as defined in a label, is not actually where the
filesystem starts storing data.

All these factors can cause RAID performance to be low, or to be
cloaked by improper benchmarking. Many people using HDTune think their
RAID does not perform well, while it's just HDTune, which was never
meant to test RAID arrays: RAIDs can only be faster when
parallelisation is possible, i.e. processing 2 or more I/Os at once on
different physical disks. HDTune sends only one request at a time, so
RAIDs that do not have internal read-ahead optimizations will do poorly
in HDTune. An Areca, however, will perform well due to its own
optimizations. But normally the filesystem takes care of generating
enough I/O, also on Windows.

Also note that virtually all Windows systems using RAID suffer from
stripe misalignment, since Windows requires the use of partitions and
has neglected the misalignment issue by using an odd offset for its
partitioning. It is possible to create an aligned partition with
third-party tools, however.

> I don't know the controllers from Areca, but I think the values
> reached are O.K. Anyhow, the performance is better than with my old
> 3ware 8006-2LP PCI controller.
> Tested filesystem was: UFS2 with enabled Soft Updates.
>
> ------
>
> Under Vista 64 (Benchmark: HD Tune):
> http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_JBOD.png
> http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID1.png
> http://net.razik.de/HPT_RR_3120/Vista_HDTune_Benchmark_HPT_____DISK_0_0_RAID0.png
>   

You should not use HDTune to test RAID-arrays, see also:
http://www.fluffles.net/blogs/2.Why-HDTune-is-unsuitable-for-RAID-arrays.html

Test with ATTO-256 and you should get higher scores. Just make sure the
filesystem starts at the beginning of the volume and that it's close to
empty.

Hope it's useful :)
Regards,

Veronica


