Date:      Wed, 14 Jan 2004 13:20:43 +0000
From:      Karl Pielorz <kpielorz@tdx.co.uk>
To:        Ari Suutari <ari@suutari.iki.fi>, freebsd-stable@freebsd.org
Subject:   Re: Adaptect raid performance with FreeBSD
Message-ID:  <741002750.1074086443@rainbow>
In-Reply-To: <200401141453.50150.ari@suutari.iki.fi>
References:  <200401141453.50150.ari@suutari.iki.fi>



--On 14 January 2004 14:53 +0200 Ari Suutari <ari@suutari.iki.fi> wrote:

> I have a dual PIII 500 Mhz with intel server mother board.
> (a couple of years old). On that, I have DPT (or currently Adaptec)
> raid controller "DPT PM2654U2", which supports 40 Mhz SCSI bus,
> giving a theoretical data transfer speed of 80 MB/s. There are
> two physical disks, which have been mirrored (ie. raid-1).
> The disks are maxtor atlas 10K4, I think that maxtor tells
> that they should give sustained transfer rate up to 72MB/s.

72 Mbytes per second? That seems a little high. Even if that's the 'up to' 
rate, you can guarantee it won't do that across the whole surface - on the 
inner tracks it could be running at half that speed.

> dd if=/dev/rda1s1a of=/dev/null bs=1m count=100
> 100+0 records in
> 100+0 records out
> 104857600 bytes transferred in 4.193832 secs (25002814 bytes/sec)
>
> So, I get only about 25MB/s. Shouldn't I be getting something
> like 70 MB/s, or even more since there are two disks that
> can serve read requests ?

Hmmm, what happens if you run two of those at once?
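Something along these lines - a rough sketch, not tested on your hardware. 
It reads a scratch file so it runs anywhere; for the real test, point both 
dd commands at /dev/rda1s1a with different 'skip' offsets instead:

```shell
#!/bin/sh
# Two concurrent sequential readers - if the controller can satisfy
# RAID-1 reads from either disk, the aggregate throughput of two
# readers should beat a single run.
SCRATCH=$(mktemp)
# Build a 64 MB scratch file (bs=1048576 is the portable spelling of 1m).
dd if=/dev/zero of="$SCRATCH" bs=1048576 count=64 2>/dev/null

# Start two readers at different offsets in the background...
dd if="$SCRATCH" of=/dev/null bs=1048576 count=32 2>/dev/null &
dd if="$SCRATCH" of=/dev/null bs=1048576 count=32 skip=32 2>/dev/null &
wait   # ...then compare the combined wall time against one reader alone.
echo "both readers finished"
rm -f "$SCRATCH"
```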

> Maybe there is something I could tune ? The BIOS doesn't
> have much, there is only setting to enable bus mastering (enabled)
> and another for pci latency timer values (was 40, I think)

In theory (and assuming nothing else fiddles with it, or overrides it etc.) 
- the higher you set the latency timer, the faster but more 'jerky' the PCI 
bus gets. i.e. each device holds the bus for longer per grant, so the setup 
overhead per transfer goes down - at the expense of devices 'hogging' the 
bus for longer... I think :)

But, I can't remember adjusting it ever making a real-world difference - 
what happens if you max it out (e.g. 128, or 248)?
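On FreeBSD you can also poke at the register directly with pciconf - a 
sketch only; the selector pci0:1:0:0 is just a placeholder for wherever 
the RAID controller actually shows up in the pciconf -lv listing:

```shell
# List devices to find the controller's selector (e.g. pci0:1:0:0).
pciconf -lv

# The latency timer is the single byte at config-space offset 0x0d.
pciconf -r -b pci0:1:0:0 0x0d         # read the current value
pciconf -w -b pci0:1:0:0 0x0d 0xf8    # try a high value (248)
```

(The low bits of the latency timer are often hardwired to zero, hence 
0xf8 rather than 0xff - and obviously whatever the BIOS writes at boot 
may override this again.)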

It'll be interesting to see what speed you get with two of those benchmarks 
running at the same time... Have you also tried different block sizes, e.g. 
64k, 128k etc?
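e.g. a quick loop - again a sketch against a scratch file, so substitute 
if=/dev/rda1s1a (and drop the scratch-file setup) to test the real array:

```shell
#!/bin/sh
# Sweep dd block sizes over the same input and compare the bytes/sec
# figure dd prints on stderr for each size.
SCRATCH=$(mktemp)
dd if=/dev/zero of="$SCRATCH" bs=1048576 count=32 2>/dev/null

for bs in 65536 131072 262144 1048576; do    # 64k 128k 256k 1m
    echo "bs=$bs:"
    dd if="$SCRATCH" of=/dev/null bs="$bs" 2>&1 | tail -1
done
rm -f "$SCRATCH"
```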

-Karl


