Date:      Thu, 06 Nov 2008 16:09:43 +1000
From:      Danny Carroll <danny@dannysplace.net>
To:        Ivan Voras <ivoras@freebsd.org>
Cc:        freebsd-hardware@freebsd.org
Subject:   Re: Areca vs. ZFS performance testing.
Message-ID:  <49128A27.2080405@dannysplace.net>
In-Reply-To: <geru8q$fbr$1@ger.gmane.org>
References:  <490A782F.9060406@dannysplace.net> <geesig$9gg$1@ger.gmane.org>	<490FE404.2000308@dannysplace.net> <geru8q$fbr$1@ger.gmane.org>

Ivan Voras wrote:
> Danny Carroll wrote:
> 
>>  - I have seen sustained 130 MB/s reads from ZFS:
>>                capacity     operations    bandwidth
>> pool         used  avail   read  write   read  write
>> ----------  -----  -----  -----  -----  -----  -----
>> bigarray    1.29T  3.25T  1.10K      0   140M      0
>> bigarray    1.29T  3.25T  1.00K      0   128M      0
>> bigarray    1.29T  3.25T    945      0   118M      0
>> bigarray    1.29T  3.25T  1.05K      0   135M      0
>> bigarray    1.29T  3.25T  1.01K      0   129M      0
>> bigarray    1.29T  3.25T    994      0   124M      0
>>
>>            ad4              ad6              ad8             cpu
>> KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
>> 0.00   0  0.00  65.90 375 24.10  63.74 387 24.08   0  0 19  2 78
>> 0.00   0  0.00  66.36 357 23.16  63.93 370 23.11   0  0 23  2 75
>> 16.00  0  0.00  64.84 387 24.51  63.79 389 24.20   0  0 23  2 75
>> 16.00  2  0.03  68.09 407 27.04  64.98 409 25.98   0  0 28  2 70
> 
>> I'm curious whether the ~130 MB/s figure shown above is data bandwidth
>> from the array or a total across all the drives.  In other words, does
>> it include reading the parity information?  I think it does not: if I
>> add up the per-drive iostat figures, the total is greater than what
>> zpool iostat reports by a factor of 5/4 (100 MB/s in zpool iostat
>> versus 5 x 25 MB/s across the drives in standard iostat).
> 
> The numbers make sense - you have 5 drives in RAID-Z and 4/5ths of total
> bandwidth is the "real" bandwidth. On the other hand, 25 MB/s is very
> slow for modern drives (assuming you're doing sequential read/write
> tests). Are you having hardware problems?

No, it's just the I/O from disk to network that is slow...
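
To make the 4/5ths arithmetic concrete, here is a back-of-the-envelope
check using the rough ~25 MB/s per-drive figures from the iostat output
above:

    $ echo "5 * 25" | bc          # raw bandwidth across all 5 spindles (MB/s)
    125
    $ echo "5 * 25 * 4 / 5" | bc  # discard the 1-in-5 parity share (MB/s)
    100

So roughly 100 MB/s of data for 125 MB/s of platter reads, which matches
the 5/4 ratio I was seeing.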

>> Lastly, the Windows client which performed these tests was measuring
>> local bandwidth at about 30-50 MB/s.  I believe this figure to be
>> incorrect (given how much I transferred in X seconds...)
> 
> Using Samba? Search the lists for Samba performance advice - the default
> configuration isn't nearly optimal.

In my second post I mentioned that the I/O figure Windows was reporting
was right.  I was getting about 50 MB/s, but ZFS was reporting about
130 MB/s.

I checked this by copying 20 GB and timing it with my watch, just as a
rough guide.
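
A less rough check next time would be to time a local sequential read
and work the number out, e.g. (the file name and size here are made up):

    $ time dd if=/bigarray/testfile of=/dev/null bs=1m
    # 20 GB in ~400 wall-clock seconds works out to roughly 50 MB/s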

I am curious about this inconsistency.  If anyone has any ideas, I'd
love to hear them.
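
One way to narrow it down might be to watch the pool and the NIC at the
same time while a copy runs; if the wire really carries ~50 MB/s while
zpool iostat claims ~130 MB/s, the extra reads are happening inside ZFS
(prefetch, for instance) rather than on the network.  Something like
(em0 is just a placeholder for the actual interface):

    # terminal 1: per-second pool bandwidth
    $ zpool iostat bigarray 1
    # terminal 2: per-second traffic on the NIC
    $ netstat -w 1 -I em0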

-D
