Date:      Thu, 24 Sep 2015 01:07:31 -0500
From:      Adam Vande More <amvandemore@gmail.com>
To:        Dmitrijs <war@dim.lv>
Cc:        FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: zfs performance degradation
Message-ID:  <CA+tpaK3s6-O4uNsGJXd0b1QgxWayhu2Ocrt0dx+BvQLnHRJ+bA@mail.gmail.com>
In-Reply-To: <56019211.2050307@dim.lv>
References:  <56019211.2050307@dim.lv>

On Tue, Sep 22, 2015 at 12:38 PM, Dmitrijs <war@dim.lv> wrote:

> Good afternoon,
>
>   I've encountered strange ZFS behavior - serious performance degradation
> over a few days. Right after setup, on a fresh ZFS pool (2 HDDs in a mirror),
> I made a test on a 30 GB file with dd like
> dd if=test.mkv of=/dev/null bs=64k
> and got 150+ MB/s.
>
> Today I got only 90 MB/s. I tested with different block sizes, many times;
> the speed seems to be stable within +-5%.
>

I doubt that.  Block sizes have a large impact on dd read efficiency
regardless of the filesystem.  So unless you were testing the speed of
cached data, there would have been a significant difference between runs of
different block sizes.
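
For example, re-reading the same file back to back serves the second run
mostly from ARC and tells you nothing about the disks (hypothetical path,
adjust to your layout):

  # first run after a reboot or pool export/import, cold cache
  dd if=/mnt/data4/divx/test.mkv of=/dev/null bs=64k
  # immediate second run, largely satisfied from ARC, typically much faster
  dd if=/mnt/data4/divx/test.mkv of=/dev/null bs=64k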


>
> nas4free: divx# dd if=test.mkv of=/dev/null bs=64k
> 484486+1 records in
> 484486+1 records out
> 31751303111 bytes transferred in 349.423294 secs (90867734 bytes/sec)
>

Perfectly normal for the parameters you've imposed.  What happens if you
use bs=1m?
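
Something along these lines, same file, just a bigger block size (numbers
will vary with your setup):

  dd if=test.mkv of=/dev/null bs=1m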


> Computer/system details:
>
>  nas4free: /mnt# uname -a
> FreeBSD nas4free.local 10.2-RELEASE-p2 FreeBSD 10.2-RELEASE-p2 #0
> r287260M: Fri Aug 28 18:38:18 CEST 2015
> root@dev.nas4free.org:/usr/obj/nas4free/usr/src/sys/NAS4FREE-amd64 amd64
>
> RAM: 4 GB
> I've got two brand new HGST HDN724040ALE640 drives, 4 TB, 7200 rpm
> (ada0, ada1) for pool data4.
> Another pool, data2, performs slightly better even on older/cheaper WD
> Green 5400 rpm HDDs, up to 99 MB/s.
>

What parameters are you using on each pool to make that comparison?


>
> While dd is running, gstat is showing like:
>
> dT: 1.002s  w: 1.000s
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    366    366  46648    1.1      0      0    0.0    39.6| ada0
>     1    432    432  54841    1.0      0      0    0.0    45.1| ada1
>
>
>
> so IOPS are very high, while %busy is quite low.


%busy is a misunderstood stat.  Do not use it to evaluate whether your drive
is being utilized efficiently.  L(q), ops/s and seek times are what is
interesting.
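
To watch just those columns on the relevant disks while a test runs, something
like this should do (the -f argument is a regex matching your mirror members):

  gstat -I 1s -f 'ada[01]'

Keep an eye on L(q), ops/s and ms/r rather than %busy.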


> It averages about 50%, with rare peaks up to 85-90%.
>

That is basically as close to perfect as you'll ever get, considering how you
invoked dd.  ZFS doesn't split sequential reads across a vdev, only across a
pool, and that only happens if multiple vdevs were in the pool when the file
was written.
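
If you want to see how the pool is laid out and how reads are actually being
spread across the devices while dd runs, something like:

  zpool status data4
  zpool iostat -v data4 1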

Your testing methodology is poorly thought out and implemented, or at least
that is how it was presented to us.  Testing needs to be a methodical,
repeatable process that accounts for all the variables involved.  All I saw
was a bunch of haphazard, scattered attempts to test the sequential read speed
of a ZFS mirror.  Is that really representative of the pool's workload?
Did you clear caches between tests?  Why were other daemons, like proftpd,
running during the testing?  Etc., ad nauseam.
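
A more controlled run would look something like this, assuming the pool is
data4, the mount point is /mnt/data4 (adjust to yours), and you can take the
box offline for the test; exporting and re-importing the pool is a crude way
to empty the ARC between runs:

  service proftpd stop                       # or however nas4free stops it
  zpool export data4 && zpool import data4   # drop cached data for this pool
  dd if=/mnt/data4/divx/test.mkv of=/dev/null bs=1m
  # repeat several times with identical parameters and compare the results

Run the identical sequence against data2 if you want a fair comparison between
the two pools.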



-- 
Adam


