Date:      Thu, 24 Sep 2015 10:40:33 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Dmitrijs <war@dim.lv>, FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: zfs performance degradation
Message-ID:  <60BF2FC3-0342-46C9-A718-52492303522F@kraus-haus.org>
In-Reply-To: <56040150.90403@dim.lv>
References:  <56019211.2050307@dim.lv> <37A37E9D-9D65-4553-BBA2-C5B032163499@kraus-haus.org> <56038054.5060906@dim.lv> <782C9CEF-BE07-4E05-83ED-133B7DA96780@kraus-haus.org> <56040150.90403@dim.lv>

On Sep 24, 2015, at 9:57, Dmitrijs <war@dim.lv> wrote:

>> So a zpool made up of one single vdev, no matter how many drives,
>> will average the performance of one of those drives. It does not
>> really matter if it is a 2-way mirror vdev, a 3-way mirror vdev, a
>> RAIDz2 vdev, a RAIDz3 vdev, etc. This is more true for write
>> operations than read (mirrors can achieve higher performance by
>> reading from multiple copies at once).

> Thanks! Now I understand. Although it is strange that you did not
> mention how RAM and/or CPU matter. Or do they? I am starting to
> observe that my 4-core Celeron J1900 is throttling writes.

Do you have compression turned on? I have only seen ZFS limited by CPU
(assuming a relatively modern CPU) when using compression. If you are
using compression, make sure it is lz4 and not just "on".
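
For example, assuming a pool named "tank" (substitute your own pool or
dataset name), checking and switching the algorithm looks like this:

    # show the compression setting and where it is inherited from
    zfs get -r compression tank

    # "on" historically meant lzjb; lz4 is faster and usually wins
    zfs set compression=lz4 tank

Note the property only applies to blocks written after the change;
existing data keeps whatever compression it was written with.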

RAM affects performance in that pending (async) writes are cached in
the ARC. The ARC also caches both demand read data as well as
prefetched read data. There are a number of utilities out there to
give you visibility into the ARC. `sysctl -a | grep arcstats` will get
you the raw data :-)
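
If you just want a couple of numbers without installing anything,
these sysctls exist on a stock FreeBSD ZFS system (exact OID names may
vary slightly between releases):

    # current ARC size and its configured ceiling, in bytes
    sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

    # hit / miss counters, for a rough cache hit ratio
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses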

When you benchmark you _must_ use a test set of data that is larger
than your RAM, or you will not be testing all the way to / from the
drives :-) That, or artificially reduce the size of the ARC (set
vfs.zfs.arc_max="<bytes>" in /boot/loader.conf).
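
As a sketch, to cap the ARC at 1 GiB (the value is illustrative; pick
one well below your RAM) and then write a test file bigger than that:

    # /boot/loader.conf -- limit the ARC to 1 GiB (takes effect at boot)
    vfs.zfs.arc_max="1073741824"

    # write ~4 GiB of incompressible data; /dev/zero would be a bad
    # choice here because lz4 collapses it to almost nothing
    dd if=/dev/random of=/tank/bench.dat bs=1m count=4096

(/tank/bench.dat is just a placeholder path on the pool under test.)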

> Still haven't found at least an approximate specification /
> recommendation as simple as "if you need a zfs mirror of 2 drives,
> take at least a Core i3 or E3 processor; for 10 drives, go for an E5
> Xeon", etc. I did not notice CPU impact on the Windows machine, but
> I've got "load averages: 1.60, 1.43, 1.24" on write on zfs.

How many cores / threads? As long as you have more cores / threads
than the load value you are NOT out of CPU resources, but you may be
saturating ONE CPU with compression or some other function.
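
On FreeBSD you can see that situation directly with base-system tools:

    # how many logical CPUs the scheduler has
    sysctl hw.ncpu

    # per-CPU usage, including kernel threads; one CPU pegged while
    # the rest sit idle is the single-thread-saturation signature
    top -SP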

I have been using HP ProLiant MicroServer N36L, N40L, and N54L boxes
for small file servers and I am only occasionally CPU limited. But my
workload on these boxes is very different from yours.

My backup server is a SuperMicro with dual Xeon E5520 (16 total
threads) and 12 GB RAM. I can handily saturate my single 1 Gbps
network. I have compression (lz4) enabled on all datasets.

--
Paul Kraus
paul@kraus-haus.org



