Date:      Thu, 24 Sep 2015 09:31:42 -0400
From:      Paul Kraus <paul@kraus-haus.org>
To:        Raimund Sacherer <raimund.sacherer@logitravel.com>, FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: Restructure a ZFS Pool
Message-ID:  <9EE24D9C-260A-408A-A7B5-14BACB12DDA9@kraus-haus.org>
In-Reply-To: <480627999.9462316.1443098561442.JavaMail.zimbra@logitravel.com>
References:  <480627999.9462316.1443098561442.JavaMail.zimbra@logitravel.com>

On Sep 24, 2015, at 8:42, Raimund Sacherer <raimund.sacherer@logitravel.com> wrote:

> I had the pool fill up to over 80%, then I got it back to about
> 50-60%, but it feels more sluggish. I use a lot of NFS and we use it
> to back up some 5 million files in lots of sub-directories
> (a/b/c/d/abcd...), besides other big files (SQL dump backups, bacula,
> etc.)
>
> I said above sluggish because I do not have empirical data and I do
> not know exactly how to test the system correctly, but I read a lot
> and there seem to be suggestions that if you have NFS etc. an
> independent ZIL helps with copy-on-write fragmentation.

A SLOG (Separate Log Device) will not remove existing fragmentation,
but it will help prevent future fragmentation _iff_ (if and only if)
the write operations are synchronous. NFS is not, by itself, sync, but
the write calls on the client _may_ be sync.
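
If you want to see whether your NFS traffic is actually producing sync
writes before changing hardware, something like the following is a
reasonable starting point (a rough sketch; "tank" and the dataset name
are placeholders for your own pool layout):

    # Confirm the dataset honors sync requests ("standard" is the default)
    zfs get sync tank/nfs

    # Watch per-vdev activity while the NFS clients are writing; once a
    # dedicated log vdev exists it shows up as its own line here
    zpool iostat -v tank 5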

> What I would like to know is if I can eliminate one spare disk from
> the pool, and add it as a ZIL again, without having to shutdown/reboot
> the server?

Yes, but unless you can stand losing data in flight (writes that the
system says have been committed but have only made it to the SLOG), you
really want your SLOG vdev to be a mirror (at least 2 drives).
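
Both steps can be done on a live pool. A rough sketch, with placeholder
pool and device names (substitute your actual spare and SSDs):

    # Drop the hot spare from the pool (spares can be removed online)
    zpool remove tank da7

    # Add a mirrored log vdev built from two SSDs
    zpool add tank log mirror ada1 ada2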

> I am also thinking about swapping the spare 4TB disk for a small SSD,
> but that's immaterial to whether I can perform the change.

I assume you want to swap instead of just add due to a lack of open
drive slots / ports.

In a zpool of this size, especially a RAIDz<N> zpool, you really want
a hot spare and a notification mechanism so you can replace a failed
drive ASAP. The resilver time (to replace a failed drive) will be
limited by the performance of a _single_ drive for _random_ I/O. See
this post http://pk1048.com/zfs-resilver-observations/ for one of my
resilver operations and the performance I measured.
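
If a slot frees up later, putting a spare back and handling a failure
are also online operations. Another sketch with placeholder device
names:

    # Re-add a disk as a hot spare
    zpool add tank spare da7

    # After a failure, swap the failed disk for the spare and watch the
    # resilver progress
    zpool replace tank da3 da7
    zpool status tank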

> Also I would appreciate it if someone has some pointers on how to
> test correctly so I see if there are real benefits before/after this
> operation.

I use a combination of iozone and filebench to test, but first I
characterize my workload. Once I know what my workload looks like I can
adjust the test parameters to match it. If the test results do not
agree with observed behavior, then I tune them until they do. Recently
I needed to test a server before going live. I knew the workload was
NFS for storing VM images, so I ran iozone with 8-64 GB files, 4 KB to
1 MB blocks, and sync writes (the -o option). The measurements matched
the observations very closely, so I knew I could trust them and that
any changes I made would give me valid results.
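
As a concrete starting point, a run along these lines approximates that
kind of test (a sketch only; the file size, record size, and target
path are assumptions you would tune to match your own workload):

    # Sequential write/rewrite (-i 0) and read/reread (-i 1), 16 GB file,
    # 128 KB records, O_SYNC writes (-o), flush included in timing (-e)
    iozone -i 0 -i 1 -s 16g -r 128k -o -e -f /tank/nfs/iozone.tmp

    # Repeat with other record sizes (4k ... 1m) and file sizes (8g ... 64g)
    # to cover the range that matches the real workload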

--
Paul Kraus
paul@kraus-haus.org



