Date:      Wed, 11 Oct 2017 09:41:13 -0500
From:      Adam Vande More <amvandemore@gmail.com>
To:        Kate Dawson <k4t@3msg.es>
Cc:        FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: FreeBSD ZFS file server with SSD HDD
Message-ID:  <CA+tpaK3Cga3SKmbKnRts_SSp=D4qk9p+aTzNBZeEqDuvQGVd9A@mail.gmail.com>
In-Reply-To: <20171011130512.GE24374@apple.rat.burntout.org>
References:  <20171011130512.GE24374@apple.rat.burntout.org>

On Wed, Oct 11, 2017 at 8:05 AM, Kate Dawson <k4t@3msg.es> wrote:

> Hi,
>
> Currently running a FreeBSD NFS server with a zpool comprising
>
> 12 x 1TB hard disk drives arranged as pairs of mirrors in a stripe set
> (RAID 10).
>
> An additional 2 x 960GB SSDs have been added. These two SSDs are
> partitioned, with a small partition being used for the ZIL (log device)
> and a larger partition arranged for L2ARC cache.
>
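To make the layout concrete, here is a minimal sketch of how a pool of this
shape is typically assembled on FreeBSD; the device names, labels and
partition sizes below are assumptions for illustration, not details from
this setup:

    # Six two-way mirrors striped together (device names assumed).
    zpool create tank \
        mirror da0 da1  mirror da2 da3  mirror da4  da5 \
        mirror da6 da7  mirror da8 da9  mirror da10 da11

    # Partition each SSD: a small slice for the log (SLOG), the rest for L2ARC.
    gpart create -s gpt da12
    gpart add -t freebsd-zfs -s 16G -l slog0  da12
    gpart add -t freebsd-zfs        -l cache0 da12
    # (repeat for da13 with labels slog1/cache1, then attach them)
    zpool add tank log mirror gpt/slog0 gpt/slog1
    zpool add tank cache gpt/cache0 gpt/cache1
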
> Additionally, the host has 64 GB RAM and 16 CPU cores (AMD Opteron, 2 GHz).
>
> A dataset from the pool is exported via NFS to a number of Debian
> GNU/Linux hosts running the Xen hypervisor. These run several
> disk-image-based virtual machines.
>
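A rough sketch of that export on the FreeBSD side, assuming a hypothetical
dataset name and client network (and that nfsd/mountd are already enabled
in rc.conf):

    # Dataset name and client network are illustrative only.
    zfs set sharenfs="-maproot=root -network=192.0.2.0/24" tank/vm-images
    # equivalent /etc/exports line:
    #   /tank/vm-images -maproot=root -network=192.0.2.0/24
    service mountd reload
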
> In general use, the FreeBSD NFS host sees very little read IO, which is
> to be expected, as the RAM cache and L2ARC are designed to minimise the
> amount of read load on the disks.
>
> However, we're starting to see high load (mostly IO wait) on the Linux
> virtualisation hosts and virtual machines, with kernel timeouts
> occurring and resulting in crashes and instability.
>
> I believe this may be due to the limited number of random write IOPS
> available
> on the zpool NFS export.
>
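One way to put a number on that, rather than inferring it, is a short
random-write test from one of the NFS clients; fio is not part of this
setup, and the mount point and parameters below are illustrative only:

    # 4k random writes against the mounted export (path assumed).
    fio --name=randwrite --directory=/mnt/tank \
        --rw=randwrite --bs=4k --size=1g \
        --ioengine=posixaio --iodepth=16 --numjobs=4 \
        --runtime=60 --time_based --group_reporting
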
> I can get sequential writes and reads to and from the NFS server at
> speeds that approach the maximum the network provides (currently 1 Gb/s
> with jumbo frames, and I could increase this by bonding multiple
> interfaces together).
>
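For completeness, the FreeBSD side of such bonding would be lagg(4); the
interface names and addressing below are assumptions:

    # /etc/rc.conf sketch -- LACP aggregation of two assumed igb NICs.
    ifconfig_igb0="up"
    ifconfig_igb1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 192.0.2.10/24 mtu 9000"
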
> However day to day usage does not show network utilisation anywhere near
> this maximum.
>
> If I look at the output of `zpool iostat -v tank 1` I see that every
> five seconds or so, the number of write operations goes to > 2k.
>
> I think this shows that I'm hitting the limit that the spinning disks
> can provide in this workload.
>

I doubt that is the cause.  It is more likely you have

vfs.zfs.txg.timeout

set to the default.  Have you tried any other zfs or nfs tuning?  If so,
please share those details.
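
The default is 5 seconds, which lines up with the five-second write bursts
you describe above.  Checking and experimenting with it looks roughly like
this (the value 10 is only an example, not a recommendation):

    # Current transaction group sync interval (default 5 seconds):
    sysctl vfs.zfs.txg.timeout
    # Try a longer interval at runtime:
    sysctl vfs.zfs.txg.timeout=10
    # To persist across reboots, add to /etc/sysctl.conf:
    #   vfs.zfs.txg.timeout=10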

Does gstat reveal anything useful?
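
For example, watching just the physical disks at a one-second refresh;
sustained %busy near 100 on the data disks (rather than the SSDs) would
support the spindle-IOPS theory:

    # Physical providers only, refreshed every second.
    gstat -p -I 1s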


-- 
Adam


