Date:      Wed, 03 Oct 2012 14:36:19 +0200
From:      Peter Maloney <peter.maloney@brockmann-consult.de>
To:        freebsd-fs@freebsd.org
Subject:   Re: NFS Performance Help
Message-ID:  <506C3143.6060000@brockmann-consult.de>
In-Reply-To: <1363900011.1436778.1348962614353.JavaMail.root@erie.cs.uoguelph.ca>
References:  <1363900011.1436778.1348962614353.JavaMail.root@erie.cs.uoguelph.ca>

On 09/30/2012 01:50 AM, Rick Macklem wrote:
> Wayne Hotmail wrote:
>> Like others, I am having issues getting any decent performance out of
>> my NFS clients on FreeBSD. I have tried 8.3 and 9.1 beta on standalone
>> servers and as VMware clients, used 1 Gig connections and a 10 Gig
>> connection, and tried mounting with version 3 and version 4. I have
>> tried the noatime, sync, and tcp options; nothing seems to help. I am
>> connecting to an IceWeb NAS. My performance with dd is 60 MB/s at best
>> when writing to the server. If I load a Red Hat Linux server on the
>> same hardware, using the same connection, my write performance is
>> about 340 MB/s.
>> It really falls apart when I run a test script that creates 100
>> folders, then creates 100 files in those folders and appends to these
>> files 5 times using 5 random files. I am trying to simulate an IMAP
>> email server. If I run the script on my local mirrored drives, it
>> takes about one minute and thirty seconds to complete. If I run the
>> script on the NFS-mounted drives, it takes hours to complete. With my
>> Linux install on the same hardware, this NFS-mounted script takes
>> about 4 minutes.
>> Google is tired of me asking the same question over and over, so if
>> anyone would be kind enough to point out some kernel or system tweaks
>> to get me past my NFS issues, it would be greatly appreciated.
>> Wayne
>>
> You could try a smaller rsize,wsize by setting the command line args
> for the mount. In general, larger rsize,wsize should perform better,
> but if a large write generates a burst of traffic that overloads
> some part of the network fabric or server, such that packets get
> dropped, performance will be hit big time.
>
> Other than that, if you capture packets and look at them in
> wireshark, you might be able to spot where packets are getting lost
> and retransmitted. (If packets are getting dropped, then the fun
> part is figuring out why and coming up with a workaround.)
>
> Hopefully others will have more/better suggestions, rick
>
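As a side note on the rsize/wsize suggestion: on a FreeBSD client those
can be passed as mount options, for example something like the line
below, where the server name and export path are placeholders and the
sizes are just a starting point to experiment with:

    mount -t nfs -o nfsv3,tcp,rsize=32768,wsize=32768 server:/export /mnt/nfs
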
My only suggestion is to try (though not necessarily in production) the
changes suggested in the thread "NFSv3, ZFS, 10GE performance" started
by Sven Brandenburg. They didn't do much in my testing, but he says they
make a big difference. However, he is using a Linux client and a
RAM-based ZIL (with ZFS).

Other than that, I can only say that I observed the same thing as you
(testing both FreeBSD and Linux clients), though I always tested against
ZFS. I found that with FreeBSD the server was putting a high load on the
ZIL, meaning the FreeBSD client was using sync writes, while the Linux
client was not. ESXi behaved the same way as FreeBSD as a client. So
with a cheap SSD as a ZIL, ESXi and FreeBSD were writing at around
40-70 MB/s while Linux was writing at 600 MB/s. The same test run
against a virtual machine disk mounted over NFS shows how extreme the
problem can be: it dropped to 7 MB/s with FreeBSD and ESXi, but stayed
around 90-200 MB/s with Linux. (For comparison with non-NFS tests on the
10 Gbps link, I could get something like 600 MB/s with a simple netcat
from local RAM to remote /dev/null, and 800-900 MB/s with more threads
or NICs, if I remember correctly.)
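
If you want to confirm that sync writes are what is slowing the FreeBSD
client down, one quick (but unsafe, so only for testing) check on the
ZFS server side is to disable sync on the exported dataset and re-run
the write test; the pool/dataset name and mount point below are just
examples:

    # on the NFS/ZFS server -- testing only, this defeats the ZIL
    zfs set sync=disabled tank/export
    # on the client, re-run something like the dd test
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1m count=1000
    # then restore the default
    zfs set sync=standard tank/export

If the numbers jump up to Linux-like speeds with sync disabled, the
bottleneck is the sync/ZIL path rather than the network.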

I couldn't figure it out for sure, but I also couldn't cause any
corruption in my testing, so I just assume the Linux client only issues
"sync" commits for things like file creation and write barriers to
virtual disks, the same as it does with local file systems, rather than
doing every single write synchronously.
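
Roughly speaking, the difference is between every write being committed
to stable storage individually and a whole file being written and then
committed once. Purely as an illustration (this is not what either
client literally does over NFS, just a sketch of the two patterns in C):

    /* illustrative only -- per-write sync versus one commit at the end */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[8192];
            memset(buf, 0, sizeof(buf));

            /* every write forced to stable storage, roughly what the
             * FreeBSD and ESXi clients seemed to be causing on the server */
            int fd = open("sync-file", O_WRONLY | O_CREAT | O_SYNC, 0644);
            for (int i = 0; i < 1000; i++)
                    write(fd, buf, sizeof(buf));    /* each write waits */
            close(fd);

            /* versus buffered writes with a single commit at the end,
             * closer to what the Linux client appeared to be doing */
            fd = open("async-file", O_WRONLY | O_CREAT, 0644);
            for (int i = 0; i < 1000; i++)
                    write(fd, buf, sizeof(buf));    /* cached, flushed later */
            fsync(fd);                              /* one sync when done */
            close(fd);
            return 0;
    }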

Peter


