Date:      Thu, 5 Apr 2018 11:57:59 -0400
From:      Mike Tancsa <mike@sentex.net>
To:        Rick Macklem <rmacklem@uoguelph.ca>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject:   Re: Linux NFS client and FreeBSD server strangeness
Message-ID:  <9040d0fa-f9c2-2cc3-efbd-f96408cff73b@sentex.net>
In-Reply-To: <YQBPR0101MB104297BE1296B72597086A3FDDBB0@YQBPR0101MB1042.CANPRD01.PROD.OUTLOOK.COM>
References:  <369fab06-6213-ba87-cc66-c9829e8a76a0@sentex.net> <YQBPR0101MB104297BE1296B72597086A3FDDBB0@YQBPR0101MB1042.CANPRD01.PROD.OUTLOOK.COM>


Thank you for all the feedback, pointers, and insights.  Coming directly
from 'Mr. NFS', it's particularly appreciated :)

I think I am on a better track now to getting things playing well
between FreeBSD and Linux, or at least to better understanding the
interactions.  WRT the locking, I think I added it because VirtualBox
would not work otherwise when its disk images were accessed over NFS.
KVM as the hypervisor does not seem to have this limitation.

I think the root of the issue partially stems from the client having a
LOT of RAM.  Given the default behaviour described in the Linux nfs(5)
man page:

----------------
       The NFS client treats the sync mount option differently than some
       other file systems (refer to mount(8) for a description of the
       generic sync and async mount options).  If neither sync nor async
       is specified (or if the async option is specified), the NFS
       client delays sending application writes to the server until any
       of these events occur:

              Memory pressure forces reclamation of system memory
              resources.

              An application flushes file data explicitly with sync(2),
              msync(2), or fsync(3).

              An application closes a file with close(2).

              The file is locked/unlocked via fcntl(2).

       In other words, under normal circumstances, data written by an
       application may not immediately appear on the server that hosts
       the file.
----------------

With that in mind, the bursty write behaviour makes more sense now.
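
For what it's worth, one way I can test this theory is to take the
client-side write caching out of the picture and compare.  On the Linux
client, something like this (server and mount point names are just
placeholders):

    # force each write through to the server before it returns
    mount -t nfs -o vers=3,sync fbsdserver:/export /mnt/nfs

    # or keep the default caching but flush explicitly at the end
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 conv=fsync

If the sync mount writes steadily (if slowly) while the default mount is
bursty, that points at the client's write-back cache rather than the
server.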
I will check out some of your tuning suggestions too.



	---Mike

On 4/4/2018 8:38 PM, Rick Macklem wrote:
> Mike Tancsa wrote:
>> Not sure where the tweaking needs to happen, but I am getting strange
>> behaviour between a Linux nfs client and FreeBSD RELENG_11 NFS server.
>>
>> The FreeBSD server starts with
>>
>>
>> nfs_client_enable="YES"
>> nfs_server_enable="YES"
>>
>>
>> rpcbind_enable="YES"
>> rpc_lockd_enable="YES"
>> rpc_statd_enable="YES"
> Although it probably isn't related to what you are seeing, I avoid NSM and NLM since
> they are fundamentally flawed protocols. You only need them for NFSv3 clients where
> the clients must see each other's byte range locks.
> If byte range locks only need to be visible to processes within a client, you can get
> rid of these and use the "nolockd" mount option, called "nolock" on Linux, I think?
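
(If I follow, that would look something like this on the Linux side --
just my reading of nfs(5), with server/path names as placeholders:

    # byte range locks stay local to the client; no NLM/NSM on the wire
    mount -t nfs -o vers=3,nolock fbsdserver:/export /mnt/nfs

and then rpc_lockd_enable/rpc_statd_enable could be dropped from rc.conf
on the server.)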
> 
>> nfs_server_flags="-u -t -n 16"
> 16 nfsd threads is very low. The default (if you don't specify "-n") is 8 per core, which
> is still very low. Extra ones cause very little overhead (a kernel stack for each one), so
> I usually use "-n 256" if the server is going to be under any amount of load.
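
(So for rc.conf on the server, assuming the load warrants it, something
like:

    nfs_server_flags="-u -t -n 256"

instead of the "-n 16" I have now.)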
> 
> Another thing you can try is:
> # sysctl vfs.nfsd.cachetcp=0
> which disables use of the DRC (duplicate request cache) for TCP mounts. (Many NFS
> servers never use the DRC for TCP mounts. I designed one to try to make NFS over TCP
> more fault tolerant, but it does result in quite a bit of overhead for write loads.)
> If disabling it fixes the problem but you still want to use the DRC, it can be tuned
> with something like:
> vfs.nfsd.tcpcachetimeo=300 (five minutes instead of hours)
> vfs.nfsd.tcphighwater=100  (limit of 100 cached entries)
> --> The smaller you make these, the lower the overhead, but the less effective the
>       cache becomes at making NFS over TCP reliable when TCP reconnects occur.
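
(To make those persistent, I assume they can go in /etc/sysctl.conf on
the server, with the values suggested above:

    vfs.nfsd.cachetcp=0
    # or, if keeping the DRC enabled for TCP, tune it down instead:
    #vfs.nfsd.tcpcachetimeo=300
    #vfs.nfsd.tcphighwater=100

-- assuming these are run-time sysctls, which the "sysctl" command above
suggests they are.)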
> 
> There are several tunables for NFSv4 (but none of these affect NFSv3):
> vfs.nfsd.sessionhashsize=1000
> vfs.nfsd.fhhashsize=1000
> vfs.nfsd.clienthashsize=1000
> vfs.nfsd.statehashsize=100
> (A fairly large system dedicated to serving NFS might make the above "1000"s "10000"s.)
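
(I assume these hash sizes are boot-time tunables rather than run-time
sysctls, so they would go in /boot/loader.conf, e.g. for the
large-dedicated-server case:

    vfs.nfsd.sessionhashsize=10000
    vfs.nfsd.fhhashsize=10000
    vfs.nfsd.clienthashsize=10000
    vfs.nfsd.statehashsize=100

-- scaling the "1000"s up as suggested and leaving statehashsize at the
value shown.)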
> 
>> and on the Linux client I have been trying various options to no avail.
>> The mount works, but on a straight up write to the FreeBSD server,
>> everything is very bursty.  I first noticed this (I think) a few months
>> ago, when dumps from Linux across an nfs mount seemed to take a lot
>> longer and had become very bursty.
>>
>> It seems if there are a mixture of reads and writes, everything is
>> pretty fast. But if a client is just writing to the server, something,
>> somewhere is blocking.  Doing something simple like
>> ls -l /nfsmount
>> from the client "wakes up" the server/client so that the write stream can
>> keep going. Otherwise, it does a big blast of writes and then pauses for
>> several seconds during the dump.
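
(As an aside, the burst/pause pattern is easy to see in a packet capture
taken on either end while a dump runs, e.g., with interface and host
names as placeholders:

    tcpdump -i igb0 -s 0 -w nfs-burst.pcap host fbsdserver and port 2049

That is more or less the sort of trace referred to further down.)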
> This sounds like a network device driver issue to me. The main difference between
> a FreeBSD client and a Linux client that I am aware of is that the Linux client likes
> to do page size (4K) writes, so it generates lots of them.
> 
> One example might be interrupt moderation. It's a wonderful thing for some TCP loads,
> but can be a terrible thing for NFS loads. Basically anything that adds delay to interrupt
> delivery/processing will increase latency and that kills NFS performance, from what I've
> seen.
> Someone else suggested disabling TSO, which is often broken in the net device drivers.
> If you have a different type of net interface that uses a different driver, you might try
> that and see if it has the same problem.
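
(My reading of the TSO suggestion, with the interface name as a
placeholder:

    # disable TCP segmentation offload on the suspect interface
    ifconfig igb0 -tso

    # and add -tso to the interface's line in rc.conf to make it
    # persistent, e.g.:
    # ifconfig_igb0="DHCP -tso"

The interrupt moderation knobs are driver-specific, so the driver's man
page is the place to look for those.)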
> 
> I might look at your packet trace someday, but I haven't yet.
> 
> Good luck with it, rick
> [stuff snipped]
> 
> 


-- 
-------------------
Mike Tancsa, tel +1 519 651 3400 x203
Sentex Communications, mike@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada


