Date:      Thu, 25 Apr 2013 11:43:58 -0700
From:      "Marc G. Fournier" <scrappy@hub.org>
To:        Rick Macklem <rmacklem@uoguelph.ca>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: NFS Performance issue against NetApp
Message-ID:  <2F5E9262-E465-4837-9B35-6220970C185E@hub.org>
In-Reply-To: <972106612.1125159.1366850215231.JavaMail.root@erie.cs.uoguelph.ca>
References:  <972106612.1125159.1366850215231.JavaMail.root@erie.cs.uoguelph.ca>



On 2013-04-24, at 17:36 , Rick Macklem <rmacklem@uoguelph.ca> wrote:

> If you explicitly set rsize=N,wsize=N on a mount, those sizes will be
> used unless they are greater than min(MAXBSIZE, server-max). MAXBSIZE is
> the limit for the client side buffer cache and server-max is whatever
> the server says is its max, so the client never uses a value greater than
> that.
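(For my own notes, the clamping rule Rick describes can be sketched like this; `MAXBSIZE` = 65536 and the function name are just illustrative, not actual kernel code:)

```python
# Sketch of the negotiation Rick describes: the client uses the requested
# rsize/wsize, but never more than min(MAXBSIZE, server-max).
MAXBSIZE = 65536  # client-side buffer cache limit (illustrative value)

def effective_size(requested, server_max):
    """Clamp a requested rsize/wsize to min(MAXBSIZE, server_max)."""
    return min(requested, MAXBSIZE, server_max)
```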

Just got my new Intel card in, so I'm starting to play with it … one thing I didn't notice yesterday when I ran nfsstat -m:

nfsv3,tcp,resvport,soft,intr,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=32768,readahead=1,wcommitsize=5175966,timeout=120,retrans=2

Earlier in this thread, it was recommended that I change rsize/wsize to 32k … and Jeremy Chadwick thought it defaulted to 8k …

My fstab entry right now is simply:

192.168.1.1:/vol/vm     /vm             nfs     rw,intr,soft    0       0

so I'm not setting rsize/wsize anywhere … did those defaults get raised recently without anyone noticing? Or does it make sense to reduce from 64k to 32k to get better performance?
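(If I do end up forcing 32k, I assume the fstab line would look something like this; untested sketch based on my entry above:)

```
192.168.1.1:/vol/vm     /vm             nfs     rw,intr,soft,rsize=32768,wsize=32768    0       0
```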

Again, this is using a FreeBSD client to mount from a NetApp file server ...

