Date: Wed, 8 Feb 2012 14:09:03 -0600
From: Dan Nelson <dnelson@allantgroup.com>
To: Tim Daneliuk <tundra@tundraware.com>
Cc: FreeBSD Mailing List <freebsd-questions@freebsd.org>
Subject: Re: Asymmetric NFS Performance
Message-ID: <20120208200903.GI5775@dan.emsphone.com>
In-Reply-To: <4F2AD107.40703@tundraware.com>
References: <4F2AD107.40703@tundraware.com>
In the last episode (Feb 02), Tim Daneliuk said:
> Server: FBSD 8.2-STABLE / MTU set to 15000
> Client: Linux Mint 12 / MTU set to 8192
> NFS Mount Options: rw,soft,intr
>
> Problem:
>
> Throughput copying from Server to Client is about 2x that when copying a
> file from client to server.  The client does have a SSD whereas the server
> has conventional SATA drives but ...  This problem is evident with either
> 100- or 1000- speed ethernet so I don't think it is a drive thing since
> you'd expect to saturate 100-BASE with either type of drive.
>
> Things I've Tried So Far:
>
> - Increasing the MTUs - This helped speed things up, but the up/down
>   ratio stayed about the same.
>
> - Fiddling with rsize and wsize on the client - No real difference

If "iostat -zx 1" on the server shows the disks at 100% busy, you're
probably getting hit by the fact that NFS has to commit writes to stable
storage before acking the client, so writes over NFS can be many times
slower than local write speed.  Setting the vfs.nfsrv.async sysctl to 1
will speed things up, but if the server reboots while a client is writing,
you will probably end up with missing data even though the client thought
everything was written.

If you are serving ZFS filesystems, stick an SSD in the server and point
the ZFS intent log at it: "zpool add mypool log da3".  8GB of ZIL is more
than enough, but it needs to be fast, so no sticking a $10 thumb drive in
and expecting any improvement :)

-- 
	Dan Nelson
	dnelson@allantgroup.com
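[For reference, a minimal sketch of the commands discussed above, assuming
the pool is named "mypool" and the spare SSD appears as da3 (both taken
from the example in the reply, not from the poster's actual setup); the
sysctl.conf step is the usual way to make the setting persistent:]

    # Watch per-disk utilization on the server; sustained 100% busy during
    # a client->server copy points at the synchronous-write bottleneck.
    iostat -zx 1

    # Ack NFS writes before they reach stable storage (faster, but data
    # written just before a server crash/reboot can be silently lost).
    sysctl vfs.nfsrv.async=1

    # Keep the setting across reboots.
    echo 'vfs.nfsrv.async=1' >> /etc/sysctl.conf

    # For ZFS exports: dedicate a fast SSD (here da3) as the intent log.
    zpool add mypool log da3
    zpool status mypool    # verify the new "logs" vdev is listed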