Date: Thu, 13 Jan 2011 08:26:09 -0500 (EST)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Robert Schulze <rs@bytecamp.net>
Cc: freebsd-fs@freebsd.org
Subject: Re: nfs and dropped udp datagrams
Message-ID: <1544383922.152247.1294925169439.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <213698831.150959.1294921164719.JavaMail.root@erie.cs.uoguelph.ca>
> > I wonder how big kern.ipc.maxsockbuf should be tuned on a busy
> > NFS server. Lately I often see clients losing connection to
> > their mounted directories. Furthermore, I see increasing counts for
> > udp datagrams "dropped due to full socket buffers" in netstat -s -p
> > udp.
> >
> > The mbuf situation does not seem to be the reason for the lost
> > connections; vmstat -z shows 0 failures in the mbuf section.
> >
> > Are there any other tunables which could prevent loss of connection
> > to the server? What is a reasonable value for maxsockbuf?
> >
> Prior to r213756 the kernel rpc didn't check the return value from
> soreserve(), so if maxsockbuf wasn't large enough or the rsize/wsize
> was greater than 8K, it failed and the krpc didn't notice. However, if
> it fails, then sosend() drops messages and that causes grief.
>
> I'd just double it on all systems (clients and server), then double it
> again if you still see problems.
>
Oh, and for a change to kern.ipc.maxsockbuf to take effect for udp on a
server, you need to kill off/restart the nfsds. (The soreserve() is done
when the socket is added for servicing, and that happens on startup for
UDP.) And I think you want to crank up the value on the clients as well,
especially if your client(s) are pre-r213756 FreeBSD8 ones.

rick
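[The doubling-and-restart procedure above might look roughly like the
following sketch on a FreeBSD NFS server. The rc.d service name and the
idea of doubling the current value rather than picking a fixed number
are assumptions based on the advice in this thread, not something
verified on the poster's systems.]

```shell
#!/bin/sh
# Check the UDP drop counter that motivated this thread
netstat -s -p udp | grep 'dropped due to full socket buffers'

# Double the current socket-buffer ceiling (repeat if drops continue)
cur=$(sysctl -n kern.ipc.maxsockbuf)
sysctl kern.ipc.maxsockbuf=$((cur * 2))

# soreserve() runs only when the UDP socket is added for servicing
# (nfsd startup), so the new limit needs an nfsd restart to apply
/etc/rc.d/nfsd restart

# Persist the new value across reboots
echo "kern.ipc.maxsockbuf=$((cur * 2))" >> /etc/sysctl.conf
```

On pre-r213756 clients the same sysctl bump (without the nfsd restart)
would be needed, since their krpc silently ignored a failed soreserve().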