From: Rick Macklem <rmacklem@uoguelph.ca>
To: Robert Schulze
Cc: freebsd-fs@freebsd.org
Date: Thu, 13 Jan 2011 08:26:09 -0500 (EST)
Subject: Re: nfs and dropped udp datagrams
List-Id: Filesystems <freebsd-fs@freebsd.org>

> > I wonder how big kern.ipc.maxsockbuf should be tuned on a busy
> > NFS server. Lately, I often see clients losing the connection to
> > their mounted directories.
> > Furthermore, I see increasing counts of udp datagrams "dropped due
> > to full socket buffers" in netstat -s -p udp.
> >
> > The mbuf situation does not seem to be the reason for the lost
> > connections; vmstat -z shows 0 failures in the mbuf section.
> >
> > Are there any other tunables which could prevent loss of connection
> > to the server? What is a reasonable value for maxsockbuf?
> >
> Prior to r213756 the kernel rpc didn't check the return value from
> soreserve(), so if maxsockbuf wasn't large enough or the rsize/wsize
> was greater than 8K, soreserve() failed and the krpc didn't notice.
> When it does fail, sosend() drops messages, and that causes grief.
>
> I'd just double maxsockbuf on all systems (clients and server), then
> double it again if you still see problems.
>
Oh, and for a change to kern.ipc.maxsockbuf to take effect for udp on a
server, you need to kill off/restart the nfsds. (The soreserve() is done
when the socket is added for servicing, and that happens on startup for
UDP.) And I think you want to crank up the value on the clients as well,
especially if your client(s) are pre-r213756 FreeBSD 8 ones.

rick
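[For the archive, the procedure above might look roughly like this on a
FreeBSD NFS server. This is only a sketch: the 4194304 value assumes a
default maxsockbuf of 2 MB being doubled, and the rc.d invocation may
differ depending on your release and rc configuration.]

```shell
# Check the UDP drop counter and the current socket-buffer ceiling.
netstat -s -p udp | grep 'full socket buffers'
sysctl kern.ipc.maxsockbuf

# Double the ceiling (example value: 2 MB doubled to 4 MB), and persist
# it in /etc/sysctl.conf so it survives a reboot.
sysctl kern.ipc.maxsockbuf=4194304
echo 'kern.ipc.maxsockbuf=4194304' >> /etc/sysctl.conf

# soreserve() for the UDP socket runs when nfsd starts, so the daemons
# must be restarted before the new limit applies on the server side.
/etc/rc.d/nfsd restart
```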