From: Rick Macklem <rmacklem@uoguelph.ca>
To: Robert Schulze
Cc: freebsd-fs@freebsd.org
Date: Thu, 13 Jan 2011 07:19:24 -0500 (EST)
Subject: Re: nfs and dropped udp datagrams
Message-ID: <213698831.150959.1294921164719.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <4D2D7ADB.3090902@bytecamp.net>

> I wonder how big kern.ipc.maxsockbuf should be tuned on a busy
> NFS server. Lately, I often see clients losing the connection to
> their mounted directories.
> Furthermore, I see increasing counts of UDP datagrams "dropped due to
> full socket buffers" in netstat -s -p udp.
>
> The mbuf situation does not seem to be the reason for the lost
> connections; vmstat -z shows 0 failures in the mbuf section.
>
> Are there any other tunables which could prevent loss of connection to
> the server? What is a reasonable value for maxsockbuf?

Prior to r213756, the kernel RPC didn't check the return value from
soreserve(), so if maxsockbuf wasn't large enough, or if rsize/wsize was
greater than 8K, soreserve() could fail without the krpc noticing. When
that happens, sosend() drops messages, and that causes grief.

I'd just double maxsockbuf on all systems (clients and server), then
double it again if you still see problems.

rick
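[For the archives, a sketch of the tuning steps described above, assuming a
FreeBSD system with the standard sysctl(8) and netstat(1) tools; the current
value and the doubled value shown here are illustrative, not recommendations:]

```shell
# Check the current limit and the UDP drop counter mentioned above.
sysctl kern.ipc.maxsockbuf
netstat -s -p udp | grep "dropped due to full socket buffers"

# Double the limit at runtime (example: assuming the current value is
# 2097152, i.e. 2 MB; substitute double whatever your system reports).
sysctl kern.ipc.maxsockbuf=4194304

# Make the new value persistent across reboots.
echo 'kern.ipc.maxsockbuf=4194304' >> /etc/sysctl.conf
```

[Repeat the doubling on both clients and server, and re-check the netstat
counter under load to confirm the drops stop increasing.]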