Date: Wed, 28 Aug 2013 21:31:30 -0400 (EDT)
From: Garrett Wollman
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Subject: Re: NFS on ZFS pure SSD pool
Message-ID: <21022.42098.291440.900505@hergotha.csail.mit.edu>
In-Reply-To: <461209820.15034260.1377733709648.JavaMail.root@uoguelph.ca>
References: <21022.29128.557471.157078@hergotha.csail.mit.edu> <461209820.15034260.1377733709648.JavaMail.root@uoguelph.ca>

Rick Macklem said:

> You should get your users to do their mounts over flaky WiFi links
> and such, in order to make better use of the cache;-)

We don't support NFS use by such clients -- it's purely for
compute-cluster-type applications. Anything that can use AFS is
supposed to use AFS.

> By the way Garrett, what do you have kern.ipc.nmbclusters set to,
> since cache entries will use mbuf clusters normally.

I have it at 2**20, which is actually only important because it
causes kern.ipc.nmbjumbop to be set as a side effect. We also set
maxusers (to match the new calculation in 10-current) so that other
kernel data structures will be sized appropriately.
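For anyone tuning the same knobs, a minimal /boot/loader.conf sketch
along these lines might look like the following. The nmbclusters
value is the 2**20 figure mentioned above; the kern.maxusers number
is only a placeholder for illustration, not the value actually used
on this server:

    # Illustrative loader tunables only -- adjust to your own hardware.
    # 2**20 mbuf clusters; kern.ipc.nmbjumbop gets scaled from this
    # as a side effect, as noted above.
    kern.ipc.nmbclusters="1048576"
    # Placeholder value: raising maxusers also grows other kernel
    # data structures that are sized from it.
    kern.maxusers="1024"

Both are boot-time tunables, so they take effect on the next reboot.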
This server's pretty idle right now:

36907/150098/187005 mbufs in use (current/cache/total)
948/22794/23742/1048576 mbuf clusters in use (current/cache/total/max)
0/4352 mbuf+clusters out of packet secondary zone in use (current/cache)
24583/36548/61131/524288 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/262144 9k jumbo clusters in use (current/cache/total/max)
0/0/0/131072 16k jumbo clusters in use (current/cache/total/max)
109454K/229304K/338759K bytes allocated to network (current/cache/total)

On a machine without jumbo frames, it looks like this:

10829/230836/241665 mbufs in use (current/cache/total)
8268/93146/101414/1048576 mbuf clusters in use (current/cache/total/max)
8190/80641 mbuf+clusters out of packet secondary zone in use (current/cache)
0/1993/1993/524288 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/262144 9k jumbo clusters in use (current/cache/total/max)
0/0/0/131072 16k jumbo clusters in use (current/cache/total/max)
19243K/251973K/271216K bytes allocated to network (current/cache/total)

-GAWollman