From: Garrett Wollman
To: Rick Macklem
Cc: freebsd-fs@freebsd.org
Subject: Re: NFS on ZFS pure SSD pool
Date: Wed, 28 Aug 2013 17:55:20 -0400 (EDT)
Message-ID: <21022.29128.557471.157078@hergotha.csail.mit.edu>
In-Reply-To: <1342658741.14983067.1377722983208.JavaMail.root@uoguelph.ca>

< said:

> Eric Browning wrote:
>> Sam and I applied the patch (kernel now at r254983M) and set
>> vfs.nfsd.tcphighwater=5000 in sysctl.conf and my CPU is still
>> slammed. Should I up it to 10000?
>
> You can try. I have no insight into where this goes, since I can't
> produce the kind of server/load where it makes any difference. (I have
> a single-core i386 (P4 or similar) to test with and I don't use ZFS at
> all.) I've cc'd Garrett Wollman, since he runs rather large servers and
> may have some insight into appropriate tuning, etc.

10,000 is probably way too small. We run high-performance servers with
vfs.nfsd.tcphighwater set between 100k and 150k, and we crank
vfs.nfsd.tcpcachetimeo down to five minutes or less.

Just to give you an idea of how rarely this cache is actually hit: my
two main production file servers have both been up for about three
months now, and have answered billions of requests (enough for the
32-bit signed statistics counters to wrap). One server shows 63 hits,
with a peak TCP cache size of 150k, and the other shows zero, with a
peak cache size of 64k. Another server, which serves scratch space, has
been up for a little more than a month, and in nearly two billion
accesses has yet to see a single cache hit (peak cache size 131k, which
was actually hitting the configured limit, which I've since raised).

-GAWollman
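
For concreteness, a minimal sketch of the tuning described above, assuming
a FreeBSD NFS server that exposes the vfs.nfsd.* duplicate-request-cache
sysctls; the values are illustrative, not a recommendation, and the exact
nfsstat output fields vary by release:

    # /etc/sysctl.conf -- illustrative values in the range described above
    vfs.nfsd.tcphighwater=100000     # upper bound on TCP DRC entries (~100k-150k on large servers)
    vfs.nfsd.tcpcachetimeo=300       # discard cached TCP replies after 5 minutes (value in seconds)

    # Apply without a reboot and inspect the current settings:
    #   sysctl vfs.nfsd.tcphighwater=100000 vfs.nfsd.tcpcachetimeo=300
    #   sysctl vfs.nfsd
    # Server-side cache hit/miss counters (new NFS server stats):
    #   nfsstat -e -s

The point of the combination is that a large tcphighwater keeps the nfsd
threads from burning CPU trimming the cache, while a short tcpcachetimeo
keeps the (rarely hit) cache from growing without bound.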