From: Rick Macklem <rmacklem@uoguelph.ca>
To: Jeremy Chadwick
Cc: freebsd-fs@freebsd.org
Date: Wed, 24 Apr 2013 20:36:55 -0400 (EDT)
Subject: Re: NFS Performance issue against NetApp

Jeremy Chadwick wrote:
> On Wed, Apr 24, 2013 at 04:08:24PM -0700, Marc G. Fournier wrote:
> >
> > On 2013-04-24, at 16:02, Rick Macklem wrote:
> >
> > > Along with rsize,wsize you might want to try increasing readahead.
> > > The default is only 1.
> >
> > Stupid question on this, possibly, but are the current defaults
> > "sane" anymore, or residual from 'the old days'? Like, I've read in
> > many places that you should raise rsize/wsize -- in what
> > circumstances would leaving them at the defaults make sense?
>
> From what I can discern, the defaults on stable/9 (for an NFS client)
> are 8192 -- see sys/nfsclient/nfs.h, NFS_WSIZE and NFS_RSIZE.
>
For the new client, the default is min(MAXBSIZE, server-max), where
server-max is whatever the server reports as its maximum (also MAXBSIZE
for the new server). I think the old server uses 32768. These numbers
are for the default TCP mounts; specify udp (or mntudp) and I think the
default becomes 16384.

If you explicitly set rsize=N,wsize=N on a mount, those sizes will be
used unless they are greater than min(MAXBSIZE, server-max). MAXBSIZE
is the limit for the client-side buffer cache, and server-max is
whatever the server says its maximum is, so the client never uses a
value greater than that.

For readahead, the default is 1. This seems rather small to me and I
think it is in the "from the old days" category. You can set it to a
larger value, although there is an ifdef'd upper limit, which is what
you will get if you specify a really large value for readahead.
Admittedly, if you are using a large rsize,wsize on a low-latency LAN,
readahead=1 may be sufficient.
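For concreteness, a mount invocation that sets these options explicitly
could look something like the sketch below. The server name, export path
and mount point are made-up placeholders, and the particular numbers are
only illustrative; whatever you pass is still clamped as described above.

  # Hypothetical example -- server, export and mount point are placeholders.
  # rsize/wsize are clamped to min(MAXBSIZE, server-max); readahead is
  # clamped to the compiled-in upper limit.
  mount -t nfs -o tcp,nfsv3,rsize=65536,wsize=65536,readahead=4 \
      netapp1:/vol/data /mnt/data

The same option string would go in the fourth (options) field of an
/etc/fstab entry if you want the mount to persist across reboots.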
As someone else noted, if you are using head or stable/9, "nfsstat -m"
shows you what is actually being used (for the new client only).

rick

> --
> | Jeremy Chadwick                                   jdc@koitsu.org |
> | UNIX Systems Administrator                http://jdc.koitsu.org/ |
> | Mountain View, CA, US                                            |
> | Making life hard for others since 1977.             PGP 4BD6C0CB |
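For anyone following along, checking what the new client actually
negotiated is just the command below (output omitted here; the fields to
look for are rsize, wsize and readahead). The grep is only a convenience
and assumes those option names appear literally in the output.

  # Print the options actually in effect for each NFS mount (new client only).
  nfsstat -m
  # Narrow it down to the size/readahead settings:
  nfsstat -m | grep -E 'rsize|readahead'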