From owner-freebsd-fs@FreeBSD.ORG Mon Apr  2 01:04:39 2012
Date: Sun, 1 Apr 2012 21:04:38 -0400 (EDT)
From: Rick Macklem <rmacklem@uoguelph.ca>
To: Sven Brandenburg
Cc: freebsd-fs@freebsd.org
Subject: Re: NFSv3, ZFS, 10GE performance
Message-ID: <1428634009.2076834.1333328678316.JavaMail.root@erie.cs.uoguelph.ca>
In-Reply-To: <4F746D8C.8010903@crashme.org>

Sven Brandenburg wrote:
> On 03/26/2012 11:47 PM, Rick Macklem wrote:
> > MAXBSIZE is 64K. I'd like to try making that bigger, but haven't
> > gotten around to it yet. (If you wanted to try bumping MAXBSIZE to
> > 128K on both client and server and seeing what happens, that might
> > be interesting, since my understanding is that ZFS uses a 128K
> > block size.)
>
> I finally got around to testing it (with 256K and 1M) - there is
> good news and bad news.
> The good news is that the system does indeed boot (off of ZFS at
> least, no idea about UFS) and it does increase performance.
> I am now seeing roughly 800MB/s right off the bat, which is quite
> nice.
> The bad news is that I had to use a Linux client, because the
> FreeBSD client declined to work:
> mount_nfs: /mnt, : No buffer space available
>
> (Although I will freely admit that my knowledge of where to adjust
> this value is rather limited: what I did was change MAXBSIZE and
> MAXPHYS to 1M in /usr/src/sys/sys/param.h, remake world+kernel, and
> reboot.
> I forgot MAXPHYS on my first try, and that crashed the client
> machine as soon as I tried to mount something via NFS. Notably, the
> server seems to be working ok even with mismatched MAXPHYS/MAXBSIZE
> values.)
>
> So far, the results are very promising.
>
I did a quick test after rebuilding a kernel with MAXBSIZE set to
131072 and it seemed to work ok for UFS plus NFS. I haven't tried
anything larger than 128K. (I don't think you need to rebuild your
userland if you are increasing MAXBSIZE just so NFS can do bigger
transfers, but I'm not sure;-)

Btw, I didn't mean to suggest this as something to do on a production
system, just as a "bleeding edge" experiment, in case you wanted to
try it. I don't see a problem with using a 64K rsize plus a readahead
of 8 on a stock kernel.
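
To spell out the change Sven describes above, in case anyone else
wants to try it: it amounts to something like the following edit to
sys/sys/param.h. This is a sketch only - the default values and
comments shown here are from memory and vary between FreeBSD
versions, so check your own tree rather than applying this blindly.

    --- sys/sys/param.h
    -#define MAXBSIZE        65536           /* must be power of 2 */
    +#define MAXBSIZE        (1024 * 1024)   /* experiment: 1M NFS I/O size */
    -#define MAXPHYS         (128 * 1024)    /* max raw I/O transfer size */
    +#define MAXPHYS         (1024 * 1024)   /* keep >= MAXBSIZE, or the
                                                client can crash on mount */

followed by roughly the usual rebuild procedure:

    cd /usr/src
    make buildworld && make buildkernel
    make installkernel && make installworld
    shutdown -r now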
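
For the stock-kernel case, the client-side mount would look something
like this (server:/export and /mnt are placeholders; 65536 is the
current 64K MAXBSIZE, and readahead is a standard mount_nfs option):

    mount -t nfs -o nfsv3,rsize=65536,wsize=65536,readahead=8 server:/export /mnt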
Have fun with it, rick

> regards,
> Sven