From: John Baldwin <jhb@freebsd.org>
To: freebsd-stable@freebsd.org
Cc: Daryl Sayers
Date: Tue, 29 Nov 2011 10:36:44 -0500
Subject: Re: Low nfs write throughput
Message-Id: <201111291036.44620.jhb@freebsd.org>

On Monday, November 28, 2011 7:12:39 pm Daryl Sayers wrote:
> >>>>> "Bengt" == Bengt Ahlgren writes:
>
> > Daryl Sayers writes:
> >> Can anyone suggest why I am getting poor write performance from my nfs setup.
> >> I have 2 x FreeBSD 8.2-STABLE i386 machines with ASUS P5B-Plus motherboards,
> >> 4G mem and dual-core 3GHz processors, using 147G 15k Seagate SAS drives with
> >> onboard Gb network cards connected to an idle network. The results below show
> >> that I get nearly 100Mb/s with a dd over rsh but only 15Mb/s using nfs. It
> >> improves if I use async, but an smbfs mount still beats it. I am using the
> >> same file, source and destinations for all tests. I have tried alternate
> >> network cards with no resulting benefit.
>
> [...]
>
> >> Looking at a systat -v on the destination I see that the nfs test does not
> >> exceed 16KB/t at 100% busy, where the other tests reach up to 128KB/t.
> >> For the record, I get reads of 22Mb/s without and 77Mb/s with async turned
> >> on for the nfs mount.
>
> > On a UFS filesystem you get NFS writes with the same size as the
> > filesystem blocksize. So an easy way to improve performance is to
> > create a filesystem with larger blocks. I accidentally found this out
> > when I had two NFS-exported filesystems from the same box with 16K and
> > 64K blocksizes respectively.
>
> > (A larger blocksize also tremendously improves the performance of UFS
> > snapshots!)
>
> Thanks to all who answered. I did try 'sysctl -w vfs.nfsrv.async=1' with
> no reportable change in performance. We are using a UFS2 filesystem, so the
> zfs command was not required. I did not try the patch as we would like to
> stay as standard as possible, but we will upgrade if the patch is released
> in a new kernel.

If you can test the patch then it is something I will likely put into the
next release. I have already tested it for robustness locally; what I don't
have are good performance tests. It would really be helpful if you were able
to test it.

> Thanks Bengt for the suggestion of block size. Increasing the block size to
> 64k made a significant improvement to performance.

In theory the patch might have given you similar gains.
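For anyone else wanting to try the block-size fix, something along these lines should reproduce it; this is only a sketch, the device name, export path, and server name are placeholders for your own setup, and newfs destroys the target filesystem, so back up first:

```shell
# On the NFS server: recreate the exported filesystem with 64K blocks
# (and the conventional block/8 fragment size).  DESTROYS all data on
# the target device -- /dev/da0s1d and /export are placeholders.
newfs -b 65536 -f 8192 /dev/da0s1d
mount /dev/da0s1d /export

# On the client: remount the export and measure sequential write speed.
mount_nfs -o rw server:/export /mnt
dd if=/dev/zero of=/mnt/testfile bs=64k count=16384    # ~1GB write

# On the server, watch the average transfer size (KB/t) while dd runs:
iostat -x -w 1
```

With a 16K filesystem you should see the server's transfer size pinned near 16KB/t during the NFS write; after rebuilding with 64K blocks it should rise accordingly.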
During my simple tests I was able to raise the average I/O size in iostat
to 70-80k from 16k.

-- 
John Baldwin