From: Daryl Sayers <daryl@ci.com.au>
Date: Thu, 1 Dec 2011 16:24:59 +1100 (EST)
To: jhb@freebsd.org
Cc: freebsd-stable@freebsd.org
Subject: Re: Low nfs write throughput
Message-Id: <201112010524.pB15Ox4o073666@mippet.ci.com.au>
In-reply-to: <201111301006.27651.jhb@freebsd.org>
References: <201111180310.pAI3ARbZ075115@mippet.ci.com.au>
	<201111291036.44620.jhb@freebsd.org>
	<201111292356.pATNuRcq059817@mippet.ci.com.au>
	<201111301006.27651.jhb@freebsd.org>

>>>>> "John" == John Baldwin writes:

> On Tuesday, November 29, 2011 6:56:27 pm Daryl Sayers wrote:
>> >>>>> "John" == John Baldwin writes:
>>
>> > On Monday, November 28, 2011 7:12:39 pm Daryl Sayers wrote:
>> >> >>>>> "Bengt" == Bengt Ahlgren writes:
>> >>
>> >> > Daryl Sayers writes:
>> >> >> Can anyone suggest why I am getting poor write performance from
>> >> >> my nfs setup? I have two FreeBSD 8.2-STABLE i386 machines with
>> >> >> ASUS P5B-Plus motherboards, 4G of memory and a dual-core 3GHz
>> >> >> processor, using 147G 15k Seagate SAS drives, with onboard Gb
>> >> >> network cards connected to an idle network. The results below
>> >> >> show that I get nearly 100Mb/s with a dd over rsh but only
>> >> >> 15Mb/s using nfs. It improves if I use async, but a smbfs mount
>> >> >> still beats it. I am using the same file, source and destination
>> >> >> for all tests. I have tried alternate network cards with no
>> >> >> resulting benefit.
>> >>
>> >> > [...]
>> >>
>> >> >> Looking at a systat -v on the destination I see that the nfs
>> >> >> test does not exceed 16KB/t at 100% busy, whereas the other
>> >> >> tests reach up to 128KB/t. For the record, I get reads of 22Mb/s
>> >> >> without and 77Mb/s with async turned on for the nfs mount.
>> >>
>> >> > On a UFS filesystem you get NFS writes with the same size as the
>> >> > filesystem blocksize. So an easy way to improve performance is
>> >> > to create a filesystem with larger blocks. I accidentally found
>> >> > this out when I had two NFS-exported filesystems from the same
>> >> > box with 16K and 64K blocksizes respectively.
>> >>
>> >> > (A larger blocksize also tremendously improves the performance
>> >> > of UFS snapshots!)
>> >>
>> >> Thanks to all that answered. I did try 'sysctl -w
>> >> vfs.nfsrv.async=1', with no reportable change in performance. We
>> >> are using a UFS2 filesystem, so the zfs command was not required.
>> >> I did not try the patch, as we would like to stay as standard as
>> >> possible, but we will upgrade if the patch is released in a new
>> >> kernel.
>>
>> > If you can test the patch then it is something I will likely put
>> > into the next release. I have already tested it as far as
>> > robustness locally; what I don't have are good performance tests.
>> > It would really be helpful if you were able to test it.
>>
>> >> Thanks, Bengt, for the block size suggestion. Increasing the block
>> >> size to 64k made a significant improvement to performance.
>>
>> > In theory the patch might have given you similar gains. During my
>> > simple tests I was able to raise the average I/O size in iostat
>> > from 16k to 70-80k.
>>
>> OK, I downloaded and installed the patch and did some basic testing,
>> and I can confirm that the patch does improve performance. I can also
>> see that my KB/t now exceeds the 16KB/t that previously seemed to be
>> the limiting factor.

> Ok, thanks. Does it give similar performance results to using 64k
> block size?

From the tests I have done I get similar results to the block size
change.
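
For anyone else chasing the same problem, the commands involved look
roughly like the following sketch. The device name, export path, mount
point and dd sizes are illustrative only, not our exact setup, and
newfs destroys any data on the target device:

    # Recreate the exported filesystem with 64k blocks; -f 8192 keeps
    # the usual 8:1 block/fragment ratio. /dev/da0s1d is an example
    # device name only, and newfs wipes whatever is on it.
    newfs -U -b 65536 -f 8192 /dev/da0s1d

    # Optionally enable async writes on the NFS server (the sysctl
    # tested above).
    sysctl -w vfs.nfsrv.async=1

    # On the client, mount with 64k read/write sizes to match;
    # server:/export and /mnt are example names.
    mount -t nfs -o rsize=65536,wsize=65536 server:/export /mnt

    # A simple 1G sequential write test over the mount; watch KB/t
    # with 'systat -v' or 'iostat' on the server while it runs.
    dd if=/dev/zero of=/mnt/testfile bs=64k count=16384

64k is the largest blocksize UFS supports, which is why it lines up
with the gains reported above.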
-- 
Daryl Sayers                            Direct: +612 95525510
Corinthian Engineering                  Office: +612 95525500
Suite 54, Jones Bay Wharf               Fax:    +612 95525549
26-32 Pirrama Rd                        email:  daryl@ci.com.au
Pyrmont NSW 2009 Australia              www:    http://www.ci.com.au