Date: Thu, 1 Dec 2011 16:24:59 +1100 (EST)
From: Daryl Sayers <daryl@ci.com.au>
To: jhb@freebsd.org
Cc: freebsd-stable@freebsd.org
Subject: Re: Low nfs write throughput
Message-ID: <201112010524.pB15Ox4o073666@mippet.ci.com.au>
In-Reply-To: <201111301006.27651.jhb@freebsd.org> (message from John Baldwin on Wed, 30 Nov 2011 10:06:27 -0500)
References: <201111180310.pAI3ARbZ075115@mippet.ci.com.au> <201111291036.44620.jhb@freebsd.org> <201111292356.pATNuRcq059817@mippet.ci.com.au> <201111301006.27651.jhb@freebsd.org>
>>>>> "John" == John Baldwin <jhb@freebsd.org> writes: > On Tuesday, November 29, 2011 6:56:27 pm Daryl Sayers wrote: >> >>>>> "John" == John Baldwin <jhb@freebsd.org> writes: >> >> > On Monday, November 28, 2011 7:12:39 pm Daryl Sayers wrote: >> >> >>>>> "Bengt" == Bengt Ahlgren <bengta@sics.se> writes: >> >> >> >> > Daryl Sayers <daryl@ci.com.au> writes: >> >> >> Can anyone suggest why I am getting poor write performance from my nfs setup. >> >> >> I have 2 x FreeBSD 8.2-STABLE i386 machines with ASUS P5B-plus mother boards, >> >> >> 4G mem and Dual core 3g processor using 147G 15k Seagate SAS drives with >> >> >> onboard Gb network cards connected to an idle network. The results below show >> >> >> that I get nearly 100Mb/s with a dd over rsh but only 15Mbs using nfs. It >> >> >> improves if I use async but a smbfs mount still beats it. I am using the same >> >> >> file, source and destinations for all tests. I have tried alternate Network >> >> >> cards with no resulting benefit. >> >> >> >> > [...] >> >> >> >> >> Looking at a systat -v on the destination I see that the nfs test does not >> >> >> exceed 16KB/t with 100% busy where the other tests reach up to 128KB/t. >> >> >> For the record I get reads of 22Mb/s without and 77Mb/s with async turned on >> >> >> for the nfs mount. >> >> >> >> > On an UFS filesystem you get NFS writes with the same size as the >> >> > filesystem blocksize. So an easy way to improve performance is to >> >> > create a filesystem with larger blocks. I accidentally found this out >> >> > when I had two NFS exported filesystems from the same box with 16K and >> >> > 64K blocksizes respectively. >> >> >> >> > (Larger blocksize also tremendously improves the performance of UFS >> >> > snapshots!) >> >> >> >> Thanks to all that answered. I did try the 'sysctl -w vfs.nfsrv.async=1' with >> >> no reportable change in performance. We are using a UFS2 filesystem so the >> >> zfs command was not required. I did not try the patch as we would like to stay >> >> as standard as possible but will upgrade if the patch is released in new >> >> kernel. >> >> > If you can test the patch then it is something I will likely put into the >> > next release. I have already tested it as far as robustness locally, what >> > I don't have are good performance tests. It would really be helpful if you >> > were able to test it. >> >> >> Thanks Bengt for the suggestion of block size. Increasing the block size to >> >> 64k made a significant improvement to performance. >> >> > In theory the patch might have given you similar gains. During my simple tests >> > I was able to raise the average I/O size in iostat to 70 to 80k from 16k. >> >> OK, I downloaded and install the patch and did some basic testing and I can >> reveal that the patch does improve performance. I can also see that my KB/t >> now exceed the 16KB/t that seemed to be a limiting factor prior. > Ok, thanks. Does it give similar performance results to using 64k block size? >From the tests I have done I get similar results to the block size change. -- Daryl Sayers Direct: +612 95525510 Corinthian Engineering Office: +612 95525500 Suite 54, Jones Bay Wharf Fax: +612 95525549 26-32 Pirrama Rd email: daryl@ci.com.au Pyrmont NSW 2009 Australia www: http://www.ci.com.au