From: Jonathan Belson <jon@witchspace.com>
Date: Mon, 15 Feb 2010 17:51:42 +0000
To: freebsd-stable@freebsd.org
Subject: Re: More zfs benchmarks

On 14/02/2010 17:28, Jonathan Belson wrote:
> After reading some earlier threads about zfs performance, I decided to
> test my own server. I found the results rather surprising...

Thanks to everyone who responded. I experimented with my loader.conf
settings, leaving me with the following:

vm.kmem_size="1280M"
vfs.zfs.prefetch_disable="1"

That kmem_size seems quite big for a machine with only (!) 2GB of RAM,
but I wanted to see if it gave better results than 1024MB (it did, an
extra ~5MB/s).
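(In case anyone wants to reproduce this: both are boot-time tunables, so
they live in /boot/loader.conf and need a reboot to take effect. A
minimal sketch, using exactly the values above:)

    # /boot/loader.conf -- ZFS-related tuning
    vm.kmem_size="1280M"            # kernel memory cap (1280MB)
    vfs.zfs.prefetch_disable="1"    # turn off ZFS prefetch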
The rest of the settings are defaults:

vm.kmem_size_scale: 3
vm.kmem_size_max: 329853485875
vm.kmem_size_min: 0
vm.kmem_size: 1342177280
vfs.zfs.arc_min: 104857600
vfs.zfs.arc_max: 838860800

My numbers are a lot better with these settings:

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 63.372441 secs (33092492 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 60.647568 secs (34579326 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 68.241539 secs (30731312 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 68.722902 secs (30516057 bytes/sec)

Writing a 200MB file to a UFS partition gives around 37MB/s, so the zfs
overhead is costing me a few MB per second. I'm guessing that the hard
drives themselves have rather sucky performance (I used to use
Spinpoints, but receiving three faulty ones in a row put me off them).

Reading from a raw device (the slice is only 1GB, so dd hits the end of
the device before reaching count=2000):

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.286550 secs (95134635 bytes/sec)

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.445131 secs (93816473 bytes/sec)

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.284961 secs (95148032 bytes/sec)

Reading from a zfs file (the file is only 2000MB, so dd stops at EOF
before reaching count=4000):

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.643737 secs (81780281 bytes/sec)

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.444214 secs (82421567 bytes/sec)

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.572888 secs (82006851 bytes/sec)

So, the value of arc_max from the zfs tuning wiki seemed to be the main
brake on performance.

Cheers,

--Jon
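P.S. If anyone wants to repeat the runs, a small loop saves retyping the
dd commands (a sketch only, reusing the same invocations as above):

    #!/bin/sh
    # Run the ZFS write benchmark four times; dd prints its
    # throughput summary to stderr after each pass.
    for run in 1 2 3 4; do
        dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
    done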