From: Danny Carroll <fbsd@dannysplace.net>
Date: Fri, 31 Oct 2008 14:47:55 +1000
To: Jeremy Chadwick
Message-ID: <490A8DFB.8030405@dannysplace.net>
In-Reply-To: <20081031043412.GA22289@icarus.home.lan>
Cc: freebsd-fs@freebsd.org, freebsd-hardware@freebsd.org
Subject: Re: Areca vs. ZFS performance testing.

Jeremy Chadwick wrote:
> On Fri, Oct 31, 2008 at 02:07:56PM +1000, Danny Carroll wrote:
>
> - Memory cache enabled on Areca, write caching enabled on disks
> - Memory cache enabled on Areca, write caching disabled on disks
> - Memory cache disabled on Areca, write caching enabled on disks
> - Memory cache disabled on Areca, write caching disabled on disks

Does it matter what type of disk we are talking about? What I mean is: do you want to see this with both RAID5 and RAID6 arrays?

Also, I'm pretty sure that in JBOD mode the cache (on the card) will do nothing. But I am not certain, so I'll run the tests there as well.

What about stripe sizes? I mainly use big files, so I was going to stripe accordingly, but the bonnie++ tests might give strange results in that case.

> I don't know if the controller will let you disable use of memory cache,
> but I'm hoping it does. I'm pretty sure it lets you disable disk
> write caching in its BIOS or via the CLI utility.

It's been a while since I've had a hardware RAID card. I'll see what is available.

> All of the tuning variables apply to i386 and amd64.
>
> You do not need the vfs.zfs.debug variable; I'm not sure why you enabled
> that. I imagine it will have some impact on performance.

Consider it gone.

> I do not know anything about kern.maxvnodes, or vfs.zfs.vdev.cache.size.

At the moment I am not hitting anywhere near the maxvnodes limit, so I think it is irrelevant.
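For the disk-write-caching half of that test matrix, on the ICH/JBOD side the per-disk write cache can typically be toggled from loader.conf on stock FreeBSD ATA disks rather than from a controller BIOS. A sketch of what I have in mind (the `hw.ata.wc` tunable is the standard one for ata(4) disks, but verify it applies on your release and controller; a reboot is needed for it to take effect):

```shell
# /boot/loader.conf fragment for the "write caching disabled on disks" runs.
# Set back to "1" (the default) for the "write caching enabled" runs.
hw.ata.wc="0"
```

A hypothetical bonnie++ invocation for each cell of the matrix, with the file size set well above RAM so the benchmark isn't satisfied from cache (`/tank/bench` is a made-up mountpoint on the pool under test):

```shell
# -s: total file size (should be >= 2x RAM), -d: test directory,
# -n 0: skip the small-file creation phase, -u: user to run as.
bonnie++ -d /tank/bench -s 8g -n 0 -u root
```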
> The tuning variables I advocate for a system with 2GB of RAM or more,
> on RELENG_7, are:
>
> vm.kmem_size="1536M"
> vm.kmem_size_max="1536M"
> vfs.zfs.arc_min="16M"
> vfs.zfs.arc_max="64M"
> vfs.zfs.prefetch_disable="1"
>
> You can gradually increase arc_min and arc_max by ~16MB increments as
> you see fit; you should see general performance improvements as they
> get larger (more data being kept in the ARC), but don't get too crazy.
> I've tuned arc_max up to 128MB before with success, but I don't want
> to try anything larger without decreasing kmem_size_*.

What is the ARC? Is it the ZFS file cache?

> The only reason you need to adjust kmem_size and kmem_size_max is to
> increase the amount of available kmap memory, which ZFS relies heavily
> on. If the values are too low, under heavy I/O the kernel will panic
> with kmem exhaustion messages (see the ZFS Wiki for what some look
> like, or my Wiki).
>
> I would recommend you stick with a consistent set of loader.conf
> tuning variables, and focus entirely on comparing the performance of
> ZFS on the Areca controller vs. the ICH controller.

Once I am settled on a starting point, I won't be altering it for the tests.

> You can perform a "ZFS tuning comparison" later. One step at a time;
> don't over-exert yourself quite yet. :-)

Yeah, this is weekend stuff for me at the moment; it will take me some time to get things done. First I need to figure out how I am going to hook up 10 drives to my system. I don't have the drive-bay space, and I am not shelling out for a new case, so I am hunting around for an old external disk cabinet.

> You can add raidz2 to this comparison list too if you feel it's
> worthwhile, but I think most people will be using raidz1.

I might as well do both.

-D
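(For the archives: the ARC is ZFS's Adaptive Replacement Cache, its main in-memory cache for data and metadata, which is why arc_min/arc_max trade off directly against kmem. Its current size can be watched during a run. A sketch, assuming the stock FreeBSD ZFS sysctls; the pool and device names below are invented for illustration:)

```shell
# Current ARC size in bytes; re-run (or watch in a loop) during a benchmark:
sysctl kstat.zfs.misc.arcstats.size

# Hypothetical 8-disk layouts for the raidz1 vs. raidz2 comparison.
# Single-parity pool for the first set of runs:
zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7

# Then destroy it and rebuild with double parity for the second set:
# zpool destroy tank
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
```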