From owner-freebsd-fs@FreeBSD.ORG Sat Jan 22 11:10:48 2011
Date: Sat, 22 Jan 2011 03:10:45 -0800
From: Jeremy Chadwick
To: Karl Pielorz
Cc: freebsd-fs@freebsd.org
Subject: Re: Write cache, is write cache, is write cache?
Message-ID: <20110122111045.GA59117@icarus.home.lan>
References: <1ABA88EDF84B6472579216FE@Octa64>
In-Reply-To: <1ABA88EDF84B6472579216FE@Octa64>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Sat, Jan 22, 2011 at 10:39:13AM +0000, Karl Pielorz wrote:
> I've a small HP server I've been using recently (an NL36). I've got
> ZFS set up on it, and it runs quite nicely.
>
> I was using the server for zeroing some drives the other day - and
> noticed that a:
>
>   dd if=/dev/zero of=/dev/ada0 bs=2m
>
> gives around 12 Mbyte/sec throughput when that's all that's running
> on the machine.
>
> Looking in the BIOS, there is an "Enabled drive write cache" option -
> which was set to 'No'. Changing it to 'Yes', I now get around
> 90-120 Mbyte/sec doing the same thing.
>
> Knowing all the issues with IDE drives and write caches - is there
> any way of telling if this would be safe to enable with ZFS? (i.e.
> is the option likely to be making the drive completely ignore flush
> requests?) - or is it still honouring the various 'write through'
> options if set on data to be written?
>
> I'm presuming dd won't by default be writing the data with the
> 'flush' bit set - as it probably doesn't know about it.
>
> Is there any way of testing this? (say, using some tool to write the
> data with lots of 'cache flush' or 'write through' requests) - and
> seeing if the performance drops back to nearer the 12 Mbyte/sec?
>
> I've not enabled the option with the ZFS drives in the machine - I
> suppose I could test it.
>
> Write performance on the unit isn't that bad [it's not stunning] -
> though with 4 drives in a mirrored set, it probably helps hide some
> of the impact this option might have.

I'm stating the below with the assumption that you have SATA disks
attached to some form of AHCI-based controller (possibly Intel ICHxx or
ESBx on-board), and *not* a hardware RAID controller with cache/RAM of
its own:

Keep write caching *enabled* in the system BIOS.
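If you want to sanity-check what the drive itself reports (this assumes
the disks attach via ahci(4)/ada(4); ada0 here is just an example
device), camcontrol will show whether the on-disk write cache is
supported and currently enabled:

  camcontrol identify ada0 | grep -i "write cache"

Reasonably recent FreeBSD also has a kern.cam.ada.write_cache tunable
(-1 = leave the drive's setting alone, 0 = force it off, 1 = force it
on). Setting it in /boot/loader.conf would let you repeat your dd
comparison in both states without going back into the BIOS; I'm not
certain a runtime sysctl change takes effect before the next device
reattach, so treat the loader.conf route as the reliable one:

  # /boot/loader.conf -- force the drive write cache off for a test run
  kern.cam.ada.write_cache="0"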
ZFS will take care of any underlying "issues" in the case that the
system abruptly loses power (hard disk cache contents lost), since
you're using ZFS mirroring. The same would apply if you were using
raidz{1,2}, but not if you were using ZFS on a single device (no
mirroring/raidz). In that scenario, expect data loss; but the same
could be said of any non-journalling filesystem.

I have no idea why your BIOS setting for this option was disabled. I do
not know if it's the factory default either; you would have to talk to
HP about that, or spend the time figuring out who was in the system
BIOS last and how/if/why they messed around (the possibilities for why
the option is disabled are endless).

You can use bonnie++ (ports/benchmarks/bonnie++) if you wish to do
throughput and/or benchmark testing of sorts (see the P.S. below for a
sample invocation).

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.             PGP 4BD6C0CB   |
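P.S. A possible starting point for bonnie++, assuming the pool is
mounted at /tank (a hypothetical path; use a dataset on the pool you
actually want to measure) and the test file size is at least twice your
RAM so the ARC can't absorb the whole run:

  # -d test directory, -s total file size, -n 0 skips the small-file
  # tests, -u is the user to run as when started as root
  bonnie++ -d /tank/benchtest -s 16g -n 0 -u root

Run it once with the drive write cache enabled and once with it
disabled (via the BIOS option or the kern.cam.ada.write_cache tunable
mentioned above) and compare the sequential write numbers.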