From: Karl Pielorz <kpielorz_lst@tdx.co.uk>
To: freebsd-fs@freebsd.org
Date: Sat, 22 Jan 2011 10:39:13 +0000
Message-ID: <1ABA88EDF84B6472579216FE@Octa64>
Subject: Write cache, is write cache, is write cache?

Hi,

I've a small HP server I've been using recently (an NL36). I've got ZFS set
up on it, and it runs quite nicely.

I was using the server for zeroing some drives the other day, and noticed
that:

  dd if=/dev/zero of=/dev/ada0 bs=2m

gives around 12 Mbyte/sec throughput when that's all that's running on the
machine.

Looking in the BIOS, there's an "Enabled drive write cache" option, which was
set to 'No'. After changing it to 'Yes', I now get around 90-120 Mbyte/sec
doing the same thing.

Knowing all the issues with IDE drives and write caches - is there any way of
telling whether this is safe to enable with ZFS? (i.e. is the option likely to
make the drive completely ignore flush requests, or does the drive still
honour 'cache flush' / 'write through' requests when they are issued?)

I'm presuming dd won't, by default, be writing the data with any 'flush' bit
set, as it probably doesn't know about it.

Is there any way of testing this? (say, using some tool that writes the data
with lots of 'cache flush' or 'write through' requests, and seeing whether the
performance drops back to nearer the 12 Mbyte/sec figure?)

I've not enabled the option for the ZFS drives in the machine - I suppose I
could test it. Write performance on the unit isn't that bad [it's not
stunning] - though with 4 drives in a mirrored set, that probably helps hide
some of the impact this option might have.

-Kp
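
One way to approximate the test asked about above is a small C program that
writes to the device and calls fsync(2) after every write, on the assumption
that fsync() on the raw device node gets propagated down the stack as a cache
flush (BIO_FLUSH / ATA FLUSH CACHE) - that assumption is worth verifying
before trusting the numbers. This is only a rough sketch: the device path,
block size and write count are placeholders, and the program name is just an
example. With the BIOS option set to 'Yes', a drive that honours flushes
should fall back towards the uncached ~12 Mbyte/sec figure; if it stays up
near 90-120 Mbyte/sec, something is ignoring the flush requests. (camcontrol
identify ada0 should also show whether the drive itself reports write caching
as supported and enabled, independent of what the BIOS menu claims.)

/*
 * Rough sketch of a flush-heavy write test (hypothetical, untested).
 * Writes BLOCKS blocks of BLKSZ bytes to the device given on the command
 * line and calls fsync() after every write, on the assumption that
 * fsync(2) on the device node is propagated as a cache flush (BIO_FLUSH)
 * by the storage stack.
 * WARNING: this overwrites the start of whatever device you point it at.
 */
#include <sys/time.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLKSZ	(128 * 1024)		/* bytes per write() */
#define BLOCKS	2048			/* 2048 * 128k = 256 MB total */

int
main(int argc, char *argv[])
{
	struct timeval start, end;
	double secs, mb;
	char *buf;
	int fd, i;

	if (argc != 2)
		errx(1, "usage: %s /dev/adaX", argv[0]);
	if ((fd = open(argv[1], O_WRONLY)) == -1)
		err(1, "open %s", argv[1]);
	if ((buf = calloc(1, BLKSZ)) == NULL)
		err(1, "calloc");

	gettimeofday(&start, NULL);
	for (i = 0; i < BLOCKS; i++) {
		if (write(fd, buf, BLKSZ) != BLKSZ)
			err(1, "write");
		if (fsync(fd) == -1)		/* ask for a flush */
			err(1, "fsync");
	}
	gettimeofday(&end, NULL);

	secs = (end.tv_sec - start.tv_sec) +
	    (end.tv_usec - start.tv_usec) / 1e6;
	mb = (double)BLOCKS * BLKSZ / (1024 * 1024);
	printf("%.1f Mbyte/sec with a flush after every write\n", mb / secs);

	free(buf);
	close(fd);
	return (0);
}

Compile it with something like 'cc -o flushtest flushtest.c' (the name is just
an example) and run it against a scratch drive, once with the BIOS option set
to 'No' and once with it set to 'Yes'. If the two runs come out roughly the
same, the flushes appear to be reaching the platters either way; a large gap
suggests the flush requests are being dropped somewhere.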