From: Josh Carter <josh@multipart-mixed.com>
Date: Thu, 3 Dec 2009 10:00:21 -0700
To: Kai Gallasch, freebsd-fs <freebsd-fs@freebsd.org>
Subject: Re: questions using zfs on raid controllers without jbod option

Kai,

Does your controller have the option of creating a "volume" rather than a RAID0? The Adaptec and LSI cards I've tested have had the option of creating a simple concatenated volume of disks, thus bypassing any re-chunking of data. I created one volume per drive and performance was on par with using a non-RAID card. (As a side note, ZFS could push the drives harder as separate volumes than the card could push them using the hardware's RAID logic.)

The spikes you see in write performance are normal.
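For concreteness, here's a minimal sketch of that setup once the controller exposes each single-drive volume, using the da2-da6 device names from your machine; the pool name "tank" is just an example:

```shell
# One single-drive volume per physical disk, visible to FreeBSD as da2-da6.
# Build the raidz1 pool directly on those devices:
zpool create tank raidz1 da2 da3 da4 da5 da6

# Watch writes arrive in bursts as each transaction group flushes to disk:
zpool iostat -v tank 1
```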
ZFS gathers up individual writes and commits them to disk as transaction groups; when a transaction group flushes, you see the spike in iostat.

As for caching, I'd go ahead and turn on write caching on the RAID card if you've got a battery. To use write caching in ZFS effectively (i.e. with the ZIL) you need a very fast write device or you'll slow the system down. STEC Zeus solid-state drives make good ZIL devices, but they're super-expensive. I would let ZFS do its own caching on the read side.

Best regards,
Josh

On Dec 3, 2009, at 1:38 AM, Kai Gallasch wrote:

> Hi list.
>
> What's the best way to deploy ZFS on a server with a built-in RAID
> controller that lacks JBOD functionality?
>
> I am currently testing an HP/Compaq ProLiant server with a battery-backed
> SmartArray P400 controller (ciss) and 5 SAS disks which I use for a
> raidz1 pool.
>
> What I did was to create a RAID0 array on the controller for each disk,
> with the RAID0 chunk size set to 32K (those RAID0 drives show up as
> da2-da6 in FreeBSD), and used them for a raidz1 pool.
>
> Watching zpool iostat I can see that there are almost never
> continuous writes; most of the copied data is written in spikes of
> write operations. My guess is that this behaviour is caching-related
> and that it might be caused by the ZFS ARC and the RAID controller
> cache not playing well together.
>
> Questions:
>
> "RAID0 drives":
>
> - What's the best chunk size for a single RAID0 drive that is used as
>   a device for a pool? (I use 32K.)
>
> - Should the write cache on the physical disks that are used as RAID0
>   drives for ZFS be enabled if the RAID controller has a battery
>   backup unit? (I enabled the disk write cache for all disks.)
>
> RAID controller cache:
>
> My current setting for the RAID controller cache is "cache 50% reads
> and 50% writes".
>
> - Does it make sense to have caching of read and write ops enabled
>   with this setup?
> I wonder: shouldn't it be the job of the ZFS ARC to
>   do the caching?
>
> - Does ZFS prefetch make any sense if your RAID controller already
>   caches read operations?
>
> Cheers,
> Kai.
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
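A note for archive readers: the separate-ZIL arrangement Josh recommends above amounts to attaching a dedicated log device to the pool. A sketch, assuming the pool is named "tank" and the fast SSD shows up as the hypothetical device da7:

```shell
# Attach a dedicated log (ZIL) device so synchronous writes hit the
# fast SSD instead of the raidz1 vdev:
zpool add tank log da7
```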