Date:      Thu, 3 Dec 2009 10:00:21 -0700
From:      Josh Carter <josh@multipart-mixed.com>
To:        Kai Gallasch <gallasch@free.de>, freebsd-fs <freebsd-fs@freebsd.org>
Subject:   Re: questions using zfs on raid controllers without jbod option
Message-ID:  <661F4A80-846F-44B4-9FA9-E0E630B984B3@multipart-mixed.com>
In-Reply-To: <20091203093809.3d54ea2e@orwell.free.de>
References:  <20091203093809.3d54ea2e@orwell.free.de>

Kai,

Does your controller have the option of creating a "volume" rather
than a RAID0? On the Adaptec and LSI cards I've tested, there was an
option to create a simple concatenated volume of disks, which bypasses
any re-chunking of the data. I created one volume per drive and
performance was on par with using a non-RAID card. (As a side note,
ZFS could push the drives harder as separate volumes than the card
could when it handled the RAID itself.)
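
For example, once each drive is exported as its own volume, building
the pool is the same as with plain disks. A minimal sketch, assuming
the volumes show up as da2-da6 as in your setup ("tank" is just a
placeholder pool name):

  # one raidz1 vdev across the five single-drive volumes
  zpool create tank raidz1 da2 da3 da4 da5 da6

  # verify layout and health
  zpool status tank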

The spikes you see in write performance are normal. ZFS gathers up
individual writes and commits them to disk as transactions; when a
transaction flushes you see the spike in iostat.
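
You can watch those bursts directly. A quick sketch, again assuming
the pool is called "tank":

  # one-second samples; writes should arrive in bursts every few
  # seconds as each transaction group is committed
  zpool iostat -v tank 1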

As for caching, I'd go ahead and turn on write caching on the RAID
card if you've got a battery. To get an effective write cache out of
ZFS itself (i.e. with a dedicated ZIL device) you need a very fast log
device, or you'll slow the whole system down. STEC Zeus solid-state
drives make good ZIL devices, but they're super-expensive. On the read
side I would let ZFS do its own caching.
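
If you do get hold of a suitably fast device later, adding it as a
dedicated log vdev is a one-liner. A sketch only: "da7" here is a
hypothetical fast, power-protected SSD, not something in your current
setup.

  # put the ZIL on a separate fast device
  zpool add tank log da7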

Best regards,
Josh


On Dec 3, 2009, at 1:38 AM, Kai Gallasch wrote:

>
> Hi list.
>
> What's the best way to deploy zfs on a server with builtin raid
> controller and missing JBOD functionality?
>
> I am currently testing a hp/compaq proliant server with Battery Backed
> SmartArray P400 controller (ciss) and 5 sas disks which I use for a
> raidz1 pool.
>
> What I did was to create a raid0 array on the controller for each
> disk, with the raid0 chunk size set to 32K (those raid0 drives show
> up as da2-da6 in FreeBSD), and used them for a raidz1 pool.
>
> Watching zpool iostat I can see that there are almost never any
> continuous writes; most of the copied data is written in spikes of
> write operations. My guess is that this behaviour is caching-related
> and might be caused by the ZFS ARC and the RAID controller cache not
> playing well together.
>
> questions:
>
> "raid0 drives":
>
> - What's the best chunk size for a single raid0 drive that is used as
>  a device for a pool? (I use 32K)
>
> - Should the write cache on the physical disks that are used as raid0
>  drives for zfs be enabled if the raid controller has a battery
>  backup unit? (I enabled the disk write cache for all disks)
>
> raid controller cache:
>
> My current settings for the raid controller cache are: "cache 50%
> reads and 50% writes"
>
> - Does it make sense to have caching of read- and write-ops enabled
>  with this setup? I wonder: Shouldn't it be the job of the zfs arc to
>  do the caching?
>
> - Does zfs prefetch make any sense if your raid controller already
>  caches read operations?
>
>
> Cheers,
> Kai.
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"



