Date: Wed, 23 Jan 2013 17:22:45 +0100
From: Guido Falsi <mad@madpilot.net>
To: freebsd-fs@freebsd.org
Subject: Re: RFC: Suggesting ZFS "best practices" in FreeBSD
Message-ID: <51000E55.6070901@madpilot.net>
In-Reply-To: <81460DE8-89B4-41E8-9D93-81B8CC27AA87@baaz.fr>
References: <314B600D-E8E6-4300-B60F-33D5FA5A39CF@sarenet.es> <565CB55B-9A75-47F4-A88B-18FA8556E6A2@samsco.org> <81460DE8-89B4-41E8-9D93-81B8CC27AA87@baaz.fr>
On 01/23/13 16:16, Jean-Yves Moulin wrote:
> Hi,
>
> On 22 Jan 2013, at 15:33 , Scott Long <scottl@samsco.org> wrote:
>
>> Agree 200%. Despite the best effort of sales and marketing people,
>> RAID cards do not make good HBAs. At best they add latency. At
>> worst, they add a lot of latency and extra failure modes.
>
> But what about battery-backed cache RAID cards? They offer a
> non-volatile cache that improves writes, and this cache is safe
> because of the battery. This feature doesn't exist on bare disks.

"Safe" is optimistic. The battery can usually keep the cache memory
alive for 36-48 hours at most. Within that (short) time frame you need
to find identical hardware onto which to move the disks and the
controller without detaching the battery. In practice this means
keeping a second, diskless server around just in case you need to
recover. Also, the battery's expected life decreases over time.

Some vendors now sell solid-state cache memory, which can hold data
indefinitely. This is a more sensible approach (and looks very similar
to a dedicated ZIL device to me; a rough sketch of adding one follows
below). It still does not remove the need to find identical hardware
onto which to move the disks and controller to recover the array,
though.

This is the one aspect in which open-source software RAID is better:
any hardware with enough connectors of the correct kind will do for
recovery... well, and enough RAM too.
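For illustration, here is a minimal sketch of attaching a dedicated
log device to a pool. The pool name "tank" and the device paths are
hypothetical, so adjust them to your setup:

    # add a dedicated intent log (SLOG) device to the pool
    zpool add tank log /dev/ada2

    # or, better, a mirrored pair so a single SSD failure
    # cannot take the log with it:
    # zpool add tank log mirror /dev/ada2 /dev/ada3

    # verify that the log vdev is now part of the pool
    zpool status tank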
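And the recovery story for ZFS is just an import on whatever machine
can see the disks; again, "tank" is a hypothetical pool name:

    # move the disks to any box with enough ports, then:
    zpool import        # lists pools found on the attached disks
    zpool import tank   # import the pool by name

    # if the old machine died and the pool was never exported:
    zpool import -f tank

--
Guido Falsi <mad@madpilot.net>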