Date: Wed, 10 Feb 2010 11:27:53 +0100
From: Pieter de Goeje <pieter@service2media.com>
To: freebsd-stable@freebsd.org
Cc: "Peter C. Lai" <peter@simons-rock.edu>, Charles Sprickman <spork@bway.net>,
    Boris Kochergin <spawk@acm.poly.edu>, Dan Langille <dan@langille.org>
Subject: Re: hardware for home use large storage
Message-ID: <201002101127.53444.pieter@service2media.com>
In-Reply-To: <4B723609.8010802@langille.org>
References: <4B6F9A8D.4050907@langille.org> <4B718EBB.6080709@acm.poly.edu> <4B723609.8010802@langille.org>
On Wednesday 10 February 2010 05:28:57 Dan Langille wrote:
> Boris Kochergin wrote:
> > Peter C. Lai wrote:
> >> On 2010-02-09 06:37:47AM -0500, Dan Langille wrote:
> >>> Charles Sprickman wrote:
> >>>> On Mon, 8 Feb 2010, Dan Langille wrote:
> >>>> Also, it seems like people who use zfs (or gmirror + gstripe)
> >>>> generally end up buying pricey hardware raid cards for
> >>>> compatibility reasons. There seem to be no decent add-on SATA
> >>>> cards that play nice with FreeBSD other than that weird
> >>>> supermicro card that has to be physically hacked about to fit.
> >>
> >> Mostly only because certain cards have issues with shoddy JBOD
> >> implementations. Some cards (most notably ones like the Adaptec
> >> 2610A, which was rebranded by Dell as the "CERC SATA 1.5/6ch" back
> >> in the day) won't let you run the drives in passthrough mode and
> >> all seem to want to stick their grubby little RAID paws into your
> >> JBOD setup (i.e. the only way to get minimal participation from
> >> the "hardware" RAID is to set each disk as its own RAID-0 volume
> >> in the controller BIOS), which then cascades into issues with
> >> SMART, AHCI, "triple caching"/write reordering, etc. on the
> >> FreeBSD side (the controller's own craptastic cache, the ZFS vdev
> >> cache, the vmm/app cache, oh my!). So *some* people go with
> >> something tried-and-true (basically bordering on server-level
> >> cards that let you ditch any BIOS type of RAID config and present
> >> the raw disk devices to the kernel).
> >
> > As someone else has mentioned, recent SiI stuff works well. I have
> > multiple http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008
> > cards servicing RAID-Z2 and GEOM_RAID3 arrays on 8.0-RELEASE and
> > 8.0-STABLE machines using both the old ata(4) driver and ATA_CAM.
> > Don't let the RAID label scare you--that stuff is off by default
> > and the controller just presents the disks to the operating
> > system. Hot swap works.
> > I haven't had the time to try the siis(4) driver for them, which
> > would result in better performance.
>
> That's a really good price. :)
>
> If needed, I could host all eight SATA drives for $160, much cheaper
> than any of the other RAID cards I've seen.
>
> The issue then is finding a motherboard which has 4x PCI Express
> slots. ;)

You should be able to put a PCIe x4 card in a PCIe x16 or x8 slot. For
an explanation, allow me to quote Wikipedia:

"A PCIe card will fit into a slot of its physical size or bigger, but
may not fit into a smaller PCIe slot. Some slots use open-ended sockets
to permit physically longer cards and will negotiate the best available
electrical connection. The number of lanes actually connected to a slot
may also be less than the number supported by the physical slot size.
An example is a ×8 slot that actually only runs at ×1; these slots will
allow any ×1, ×2, ×4 or ×8 card to be used, though only running at the
×1 speed. This type of socket is described as a ×8 (×1 mode) slot,
meaning it physically accepts up to ×8 cards but only runs at ×1 speed.
The advantage gained is that a larger range of PCIe cards can still be
used without requiring the motherboard hardware to support the full
transfer rate—in so doing keeping design and implementation costs
down."

-- Pieter
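[Editor's note: the lane-count trade-off above can be made concrete with a
back-of-the-envelope sketch. This assumes first-generation PCIe, where each
lane runs at 2.5 GT/s with 8b/10b encoding, giving roughly 250 MB/s of
usable payload bandwidth per lane per direction; the numbers are
illustrative, not measured.]

```shell
# Rough PCIe 1.x payload throughput per link width.
# Assumption: ~250 MB/s usable per lane (2.5 GT/s, 8b/10b encoding).
PER_LANE=250
for lanes in 1 2 4 8; do
    echo "x${lanes} link: $((PER_LANE * lanes)) MB/s"
done
# prints x1 link: 250 MB/s ... up to x8 link: 2000 MB/s
```

So an x4 card dropped into a "×8 (×1 mode)" slot negotiates down to about
250 MB/s total, which eight SATA drives could easily saturate during a
scrub or rebuild, while a slot with all four lanes wired gives roughly
1000 MB/s.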
