Date: Fri, 16 Sep 2011 22:21:32 -0700
From: Julian Elischer <julian@freebsd.org>
To: Joshua Boyd <boydjd@jbip.net>
Cc: Jason Usher <jusher71@yahoo.com>, "freebsd-fs@freebsd.org" <freebsd-fs@freebsd.org>
Subject: Re: ZFS on FreeBSD hardware model for 48 or 96 sata3 paths...
Message-ID: <4E742E5C.2010900@freebsd.org>
In-Reply-To: <CAHcKe7kfdirJL-vPw=pnFvtzo7ZouDQNPkLVbKs_s35Amz40NQ@mail.gmail.com>
References: <1316222526.31565.YahooMailNeo@web121205.mail.ne1.yahoo.com> <CAHcKe7kfdirJL-vPw=pnFvtzo7ZouDQNPkLVbKs_s35Amz40NQ@mail.gmail.com>
On 9/16/11 8:45 PM, Joshua Boyd wrote:
> On Fri, Sep 16, 2011 at 9:22 PM, Jason Usher <jusher71@yahoo.com> wrote:
>
>> Hello,
>>
>> I am building my first FreeBSD-based ZFS system and am deciding on a
>> hardware model. The overriding requirements are:
>>
>> 1) immediately support 48 internal SATA3 drives at full bandwidth -
>> every drive has an independent path to the CPU
>>
>> 2) future expansion to support another 48 drives on an attached JBOD,
>> all of which ALSO have their own independent path to the CPU
>>
>> The first question is: how many PCIe 2.0 lanes does a motherboard
>> need to run 96 independent SATA3 connections? Am I correct that this
>> is extremely important?
>>
>> Next, I see a lot of implementations done with LSI adapters - is this
>> as simple as choosing (3) LSI SAS 9201-16i for the 48 internal drives
>> and (3) LSI SAS 9201-16e for the external drives? I remember that
>> these cards are supported with mps(4) in FreeBSD, but only in 9.x (?)
>> - is that still the case, or is that support in 8.2, or later in 8.3?
>>
>> I will boot off a pair of mirrored SSDs formatted UFS2 - easy. But I
>> would also like to spec and use a ZIL+L2ARC and am not sure where to
>> go ... the system will be VERY write-biased and use a LOT of inodes -
>> so lots of scanning of large dirs with lots of inodes, and writing
>> data. Something like 400 million inodes on a filesystem with an
>> average file size of 150 KB.
>>
>> - can I just skip the L2ARC and just add more RAM? Wouldn't the RAM
>> always be faster/better? Or do folks build such large L2ARCs
>> (4x 200 GB SSD?) that it outweighs an extra 32 GB of RAM?
>>
>> - provided I maintain the free PCIe slot(s) and/or free 2.5" drive
>> slots, can I always just add a ZIL after the fact? I'd prefer to skip
>> it for now and save that complexity for later...
>>
>> Thanks very much for any comments/suggestions.
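The lane question above can be answered with back-of-envelope arithmetic. A sketch, using assumed round numbers rather than vendor specs: ~600 MB/s usable per SATA3 drive and ~500 MB/s usable per PCIe 2.0 lane (both links use 8b/10b encoding):

```shell
# Back-of-envelope PCIe lane math -- the per-link figures are
# approximations, not measured numbers:
#   SATA3:      6 Gb/s, 8b/10b encoding -> ~600 MB/s usable per drive
#   PCIe 2.0:   5 GT/s, 8b/10b encoding -> ~500 MB/s usable per lane
drives=96
per_drive_mbs=600
per_lane_mbs=500

total_mbs=$((drives * per_drive_mbs))
# lanes needed, rounded up
lanes=$(( (total_mbs + per_lane_mbs - 1) / per_lane_mbs ))

echo "aggregate: ${total_mbs} MB/s -> ~${lanes} PCIe 2.0 lanes"
```

That works out to roughly 116 lanes for 96 drives at full streaming rate, far more than any single board provides. In practice each 16-port HBA sits in an x8 slot (~4 GB/s), so six x8 cards give 48 lanes and the drives are oversubscribed a bit over 2:1 at sequential peak, which is why the lane count is worth thinking about but full non-blocking bandwidth is rarely the real target.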
>>
>> _______________________________________________
>> freebsd-fs@freebsd.org mailing list
>> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
>> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
>>
> I've built something similar, using 3 Supermicro SC933 chassis, 2 HP
> SAS expanders, 2 AOC-USAS-L8i cards, and 1 card with 2 external ports
> (I can't remember the exact name, but it's an LSI chipset card). This
> is a 45-drive-capable setup, so smaller than what you're wanting.
>
> I'd recommend you get two of these:
>
> http://www.supermicro.com/products/chassis/4U/847/SC847E26-RJBOD1.cfm
>
> That gives you 90 drives in 8U. They each have dual-port expanders
> integrated into the backplanes. Then build a separate 1U or 2U box
> that holds your boot drives/cache drives. In this box put 2 cards
> with external 6 Gb/s SAS connectors. Something like the 9750-8E,
> which are 6 Gbit/s cards and support drives bigger than 2 TB. You'll
> need to run 8-STABLE, as these cards use the mptsas driver, which
> isn't in 8-RELEASE last I checked.
>
> I don't have any experience with separate cache/log devices, so I
> can't offer much advice there.
>
What is it you are trying to achieve? Large storage, or high
transaction rates? (Or both?)

I'm biased, but I'd put a 160 GB ZIL on a Fusion-io card and dedicate
8 GB of the RAM to its use. It's remarkable what a 20 usec turnaround
time on your metadata can do.
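On the add-a-ZIL-later question: yes, log (and cache) devices can be attached to an existing pool after the fact. A minimal sketch, where "tank" and the gpt/ labels are placeholders, not names from this thread:

```shell
# Attach a mirrored log (ZIL) vdev to an existing pool after the fact.
# "tank" and the gpt/ labels are hypothetical for this sketch.
zpool add tank log mirror gpt/slog0 gpt/slog1

# An L2ARC cache device can likewise be added (and removed) at any time:
zpool add tank cache gpt/l2arc0
zpool remove tank gpt/l2arc0

# Verify the resulting layout:
zpool status tank
```

One caveat worth checking first: cache devices have always been removable, but removing a log vdev requires a pool version that supports log removal, so on an older 8.x pool a ZIL added this way may not be detachable later.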