Date:      Tue, 10 Jul 2012 15:31:09 -0400
From:      Rich <rercola@pha.jhu.edu>
To:        Jason Usher <jusher71@yahoo.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: chaining JBOD chassis to server ... why am I scared ? (ZFS)
Message-ID:  <CAOeNLupHFmVH+QK9n0sEPCiKp=fu6WMLyPq1Mz74yStvuHdYcg@mail.gmail.com>
In-Reply-To: <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com>
References:  <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com>

There's not really a visible difference between the head node or
JBOD(s) losing power and, e.g., a backplane failure: power is lost,
and what happens next depends on your disk configuration and how the
vdevs are distributed.

The Supermicro chassis in question have the option to take 2.5"
internal drives if desired, which is where I'd suggest putting
root+(SLOG,L2ARC) [though the 36-drive ones don't, IIRC].
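As a sketch of that layout (device names here are hypothetical; adjust
for your own hardware and disk counts), the JBOD disks would form the
raidz2 data vdevs while the internal 2.5" drives carry the log and
cache vdevs:

```shell
# Hypothetical sketch: da0..da11 = JBOD data disks,
# ada0/ada1 = internal 2.5" SSDs partitioned for SLOG + L2ARC.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 da5 \
    raidz2 da6 da7 da8 da9 da10 da11

# Mirror the SLOG -- losing it at the wrong moment can cost recent
# sync writes.  L2ARC can stay unmirrored; it only caches reads.
zpool add tank log mirror ada0p1 ada1p1
zpool add tank cache ada0p2 ada1p2
```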

You may also get more mileage out of the 9207-8[ie] - not much cost
difference, and it's PCIe gen 3 with a newer chip. [Can't speak to how
it performs in practice other than that reviews seem to be positive;
mine haven't arrived yet.]
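On the "what happens when the JBOD drops" question: a minimal sketch
of the knobs I'd look at first (the property and commands are standard
zpool(8); exact v28 behavior is best verified on your own hardware):

```shell
# failmode controls what ZFS does when all paths to a pool's devices
# fail: wait (default) blocks I/O until the devices return, continue
# returns EIO to new writes, panic panics the box.  "wait" generally
# lets the pool resume cleanly once the JBOD is reattached.
zpool set failmode=wait tank

# After power/cabling is restored, check device states and clear the
# accumulated errors so the pool resumes normal operation:
zpool status -x tank
zpool clear tank
```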

- Rich

On Tue, Jul 10, 2012 at 2:57 PM, Jason Usher <jusher71@yahoo.com> wrote:
> The de-facto configuration the smart folks are using for ZFS seems to be:
>
> - 16/24/36 drive supermicro chassis
> - LSI 9211-8i internal cards
> - ZFS and probably raidz2 or raidz3 vdevs
>
> Ok, fine.  But then I see some even smarter folks attaching the 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card that has an external SAS cable.
>
> So ... 84 drives accessible to ZFS on one system.  In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc.
>
> But this scares me ...
>
> - two different power sources - so the "head unit" can lose power independent of the JBOD device ... how well does that turn out ?
>
> - external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ?
>
> - If you have a single SLOG, or a single L2ARC device, where do you put it ?  And then what happens if "the other half" of the system detaches from the half that the SLOG/L2ARC is in ?
>
> - ... any number of other weird things ?
>
>
> Just how well does ZFS v28 deal with these kinds of situations, and do I have a good reason to be awfully shy about doing this ?
>
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"


