Date:      Tue, 10 Jul 2012 11:57:37 -0700 (PDT)
From:      Jason Usher <jusher71@yahoo.com>
To:        freebsd-fs@freebsd.org
Subject:   chaining JBOD chassis to server ... why am I scared ?   (ZFS)
Message-ID:  <1341946657.18535.YahooMailClassic@web122505.mail.ne1.yahoo.com>

The de-facto configuration the smart folks are using for ZFS seems to be:

- 16/24/36 drive supermicro chassis
- LSI 9211-8i internal cards
- ZFS, probably with raidz2 or raidz3 vdevs (rough sketch of the layout below)
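
(For reference, a minimal sketch of the kind of pool layout I have in mind -- the device names here are just placeholders, not disks on any real box:)

  # two 6-disk raidz2 vdevs; repeat the pattern across the chassis
  zpool create tank \
      raidz2 da0 da1 da2 da3 da4 da5 \
      raidz2 da6 da7 da8 da9 da10 da11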

Ok, fine.  But then I see some even smarter folks attaching a 48-drive 4U JBOD chassis to this configuration, probably using a different LSI card, one with external SAS connectors.

So ... 84 drives accessible to ZFS on one system.  In terms of space and money efficiency, it sounds really great - fewer systems to manage, etc.

But this scares me ...

- two different power sources - so the "head unit" can lose power independently of the JBOD device ... how well does that turn out ?

- external cabling - has anyone just yanked that external SAS cable a few times, and what does that look like ?

- If you have a single SLOG, or a single L2ARC device, where do you put it ?  (see the sketch after this list for what I mean)  And then what happens if "the other half" of the system detaches from the half that the SLOG/L2ARC is in ?

- ... any number of other weird things ?
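
(By a single SLOG / L2ARC device I mean something added like this -- again, placeholder device names:)

  # dedicated intent-log (SLOG) and cache (L2ARC) devices
  zpool add tank log ada0
  zpool add tank cache ada1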


Just how well does ZFS v28 deal with these kinds of situations, and do I have a good reason to be awfully shy about doing this ?
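
(My guess at the recovery path, if the external side drops and then comes back -- I'd love to hear whether this is realistic:)

  # check pool health once the JBOD devices reappear
  zpool status -x tank
  # clear transient checksum/IO errors
  zpool clear tank
  # worst case: recovery-mode import, rewinding to an earlier txg
  zpool import -F tank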




