Date:      Mon, 19 Sep 2011 12:11:42 -0700 (PDT)
From:      Jason Usher <jusher71@yahoo.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS on FreeBSD hardware model for 48 or 96 sata3 paths...
Message-ID:  <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com>
In-Reply-To: <alpine.GSO.2.01.1109191403020.7097@freddy.simplesystems.org>




--- On Mon, 9/19/11, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:


> > Hmmm... I understand this, but is there not any data
> > that might transfer from multiple magnetic disks,
> > simultaneously, at 6Gb/s, that could periodically max out
> > the card bandwidth?  As in, all drives in a 12-drive array
> > perform an operation on their built-in cache simultaneously?
> 
> The best way to deal with this is careful ZFS pool design,
> so that disks expected to perform related operations (e.g.
> those in the same vdev) are split across interface cards and
> I/O channels.  This also helps with reliability.


Understood.
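
If I follow, then with (say) two HBAs and hypothetical device names
(da0-da5 on the first card, da6-da11 on the second), that would look
something like:

    # each mirror pair has one disk on each HBA, so I/O for any
    # pair is spread across both cards, and losing one card still
    # leaves every mirror with a working side
    zpool create tank \
        mirror da0 da6 \
        mirror da1 da7 \
        mirror da2 da8 \
        mirror da3 da9 \
        mirror da4 da10 \
        mirror da5 da11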

But again, can't that all be sidestepped entirely with a one drive / one path build?  Since that doesn't add cost per drive or per card ... only per motherboard ... it seems an easy cost to swallow, even if the situation where it matters is a rare edge case.

Presuming I can *find* a 112+ lane mobo, I assume it would cost at worst double ($800ish instead of $400ish) what a mobo with fewer PCIe lanes does...
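
For reference, my rough per-lane numbers (assumptions on my part,
please correct me if they're off):

    SATA3 burst per drive               ~600 MB/s  (6 Gb/s)
    PCIe 2.0 lane (8b/10b overhead)     ~500 MB/s
    PCIe 3.0 lane (128b/130b overhead)  ~985 MB/s

    96 drives x 1 lane each = 96 lanes for the drives alone,
    before counting anything else (NICs, etc.)

So a single PCIe 2.0 lane per drive still can't quite carry a full
6Gb/s cache burst, though it's several times what the platters can
sustain (~150 MB/s or so).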


