Date:      Mon, 19 Sep 2011 12:11:42 -0700 (PDT)
From:      Jason Usher <jusher71@yahoo.com>
To:        freebsd-fs@freebsd.org
Subject:   Re: ZFS on FreeBSD hardware model for 48 or 96 sata3 paths...
Message-ID:  <1316459502.23423.YahooMailClassic@web121212.mail.ne1.yahoo.com>
In-Reply-To: <alpine.GSO.2.01.1109191403020.7097@freddy.simplesystems.org>

--- On Mon, 9/19/11, Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:

> > Hmmm... I understand this, but is there not any data
> > that might transfer from multiple magnetic disks,
> > simultaneously, at 6Gb/s, that could periodically max out
> > the card bandwidth?  As in, all drives in a 12-drive
> > array perform an operation on their built-in cache
> > simultaneously?
>
> The best way to deal with this is by careful zfs pool
> design so that disks that can be expected to perform
> related operations (e.g. in same vdev) are carefully split
> across interface cards and I/O channels.  This also helps
> with reliability.

Understood.
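
For concreteness, the layout Bob describes might look like this on a
12-disk box (device names hypothetical; assumes da0-da5 sit on HBA #1
and da6-da11 on HBA #2):

  zpool create tank \
    mirror da0 da6 \
    mirror da1 da7 \
    mirror da2 da8 \
    mirror da3 da9 \
    mirror da4 da10 \
    mirror da5 da11

Each mirror then spans both cards, so heavy intra-vdev traffic (like a
resilver) is split across the two uplinks, and a dead HBA degrades
every vdev rather than destroying any of them.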
But again, can't all of that be dismissed completely by having a one
drive / one path build?  And since that adds no extra cost per drive,
or per card ... only per motherboard ... it seems an easy cost to
swallow, even if the case where it matters is a rare edge case.

Presuming I can *find* a 112+ lane mobo, I assume the cost would be at
worst double ($800ish instead of $400ish) that of a mobo with fewer
PCIe lanes...
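
For the bandwidth question above, a back-of-the-envelope check
(assuming a PCIe 2.0 x8 HBA such as the common SAS2008-based cards,
and 8b/10b framing on both the SATA and PCIe links):

  12 drives x 6 Gb/s SATA3     = 72 Gb/s raw, ~7.2 GB/s usable
  PCIe 2.0 x8 = 8 x 5 GT/s     = 40 Gb/s raw, ~4.0 GB/s usable

So twelve drives all bursting from their on-board caches at once could
in theory oversubscribe a single x8 card by roughly 1.8:1, which is
exactly the scenario a one-lane-per-drive build sidesteps ... at the
price of hunting down that 112+ lane board.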


