Date:      Tue, 20 Nov 2012 13:59:54 -0500
From:      George Neville-Neil <gnn@neville-neil.com>
To:        Daichi GOTO <daichi@bsdconsulting.co.jp>
Cc:        freebsd-infiniband@FreeBSD.org
Subject:   Re: Infiniband experiment environment setting up
Message-ID:  <2A385CE3-72E8-4D40-8413-69E00841536D@neville-neil.com>
In-Reply-To: <20121119115350.3456c580209197b4aea33d75@bsdconsulting.co.jp>
References:  <20121119115350.3456c580209197b4aea33d75@bsdconsulting.co.jp>


On Nov 18, 2012, at 21:53, Daichi GOTO <daichi@bsdconsulting.co.jp> wrote:

> Hi,
>
>  Currently, we are getting ready to test InfiniBand on FreeBSD
> using the following devices:
>
> 	CPU:		Intel Xeon E3-1240v2
> 	M/B:		Super Micro X9SCM
> 	RAM:		Kingston KVR16E11/8 (8GB x 4)
>
> 	IB Card:	Mellanox Technologies ConnectX-2 VPI MHQH19B-XTR
> 	IB Card:	Mellanox Technologies ConnectX-3 VPI MCX353A-QCBT
> 	IB Switch:	Mellanox Technologies EXW-IS5022
> 	QSFP cable:	MC22096-130-001
>
>  Is there any information about the above InfiniBand devices?

Well, it's only the cards you have to worry about.  I know the ConnectX-2
works, but I have not tested the ConnectX-3.
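
For reference, this is roughly the kernel configuration I would expect
you to need for the mlx4-based ConnectX cards on a 9.x system.  The
option and device names below are from memory, so verify them against
sys/conf/NOTES in your source tree (you also need to build world with
the OFED knob, e.g. WITH_OFED in src.conf):

	# Sketch of a FreeBSD 9.x kernel config fragment for Mellanox
	# ConnectX cards; names are from memory, check sys/conf/NOTES.
	options 	OFED		# OFED / InfiniBand stack
	options 	IPOIB_CM	# IP-over-IB connected mode
	options 	SDP		# Sockets Direct Protocol (optional)
	device		mlx4ib		# ConnectX InfiniBand ports
	device		mlxen		# ConnectX Ethernet ports (VPI cards)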

Also, note that it depends on what you are testing.  InfiniBand has low
latency at small payloads.  The latency numbers Mellanox quotes depend on
you sending a single byte (yes, 1 byte).  At realistic packet sizes,
InfiniBand is no faster than 10GbE.
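
If you want to see the effect yourself, a plain TCP ping-pong run once
across the IPoIB interface and once across a 10GbE interface will show
it.  Below is a rough sketch of the client side (the peer address and
port are placeholders; point it at any echo server on the far end):

	/*
	 * Minimal TCP round-trip timer.  Sends a message of each size
	 * ITERS times and reports the average round-trip latency.
	 */
	#include <sys/socket.h>
	#include <netinet/in.h>
	#include <netinet/tcp.h>
	#include <arpa/inet.h>
	#include <err.h>
	#include <stdio.h>
	#include <string.h>
	#include <time.h>
	#include <unistd.h>

	#define ITERS	1000

	int
	main(void)
	{
		size_t sizes[] = { 1, 64, 512, 4096, 65536 };
		char buf[65536];
		struct sockaddr_in sin;
		int s, one = 1;

		memset(buf, 0, sizeof(buf));
		s = socket(AF_INET, SOCK_STREAM, 0);
		if (s < 0)
			err(1, "socket");
		/* Disable Nagle so small messages go out immediately. */
		setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

		memset(&sin, 0, sizeof(sin));
		sin.sin_family = AF_INET;
		sin.sin_port = htons(9999);	/* placeholder port */
		/* Placeholder peer; use the IPoIB or 10GbE address. */
		inet_pton(AF_INET, "192.168.0.2", &sin.sin_addr);
		if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
			err(1, "connect");

		for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
			struct timespec t0, t1;
			size_t len = sizes[i];

			clock_gettime(CLOCK_MONOTONIC, &t0);
			for (int n = 0; n < ITERS; n++) {
				ssize_t off, r;

				if (write(s, buf, len) != (ssize_t)len)
					err(1, "write");
				/* Read until the full echo comes back. */
				for (off = 0; off < (ssize_t)len; off += r) {
					r = read(s, buf + off, len - off);
					if (r <= 0)
						err(1, "read");
				}
			}
			clock_gettime(CLOCK_MONOTONIC, &t1);

			double us = ((t1.tv_sec - t0.tv_sec) * 1e9 +
			    (t1.tv_nsec - t0.tv_nsec)) / 1e3 / ITERS;
			printf("%6zu bytes: %8.1f us round trip\n", len, us);
		}
		close(s);
		return (0);
	}

At 1 byte the IPoIB path should win clearly; by the time you are at a
few kilobytes per message, the two interfaces converge, which is the
point I was making above.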

Best,
George


