Date:      Sat, 7 Sep 2019 20:26:46 -0500
From:      Jason Bacon <bacon4000@gmail.com>
To:        John Fleming <john@spikefishsolutions.com>, freebsd-infiniband@freebsd.org
Subject:   Re: Just joined the infiniband club
Message-ID:  <00acac6f-3f13-a343-36c5-00fe45620eb0@gmail.com>
In-Reply-To: <CABy3cGxXa8J1j+odmfdQ6b534BiPwOMUAMOYqXKMD6zGOeBE3w@mail.gmail.com>
References:  <CABy3cGxXa8J1j+odmfdQ6b534BiPwOMUAMOYqXKMD6zGOeBE3w@mail.gmail.com>

On 2019-09-07 19:00, John Fleming wrote:
> Hi all, I've recently joined the club. I have two Dell R720s connected
> directly to each other. The card is a ConnectX-4. I was having a lot
> of problems with network drops. Where I'm at now is I'm running
> FreeBSD 12-STABLE as of a week ago, and the cards have been cross-flashed
> with OEM firmware (these are Lenovo, I think) and I'm no longer getting
> network drops. This box is basically my storage server. It's exporting
> a RAID 10 ZFS volume to a Linux compute box (19.04, 5.0.0-27-generic)
> which is running GNS3 for a lab.
>
> So many questions.. sorry if this is a bit rambly!
>
> From what I understand this card is really 4 x 25 gig lanes. If I
> understand that correctly, then one data transfer should be able to do
> at most 25 gig (best case), correct?
>
> I'm not getting what the difference between connected mode and
> datagram mode is. Does this have anything to do with the card
> operating in InfiniBand mode vs. Ethernet mode? FreeBSD is using the
> modules compiled in connected mode with the shell script (which is
> really a bash script, not an sh script) from the freebsd-infiniband page.

Nothing to do with Ethernet...

Google turned up a brief explanation here:

https://wiki.archlinux.org/index.php/InfiniBand

Those are my module building scripts on the wiki.  What bash extensions
did you see?
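
In case a concrete check helps: with IPoIB, datagram mode caps the MTU at
roughly 2044 bytes, while connected mode allows a much larger MTU (which is
why you're hitting the 2044 limit below).  A rough sketch, assuming the
IPoIB interface is named ib0 on both ends (adjust to match yours; untested
as typed):

    # Linux: inspect and switch the IPoIB mode via sysfs
    cat /sys/class/net/ib0/mode              # prints "datagram" or "connected"
    echo connected > /sys/class/net/ib0/mode # may require the link to be down
    ip link set ib0 mtu 16384

    # FreeBSD: connected mode comes from building with the IPOIB_CM option,
    # which is what the module scripts on the wiki do; then just raise the MTU
    ifconfig ib0 mtu 16384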
>
> The Linux box complains if the MTU is over 2044, with "expect multicast
> drops" or something like that, so the MTU on both boxes is set to 2044.
>
> Everything I'm reading makes it sound like there is no RDMA support in
> FreeBSD, or maybe that was no NFS-over-RDMA support. Is that correct?
RDMA is inherent in InfiniBand AFAIK.  Last I checked, there was no
support in FreeBSD for NFS over RDMA, but news travels slowly in this
group, so a little digging might prove otherwise.
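
If you want to confirm the RDMA/verbs layer itself is alive (independent
of NFS), the usual OFED diagnostics work on both ends.  Assuming the
libibverbs/infiniband-diags userland is installed (ports on FreeBSD,
distro packages on Linux):

    ibstat        # port state and link rate; a 4 x 25 card should report Rate: 100
    ibv_devinfo   # verbs view of the HCA, firmware level, and ports

Whether NFS can use RDMA is a separate question from RDMA being present,
so a Linux-side proto=rdma mount option won't buy anything from a server
that only speaks NFS over TCP.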
>
> So far it seems like these cards struggle to fill a 10 gig pipe. Using
> iperf (2) the best I'm getting is around 6 Gbit/sec. The interfaces
> aren't showing drops on either end. It doesn't seem to matter if I do
> 1, 2 or 4 threads in iperf.
You'll need both ends in connected mode with a fairly large MTU to get
good throughput.  CentOS defaults to 64k, but FreeBSD is unstable at
that size last I checked.  I got good results with 16k.

My FreeBSD ZFS NFS server performed comparably to the CentOS servers,
with some buffer space errors causing the interface to shut down (under
the same loads that caused CentOS servers to lock up completely).
Someone mentioned that this buffer space bug has been fixed, but I no
longer have a way to test it.
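
For a quick before/after comparison once both ends are in connected mode
(same ib0 assumption as above, commands untested as typed):

    # raise the MTU on both sides to the 16k that worked well for me
    ifconfig ib0 mtu 16384            # FreeBSD
    ip link set ib0 mtu 16384         # Linux

    # then re-run the test with a few parallel streams and a bigger window
    iperf -s                          # on the server
    iperf -c <server-ib-address> -P 4 -w 1M -t 30

At a 2044-byte MTU you pay per-packet overhead on every frame, which is
consistent with topping out around 6 Gbit/sec.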

Best,

    Jason

-- 
Earth is a beta site.




