Date:      Sun, 23 Aug 2015 19:02:53 -0400 (EDT)
From:      Rick Macklem <rmacklem@uoguelph.ca>
To:        Daniel Braniss <danny@cs.huji.ac.il>
Cc:        pyunyh@gmail.com, Hans Petter Selasky <hps@selasky.org>,  FreeBSD stable <freebsd-stable@freebsd.org>,  FreeBSD Net <freebsd-net@freebsd.org>,  Slawa Olhovchenkov <slw@zxy.spb.ru>, Gleb Smirnoff <glebius@FreeBSD.org>,  Christopher Forgeron <csforgeron@gmail.com>
Subject:   Re: ix(intel) vs mlxen(mellanox) 10Gb performance
Message-ID:  <1815942485.29539597.1440370972998.JavaMail.zimbra@uoguelph.ca>
In-Reply-To: <49173B1F-7B5E-4D59-8651-63D97B0CB5AC@cs.huji.ac.il>
References:  <1D52028A-B39F-4F9B-BD38-CB1D73BF5D56@cs.huji.ac.il> <55D43615.1030401@selasky.org> <2013503980.25726607.1439989235806.JavaMail.zimbra@uoguelph.ca> <20150820023024.GB996@michelle.fasterthan.com> <1153838447.28656490.1440193567940.JavaMail.zimbra@uoguelph.ca> <15D19823-08F7-4E55-BBD0-CE230F67D26E@cs.huji.ac.il> <818666007.28930310.1440244756872.JavaMail.zimbra@uoguelph.ca> <49173B1F-7B5E-4D59-8651-63D97B0CB5AC@cs.huji.ac.il>

Daniel Braniss wrote:
>
> > On 22 Aug 2015, at 14:59, Rick Macklem <rmacklem@uoguelph.ca> wrote:
> >
> > Daniel Braniss wrote:
> >>
> >>> On Aug 22, 2015, at 12:46 AM, Rick Macklem <rmacklem@uoguelph.ca> wrote:
> >>>
> >>> Yonghyeon PYUN wrote:
> >>>> On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote:
> >>>>> Hans Petter Selasky wrote:
> >>>>>> On 08/19/15 09:42, Yonghyeon PYUN wrote:
> >>>>>>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote:
> >>>>>>>> On 08/18/15 23:54, Rick Macklem wrote:
> >>>>>>>>> Ouch! Yes, I now see that the code that counts the # of mbufs is
> >>>>>>>>> before the code that adds the tcp/ip header mbuf.
> >>>>>>>>>
> >>>>>>>>> In my opinion, this should be fixed by setting if_hw_tsomaxsegcount
> >>>>>>>>> to whatever the driver provides - 1. It is not the driver's
> >>>>>>>>> responsibility to know if a tcp/ip header mbuf will be added, and it
> >>>>>>>>> is a lot less confusing than expecting the driver author to know to
> >>>>>>>>> subtract one. (I had mistakenly thought that tcp_output() had added
> >>>>>>>>> the tcp/ip header mbuf before the loop that counts mbufs in the
> >>>>>>>>> list. Btw, this tcp/ip header mbuf also has leading space for the
> >>>>>>>>> MAC layer header.)
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> Hi Rick,
> >>>>>>>>
> >>>>>>>> Your question is good. With the Mellanox hardware we have separate
> >>>>>>>> so-called inline data space for the TCP/IP headers, so if the TCP
> >>>>>>>> stack subtracts something, then we would need to add something to
> >>>>>>>> the limit, because then the scatter/gather list is only used for the
> >>>>>>>> data part.
> >>>>>>>>
> >>>>>>>
> >>>>>>> I think none of the drivers in the tree subtract 1 for
> >>>>>>> if_hw_tsomaxsegcount.  Probably touching the Mellanox driver would be
> >>>>>>> simpler than fixing all the other drivers in the tree.
> >>>>>>>
> >>>>>>>> Maybe it can be controlled by some kind of flag, if all three TSO
> >>>>>>>> limits should include the TCP/IP/ethernet headers too. I'm pretty
> >>>>>>>> sure we want both versions.
> >>>>>>>>
> >>>>>>>
> >>>>>>> Hmm, I'm afraid it's already complex.  Drivers have to tell almost
> >>>>>>> the same information to both bus_dma(9) and the network stack.
> >>>>>>
> >>>>>> Don't forget that not all drivers in the tree set the TSO limits
> >>>>>> before if_attach(), so possibly the subtraction of one TSO fragment
> >>>>>> needs to go into ip_output() ....
> >>>>>>
> >>>>> Ok, I realized that some drivers may not know the answers before
> >>>>> ether_ifattach(), due to the way they are configured/written (I saw
> >>>>> the use of if_hw_tsomax_update() in the patch).
> >>>>
> >>>> I was not able to find an interface that configures TSO parameters
> >>>> after the if_t conversion.  I'm under the impression
> >>>> if_hw_tsomax_update() is not designed to be used this way.  Probably we
> >>>> need a better one? (CCed Gleb.)
> >>>>
> >>>>>
> >>>>> If it is subtracted as part of the assignment to if_hw_tsomaxsegcount
> >>>>> at line #791 in tcp_output(), like the following, I don't think it
> >>>>> should matter whether the values are set before ether_ifattach()?
> >>>>>         /*
> >>>>>          * Subtract 1 for the tcp/ip header mbuf that
> >>>>>          * will be prepended to the mbuf chain in this
> >>>>>          * function in the code below this block.
> >>>>>          */
> >>>>>         if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1;
> >>>>>
> >>>>> I don't have a good solution for the case where a driver doesn't plan
> >>>>> on using the tcp/ip header provided by tcp_output(), except to say the
> >>>>> driver can add one to the setting to compensate for that (and if they
> >>>>> fail to do so, it still works, although somewhat suboptimally). When I
> >>>>> now read the comment in sys/net/if_var.h it is clear what it means,
> >>>>> but for some reason I didn't read it that way before? (I think it was
> >>>>> the part that said the driver didn't have to subtract for the headers
> >>>>> that confused me?)
> >>>>> In any case, we need to try and come up with a clear definition of
> >>>>> what they need to be set to.
> >>>>>
> >>>>> I can now think of two ways to deal with this:
> >>>>> 1 - Leave tcp_output() as is, but provide a macro for the device
> >>>>>     driver authors to use that sets if_hw_tsomaxsegcount with a flag
> >>>>>     for "driver uses tcp/ip header mbuf", documenting that this flag
> >>>>>     should normally be true.
> >>>>> OR
> >>>>> 2 - Change tcp_output() as above, noting that this is a workaround for
> >>>>>     confusion w.r.t. whether or not if_hw_tsomaxsegcount should
> >>>>>     include the tcp/ip header mbuf, and update the comment in if_var.h
> >>>>>     to reflect this. Then drivers that don't use the tcp/ip header
> >>>>>     mbuf can increase their value for if_hw_tsomaxsegcount by 1.
> >>>>>     (The comment should also mention that a value of 35 or greater is
> >>>>>     much preferred to 32 if the hardware will support that.)
> >>>>>
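> >>>>> To make option 2 concrete, here is a rough sketch of what a driver's
> >>>>> attach code could look like (FOO_MAX_TX_SEGS is a made-up name for
> >>>>> whatever the hardware's scatter/gather limit is; the if_hw_tso* fields
> >>>>> are the ones discussed above):
> >>>>>
> >>>>>         /*
> >>>>>          * Typical driver: the tcp/ip header mbuf goes through the
> >>>>>          * scatter/gather list, so just report the hardware limit and
> >>>>>          * let tcp_output() subtract the 1 for the header mbuf.
> >>>>>          */
> >>>>>         ifp->if_hw_tsomax = IP_MAXPACKET;
> >>>>>         ifp->if_hw_tsomaxsegcount = FOO_MAX_TX_SEGS;
> >>>>>         ifp->if_hw_tsomaxsegsize = PAGE_SIZE;
> >>>>>
> >>>>>         /*
> >>>>>          * Driver with separate inline header space (mlxen style): the
> >>>>>          * header mbuf never consumes a scatter/gather entry, so add
> >>>>>          * the 1 back.
> >>>>>          */
> >>>>>         ifp->if_hw_tsomaxsegcount = FOO_MAX_TX_SEGS + 1;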
> >>>>
> >>>> Both work for me.  My preference is 2, just because using the tcp/ip
> >>>> header mbuf is the common case for most drivers.
> >>> Thanks for this comment. I tend to agree, both for the reason you state
> >>> and also because the patch is simple enough that it might qualify as an
> >>> errata for 10.2.
> >>>
> >>> I am hoping Daniel Braniss will be able to test the patch and let us
> >>> know if it improves performance with TSO enabled?
> >>
> >> send me the patch and I'll test it ASAP.
> >>         danny
> >>
> > Patch is attached. The one for head will also include an update to the
> > comment in sys/net/if_var.h, but that isn't needed for testing.
>
>
> well, the plot thickens.
>
> Yesterday, before running the new kernel, I decided to re-run my test, and
> to my surprise I was getting good numbers, about 300MB/s with and without
> TSO.
>
> This morning the numbers were again bad, around 70MB/s. What the ^%$#@!
>
> So, after some coffee, I ran some more tests, and some conclusions:
> using a netapp(*) as the nfs client:
>   - doing
>         ifconfig ix0 tso or -tso
>     does some magic and the numbers are back to normal - for a while
>
> using another FreeBSD/zfs box as client all is nifty, actually a bit faster
> than the netapp (not a fair comparison, since the zfs client is not heavily
> used), and I can't see any degradation.
>
I assume you meant "server" and not "client" above.

> btw, this is with the patch applied, but I was seeing similar numbers
> before the patch.
>
> running with tso, initially I get around 300MB/s, but after a while (sorry,
> can't be more scientific) it drops down to about half, and finally to a
> pathetic 70MB/s.
>
Ok, so it sounds like tso isn't the issue. (At least it seems the patch,
which I believe is needed, doesn't cause a regression.)

All I can suggest is:
- looking at the ix stats (I know nothing about them), but if you post them
  maybe someone conversant with the chip can help? (Before and after
  degradation.)
- if you capture packets for a short period of time when degraded and then
  again after doing "ifconfig", looking at the packet captures in wireshark
  might give some indication of what changes?
  - For this I'd be focused on the TCP layer (window sizes, etc.) and the
    timing of packets.
--> I don't know if there is a packet capture tool like tcpdump on a Netapp,
    but that might be better than capturing them on the client, in case
    tcpdump affects the outcome. However, tcpdump run on the client would be
    a fallback, I think.

The other thing is that the degradation seems to cut the rate by about half
each time: 300 --> 150 --> 70. I have no idea if this helps to explain it.

Have fun with it, rick

> *: while running the tests I monitored the Netapp, and nothing out of the
> ordinary there.
>
> cheers,
>         danny
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"


