From: Daniel Braniss <danny@cs.huji.ac.il>
Date: Sun, 23 Aug 2015 14:08:56 +0300
Subject: Re: ix(intel) vs mlxen(mellanox) 10Gb performance
To: Rick Macklem
Cc: pyunyh@gmail.com, Hans Petter Selasky, FreeBSD stable, FreeBSD Net,
    Christopher Forgeron, Gleb Smirnoff, Slawa Olhovchenkov

> On 22 Aug 2015, at 14:59, Rick Macklem wrote:
> 
> Daniel Braniss wrote:
>> 
>>> On Aug 22, 2015, at 12:46 AM, Rick Macklem wrote:
>>> 
>>> Yonghyeon PYUN wrote:
>>>> On Wed, Aug 19, 2015 at 09:00:35AM -0400, Rick Macklem wrote:
>>>>> Hans Petter Selasky wrote:
>>>>>> On 08/19/15 09:42, Yonghyeon PYUN wrote:
>>>>>>> On Wed, Aug 19, 2015 at 09:00:52AM +0200, Hans Petter Selasky wrote:
>>>>>>>> On 08/18/15 23:54, Rick Macklem wrote:
>>>>>>>>> Ouch! Yes, I now see that the code that counts the # of mbufs
>>>>>>>>> is before the code that adds the tcp/ip header mbuf.
>>>>>>>>> 
>>>>>>>>> In my opinion, this should be fixed by setting
>>>>>>>>> if_hw_tsomaxsegcount to whatever the driver provides - 1. It is
>>>>>>>>> not the driver's responsibility to know whether a tcp/ip header
>>>>>>>>> mbuf will be added, and that is a lot less confusing than
>>>>>>>>> expecting the driver author to know to subtract one. (I had
>>>>>>>>> mistakenly thought that tcp_output() had added the tcp/ip header
>>>>>>>>> mbuf before the loop that counts mbufs in the list. Btw, this
>>>>>>>>> tcp/ip header mbuf also has leading space for the MAC layer
>>>>>>>>> header.)
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> Hi Rick,
>>>>>>>> 
>>>>>>>> Your question is good. With the Mellanox hardware we have
>>>>>>>> separate, so-called inline data space for the TCP/IP headers, so
>>>>>>>> if the TCP stack subtracts something, then we would need to add
>>>>>>>> something to the limit, because the scatter/gather list is then
>>>>>>>> only used for the data part.
>>>>>>>> 
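To make the off-by-one concrete, here is a minimal sketch of the pattern
being described -- simplified, and not the actual tcp_output() code:

	#include <sys/param.h>
	#include <sys/mbuf.h>

	/*
	 * Sketch only: the segment count is taken from the data chain
	 * *before* the tcp/ip header mbuf is prepended, so the chain
	 * handed to the driver holds one more mbuf than was counted.
	 */
	static int
	count_tso_segs(struct mbuf *m_data)
	{
		struct mbuf *m;
		int cnt = 0;

		for (m = m_data; m != NULL; m = m->m_next)
			cnt++;		/* data mbufs only */
		return (cnt);
	}

	/*
	 * Later a header mbuf (with leading space for the MAC header)
	 * is prepended, e.g. via m_gethdr(M_NOWAIT, MT_DATA), so a
	 * driver that advertised if_hw_tsomaxsegcount == cnt now sees
	 * cnt + 1 mbufs -- hence the proposal that either the stack or
	 * the driver subtract 1.
	 */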
>>>>>>> 
>>>>>>> I think none of the drivers in the tree subtract 1 for
>>>>>>> if_hw_tsomaxsegcount. Probably touching the Mellanox driver would
>>>>>>> be simpler than fixing all the other drivers in the tree.
>>>>>>> 
>>>>>>>> Maybe it can be controlled by some kind of flag, if all three TSO
>>>>>>>> limits should include the TCP/IP/ethernet headers too. I'm pretty
>>>>>>>> sure we want both versions.
>>>>>>>> 
>>>>>>> 
>>>>>>> Hmm, I'm afraid it's already complex. Drivers have to tell almost
>>>>>>> the same information to both bus_dma(9) and the network stack.
>>>>>> 
>>>>>> Don't forget that not all drivers in the tree set the TSO limits
>>>>>> before if_attach(), so possibly the subtraction of one TSO fragment
>>>>>> needs to go into ip_output() ....
>>>>>> 
>>>>> Ok, I realized that some drivers may not know the answers before
>>>>> ether_ifattach(), due to the way they are configured/written (I saw
>>>>> the use of if_hw_tsomax_update() in the patch).
>>>> 
>>>> I was not able to find an interface that configures TSO parameters
>>>> after the if_t conversion. I'm under the impression that
>>>> if_hw_tsomax_update() is not designed to be used this way. Probably
>>>> we need a better one? (CCed to Gleb.)
>>>> 
>>>>> 
>>>>> If it is subtracted as part of the assignment to if_hw_tsomaxsegcount
>>>>> in tcp_output() at line#791, like the following, I don't think it
>>>>> should matter whether the values are set before ether_ifattach()?
>>>>> 	/*
>>>>> 	 * Subtract 1 for the tcp/ip header mbuf that
>>>>> 	 * will be prepended to the mbuf chain in this
>>>>> 	 * function in the code below this block.
>>>>> 	 */
>>>>> 	if_hw_tsomaxsegcount = tp->t_tsomaxsegcount - 1;
>>>>> 
>>>>> I don't have a good solution for the case where a driver doesn't
>>>>> plan on using the tcp/ip header provided by tcp_output(), except to
>>>>> say that the driver can add one to the setting to compensate (and if
>>>>> it fails to do so, things still work, although somewhat
>>>>> suboptimally). Now that I read the comment in sys/net/if_var.h it is
>>>>> clear what it means, but for some reason I didn't read it that way
>>>>> before? (I think it was the part saying the driver didn't have to
>>>>> subtract for the headers that confused me?) In any case, we need to
>>>>> try and come up with a clear definition of what they need to be set
>>>>> to.
>>>>> 
>>>>> I can now think of two ways to deal with this (a sketch of the first
>>>>> follows below):
>>>>> 1 - Leave tcp_output() as is, but provide a macro for device driver
>>>>>     authors that sets if_hw_tsomaxsegcount with a flag for "driver
>>>>>     uses tcp/ip header mbuf", documenting that this flag should
>>>>>     normally be true.
>>>>> OR
>>>>> 2 - Change tcp_output() as above, noting that this is a workaround
>>>>>     for confusion w.r.t. whether or not if_hw_tsomaxsegcount should
>>>>>     include the tcp/ip header mbuf, and update the comment in
>>>>>     if_var.h to reflect this. Then drivers that don't use the tcp/ip
>>>>>     header mbuf can increase their value for if_hw_tsomaxsegcount
>>>>>     by 1.
>>>>>     (The comment should also mention that a value of 35 or greater
>>>>>     is much preferred to 32, if the hardware will support that.)
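As an illustration of option 1, a hypothetical helper could look like
the following (the macro name and flag argument are made up for this
sketch and do not exist in the tree):

	/*
	 * Hypothetical sketch of option 1: the driver passes its raw
	 * hardware scatter/gather limit plus a flag saying whether it
	 * consumes the tcp/ip header mbuf that tcp_output() prepends;
	 * the macro hides the subtract-one so driver authors cannot
	 * forget it.
	 */
	#define	IF_HW_TSOMAXSEGCOUNT_SET(ifp, hw_nsegs, uses_hdr_mbuf)	\
		((ifp)->if_hw_tsomaxsegcount = (uses_hdr_mbuf) ?	\
		    (hw_nsegs) - 1 : (hw_nsegs))

Under option 2 the subtraction instead happens once in tcp_output(), as
in the snippet above, and a driver that builds its own headers (e.g. the
Mellanox inline-data case) would advertise its hardware limit plus one.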
>>>> 
>>>> Both work for me. My preference is 2, just because it's very common
>>>> for most drivers to use the tcp/ip header mbuf.
>>> Thanks for this comment. I tend to agree, both for the reason you
>>> state and also because the patch is simple enough that it might
>>> qualify as an errata for 10.2.
>>> 
>>> I am hoping Daniel Braniss will be able to test the patch and let us
>>> know if it improves performance with TSO enabled?
>> 
>> send me the patch and I'll test it ASAP.
>> 	danny
>> 
> Patch is attached. The one for head will also include an update to the
> comment in sys/net/if_var.h, but that isn't needed for testing.

well, the plot thickens.

Yesterday, before running the new kernel, I decided to re-run my test,
and to my surprise I was getting good numbers, about 300MB/s, with and
without TSO. This morning the numbers were bad again, around 70MB/s.
What the ^%$#@!

So, after some coffee, I ran some more tests, and reached some
conclusions:

using a netapp(*) as the nfs client:
  - doing
	ifconfig ix0 tso (or -tso)
    does some magic and the numbers are back to normal - for a while

using another FreeBSD/zfs host as the client:
  - all is nifty; actually a bit faster than the netapp (not a fair
    comparison, since the zfs client is not heavily used), and I can't
    see any degradation.

btw, this is with the patch applied, but I was seeing similar numbers
before the patch.

running with tso, initially I get around 300MB/s, but after a while
(sorry, I can't be more scientific) it drops to about half, and finally
to a pathetic 70MB/s.

*: while running the tests I monitored the netapp, and nothing out of
the ordinary there.

cheers,
	danny