Date:      Tue, 18 Aug 2009 09:21:13 +0200
From:      Invernizzi Fabrizio <fabrizio.invernizzi@telecomitalia.it>
To:        Jack Vogel <jfvogel@gmail.com>, Carlos Pardo <cpardo@fastsoft.com>
Cc:        "freebsd-performance@freebsd.org" <freebsd-performance@freebsd.org>
Subject:   RE: Test on 10GBE Intel based network card
Message-ID:  <36A93B31228D3B49B691AD31652BCAE9A4569679F5@GRFMBX702BA020.griffon.local>
In-Reply-To: <2a41acea0908171503r3613d430ib154cd3445eb1309@mail.gmail.com>
References:  <D13CB108B048BD47B69C0CA1E0B5C032CF1D94@hq-es.FASTSOFT.COM> <2a41acea0908171503r3613d430ib154cd3445eb1309@mail.gmail.com>

Hi

I am using ixgbe 1.8.6 on FreeBSD 7.2-RELEASE (amd64).
        INT-64# sysctl -a | grep dev.ix | grep desc
        dev.ix.0.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 1.8.6
        dev.ix.1.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 1.8.6

I see the same strangely large numbers for XON/XOFF Rcvd.

ix0: XON Rcvd = 5828048552040
ix0: XON Xmtd = 0
ix0: XOFF Rcvd = 5828048552040
ix0: XOFF Xmtd = 0

Flow control disabled.
INT-64# sysctl -a | grep dev.ix | grep flow_control
dev.ix.0.flow_control: 0
dev.ix.1.flow_control: 0
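
For anyone reproducing the check, a minimal sketch using the sysctls shown above (assuming dev.ix.N.flow_control is writable at runtime on this driver version):

        # Read the current per-port setting (0 = flow control disabled)
        sysctl dev.ix.0.flow_control dev.ix.1.flow_control
        # Force it off on both ports and re-check
        sysctl dev.ix.0.flow_control=0
        sysctl dev.ix.1.flow_control=0
        sysctl -a | grep dev.ix | grep flow_control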


Fabrizio

> -----Original Message-----
> From: owner-freebsd-performance@freebsd.org
> [mailto:owner-freebsd-performance@freebsd.org] On Behalf Of Jack Vogel
> Sent: Tuesday, 18 August 2009 00:04
> To: Carlos Pardo
> Cc: freebsd-performance@freebsd.org
> Subject: Re: Test on 10GBE Intel based network card
>
> Who ya gonna call? Why me of course, it's my driver :)
>
> Hmmm, the numbers on those look bogus, like some
> uninitialized variables.
> You did say you aren't using flow control, right?
>
> Jack
>
>
> On Mon, Aug 17, 2009 at 2:52 PM, Carlos Pardo <cpardo@fastsoft.com> wrote:
>
> >  Hi Jack,
> >
> >
> >
> > Thanks for the quick response. We cannot use LRO because of the way
> > we accelerate on the WAN ports. We just moved from 7.0 to 8.0 to use
> > your latest driver (1.8.8). One thing we do not understand in 8.0: we
> > are seeing insane numbers for the XON/XOFF Rcvd counters with
> > essentially no traffic. Driver version 1.2.16 works fine. Who should
> > we contact for help?
> >
> >
> >
> > ix0: Std Mbuf Failed = 0
> > ix0: Missed Packets = 0
> > ix0: Receive length errors = 0
> > ix0: Crc errors = 0
> > ix0: Driver dropped packets = 0
> > ix0: watchdog timeouts = 0
> > ix0: XON Rcvd = 7950055973552
> > ix0: XON Xmtd = 0
> > ix0: XOFF Rcvd = 7950055973552
> > ix0: XOFF Xmtd = 0
> > ix0: Total Packets Rcvd = 2149
> > ix0: Good Packets Rcvd = 2149
> > ix0: Good Packets Xmtd = 1001
> > ix0: TSO Transmissions = 0
> >
> > ix1: Std Mbuf Failed = 0
> > ix1: Missed Packets = 0
> > ix1: Receive length errors = 0
> > ix1: Crc errors = 0
> > ix1: Driver dropped packets = 0
> > ix1: watchdog timeouts = 0
> > ix1: XON Rcvd = 7946320044993
> > ix1: XON Xmtd = 0
> > ix1: XOFF Rcvd = 7946320044993
> > ix1: XOFF Xmtd = 0
> > ix1: Total Packets Rcvd = 1002
> > ix1: Good Packets Rcvd = 1002
> > ix1: Good Packets Xmtd = 1588
> > ix1: TSO Transmissions = 0
> >
> >
> >
> > Regards,
> >
> >
> >
> > C Pardo
> >
> >
> >
> > From: Jack Vogel [mailto:jfvogel@gmail.com]
> > Sent: Friday, August 14, 2009 3:15 PM
> > To: Carlos Pardo
> > Cc: freebsd-performance@freebsd.org
> > Subject: Re: Test on 10GBE Intel based network card
> >
> >
> >
> > I've talked over the issues with the guy on our team who has been most
> > involved in 10G performance. He asserts that 3 Gb/s of small packets
> > will saturate a single CPU; this is why you need multiqueue across
> > multiple cores.  He was dubious about the FIFO assertion; it's a
> > relative thing: if you can keep the FIFO drained it won't be a problem,
> > but doing that is a complex combination of factors, the CPU, the bus,
> > the memory....
> >
> > If you don't deal with the systemic issues, just going from an 82598 to
> > an 82599 is not going to solve things.
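
As a rough way to check whether the receive load really is spread across cores, here is a sketch using stock FreeBSD tools (not taken from the original thread):

        # One interrupt line per ix queue; the count/rate columns show how
        # evenly the queues are being hit
        vmstat -i | grep ix
        # -S includes system (kernel) processes, -H shows individual threads;
        # watch which CPU the "ix0 rxq" threads run on
        top -SH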
> >
> > What about LRO, can you use that? I never saw an answer to the
> > forwarding question; you can't use LRO in that case, of course.
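
For reference, one way to check and toggle LRO per interface is sketched below; it assumes the ixgbe driver in this setup exposes LRO as an ifconfig(8) capability:

        # See whether LRO appears in the options= line
        ifconfig ix0 | grep -i options
        # Enable or disable the capability (not usable if the box forwards packets)
        ifconfig ix0 lro
        ifconfig ix0 -lro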
> >
> > Regards,
> >
> > Jack
> >
> > On Fri, Aug 14, 2009 at 2:33 PM, Carlos Pardo <cpardo@fastsoft.com> wrote:
> >
> > Hi Jack,
> >
> > I have a quick question. We are getting too many missed packets per
> > minute running about 3 Gb/s of traffic. We cannot use flow control in
> > our application. We are assuming that there is no way to improve upon
> > the problem, since it seems to be a hardware limitation of the receive
> > FIFO. We are using the Intel® 82598EB 10 Gigabit Ethernet Controller.
> > When can we expect the next generation card from Intel? Thanks for any
> > information you may provide.
> >
> > Typical error count: "ix0: Missed Packets = 81174" after a few minutes.
> >
> > Best Regards,
> >
> > Cpardo
> >
> >
> >
> > -----Original Message-----
> > From: owner-freebsd-performance@freebsd.org
> > [mailto:owner-freebsd-performance@freebsd.org] On Behalf Of Invernizzi Fabrizio
> >
> > Sent: Wednesday, August 05, 2009 3:13 AM
> > To: Jack Vogel; Julian Elischer
> > Cc: freebsd-performance@freebsd.org; Stefan Lambrev
> > Subject: RE: Test on 10GBE Intel based network card
> >
> > No improvement with kern.ipc.nmbclusters=262144 and 1.8.6 driver
> > :<(((((
> >
> > ++fabrizio
> >
> > ------------------------------------------------------------------
> > Telecom Italia
> > Fabrizio INVERNIZZI
> > Technology - TILAB
> > Accesso Fisso e Trasporto
> > Via Reiss Romoli, 274 10148 Torino
> > Tel.  +39 011 2285497
> > Mob. +39 3316001344
> > Fax +39 06 41867287
> >
> >
> > ________________________________
> > From: Jack Vogel [mailto:jfvogel@gmail.com]
> > Sent: Tuesday, 4 August 2009 18:42
> > To: Julian Elischer
> > Cc: Invernizzi Fabrizio; freebsd-performance@freebsd.org; Stefan Lambrev
> > Subject: Re: Test on 10GBE Intel based network card
> >
> > Your nmbclusters is very low; you list it twice, so I'm assuming the
> > second value is what it ends up being, 32K :(
> >
> > I would set it to:
> >
> > kern.ipc.nmbclusters=262144
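
A minimal sketch of applying and verifying that suggestion (assuming kern.ipc.nmbclusters is adjustable at runtime on this release; otherwise it can be set from /boot/loader.conf before boot):

        # Raise the cluster limit now and persist it for the next boot
        sysctl kern.ipc.nmbclusters=262144
        echo 'kern.ipc.nmbclusters=262144' >> /etc/sysctl.conf
        # Check mbuf/cluster usage and denied requests afterwards
        netstat -m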
> >
> > Also, I thought you were using the current driver, but now it looks
> > like you are using something fairly old; use my latest code, which is
> > 1.8.8.
> >
> > Jack
> >
> > On Tue, Aug 4, 2009 at 9:17 AM, Julian Elischer <julian@elischer.org> wrote:
> > Invernizzi Fabrizio wrote:
> > The limitation that you see is about the max number of packets that
> > FreeBSD can handle - it looks like your best performance is reached at
> > 64 byte packets?
> > If you mean in terms of packets per second, you are right. These are
> > the packets per second measured during the tests:
> > 64 byte:        610119 pps
> > 512 byte:       516917 pps
> > 1492 byte:      464962 pps
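
As a rough back-of-the-envelope check (assuming the usual 20 bytes of preamble plus inter-frame gap per frame on the wire), those rates correspond to roughly:

        64 byte:    610119 × (64 + 20)   × 8 ≈ 0.41 Gbit/s
        512 byte:   516917 × (512 + 20)  × 8 ≈ 2.2 Gbit/s
        1492 byte:  464962 × (1492 + 20) × 8 ≈ 5.6 Gbit/s

so the limit tracks the packet rate far more than the bit rate.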
> >
> >
> > Am I correct that the maximum you can reach is around 639,000 packets
> > per second?
> > Yes, as you can see the maximum is 610119 pps.
> > Where does this limit come from?
> > ah, that's the whole point of tuning :-) there are several possibilities:
> > 1/ the card's interrupts are probably attached to only 1 CPU, so that
> > CPU can do no more work
> >
> > This does not seem to be the problem. See below a top snapshot taken
> > during a 64-byte packet storm:
> >
> > last pid:  8552;  load averages:  0.40,  0.09,  0.03   up 0+20:36:58  09:40:29
> > 124 processes: 13 running, 73 sleeping, 38 waiting
> > CPU:  0.0% user,  0.0% nice, 86.3% system, 12.3% interrupt,  1.5% idle
> > Mem: 13M Active, 329M Inact, 372M Wired, 68K Cache, 399M Buf, 7207M Free
> > Swap: 2048M Total, 2048M Free
> >
> >  PID USERNAME    THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
> >   11 root          1 171 ki31     0K    16K RUN    3  20.2H 51.17% idle: cpu3
> >   14 root          1 171 ki31     0K    16K RUN    0  20.2H 50.88% idle: cpu0
> >   12 root          1 171 ki31     0K    16K RUN    2  20.2H 50.49% idle: cpu2
> >   13 root          1 171 ki31     0K    16K RUN    1  20.2H 50.10% idle: cpu1
> >   42 root          1 -68    -     0K    16K RUN    1  14:20 36.47% ix0 rxq
> >   38 root          1 -68    -     0K    16K CPU0   0  14:15 36.08% ix0 rxq
> >   44 root          1 -68    -     0K    16K CPU2   2  14:08 34.47% ix0 rxq
> >   40 root          1 -68    -     0K    16K CPU3   3  13:42 32.37% ix0 rxq
> > ....
> >
> > It looks like the 4 rxq processes are bound to the 4 available cores
> > with equal distribution.
> >
> >
> >
> > 2/ if more than 1 CPU is working, it may be that there is a lock in
> > heavy contention somewhere.
> >
> > This, I think, is the problem. I am trying to understand how to
> > 1- see where the heavy contention is (context switching? some limiting
> > setting?)
> > 2- mitigate it
> >
> >
> >
> > there is a lock profiling tool whose name I can't remember right
> > now..
> >
> > look it up with google :-)  FreeBSD lock profiling tool
> >
> > ah, first hit...
> >
> > http://blogs.epfl.ch/article/23832
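
The tool in question is presumably FreeBSD's LOCK_PROFILING framework; a minimal sketch of using it, assuming a custom kernel can be built on the test machine:

        # Add to the kernel config, rebuild and reboot:
        #   options LOCK_PROFILING
        # Then enable profiling, run the packet storm, and dump per-lock stats
        sysctl debug.lock.prof.enable=1
        # ... generate traffic ...
        sysctl debug.lock.prof.enable=0
        sysctl debug.lock.prof.stats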
> >
> >
> >
> > is the machine still responsive to other networks while running at
> > maximum capacity on this network? (make sure that the other networks
> > are on a different CPU; hmm, I can't remember how to do that :-).
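
One way to do that binding is sketched below; it assumes cpuset(1) with its -x irq option is available on this release, and em0 is only a stand-in for whatever the other NIC is:

        # Find the interrupt number(s) of the other NIC (em0 here is hypothetical)
        vmstat -i | grep em0
        # Bind that interrupt's handler to CPU 3 only (replace 257 with the real IRQ)
        cpuset -l 3 -x 257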
> >
> >
> >
> >
> >
> >
> _______________________________________________
> freebsd-performance@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-performance
> To unsubscribe, send any mail to
> "freebsd-performance-unsubscribe@freebsd.org"
>


This e-mail and any attachments is confidential and may contain privileged information intended for the addressee(s) only. Dissemination, copying, printing or use by anybody else is unauthorised. If you are not the intended recipient, please delete this message and any attachments and advise the sender by return e-mail, Thanks.



