Date: Thu, 13 Aug 2009 05:37:24 -0700 (PDT) From: Barney Cordoba <barney_cordoba@yahoo.com> To: Peter Steele <psteele@webmail.maxiscale.com>, pyunyh@gmail.com Cc: freebsd-net@freebsd.org Subject: Re: nfe taskq performance issues Message-ID: <978222.18685.qm@web63904.mail.re1.yahoo.com> In-Reply-To: <430428.63263.qm@web63902.mail.re1.yahoo.com>
--- On Thu, 8/13/09, Barney Cordoba <barney_cordoba@yahoo.com> wrote:

> From: Barney Cordoba <barney_cordoba@yahoo.com>
> Subject: Re: nfe taskq performance issues
> To: "Peter Steele" <psteele@webmail.maxiscale.com>, pyunyh@gmail.com
> Cc: freebsd-net@freebsd.org
> Date: Thursday, August 13, 2009, 7:56 AM
>
> --- On Wed, 8/12/09, Pyun YongHyeon <pyunyh@gmail.com> wrote:
>
> > From: Pyun YongHyeon <pyunyh@gmail.com>
> > Subject: Re: nfe taskq performance issues
> > To: "Peter Steele" <psteele@webmail.maxiscale.com>
> > Cc: freebsd-net@freebsd.org
> > Date: Wednesday, August 12, 2009, 5:35 PM
> >
> > On Thu, Jul 23, 2009 at 08:58:07AM -0700, Peter Steele wrote:
> > > We've been hitting serious nfe taskq performance issues during stress
> > > tests, and in doing some research on the problem we came across this
> > > old email:
> > >
> > > From: Ivan Voras <ivoras@freebsd.org>
> > > Date: April 28, 2009 3:53:14 AM PDT
> > > To: freebsd-threads@freebsd.org
> > > Cc: freebsd-net@freebsd.org, freebsd-performance@freebsd.org
> > > Subject: Re: FreeBSD 7.1 taskq em performance
> > > >
> > > > I have been hitting some barrier with FreeBSD 7.1 network
> > > > performance. I have written an application which contains two
> > > > kernel threads that take mbufs directly from a network interface
> > > > and forward them to another network interface. The idea is to
> > > > simulate different network environments.
> > > >
> > > > I have been using FreeBSD 6.4 amd64 and tested with an Ixia box
> > > > (specialised hardware firing a very high packet rate). The PC was
> > > > a Core2 2.6 GHz with a dual-port Intel PCIe Gigabit network card.
> > > > It can manage up to 1.2 million pps.
> > > >
> > > > I have a higher-spec PC with FreeBSD 7.1 amd64, a quad-core
> > > > 2.3 GHz CPU, and a PCIe Gigabit network card. The performance can
> > > > only reach 600k pps. I notice 'taskq em0' and 'taskq em1' are at a
> > > > solid 100% CPU, but they are not in FreeBSD 6.4.
> > >
> > > In our case we are running FreeBSD 7.0, but we are seeing our boxes
> > > experience serious thread starvation issues as the nfe0 cpu
> > > percentage climbs steadily while cpu idle time drops at times to 0
> > > percent. That email thread mentioned a patch for the em driver here:
> > >
> > > http://people.yandex-team.ru/~wawa/
> > >
> > > Does anyone know if this patch will work with the nfe driver?
> >
> > That's for em(4).
> >
> > AFAIK all nfe(4) controllers lack intelligent interrupt moderation, so
> > the driver should be prepared to handle excessive interrupt loads. I'm
> > not sure whether NVIDIA ethernet controllers really lack an efficient
> > interrupt mitigation mechanism, but it seems Linux also faces the same
> > hardware problem. As you might know, there is no publicly available
> > data sheet for the NVIDIA controllers, so setting it right looks very
> > hard to me.
>
> Try removing the INTR_MPSAFE flag from the bus_setup_intr() call.
> The entire point of using filters is to reduce lock contention.
> It might not solve the problem, but it's clearly an unnecessary
> potential bottleneck.
>
> Barney

I'm curious as to the statistics on your system. Your quad core adapter
may actually be hurting the performance.
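[Archive editor's note: the INTR_MPSAFE suggestion above amounts to changing the flags passed when the interrupt handler is registered. A rough sketch against the FreeBSD 7.x bus_setup_intr(9) interface follows; the softc field names (nfe_irq, nfe_intrhand) are illustrative approximations, not copied from the nfe(4) source.]

```c
/* Sketch only -- not actual nfe(4) code. nfe registers an interrupt
 * handler whose work is deferred to a taskqueue; the suggestion is to
 * drop INTR_MPSAFE from the flags argument. */
error = bus_setup_intr(dev, sc->nfe_irq[0],
    INTR_TYPE_NET | INTR_MPSAFE,        /* current flags */
    NULL, nfe_int_task_handler, sc, &sc->nfe_intrhand[0]);

/* Proposed variant: same call without INTR_MPSAFE. */
error = bus_setup_intr(dev, sc->nfe_irq[0],
    INTR_TYPE_NET,
    NULL, nfe_int_task_handler, sc, &sc->nfe_intrhand[0]);
```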
What is the CPU usage distribution shown by top -SH when you are loaded,
and how many interrupts/second are you getting per NIC? It looks like the
default moderation is set to 8000 ints/sec, which is probably ok for 1
interrupt per NIC. It's not clear whether multiple MSI-X interrupts are
allocated. Spreading interrupts isn't always a good thing, as it can
increase lock contention so much as to be counterproductive, unless you
have properly written mutex management code; which nfe doesn't.

Barney