Date:        Thu, 13 Aug 2009 04:56:32 -0700 (PDT)
From:        Barney Cordoba <barney_cordoba@yahoo.com>
To:          Peter Steele <psteele@webmail.maxiscale.com>, pyunyh@gmail.com
Cc:          freebsd-net@freebsd.org
Subject:     Re: nfe taskq performance issues
Message-ID:  <430428.63263.qm@web63902.mail.re1.yahoo.com>
In-Reply-To: <20090812213521.GG55129@michelle.cdnetworks.com>
--- On Wed, 8/12/09, Pyun YongHyeon <pyunyh@gmail.com> wrote:

> From: Pyun YongHyeon <pyunyh@gmail.com>
> Subject: Re: nfe taskq performance issues
> To: "Peter Steele" <psteele@webmail.maxiscale.com>
> Cc: freebsd-net@freebsd.org
> Date: Wednesday, August 12, 2009, 5:35 PM
>
> On Thu, Jul 23, 2009 at 08:58:07AM -0700, Peter Steele wrote:
> > We've been hitting serious nfe taskq performance issues during stress
> > tests, and in doing some research on the problem we came across this
> > old email:
> >
> > From: Ivan Voras <ivoras@freebsd.org>
> > Date: April 28, 2009 3:53:14 AM PDT
> > To: freebsd-threads@freebsd.org
> > Cc: freebsd-net@freebsd.org, freebsd-performance@freebsd.org
> > Subject: Re: FreeBSD 7.1 taskq em performance
> > >
> > > I have been hitting some barrier with FreeBSD 7.1 network
> > > performance. I have written an application which contains two kernel
> > > threads that take mbufs directly from a network interface and
> > > forward them to another network interface. The idea is to simulate
> > > different network environments.
> > >
> > > I have been using FreeBSD 6.4 amd64 and tested with an Ixia box
> > > (specialised hardware firing a very high packet rate). The PC was a
> > > Core2 2.6 GHz with a dual-port Intel PCIe Gigabit network card. It
> > > can manage up to 1.2 million pps.
> > >
> > > I have a higher-spec PC with FreeBSD 7.1 amd64, a quad-core 2.3 GHz
> > > CPU and a PCIe Gigabit network card. The performance can only reach
> > > up to 600k pps. I notice 'taskq em0' and 'taskq em1' are at a solid
> > > 100% CPU, but this is not the case in FreeBSD 6.4.
> >
> > In our case we are running FreeBSD 7.0, but we are seeing our boxes
> > experience serious thread starvation issues as the nfe0 cpu percentage
> > climbs steadily while cpu idle time drops at times to 0 percent. This
> > email thread mentioned a patch for the em driver here:
> >
> > http://people.yandex-team.ru/~wawa/
> >
> > Does anyone know if this patch will work with the nfe driver?
>
> That's for em(4).
>
> AFAIK all nfe(4) controllers lack intelligent interrupt moderation, so
> the driver should be prepared to handle excessive interrupt loads. I'm
> not sure whether NVIDIA ethernet controllers really lack an efficient
> interrupt mitigation mechanism, but it seems Linux also faces the same
> hardware problem. As you might know, there is no publicly available
> data sheet for the NVIDIA controllers, so setting it right looks very
> hard to me.

Try removing the INTR_MPSAFE flag from the bus_setup_intr() call.
The entire point of using filters is to reduce lock contention.
It might not solve the problem, but it's clearly an unnecessary
potential bottleneck.

Barney
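For reference, a minimal sketch of the kind of interrupt setup being
discussed: a fast filter registered with bus_setup_intr() that defers the
real work to a driver taskqueue (the "taskq nfe0" thread seen in top). This
is not the actual if_nfe.c code; the softc layout and the names
nfe_intr_filter, nfe_int_task and nfe_setup_irq are illustrative, and the
flag Barney proposes removing is marked in a comment.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bus.h>
    #include <sys/taskqueue.h>

    /* Placeholder softc carrying only the fields used in this sketch. */
    struct nfe_softc {
        struct resource  *nfe_irq;      /* IRQ resource from the bus */
        void             *nfe_intrhand; /* cookie from bus_setup_intr() */
        struct taskqueue *nfe_tq;       /* backs the "taskq nfe0" thread */
        struct task       nfe_int_task; /* does the actual RX/TX work */
    };

    /*
     * Interrupt filter: runs in primary interrupt context, so it must not
     * sleep; it only hands the work off to the driver's taskqueue.
     */
    static int
    nfe_intr_filter(void *arg)
    {
        struct nfe_softc *sc = arg;

        taskqueue_enqueue(sc->nfe_tq, &sc->nfe_int_task);
        return (FILTER_HANDLED);
    }

    static int
    nfe_setup_irq(device_t dev, struct nfe_softc *sc)
    {
        /*
         * INTR_MPSAFE is the flag Barney suggests dropping as an
         * experiment; INTR_TYPE_NET would stay either way.
         */
        return (bus_setup_intr(dev, sc->nfe_irq,
            INTR_TYPE_NET | INTR_MPSAFE,
            nfe_intr_filter, NULL, sc, &sc->nfe_intrhand));
    }

Whether dropping the flag actually helps under load is exactly what the
suggested test would show.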