Date:      Sat, 11 May 2013 17:15:35 -0700 (PDT)
From:      Barney Cordoba <barney_cordoba@yahoo.com>
To:        Hooman Fazaeli <hoomanfazaeli@gmail.com>
Cc:        freebsd-net@freebsd.org, "Clément Hermann (nodens)" <nodens2099@gmail.com>, Eugene Grosbein <egrosbein@rdtc.ru>
Subject:   Re: High CPU interrupt load on intel I350T4 with igb on 8.3
Message-ID:  <1368317735.40898.YahooMailClassic@web121603.mail.ne1.yahoo.com>
In-Reply-To: <518EA643.5010505@gmail.com>

--- On Sat, 5/11/13, Hooman Fazaeli <hoomanfazaeli@gmail.com> wrote:

> From: Hooman Fazaeli <hoomanfazaeli@gmail.com>
> Subject: Re: High CPU interrupt load on intel I350T4 with igb on 8.3
> To: "Barney Cordoba" <barney_cordoba@yahoo.com>
> Cc: "Eugene Grosbein" <egrosbein@rdtc.ru>, freebsd-net@freebsd.org,
>     "Clément Hermann (nodens)" <nodens2099@gmail.com>
> Date: Saturday, May 11, 2013, 4:12 PM
>
> On 5/11/2013 8:26 PM, Barney Cordoba wrote:
> > Clearly you don't understand the problem. Your logic is that because
> > other drivers are also defective, it's not a driver problem? The
> > problem is caused by a multi-threaded driver that haphazardly
> > launches tasks and doesn't manage the case where the rest of the
> > system can't handle the load. It's no different from a driver that
> > barfs when mbuf clusters are exhausted. The answer isn't to increase
> > memory or mbufs, even though that may alleviate the problem. The
> > answer is to fix the driver so that it doesn't crash the system over
> > an event that is wholly predictable. igb 1) has too many locks and
> > 2) exacerbates the problem by binding to CPUs, which means it has to
> > wait not only for the lock to free, but also for a specific CPU to
> > become free. So it chugs along happily until it hits a bottleneck,
> > at which point it quickly blows up the entire system in a domino
> > effect. It needs to manage locks more efficiently, and also to
> > detect when the backup is unmanageable. Ever since FreeBSD 5 the
> > answer has been "it's fixed in 7", or "it's fixed in 9", or "it's
> > fixed in 10". There will always be bottlenecks, and no driver should
> > blow up the system no matter what intermediate code may present a
> > problem. It's the driver's responsibility to behave and to drop
> > packets if necessary. BC
>
> And how should the driver behave? You suggest dropping the packets.
> Even if we accept that dropping packets is a good strategy in all
> configurations (which I doubt), the driver is definitely not the best
> place to implement it, since that involves duplicating similar code
> between drivers. Somewhere like the Ethernet layer is a much better
> choice to watch the packet load and drop packets before they eat all
> the cores. Furthermore, ignoring the fact that pf is not optimized
> for multiprocessors, and blaming drivers for not adjusting to pf's
> faults, is a bit unfair, I believe.

It's easier to make excuses than to write a really good driver. I'll
grant you that.

BC
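
To make the two positions concrete: the driver-side policy BC describes
(detect an unmanageable backlog in the RX path and shed load at the
edge, rather than stalling on locks) would look roughly like the check
below. This is a minimal sketch only; "struct rx_queue" and the
high-water mark are invented for illustration and are not the actual
igb(4) data structures.

    /*
     * Sketch only: detect RX backlog and drop up front.  The
     * structure and field names are illustrative, not taken from
     * the real igb(4) source.
     */
    #include <sys/types.h>

    struct rx_queue {
            u_int   pending;        /* frames queued toward the stack */
            u_int   limit;          /* tunable high-water mark */
            u_long  drops;          /* exported via a sysctl counter */
    };

    /* Return nonzero if the frame should be dropped, not queued. */
    static int
    rx_overloaded(struct rx_queue *rxq)
    {
            if (rxq->pending >= rxq->limit) {
                    rxq->drops++;   /* recycle the mbuf, keep the NIC fed */
                    return (1);
            }
            return (0);
    }

The point of checking before taking any lock is that the drop costs one
compare, while the failure mode BC describes costs a lock convoy across
pinned CPUs.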

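Hooman's alternative, a single bounded ingress queue owned by the
Ethernet layer so every driver inherits the same drop policy, is
essentially how FreeBSD's netisr queues already behave: a full queue
drops the packet and bumps a counter (see the net.isr.defaultqlimit
tunable). A simplified sketch with stand-in names, not the real
netisr(9) interface:

    /*
     * Sketch only: one shared, bounded ingress queue at the
     * Ethernet layer.  Names are simplified stand-ins, not the
     * real netisr(9) API.
     */
    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/mbuf.h>

    struct ingress_q {
            struct mbuf     *head, *tail;
            u_int            len;        /* current depth */
            u_int            qlimit;     /* one policy for all drivers */
            u_long           qdrops;
    };

    /* Enqueue toward the stack; drop (and free) once over the limit. */
    static int
    ingress_enqueue(struct ingress_q *q, struct mbuf *m)
    {
            if (q->len >= q->qlimit) {
                    q->qdrops++;
                    m_freem(m);          /* drop here, not in N drivers */
                    return (ENOBUFS);
            }
            m->m_nextpkt = NULL;
            if (q->tail != NULL)
                    q->tail->m_nextpkt = m;
            else
                    q->head = m;
            q->tail = m;
            q->len++;
            return (0);
    }

Either way, the drop decision has to be cheap on the hot path; where it
lives is the actual disagreement in this thread.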

