Date:      Fri, 10 May 2013 19:56:53 +0700
From:      Eugene Grosbein <egrosbein@rdtc.ru>
To:        Barney Cordoba <barney_cordoba@yahoo.com>
Cc:        freebsd-net@freebsd.org, "Clément Hermann (nodens)" <nodens2099@gmail.com>
Subject:   Re: High CPU interrupt load on intel I350T4 with igb on 8.3
Message-ID:  <518CEE95.7020702@rdtc.ru>
In-Reply-To: <1368137807.20874.YahooMailClassic@web121603.mail.ne1.yahoo.com>
References:  <1368137807.20874.YahooMailClassic@web121603.mail.ne1.yahoo.com>

On 10.05.2013 05:16, Barney Cordoba wrote:

>>>> Network device driver is not guilty here, that's
>> just pf's
>>>> contention
>>>> running in igb's context.
>>>
>>> They're both at play. Single threadedness aggravates
>> subsystems that 
>>> have too many lock points.
>>>
>>> It can also be "solved" with using 1 queue, because
>> then you don't
>>> have 4 queues going into a single thread.
>>
>> Again, the problem is within pf(4)'s global lock, not in the
>> igb(4).
>>
> 
> Again, you're wrong. It's not the bottleneck's fault; it's the fault of 
> the multi-threaded code for only working properly when there are no
> bottlenecks.

In practice, the problem is easily solved without any change to the igb code.
The same problem would occur with other NIC drivers too - for example,
if several NICs were combined within one lagg(4). So the driver is not at fault,
and the solution is the same: eliminate the bottleneck and you will be able
to spread the load across several CPU cores.

Therefore, I don't care about CS theory for this particular case.

Eugene Grosbein
