From owner-freebsd-net@FreeBSD.ORG Fri May 10 12:57:06 2013
Message-ID: <518CEE95.7020702@rdtc.ru>
Date: Fri, 10 May 2013 19:56:53 +0700
From: Eugene Grosbein <egrosbein@rdtc.ru>
To: Barney Cordoba
Cc: freebsd-net@freebsd.org, "Clément Hermann (nodens)"
Subject: Re: High CPU interrupt load on intel I350T4 with igb on 8.3
In-Reply-To: <1368137807.20874.YahooMailClassic@web121603.mail.ne1.yahoo.com>
List-Id: Networking and TCP/IP with FreeBSD

On 10.05.2013 05:16, Barney Cordoba wrote:

>>>> The network device driver is not guilty here; that's just pf's
>>>> contention running in igb's context.
>>>
>>> They're both at play. Single-threadedness aggravates subsystems that
>>> have too many lock points.
>>>
>>> It can also be "solved" by using 1 queue, because then you don't
>>> have 4 queues going into a single thread.
>>
>> Again, the problem is within pf(4)'s global lock, not in igb(4).
>
> Again, you're wrong. It's not the bottleneck's fault; it's the fault of
> the multi-threaded code for only working properly when there are no
> bottlenecks.

In practice, the problem is easily solved without any change to the igb
code. The same problem would occur with other NIC drivers too, for
example if several NICs were combined within one lagg(4). So the driver
is not at fault, and the solution would be the same: eliminate the
bottleneck and you will be able to spread the load across several CPU
cores. Therefore, I don't care about CS theory for this particular case.

Eugene Grosbein
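
P.S. To make the contention pattern concrete: with a single global lock
in pf(4), every igb(4) queue thread serializes on that one lock, so
extra queues add contention rather than throughput. Here is a minimal
userland sketch of that pattern. It is not kernel code, and the names
(NQUEUES, global_lock, queue_loop, process_packet) are invented for
illustration only; build with "cc -o lock lock.c -lpthread".

/*
 * Four "queue" threads all funneling through one global lock,
 * modeling pf's single lock taken from each igb queue's context.
 */
#include <pthread.h>
#include <stdio.h>

#define NQUEUES  4
#define NPACKETS 1000000

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long processed;

/* Stand-in for pf's per-packet work done under its global lock. */
static void
process_packet(void)
{
	processed++;
}

static void *
queue_loop(void *arg)
{
	int i;

	for (i = 0; i < NPACKETS; i++) {
		/* Every queue serializes here: 4 threads, 1 lock. */
		pthread_mutex_lock(&global_lock);
		process_packet();
		pthread_mutex_unlock(&global_lock);
	}
	return (NULL);
}

int
main(void)
{
	pthread_t tid[NQUEUES];
	int i;

	for (i = 0; i < NQUEUES; i++)
		pthread_create(&tid[i], NULL, queue_loop, NULL);
	for (i = 0; i < NQUEUES; i++)
		pthread_join(tid[i], NULL);
	printf("processed %lu packets\n", processed);
	return (0);
}

No matter how many queue threads you start, the work under global_lock
proceeds one packet at a time; that is the bottleneck, not the driver.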
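
And for reference, the single-queue workaround mentioned in the quoted
text would look roughly like this in /boot/loader.conf. This is a
sketch that assumes the igb(4) of that era honors the hw.igb.num_queues
loader tunable; verify the exact tunable name for your driver version
before relying on it.

# /boot/loader.conf
# Force a single RX/TX queue per igb(4) port, so packets reach pf's
# global lock from one thread instead of four contending ones.
hw.igb.num_queues=1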