From: Sergey Saley <sergeysaley@gmail.com>
To: freebsd-net@freebsd.org
Date: Tue, 25 Oct 2011 01:21:17 -0700 (PDT)
Message-ID: <1319530877390-4935427.post@n5.nabble.com>
Subject: Re: Too much interrupts on ixgbe
List-Id: Networking and TCP/IP with FreeBSD

Jack Vogel wrote:
>
> On Tue, Oct 25, 2011 at 12:22 AM, Sergey Saley <sergeysaley@> wrote:
>
>> Ryan Stone-2 wrote:
>> >
>> > On Mon, Oct 24, 2011 at 3:51 PM, Sergey Saley <sergeysaley@> wrote:
>> >> MPD5, netgraph, pppoe. Types of traffic - any (customer traffic).
>> >> Buying this card I counted on 3-4G of traffic at 3-4K pppoe sessions.
>> >> It turned out to be 600-700Mbit/s, about 50K pps at 700-800 pppoe sessions.
>> >
>> > PPPoE is your problem. The Intel cards can't load-balance PPPoE
>> > traffic, so everything goes to one queue.
>> > It may be possible to write
>> > a netgraph module to load-balance the traffic across your CPUs.
>> >
>>
>> OK, thank you for the explanation.
>> And what about the large number of interrupts?
>> As for me, it's too much...
>>
>> irq256: ix0:que 0   240536944   6132
>> irq257: ix0:que 1    89090444   2271
>> irq258: ix0:que 2    93222085   2376
>> irq259: ix0:que 3    89435179   2280
>> irq260: ix0:link            1      0
>> irq261: ix1:que 0   269468769   6870
>> irq262: ix1:que 1      110974      2
>> irq263: ix1:que 2      434214     11
>> irq264: ix1:que 3      112281      2
>> irq265: ix1:link            1      0
>>
>
> How do you decide it's 'too much'? It may be that with your traffic you
> end up not being able to use offloads, just thinking. It's not like the
> hardware just "makes it up"; it interrupts on the last descriptor of a
> packet which has the RS bit set.
> With TSO you will get larger chunks of data and thus fewer interrupts,
> but your traffic probably doesn't qualify for it.
>

It's easy. I have several servers with a similar task and load.
About 30K pps, about 500-600M traffic, about 600-700 pppoe connections.
One difference - em cards. Here is a typical vmstat -i:

point06# vmstat -i
interrupt                          total       rate
irq17: atapci0                   6173367          0
cpu0: timer                   3904389748        465
irq256: em0                   3754877950        447
irq257: em1                   2962728160        352
cpu2: timer                   3904389720        465
cpu1: timer                   3904389720        465
cpu3: timer                   3904389721        465
Total                        22341338386       2661

point05# vmstat -i
interrupt                          total       rate
irq14: ata0                           35          0
irq19: atapci1                   8323568          0
cpu0: timer                   3905440143        465
irq256: em0                   3870403571        461
irq257: em1                   1541695487        183
cpu1: timer                   3905439895        465
cpu3: timer                   3905439895        465
cpu2: timer                   3905439895        465
Total                        21042182489       2506

point04# vmstat -i
interrupt                          total       rate
irq19: atapci0                   6047874          0
cpu0: timer                   3901683760        464
irq256: em0                    823774953         98
irq257: em1                   1340659093        159
cpu1: timer                   3901683730        464
cpu2: timer                   3901683730        464
cpu3: timer                   3901683730        464
Total                        17777216870       2117

--
View this message in context: http://freebsd.1045724.n5.nabble.com/Too-much-interrupts-on-ixgbe-tp4931883p4935427.html
Sent from the freebsd-net mailing list archive at Nabble.com.