From: Alexander Motin <mav@FreeBSD.org>
Date: Tue, 28 Oct 2008 17:12:09 +0200
To: Bartosz Giza
Cc: freebsd-net@freebsd.org
Subject: Re: two NIC on 2 core system (scheduling problem)

Bartosz Giza wrote:
> On another router based on the same hardware and software I have something
> like this:
>
>   10 root  1  171 ki31  0K  8K RUN   1 235.4H 78.66% idle: cpu1
>   11 root  1  171 ki31  0K  8K RUN   0 185.2H 72.12% idle: cpu0
>   20 root  1  -68  -    0K  8K -     0  48.7H 23.00% em0 taskq
>   23 root  1  -68  -    0K  8K WAIT  0  19.2H  9.67% irq16: fxp1
>   21 root  1  -68  -    0K  8K WAIT  1  28.2H  8.01% irq17: bge0
>
> I don't know why on this router the system balances over two cores.
> One difference is that on this router I have another fxp card (3 in total).

In the verbose boot messages the system shows that the different IRQs are
assigned to the different APICs in round-robin fashion, so I assume this
IRQ->CPU mapping is static. em0's taskqueue, on the other hand, is able to
migrate between CPUs like any regular process.

> Another question is why em0 taskq is eating so much CPU. The bge interface
> is actually the one that pushes 2 times more packets than em0, yet it uses
> about half the CPU compared to em0. Isn't that strange? Could someone tell
> me why this is happening? Is bge faster, or maybe I can tune something?

The CPU time you see there includes much more than just the handling of the
card itself. It also includes the CPU time of most parts of the network
stack used to process each received packet. So if you have NAT, a big
firewall, netgraph, or any other CPU-hungry processing applied to packets
arriving via em0, you will see such results. Even more interesting: if the
bge0 or fxp0 cards require a lot of CPU time to send a packet, that time
will also be accounted to the em0 process. :)

-- 
Alexander Motin