From: Petri Helenius <pete@he.iki.fi>
To: freebsd-net@freebsd.org
Date: Wed, 13 Nov 2002 19:52:37 +0200
Subject: em0 under CURRENT

Just for the sake of it, I tried whether the performance of em would be different under -CURRENT, and it is. Initially I had:

options         INVARIANTS              # Enable calls of extra sanity checking
options         INVARIANT_SUPPORT       # Extra sanity checks of internal structures, required by INVARIANTS
options         WITNESS                 # Enable checks to detect deadlocks and cycles
options         WITNESS_SKIPSPIN        # Don't run witness on spinlocks for speed

and the performance was not much better than 100 Mbps Ethernet. Dropping these options raised it to roughly 300 Mbps, while 4.7-STABLE gives twice that using the same application. The machine is a dual 2.4 GHz P4.

At 300 Mbps one CPU seems to spend almost all of its time in interrupt context, while the work on the other CPU is waiting for *Giant. Are all network drivers still under Giant on 5.0? Are there any other parameters I should tune? Increase the allowed number of mbuf clusters?

mbuf usage:
        GEN list:       0/0 (in use/in pool)
        CPU #0 list:    232/528 (in use/in pool)
        CPU #1 list:    282/416 (in use/in pool)
        Total:          514/944 (in use/in pool)
        Maximum number allowed on each CPU list: 512
        Maximum possible: 51200
        Allocated mbuf types:
          514 mbufs allocated to data
        1% of mbuf map consumed
mbuf cluster usage:
        GEN list:       1/30 (in use/in pool)
        CPU #0 list:    205/248 (in use/in pool)
        CPU #1 list:    306/392 (in use/in pool)
        Total:          512/670 (in use/in pool)
        Maximum number allowed on each CPU list: 128
        Maximum possible: 25600
        2% of cluster map consumed
1576 KBytes of wired memory reserved (73% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

Pete
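
P.S. If the cluster limit does turn out to be the constraint, this is roughly what I would try first. Only a sketch: the value is a guess and I have not verified it on this box.

    # /boot/loader.conf -- raise the mbuf cluster pool at boot
    # (kern.ipc.nmbclusters is a boot-time tunable on 5.0)
    kern.ipc.nmbclusters="32768"

    # after reboot, check the new limit and watch usage under load with:
    #   sysctl kern.ipc.nmbclusters
    #   netstat -m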