Date: Wed, 7 Oct 2009 10:01:19 +0100 (BST)
From: Robert Watson <rwatson@FreeBSD.org>
To: rihad
Cc: freebsd-net@freebsd.org, Eugene Grosbein, Luigi Rizzo, Julian Elischer
Subject: Re: dummynet dropping too many packets
List-Id: Networking and TCP/IP with FreeBSD <freebsd-net@freebsd.org>

On Wed, 7 Oct 2009, rihad wrote:

> rihad wrote:
>> I've yet to test how this direct=0 improves extensive dummynet drops.
>
> Ooops... After a couple of minutes, suddenly:
>
> net.inet.ip.intr_queue_drops: 1284
>
> Bumped it up a bit.
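[Editor's note: the counter quoted above is the net.inet.ip.intr_queue_drops sysctl; the depth of the queue it reports on is controlled by net.inet.ip.intr_queue_maxlen. A minimal sketch of inspecting and "bumping up" that limit, assuming a FreeBSD host of this era (7.x/8.x) -- names and defaults should be verified against your release:]

```shell
# Drop counter for the IP input (netisr) queue -- a steadily rising
# value means packets are being discarded before the netisr thread
# can drain the queue.
sysctl net.inet.ip.intr_queue_drops

# Current queue depth limit (historically IFQ_MAXLEN, i.e. 50 packets).
sysctl net.inet.ip.intr_queue_maxlen

# Raise the limit, e.g. to 2048 packets (the value is illustrative).
sysctl net.inet.ip.intr_queue_maxlen=2048

# To persist across reboots, add the same line to /etc/sysctl.conf:
#   net.inet.ip.intr_queue_maxlen=2048
```

[Note that a larger queue only buys headroom; if the netisr thread is CPU-bound, drops will resume once the deeper queue fills, which is the point Robert makes below.]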
Yes, I was going to suggest that moving to deferred dispatch has probably just moved the drops to a new spot: the queue between the ithreads and the netisr thread. How many network interfaces are in use in your setup, and which drivers?

If what's happening is that you're maxing out a CPU, then moving to multiple netisr threads might help, provided your card can generate flow IDs -- most lower-end cards can't. I have patches to generate flow IDs in software rather than hardware, but there are downsides to doing so, not least that it takes the cache line misses on the packet that generally make up much of the cost of processing it. My experience with most reasonable cards, though, is that on current systems letting the card do the work distribution with RSS and using multiple ithreads performs better than software work distribution.

Someone has probably asked for this already, but could you send a snapshot of the top -SH output in the steady state? Let top run for a few minutes, then copy/paste the first 10-20 lines into an e-mail.

Robert N M Watson
Computer Laboratory
University of Cambridge
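[Editor's note: the steady-state snapshot Robert asks for can be captured non-interactively along these lines -- flags are per FreeBSD top(1) and should be checked on the system in question:]

```shell
# Snapshot of system threads for posting to the list.
# -S: include system processes, -H: show each thread on its own line,
# -b: batch (non-interactive) mode, suitable for copy/paste.
# Let the box run under load for a few minutes first so the WCPU
# averages settle, then take the first ~20 lines.
top -SHb | head -n 20
```

[In the output, the lines of interest are the netisr and interface-ithread kernel threads: a netisr thread pinned near 100% WCPU would confirm the single-CPU bottleneck discussed above.]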