Date: Wed, 12 Oct 2005 13:33:06 +0100 (BST)
From: Robert Watson <rwatson@FreeBSD.org>
To: gnn@freebsd.org
Cc: performance@freebsd.org, net@freebsd.org
Subject: Re: Call for performance evaluation: net.isr.direct

On Wed, 12 Oct 2005, gnn@freebsd.org wrote:

> At Tue, 11 Oct 2005 15:01:11 +0100 (BST),
> rwatson wrote:
>> If I don't hear anything back in the near future, I will commit a
>> change to 7.x to make direct dispatch the default, in order to let a
>> broader community do the testing. :-)  If you are set up to easily
>> test stability and performance relating to direct dispatch, I would
>> appreciate any help.
>
> One thing I would caution, though I have no proof nor have I made any
> tests (yes, I know, bad gnn), is that I would expect this change to
> degrade non-network performance when the network is under load.  This
> kind of change is most likely to help those with purely network loads,
> i.e. routers, bridges, etc., and to hurt everyone else.  Are you
> absolutely sure we should make this the default?

In theory, as I mentioned in my earlier e-mail, this does result in more
network processing occurring at hardware ithread priority.  However, the
software ithread (swi) priority is already quite high.  A closer look at
that is probably called for -- specifically, how will this change affect
scheduling for other hardware (rather than software) ithreads?

The most interesting effect I've seen on non-network applications is
that, because the network stack now uses significantly less CPU under
high load, more CPU is available for other activities.  With the network
hardware found in servers now often able to exceed the CPU capacity of
those servers (as compared to a few years ago, when 100mbps cards could
be trivially saturated by server hardware), the cost of processing
packets matters again, so this situation arises with relative ease.

Another interesting point is that remote traffic can no longer cause a
denial of service against local traffic by overflowing the netisr queue.
Previously, a single queue was shared by all network interfaces handing
packets to the netisr; in the direct dispatch model, queueing now happens
almost entirely in the device driver, and packets skip the intermediate
queue on their way into the stack.
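To make the two models concrete, a minimal self-contained sketch follows.
It is illustrative only, not the actual sys/net/netisr.c code; every
identifier in it (dispatch(), proto_input(), QUEUE_LIMIT, direct_dispatch)
is made up for the example, and in the real kernel the toggle being
discussed is the net.isr.direct sysctl.

/*
 * Simplified, self-contained sketch of the direct-dispatch decision.
 * This is NOT the real netisr code; all names are illustrative.  The
 * point is the control flow: with direct dispatch enabled, the
 * interrupt path calls the protocol handler immediately; otherwise the
 * packet goes onto a single shared, bounded queue to be drained later
 * by a software interrupt (swi) thread, and overflows are dropped.
 */
#include <stdio.h>

#define QUEUE_LIMIT     4               /* stand-in for the netisr queue depth */

struct pkt { int id; };                 /* stand-in for struct mbuf */

static int direct_dispatch = 1;         /* stand-in for net.isr.direct */
static struct pkt queue[QUEUE_LIMIT];
static int queued;

static void
proto_input(struct pkt *p)
{
        /* Stand-in for the protocol's input routine (e.g. IP input). */
        printf("processed packet %d\n", p->id);
}

static void
dispatch(struct pkt *p)
{
        if (direct_dispatch) {
                /* Run the stack directly in the caller's context. */
                proto_input(p);
        } else if (queued < QUEUE_LIMIT) {
                /* Defer: enqueue for the swi thread to pick up later. */
                queue[queued++] = *p;
        } else {
                /* Shared queue overflow: the packet is simply dropped. */
                printf("dropped packet %d\n", p->id);
        }
}

static void
swi_drain(void)
{
        /* What the netisr swi thread would do when it runs. */
        for (int i = 0; i < queued; i++)
                proto_input(&queue[i]);
        queued = 0;
}

int
main(void)
{
        struct pkt p;

        direct_dispatch = 0;            /* queued model */
        for (p.id = 0; p.id < 6; p.id++)
                dispatch(&p);           /* packets 4 and 5 overflow */
        swi_drain();

        direct_dispatch = 1;            /* direct dispatch: no queue involved */
        for (p.id = 6; p.id < 9; p.id++)
                dispatch(&p);
        return (0);
}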
This has some other interesting effects, not least that older cards with
less on-board buffering now effectively see much less queue space, though
I'm not sure how significant that is.

In general, though, I agree with your point: we need to evaluate the
effect of this change on a broad array of real-world workloads.  Hence my
e-mail, which has so far seen two responses -- a private one from Mike
Tancsa offering to run testing, and your public one.  So anyone willing to
help evaluate the performance of this change would be most welcome to do
so.

Robert N M Watson