From owner-freebsd-net@FreeBSD.ORG Sat Mar 10 03:18:06 2007
Date: Sat, 10 Mar 2007 14:18:02 +1100 (EST)
From: Bruce Evans <bde@zeta.org.au>
To: Dave Baukus
Cc: net@FreeBSD.org
Subject: Re: netisr_direct
In-Reply-To: <45F08F1D.5080708@us.fujitsu.com>
Message-ID: <20070310135211.R9179@besplex.bde.org>

On Thu, 8 Mar 2007, Dave Baukus wrote:

> What's the word on netisr_direct?
> Do people typically enable this feature?

I always enable it, but have never measured it doing anything useful.

Under light loads, it should reduce network latency and overhead by a
microsecond or two: whatever it takes to do 2 context switches
(hopefully the bug that made it take 4 context switches is fixed).
But the total latency is still much larger than a microsecond or two
(50 us is unusually good and hundreds of us are common), so the saved
overhead doesn't matter.

Under heavy loads, not using it is potentially better, since queued
dispatch lets the queues grow longer, and longer queues get processed
more efficiently in bursts.  However, I think there is no explicit
management of queue lengths or latencies now, so machines that are too
fast probably gain from direct dispatch where possible: with indirect
dispatch they would do the context switches to and from the netisr
fast enough to keep the queue lengths usually <= 1.
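
To make the difference concrete, here is a simplified sketch of the
dispatch decision.  This is illustrative only: the real code is in
sys/net/netisr.c and differs in locking and detail (and the actual
netisr_dispatch() takes a netisr number, not a handler pointer; the
function and queue names here are made up).

	#include <sys/param.h>
	#include <sys/mbuf.h>
	#include <net/if.h>
	#include <net/if_var.h>		/* struct ifqueue, IF_HANDOFF() */
	#include <net/netisr.h>		/* schednetisr(), NETISR_IP */

	extern int	netisr_direct;	/* the net.isr.direct knob */

	typedef void	netisr_t(struct mbuf *);

	static void
	dispatch_sketch(netisr_t *handler, struct ifqueue *q,
	    struct mbuf *m)
	{

		if (netisr_direct) {
			/*
			 * Direct dispatch: run the protocol handler in
			 * the caller's context (usually the ithread).
			 * No context switches, but the queue never
			 * grows, so no batching either.
			 */
			(*handler)(m);
		} else {
			/*
			 * Queued dispatch: enqueue the packet and wake
			 * the netisr swi thread, which later drains
			 * whatever has accumulated in one burst.
			 */
			if (!IF_HANDOFF(q, m, NULL))
				return;	/* queue full; mbuf was freed */
			schednetisr(NETISR_IP);
		}
	}

If you want to experiment with it, the knob is just a sysctl, so (if
I remember the name right) it can be flipped at runtime with
"sysctl net.isr.direct=1" or set persistently in /etc/sysctl.conf.

Bruce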