From owner-freebsd-net@FreeBSD.ORG Sat Sep 14 14:29:23 2013
Date: Sat, 14 Sep 2013 16:25:26 +0200
From: Luigi Rizzo <luigi@onelab2.iet.unipi.it>
To: George Neville-Neil
Cc: "Alexander V. Chernikov", Adrian Chadd, Andre Oppermann,
    freebsd-hackers@freebsd.org, freebsd-arch@freebsd.org,
    Luigi Rizzo, "Andrey V. Elsukov", FreeBSD Net
Subject: Re: Network stack changes
Message-ID: <20130914142526.GB71010@onelab2.iet.unipi.it>
In-Reply-To: <6BDA4619-783C-433E-9819-A7EAA0BD3299@neville-neil.com>

On Fri, Sep 13, 2013 at 11:08:27AM -0400, George Neville-Neil wrote:
>
> On Aug 29, 2013, at 7:49 , Adrian Chadd wrote:
...
> One quick note here. Every time you increase batching you may increase
> bandwidth, but you will also increase per-packet latency for the last
> packet in a batch.

The packets that suffer are actually the first ones in the batch,
because their processing is delayed in order to 1) let the input batch
build up, and 2) finish processing the whole batch before pushing the
results to the next stage.

However, one should never wait for an input batch to grow: you process
whatever your source gives you (one or more packets) whenever you are
ready, and if you are slow or overloaded you will of course find a
large backlog waiting all at once. Either way, there is no reason to
introduce additional delay on input.

cheers
luigi
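
To make the rule above concrete, here is a minimal sketch (not from the
original mail) of a receive loop that drains whatever the source
currently holds instead of sleeping to let a batch fill. It is only an
illustration in C under assumed interfaces: poll_source(),
next_packet(), handle_packet() and push_downstream() are hypothetical
placeholders for the driver and stack hooks, not existing FreeBSD or
netmap APIs.

/*
 * Sketch of the "never wait for input" rule: take exactly the packets
 * that are already available, process them, and forward the results
 * without adding any artificial delay on the input side.
 * All four external functions below are hypothetical placeholders.
 */
#include <stddef.h>

struct packet;                            /* opaque packet descriptor */

extern int  poll_source(void);            /* block until >= 1 packet is ready */
extern struct packet *next_packet(void);  /* NULL once the backlog is drained */
extern void handle_packet(struct packet *p);
extern void push_downstream(void);        /* hand results to the next stage */

void
rx_loop(void)
{
	for (;;) {
		poll_source();            /* wake up as soon as anything arrives */

		size_t batch = 0;
		struct packet *p;

		/*
		 * Take whatever is already there: one packet under light
		 * load, a large backlog if we were slow.  Never sleep here
		 * hoping that more packets show up to "fill" the batch.
		 */
		while ((p = next_packet()) != NULL) {
			handle_packet(p);
			batch++;
		}

		if (batch > 0)
			push_downstream();  /* forward without extra delay */
	}
}

Under light load the batch size naturally degenerates to one packet and
latency stays low; under overload the batch grows by itself from the
backlog, which is where the bandwidth benefit of batching comes from.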