Date: Mon, 7 Jul 2008 22:56:19 +1000 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Robert Watson
Cc: FreeBSD Net, Andre Oppermann, Ingo Flaschberger, Paul
Subject: Re: Freebsd IP Forwarding performance (question, and some info) [7-stable, current, em, smp]

On Mon, 7 Jul 2008, Robert Watson wrote:

> Since you're doing fine-grained performance measurements of a code path
> that interests me a lot, could you compare the cost per-send on UDP for
> the following four cases:
>
> (1) sendto() to a specific address and port on a socket that has been
>     bound to INADDR_ANY and a specific port.
>
> (2) sendto() to a specific address and port on a socket that has been
>     bound to a specific IP address (not INADDR_ANY) and a specific port.
>
> (3) send() on a socket that has been connect()'d to a specific IP
>     address and a specific port, and bound to INADDR_ANY and a specific
>     port.
>
> (4) send() on a socket that has been connect()'d to a specific IP
>     address and a specific port, and bound to a specific IP address
>     (not INADDR_ANY) and a specific port.
>
> The last of these should really be quite a bit faster than the first of
> these, but I'd be interested in seeing specific measurements for each
> if that's possible!

Not sure if I understand networking well enough to set these up quickly.
Does netrate use one of (3) or (4) now?  I can tell you vaguely about old
results for netrate (send()) vs ttcp (sendto()).  send() is lighter weight
of course, and this made a difference of 10-20%, but after further tuning
the difference became smaller, which suggests that everything ends up
waiting for something in common.
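To spell out the four setups above, a minimal sketch (the addresses,
ports and payload size here are made up, and the timing loop and error
handling are left out):

/*
 * Hypothetical setup for the four cases.  Only the bind() address and
 * the choice of sendto() vs. connect()+send() differ between them.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

static char payload[32];

int
main(void)
{
	struct sockaddr_in local, remote;
	int s;

	memset(&local, 0, sizeof(local));
	local.sin_family = AF_INET;
	local.sin_len = sizeof(local);
	local.sin_port = htons(7777);
	/* Cases (1) and (3): INADDR_ANY; cases (2) and (4): a real address. */
	local.sin_addr.s_addr = htonl(INADDR_ANY);

	memset(&remote, 0, sizeof(remote));
	remote.sin_family = AF_INET;
	remote.sin_len = sizeof(remote);
	remote.sin_port = htons(9999);
	inet_pton(AF_INET, "10.0.0.2", &remote.sin_addr);

	/* Cases (1) and (2): bind, then sendto() the destination each time. */
	s = socket(PF_INET, SOCK_DGRAM, 0);
	bind(s, (struct sockaddr *)&local, sizeof(local));
	sendto(s, payload, sizeof(payload), 0,
	    (struct sockaddr *)&remote, sizeof(remote));
	close(s);

	/* Cases (3) and (4): bind, connect() once, then plain send(). */
	s = socket(PF_INET, SOCK_DGRAM, 0);
	bind(s, (struct sockaddr *)&local, sizeof(local));
	connect(s, (struct sockaddr *)&remote, sizeof(remote));
	send(s, payload, sizeof(payload), 0);
	close(s);

	return (0);
}

Presumably the connected, specifically-bound case is cheapest because
each send can skip the temporary per-packet connect and source-address
selection; that is the difference the four measurements would isolate.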
Now I can measure cache misses better, and I hope that a simple count of
cache misses will be a more reproducible indicator of significant
bottlenecks than pps.  I got nowhere trying to reduce instruction counts,
possibly because it would take avoiding hundreds of instructions to get
the same benefit as avoiding a single cache miss.

Bruce