From owner-freebsd-net@FreeBSD.ORG Wed Dec 5 02:22:44 2007
Date: Wed, 5 Dec 2007 02:22:35 +0000 (GMT)
From: Robert Watson <rwatson@FreeBSD.org>
To: Peter Losher
Cc: freebsd-net@freebsd.org
In-Reply-To: <4755EFDD.8070609@isc.org>
References: <4755EFDD.8070609@isc.org>
Message-ID: <20071205021851.V87930@fledge.watson.org>
Subject: Re: Aggregating many ports into one for tcpdump server.
List-Id: Networking and TCP/IP with FreeBSD

On Tue, 4 Dec 2007, Peter Losher wrote:

> I am currently working on a tcpdump collector where we have multiple feeds
> coming in (via bge{0-8}). Since tcpdump can only poll one interface per
> process, I was hoping to aggregate the traffic onto one pseudo-interface
> for tcpdump to hold onto and poll.
>
> Looking thru the archives, it seems ng_one2many (in this case 'many2one')
> is what I am looking for. Am I barking up the right tree here?

Depending on the configuration of the system (number of interfaces, number of
CPUs, etc.), you may find that running many tcpdump sessions results in
greater throughput due to making better use of parallelism. For example, if
you have eight cores and four interfaces, then you can end up running with
one ithread and one tcpdump session, each on their own CPU, per interface.
Of course, if you have many more interfaces than CPUs (or CPU pairs), then
you just end up with much more context-switching, which will hurt
performance.

BTW, if you find you're getting packet loss in BPF processing at high rates,
we should have you try the zero-copy BPF patches.

Finally, another configuration you might consider is a single 10gbps card
configured as a vlan trunk, attached to a switch serving the various vlans to
various switch ports. I'm not sure whether that will be faster or slower, but
it would be different. :-)

Robert N M Watson
Computer Laboratory
University of Cambridge
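
For reference, a minimal sketch of the ng_one2many plumbing Peter asks about,
adapted from the examples in the ng_one2many(4) manual page. The bge interface
names, the choice of bge0 as the "one" side, and the three-link count are
assumptions for illustration; the setconfig values follow the manual page
(round-robin transmit, manual link-failure handling). Frames received on any
of the "many" links are delivered up through bge0, so a single tcpdump on bge0
sees the aggregate.

    # Load the required netgraph modules if they are not compiled in.
    kldload ng_ether ng_one2many

    # Attach a one2many node to bge0's upper hook; bge0 becomes the single
    # interface tcpdump listens on.
    ngctl mkpeer bge0: one2many upper one
    ngctl connect bge0: bge0:upper lower many0

    # Connect the additional capture interfaces as further "many" links.
    ngctl connect bge1: bge0:upper lower many1
    ngctl connect bge2: bge0:upper lower many2

    # Let the extra interfaces receive frames not addressed to them.
    ngctl msg bge1: setpromisc 1
    ngctl msg bge1: setautosrc 0
    ngctl msg bge2: setpromisc 1
    ngctl msg bge2: setautosrc 0

    # Enable the three links on the one2many node.
    ngctl msg bge0:upper setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 1 ] }"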
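
The multiple-session approach Robert suggests can be as simple as one tcpdump
per interface, each writing its own capture file; the interface names and the
capture directory below are assumptions, and process/ithread placement is left
to the scheduler.

    # One capture process per interface, each writing its own file.
    mkdir -p /var/captures
    for ifn in bge0 bge1 bge2 bge3; do
        tcpdump -i ${ifn} -n -w /var/captures/${ifn}.pcap &
    done
    wait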
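
And a sketch of the vlan-trunk alternative, assuming a hypothetical 10GbE
parent interface ix0 and VLAN tag 10: with everything trunked to one physical
port, a single tcpdump on the parent captures all vlans, and tcpdump's vlan
filter keyword can split out a particular tag later.

    # Capture the whole trunk on the parent interface, keeping link headers.
    tcpdump -i ix0 -e -n -w /var/captures/trunk.pcap

    # Or create a per-vlan interface and capture just that vlan.
    ifconfig vlan10 create
    ifconfig vlan10 vlan 10 vlandev ix0
    tcpdump -i vlan10 -n -w /var/captures/vlan10.pcap

    # Filter one tag out of the trunk capture after the fact.
    tcpdump -n -r /var/captures/trunk.pcap vlan 10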