From owner-freebsd-hackers@FreeBSD.ORG Fri Jun 18 16:26:11 2004
From: Ed Maste <emaste@sandvine.com>
To: freebsd-hackers@freebsd.org
Date: Fri, 18 Jun 2004 11:39:52 -0400
Subject: RE: memory mapped packet capturing - bpf replacement ?
List-Id: Technical Discussions relating to FreeBSD

> A bit off-topic - what traffic generator do you use?

It's an in-house project somewhat similar to Emulab. The traffic itself
is generated by a small number of FreeBSD boxes.

> > In my testing I found the call to microtime() to be quite
> > expensive. (It will vary depending on which timecounter is
> > being used.)

> I haven't added the timestamp to the header yet, so what would you
> recommend using?

The problem is that microtime() queries the timecounter on each use, and
in my SMP case the timecounter used is the 8254. Accessing it takes a
(relatively) long time. If you need accurate timestamps on received
packets you're pretty much stuck with the overhead. There's another
call, getmicrotime(), which can "return a less precise, but faster to
obtain, time." I added a BPF ioctl (to our local tree) to turn off
timestamps completely if they're not needed.
> > Is this in a SMP or uniprocessor environment? I think your gain
> > from a ringbuffer interface will be more significant in the SMP
> > case.

> I'm going to test it much more on both SMP and UP machines.

All of my testing was done on SMP, where I think the benefit from
removing the read call will be greater than on UP.

> This way, you intercept all Ethernet traffic through ng_hub. Then,
> ng_bpf does BPF filtering, if any. If no filtering is needed, the
> ng_bpf node may be omitted. And, at last, ng_mmq does queuing.

Would connecting ng_mmq directly to the ng_ether lower hook provide a
useful datapoint?

> > Are you using the same snap length (or copying the entire packet)
> > in each case?

> Hmm, not sure what you mean here.
> I am copying the whole mbuf chain the same way BPF does. The mbuf
> chain comes from the hook, and it can arrive at the hook from
> whatever source.

If you just run e.g. tcpdump, by default only the first bit of the
packet is actually copied. But as BMS pointed out, you've set the
snaplen to 32K so that won't be an issue here.