From owner-freebsd-hackers@FreeBSD.ORG Wed Jun 16 14:40:15 2004
Date: Wed, 16 Jun 2004 17:38:48 +0300
From: Sergey Lyubka
To: freebsd-hackers@freebsd.org
Subject: Re: memory mapped packet capturing - bpf replacement ?
Message-ID: <20040616173848.A8939@oasis.uptsoft.com>
In-Reply-To: ; from emaste@sandvine.com on Mon, Jun 14, 2004 at 08:38:57AM -0400
X-OS: FreeBSD 4.5-STABLE

> Does the ng_hub cause the packet to be copied? If so you've
> still got the same number of copies as vanilla BPF.

ng_hub does copy packets, but that does not explain the test results.
The benchmark works like this:

1. connect the ng_mmq node to ng_hub
2. run the benchmark for mmq
3. open a pcap device (mmq node still connected)
4. run the benchmark for pcap (mmq node still connected)

So ng_mmq and ng_hub are both working during the pcap benchmark, and the
additional copies do not explain the difference. The strange thing is: why
does bpf, which does context switches, work more efficiently than grabbing
packets directly from a memory-mapped chunk? Did I overlook something
significant?

I was thinking that while the application spins awaiting data, the scheduler
may detach it from the CPU, and then the ring buffer may overflow. I
increased the priority to ridiculous values and increased the ring buffer
size to as much as 32 megabytes. The best I got was the same results as
pcap. Can anybody explain this?

Example test, moderate traffic generated by ping -f:

# ./benchmark rl0 /dev/mmq16 10000
desc    rcvd    dropped   seen    totlen      pps     time (sec)
mmq     10784   770       10000   13420000    10076   1.070
pcap    10016   0         10000   13420000    9093    1.102
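
For reference, here is a minimal sketch of the kind of busy-wait reader
described above. The shared ring layout (struct ring, SLOTS, SLOTSZ, the
head/tail convention) and the command-line handling are assumptions made
purely for illustration; the real ng_mmq node's userland interface is not
shown in this mail and may look quite different. The sketch only shows
where the preemption/overflow scenario bites: any time spent off the CPU
inside the spin loop lets the producer lap the ring.

/*
 * Sketch of a userland reader spinning on a memory-mapped packet ring
 * (layout assumed for illustration; not the actual ng_mmq interface).
 */
#include <sys/mman.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SLOTS   4096            /* number of packet slots (assumed) */
#define SLOTSZ  2048            /* bytes per slot (assumed) */

struct ring {                   /* hypothetical shared layout */
	volatile uint32_t head;     /* next slot the kernel will write */
	volatile uint32_t tail;     /* next slot the reader will consume */
	uint32_t len[SLOTS];        /* packet length per slot */
	unsigned char data[SLOTS][SLOTSZ];
};

int
main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/mmq16";
	int npkt = argc > 2 ? atoi(argv[2]) : 10000;
	long seen = 0, totlen = 0;
	struct ring *r;
	int fd;

	if ((fd = open(dev, O_RDWR)) == -1) {
		perror("open");
		return (1);
	}
	r = mmap(NULL, sizeof(*r), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (r == MAP_FAILED) {
		perror("mmap");
		return (1);
	}

	/*
	 * Busy-wait loop: spin until the kernel advances head past our
	 * tail, then consume one slot.  If the scheduler preempts us here
	 * and the producer laps the ring meanwhile, packets are lost --
	 * the overflow scenario discussed above.  Real code would also
	 * need proper memory barriers/atomics, omitted in this sketch.
	 */
	while (seen < npkt) {
		while (r->tail == r->head)
			;                       /* spin, burning the CPU */
		totlen += r->len[r->tail % SLOTS];
		r->tail++;                      /* assumed: reader owns tail */
		seen++;
	}
	printf("seen %ld packets, %ld bytes\n", seen, totlen);

	munmap(r, sizeof(*r));
	close(fd);
	return (0);
}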