Date:      Wed, 16 Jun 2004 17:38:48 +0300
From:      Sergey Lyubka <devnull@uptsoft.com>
To:        freebsd-hackers@freebsd.org
Subject:   Re: memory mapped packet capturing - bpf replacement ?
Message-ID:  <20040616173848.A8939@oasis.uptsoft.com>
In-Reply-To: <FE045D4D9F7AED4CBFF1B3B813C8533701BD40C7@mail.sandvine.com>; from emaste@sandvine.com on Mon, Jun 14, 2004 at 08:38:57AM -0400
References:  <FE045D4D9F7AED4CBFF1B3B813C8533701BD40C7@mail.sandvine.com>

> Does the ng_hub cause the packet to be copied?  If so you've 
> still got the same number of copies as vanilla BPF.

ng_hub does copy packets, but that does not explain the test results.
The benchmark works like this:

1. connect ng_mmq node to ng_hub
2. run benchmark for mmq
3. open pcap device   (mmq node still connected)
4. run benchmark for pcap (mmq node still connected)

So ng_mmq and ng_hub are both active during the pcap benchmark, and the
additional copies do not explain the difference.
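For reference, the hookup in steps 1 and 3 can be sketched with ngctl.
The "lower"/"link0" hook names come from ng_ether and ng_hub; the ng_mmq
hook name "input" is a guess, since that node is custom -- adjust to
whatever the real node exports:

```shell
# Load the hub node type and splice a hub into rl0's inbound path.
kldload ng_hub

# Attach a new ng_hub to the ethernet node's "lower" hook.
ngctl mkpeer rl0: hub lower link0
ngctl name rl0:lower pkthub

# Attach the (custom) ng_mmq node to another hub link.
# "input" is a hypothetical hook name for ng_mmq.
ngctl mkpeer pkthub: mmq link1 input
```

With this in place the pcap benchmark in step 4 sees the same ng_hub
copy overhead as the mmq benchmark, which is the point of the test.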

The strange thing is:
why does bpf, which requires context switches, work more efficiently
than grabbing packets directly from a memory-mapped chunk?

Did I overlook something significant?

I was thinking that while the application spins awaiting data, the scheduler
may detach it from the CPU, and the ring buffer may then overflow.
I raised the process priority to ridiculous values and increased the ring
buffer to as much as 32 megabytes. The best I got was the same results as pcap.
Can anybody explain this?

Example test, moderate traffic generated by ping -f:
# ./benchmark rl0 /dev/mmq16 10000
desc  rcvd       dropped    seen       totlen     pps        time (sec)
mmq   10784      770        10000      13420000   10076      1.070  
pcap  10016      0          10000      13420000   9093       1.102  


