Date:      Thu, 14 Dec 2000 10:39:23 -0500
From:      "Michael T. Stolarchuk" <mts@off.off.to>
To:        "Loris Degioanni" <loris@netgroup-serv.polito.it>
Cc:        "Michael T. Stolarchuk" <mts@off.off.to>, tcpdump-workers@tcpdump.org, ethereal-dev@ethereal.com, snort-devel@lists.sourceforge.net, freebsd-hackers@freebsd.org, tech@openbsd.org, mts@off.off.to
Subject:   Re: R: [tcpdump-workers] Re: R: [Ethereal-dev] Re: Fwd: kyxtech: freebsd outsniffed by wintendo !!?!? 
Message-ID:  <200012141539.eBEFdNH30483@off.off.to>
In-Reply-To: Your message of "Thu, 14 Dec 2000 11:57:41 +0100." <009801c065c0$a2bd1200$016464c8@lorix> 

In message <009801c065c0$a2bd1200$016464c8@lorix>, "Loris Degioanni" writes:
>
>-----Original Message-----
>From: Michael T. Stolarchuk <mts@off.off.to>
>To: Fulvio Risso <risso@polito.it>
>freebsd outsniffed by wintendo !!?!?

WRT: http://netgroup-serv.polito.it/winpcap/docs/performance.htm

>>
>> typical buffer sizes for bpf these days are still 32K.
>> One could then say that if you up the buffer sizes to (2) 512M
>> buffers, you'd get much better results, but the actual results
>> are kinda surprising... you may/may not get better performance...
>> by increasing the buffer size, you incur a longer kernel copy of
>> the buffer's contents out into user space.  In short bursts, the
>> performance may be better, but under long heavy loads, you could
>> get *more* packet loss...
>
>I think this is not a satisfactory explanation. I am not a FreeBSD guru
>but, as far as I know, bpfread is invoked during normal scheduling,
>while bpf_tap is called by the NIC driver, presumably at interrupt
>time. I am sure this is the situation in Windows. This means that
>the tap always has higher priority and is not influenced by the copy, so
>having huge buffers is not a problem, because the copy is always
>interrupted by the arrival of a new packet. Can anyone confirm/refute
>this behavior in FreeBSD?

Ah, but the buffer sizes are fixed, and when the second buffer
is full, packets are lost.  Yes, the tap runs at a higher priority
than the read side, but that alone doesn't guarantee you won't
see packet loss.

(BTW: I can confirm that behavior because I've had to work with it...
I'm familiar with these effects since I wrote the nfrd sniffing
and protocol decomposition stack.)
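
For the curious, here's roughly what the classic BPF double-buffer
rotation looks like -- a simplified sketch, not the actual bpf.c;
the struct and function names are illustrative:

#include <stddef.h>
#include <string.h>

struct bpf_buf {
    char   *data;
    size_t  len;    /* bytes currently used */
    size_t  size;   /* fixed capacity, e.g. 32K */
};

struct bpf_desc {
    struct bpf_buf *sbuf;   /* store: the tap writes here */
    struct bpf_buf *hbuf;   /* hold: read(2) drains this, or NULL */
    struct bpf_buf *fbuf;   /* free: the next store buffer, or NULL */
    unsigned long   drops;
};

/* Called from the tap for each packet that passes the filter. */
static void
catchpacket_sketch(struct bpf_desc *d, const char *pkt, size_t plen)
{
    struct bpf_buf *s = d->sbuf;

    if (s->len + plen > s->size) {
        if (d->fbuf == NULL) {
            /* The store buffer is full and the hold buffer
             * hasn't been read yet: the packet is dropped,
             * no matter how high the tap's priority is. */
            d->drops++;
            return;
        }
        /* Rotate: full store becomes hold, free becomes store. */
        d->hbuf = s;
        d->sbuf = s = d->fbuf;
        d->fbuf = NULL;
        /* (the real code wakes the sleeping reader here) */
    }
    memcpy(s->data + s->len, pkt, plen);  /* simplified: no bpf header */
    s->len += plen;
}

The point being: the reader has to return the hold buffer (refilling
fbuf) faster than the tap can fill the store buffer, or the drop
counter climbs.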

Or, to say it another way: if you increase the buffer sizes to,
say, 1M each, and you're on, say, a completely saturated 100Mb link,
which means 12.5 Mbytes/sec, you have to get the copy out of bpf
into process space within 1MB / 12.5MB/sec = 80 milliseconds.
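
A back-of-the-envelope check (the 32K and 512K rows are just for
comparison, my numbers, not from the paper):

#include <stdio.h>

int
main(void)
{
    double rate = 12.5;                     /* MB/sec: saturated 100Mb */
    double sizes[] = { 0.032, 0.5, 1.0 };   /* 32K, 512K, 1M buffers */
    int i;

    for (i = 0; i < 3; i++)
        printf("%5.3f MB buffer -> %6.2f ms to drain\n",
               sizes[i], sizes[i] / rate * 1000.0);
    return 0;
}

which prints 2.56 ms for a 32K buffer, 40 ms for 512K, and the
80 ms above for 1M.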

By copy rates, that's a long time.  But typical BPF sleep
priorities are LOW, which means that other processes compete
with the restarted bpf process to gain the processor.  (As
I recall, that has been fixed in a few architectures.)  So if
bpf is run on a loaded machine (i.e., a typical intrusion detection
system), you still see periodic packet loss.  That also partially
explains why just test-sniffing the traffic isn't sufficient to test
a platform's ability to do a decent job at IDS...
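
One partial workaround on FreeBSD is to pull the sniffer out of the
timeshare queue entirely with rtprio(2).  A sketch follows; whether
it actually eliminates the loss on a loaded box is an assumption
that would need measuring, not a guarantee:

#include <sys/types.h>
#include <sys/rtprio.h>
#include <err.h>

int
main(void)
{
    struct rtprio rtp;

    /* Realtime priority puts this process ahead of all timeshare
     * processes (requires root), so the bpf reader should regain
     * the processor promptly after its wakeup. */
    rtp.type = RTP_PRIO_REALTIME;
    rtp.prio = RTP_PRIO_MIN;      /* 0 = most favorable */
    if (rtprio(RTP_SET, 0, &rtp) == -1)
        err(1, "rtprio");

    /* ... run the capture loop here ... */
    return 0;
}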

>`wintendo' sniffing is done in a way very similar to that of BPF.
>With the same buffer size, the number of context switches is
>approximately the same.

I'm sorry, but I don't see that in your paper.  Near the bottom of
the paper it says that the Windows sniffing buffers are 1M large.  There
are *very few* bpf's with buffers that large.  In fact, in several
kernels I've used, multiple 1M kernel allocations for that space will
cause the kernel to hang indefinitely (due to multiple 1M vm space
allocations).  I started my first reply with your text snippet noting
the buffer size differences.
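
For reference, this is roughly how a BSD sniffer has to ask for a
bigger buffer -- the request must come before binding the interface,
and the kernel clamps it to its compiled-in maximum ("fxp0" is just
an example interface name):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/bpf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <err.h>

int
main(void)
{
    struct ifreq ifr;
    u_int blen = 1024 * 1024;   /* ask for 1M */
    int fd;

    if ((fd = open("/dev/bpf0", O_RDWR)) == -1)
        err(1, "open /dev/bpf0");

    /* The buffer length must be set BEFORE BIOCSETIF. */
    if (ioctl(fd, BIOCSBLEN, &blen) == -1)
        err(1, "BIOCSBLEN");

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "fxp0", sizeof(ifr.ifr_name));
    if (ioctl(fd, BIOCSETIF, &ifr) == -1)
        err(1, "BIOCSETIF");

    /* See what the kernel actually granted. */
    if (ioctl(fd, BIOCGBLEN, &blen) == -1)
        err(1, "BIOCGBLEN");
    printf("granted buffer: %u bytes\n", blen);
    return 0;
}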

Also, in the same article, there's no attempt to uncover the
cause of the performance difference: I don't see measurements of
context switch rates, number of kernel system calls, or number
of interrupts.  If I have missed them somewhere, please let
me know.
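
At least the per-process side of those numbers is cheap to collect.
A sketch that brackets a capture run with getrusage(2) -- it counts
the sniffer's own context switches, not system-wide rates or
interrupts, which need kernel counters instead:

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int
main(void)
{
    struct rusage before, after;

    getrusage(RUSAGE_SELF, &before);

    /* ... run the capture loop here ... */

    getrusage(RUSAGE_SELF, &after);
    printf("voluntary ctx switches:   %ld\n",
           after.ru_nvcsw - before.ru_nvcsw);
    printf("involuntary ctx switches: %ld\n",
           after.ru_nivcsw - before.ru_nivcsw);
    return 0;
}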

What I wish I had is a good tool to discover what is going on during
bpf packet loss.  I was hoping (a few years back) to instrument
a kernel so that, instead of profiling the sniffing process via
statistical information about clock ticks, I could instead collect
statistics about what was going on during bpf packet loss
(i.e., when the second bpf buffer is full).  It turns out that's hard
to do, but I haven't forgotten how worthwhile such a hack would be...
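
Short of instrumenting the kernel, the drop counter itself is at
least visible from user space -- a sketch against the standard bpf
ioctl, where bpf_fd is an already-configured descriptor (opened and
bound as in the earlier example):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <net/bpf.h>
#include <stdio.h>
#include <err.h>

/* bs_drop counts packets that arrived while both buffers were
 * full -- exactly the loss discussed above -- but says nothing
 * about *why* the reader fell behind, which is what the kernel
 * instrumentation would add. */
void
report_drops(int bpf_fd)
{
    struct bpf_stat st;

    if (ioctl(bpf_fd, BIOCGSTATS, &st) == -1)
        err(1, "BIOCGSTATS");
    printf("received by filter: %u\n", st.bs_recv);
    printf("dropped by kernel:  %u\n", st.bs_drop);
}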

mts.

