Date: Tue, 27 Nov 2001 14:41:47 -0500
From: Mike Tancsa <mike@sentex.net>
To: Luigi Rizzo <rizzo@aciri.org>
Cc: net@FreeBSD.ORG
Subject: Re: Revised polling code (some stats II)
Message-ID: <5.1.0.14.0.20011127140608.04f23c20@marble.sentex.ca>
In-Reply-To: <20011127090907.A99632@iguana.aciri.org>
References: <5.1.0.14.0.20011127113111.04f0b390@marble.sentex.ca>
At 09:09 AM 11/27/01 -0800, Luigi Rizzo wrote:
>On Tue, Nov 27, 2001 at 11:56:45AM -0500, Mike Tancsa wrote:
> >
> > Hi, just as an FYI, I did some simple tests using netperf of the polling
> > code. On first blush, it does look quite nice. I am going to try and
>
>well, the throughput numbers seem essentially unmodified,
>which is not surprising given that with large packet sizes
>your hardware should be able to bear the load.
>
>I have to say that 60Mbit/s for TCP seems a bit low given your
>hardware; i wonder if you are using a half-duplex link.
The strange thing is that it's not much better having fixed the duplex
issue. I notice in dmesg that:
dc3: TX underrun -- increasing TX threshold
dc3: TX underrun -- increasing TX threshold
dc3: TX underrun -- increasing TX threshold
dc3: TX underrun -- using store and forward mode
But I have found that to be normal for the card.
Anyways, here are the stats going from dc2 (PIII 800) to an fxp card on a PIV.
------------------------------------
Testing with the following command line:
/usr/local/netperf/netperf -t TCP_STREAM -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -s 57344 -S 57344 -m 4096

TCP STREAM TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time     Throughput
bytes   bytes   bytes    secs.    10^6bits/sec

 57344   57344    4096   60.00       62.52
 57344   57344    4096   60.00       64.19   +POLL
/usr/local/netperf/netperf -t TCP_STREAM -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -s 32768 -S 32768 -m 4096

TCP STREAM TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Recv    Send    Send
Socket  Socket  Message  Elapsed
Size    Size    Size     Time     Throughput
bytes   bytes   bytes    secs.    10^6bits/sec

 32768   32768    4096   59.99       64.19
 32768   32768    4096   59.99       65.23   +POLL
/usr/local/netperf/netperf -t TCP_RR -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -r 1,1

TCP REQUEST/RESPONSE TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  16384  1        1       59.99    6072.04
16384  16384  1        1       59.99    8351.88   +POLL
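A quick sanity check on the request/response numbers above: polling moves the rate from roughly 6072 to 8352 transactions/sec. A minimal sketch (Python, with the rates hard-coded from the table above) of the relative gain and the implied per-transaction round-trip time:

```python
# Transaction rates from the TCP_RR -r 1,1 run above (trans/sec).
rr_interrupt = 6072.04   # stock interrupt-driven kernel
rr_polling   = 8351.88   # same test with polling enabled

# Relative improvement from polling.
gain = (rr_polling - rr_interrupt) / rr_interrupt
print(f"polling gain: {gain:.1%}")             # ~37.5%

# With -r 1,1 each transaction is one 1-byte request plus one
# 1-byte response, so 1/rate approximates the average round trip.
rtt_int_us  = 1e6 / rr_interrupt
rtt_poll_us = 1e6 / rr_polling
print(f"avg RTT: {rtt_int_us:.0f} us -> {rtt_poll_us:.0f} us")
```

So the small-packet latency path is where polling clearly wins here, which is consistent with it saving per-packet interrupt overhead.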
/usr/local/netperf/netperf -t UDP_RR -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -r 1,1

UDP REQUEST/RESPONSE TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

9216   42080  1        1       59.99    9473.54
9216   42080  1        1       59.99    10384.59  +POLL
/usr/local/netperf/netperf -t UDP_RR -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -r 516,4

UDP REQUEST/RESPONSE TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
!!! WARNING
!!! Desired confidence was not achieved within the specified iterations.
!!! This implies that there was variability in the test environment that
!!! must be investigated before going further.
!!! Confidence intervals: Throughput      : 8.5%
!!!                       Local CPU util  : 0.0%
!!!                       Remote CPU util : 0.0%
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

9216   42080  516      4       59.99    5332.39
9216   42080  516      4       59.99    5752.03   +POLL
/usr/local/netperf/netperf -t UDP_STREAM -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -s 32768 -S 32768 -m 4096

UDP UNIDIRECTIONAL SEND TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay     Errors     Throughput
bytes   bytes    secs         #        #          10^6bits/sec

32768   4096     59.99        108559   7379153    59.30
32768            59.99        108521              59.28
32768   4096     59.99        114246   10368207   62.40   +POLL
32768            59.99        114227              62.39   +POLL
Testing with the following command line:
/usr/local/netperf/netperf -t UDP_STREAM -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -s 32768 -S 32768 -m 1024

UDP UNIDIRECTIONAL SEND TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay     Errors     Throughput
bytes   bytes    secs         #        #          10^6bits/sec

32768   1024     59.99        475626   9177882    64.95
32768            59.99        475467              64.93
32768   1024     59.99        461228   13567726   62.98   +POLL
32768            59.99        461151              62.97   +POLL
The only problematic one is the last one. Still, I am surprised by the
number of errors and the somewhat low throughput. I recall running
this test some time ago and getting fairly close to 100Mb/s
on the fxp card.
And, a UDP stream test
/usr/local/netperf/netperf -t UDP_STREAM -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -s 32768 -S 32768 -m 4

UDP UNIDIRECTIONAL SEND TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay      Errors     Throughput
bytes   bytes    secs         #         #          10^6bits/sec

32768   4        59.99        3245141   11827778   1.73
32768            59.99        3212788              1.71
and
ruby3# sysctl -w net.xorp.polling=0
net.xorp.polling: 1 -> 0
ruby3# /usr/local/netperf/netperf -t UDP_STREAM -l 60 -H 10.1.1.1 -i 10,3 -I 99,5 -- -s 32768 -S 32768 -m 4

UDP UNIDIRECTIONAL SEND TEST to 10.1.1.1 : +/-2.5% @ 99% conf. : histogram
Socket  Message  Elapsed      Messages
Size    Size     Time         Okay      Errors     Throughput
bytes   bytes    secs         #         #          10^6bits/sec

32768   4        59.99        4168057   0          2.22
32768            59.99        4032396              2.15
ruby3#
Hmmm... This is rather puzzling. Why would polling produce so many errors
and be slower?
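For what it's worth, netperf's UDP_STREAM throughput figure only counts messages that made it out (the "Okay" column); the Errors column is failed send attempts (typically ENOBUFS when the interface transmit queue is full), which never enter the bits/sec calculation. A small sketch, recomputing the throughput of the 4-byte polling run from its raw counters:

```python
# Reproduce netperf's UDP_STREAM throughput from the raw counters
# (numbers taken from the -m 4 run with polling enabled, above).
okay_msgs = 3245141      # messages accepted by the send call
errors    = 11827778     # failed sends (e.g. ENOBUFS); not counted below
msg_bytes = 4
elapsed_s = 59.99

mbits = okay_msgs * msg_bytes * 8 / elapsed_s / 1e6
print(f"throughput: {mbits:.2f} 10^6bits/sec")   # matches the 1.73 in the table

# A large Errors column therefore means wasted send attempts burning
# CPU in userland, not traffic on the wire.
```

So the high error count with polling suggests the sender is outrunning the transmit queue more often, which would also explain the slightly lower Okay rate.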
On the box I am connected to, I am going to try against a non-onboard NIC
to see if that is the problem.
---Mike
