Date:      Tue, 18 May 2010 12:12:26 -0400
From:      "Nuno Diogo" <nuno@diogonet.com>
To:        <freebsd-ipfw@freebsd.org>
Subject:   Re: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE
Message-ID:  <005a01caf6a4$e8cf9c70$ba6ed550$@com>

Hi all,

I'm encountering the same situation, and I'm not quite understanding Luigi's
explanation.

If a pipe is configured with 10 Mbit/s of bandwidth and 25 ms of delay, it
takes approximately 26.2 ms for a 1470-byte packet to pass through it, per
the math below.

iperf can fully utilize the available emulated bandwidth with that delay in
place.

If we configure a profile with the same characteristics, 10 Mbit/s and 25 ms
of overhead/extra-airtime/delay, isn't the end result the same?

A 1470-byte packet should still take ~26.2 ms to pass through the pipe, and
iperf should still be able to fully utilize the emulated bandwidth, no?
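
For reference, that figure is just per-packet transmission time plus the
fixed 25 ms delay, which is easy to check with bc:

% echo 'scale=6; (1470*8/10000000)*1000 + 25' | bc
26.176000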

 

iperf does not know how that delay is being emulated or configured; it just
sees ACKs coming back after ~26.2 ms. So I guess I'm missing something
here?

I use dummynet often for WAN acceleration testing, and I have been trying
to use the new profile method to emulate 'jitter'.

With pings it works great, but when trying to use the full configured
bandwidth I get the same results as Charles: regardless of the
delay/overhead/bandwidth configuration, iperf can't push more than a
fraction of the configured bandwidth, with lots of packets queuing and
dropping. A sketch of the kind of profile I have been testing with is
below.
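
Something along these lines (the values are purely illustrative; the delay
column is the extra per-packet airtime in ms, spread over the probability
curve, in the same format as Charles's test.pipeconf):

name        jitter
bw          10Mbit
loss-level  1.0
samples     100
prob        delay
0.0         5
0.5         10
1.0         25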

 

Your patience is appreciated.

Sincerely,

_______________________________________________________________________________

Nuno Diogo

Luigi Rizzo
Tue, 24 Nov 2009 21:21:56 -0800

Hi,
there is no bug; the 'pipe profile' code is working correctly.
 
In your mail below you are comparing two different things.
 
   "pipe config bw 10Mbit/s delay 25ms"
        means that _after shaping_ at 10Mbps, all traffic will
        be subject to an additional delay of 25ms.
        Each packet (1470 bytes) takes Length/Bandwidth sec
        to come out, i.e. 1470*8/10M = 1.176ms, but you won't
        see it until you wait another 25ms (7500km at the speed
        of light).
 
   "pipe config bw 10Mbit/s profile "test" ..."
        means that in addition to the Length/Bandwidth,
        _each packet transmission_ will consume
        some additional air-time as specified in the profile
        (25ms in your case)
 
        So, in your case with 1470 bytes/pkt each transmission
        will take len/bw (1.176ms) + 25ms (extra air time) = 26.176ms.
        That is roughly 22 times the per-packet time of the previous
        case, which explains the reduced bandwidth you see.
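
As a sanity check, this matches the throughput reported below: with one
1470-byte packet going out every 26.176ms, the ceiling is roughly

% echo 'scale=1; 1470*8/26.176' | bc
449.2

i.e. about 449 Kbit/s, which is just what the iperf server report shows
once the profile is in place.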
 
The 'delay profile' is effectively extra air time consumed by each
transmission. The name is probably confusing; I should have called
it 'extra-time' or 'overhead' rather than 'delay'.
 
cheers
luigi
 
On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote:
> Hi,
> 
> I have a simple setup with two computers connected via a FreeBSD bridge
> running 8.0 RELEASE.
> I am trying to use dummynet to simulate a wireless network between the
> two, and for that I wanted to use the pipe profile feature of FreeBSD
> 8.0. But as I was experimenting with the pipe profile feature I ran
> into some issues.
> 
> I have set up ipfw to send traffic coming from either interface of the
> bridge to a respective pipe, as follows:
> 
> # ipfw show
> 00100      0         0 allow ip from any to any via lo0
> 00200      0         0 deny ip from any to 127.0.0.0/8
> 00300      0         0 deny ip from 127.0.0.0/8 to any
> 01000      0         0 pipe 1 ip from any to any via vr0 layer2
> 01100      0         0 pipe 101 ip from any to any via vr4 layer2
> 65000   7089    716987 allow ip from any to any
> 65535      0         0 deny ip from any to any
> 
> When I set up my pipes as follows:
> 
> # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0
> # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0
> # ipfw pipe show
> 
> 00001:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
>         burst: 0 Byte
> 00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
>         burst: 0 Byte
> 
> With this setup, when I try to pass traffic through the bridge with
> iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in
> UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at
> the end of this email).
> 
> The problem arises when I set up pipe 1 (the downlink) with an
> equivalent profile (I tried to simplify it as much as possible).
> 
> # ipfw pipe 1 config profile test.pipeconf   mask proto 0
> # ipfw pipe show
> 00001:  10.000 Mbit/s    0 ms   50 sl. 0 queues (1 buckets) droptail
>        burst: 0 Byte
>        profile: name "test" loss 1.000000 samples 2
> 00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
>        burst: 0 Byte
> 
> # cat test.pipeconf
> name        test
> bw          10Mbit
> loss-level  1.0
> samples     2
> prob        delay
> 0.0         25
> 1.0         25
> 
> The same iperf TCP tests then collapse to about 500 Kbit/s with the
> same settings (I copied and pasted the output of iperf below).
> 
> I can't figure out what is going on. There is no visible load on the
> bridge.
> I have an unmodified GENERIC kernel with the following sysctls:
> 
> net.link.bridge.ipfw: 1
> kern.hz: 1000
> 
> The bridge configuration is as follows:
> 
> bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> ether 1a:1f:2e:42:74:8d
> id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
> maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
> root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
> member: vr4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
>         ifmaxaddr 0 port 6 priority 128 path cost 200000
> member: vr0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
>         ifmaxaddr 0 port 2 priority 128 path cost 200000
> 
> 
> iperf runs without the profile set:
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, TCP port 5001
> Binding to local address 10.1.0.1
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.0 sec  17.0 MBytes  9.49 Mbits/sec
> 
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, UDP port 5001
> Binding to local address 10.1.0.1
> Sending 1470 byte datagrams
> UDP buffer size:   110 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> [  3] Sent 13382 datagrams
> [  3] Server Report:
> [  3]  0.0-15.1 sec  17.4 MBytes  9.72 Mbits/sec  0.822 ms  934/13381 (7%)
> [  3]  0.0-15.1 sec  1 datagrams received out-of-order
> 
> 
> iperf runs with the profile set:
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, TCP port 5001
> Binding to local address 10.1.0.1
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.7 sec    968 KBytes    505 Kbits/sec
> 
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, UDP port 5001
> Binding to local address 10.1.0.1
> Sending 1470 byte datagrams
> UDP buffer size:   110 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> [  3] Sent 13382 datagrams
> [  3] Server Report:
> [  3]  0.0-16.3 sec    893 KBytes    449 Kbits/sec  1.810 ms 12757/13379 (95%)
> 
> 
> Let me know what other information you would need to help me debug this.
> In advance, thank you for your help.
> 
> --
> Charles-Henri de Boysson
_______________________________________________
freebsd-ipfw@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw
To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org"

 



