Date:      Thu, 20 May 2010 18:56:41 -0400
From:      Nuno Diogo <nuno@diogonet.com>
To:        freebsd-ipfw@freebsd.org
Subject:   Re: Performance issue with new pipe profile feature in FreeBSD 8.0  RELEASE
Message-ID:  <AANLkTikfs5K4soO5G_WpkHrDCfArGRkwWmh8ZGEJ4mUI@mail.gmail.com>
In-Reply-To: <005a01caf6a4$e8cf9c70$ba6ed550$@com>
References:  <005a01caf6a4$e8cf9c70$ba6ed550$@com>

Hi all,
Sorry to spam the list with this issue, but I do believe that this is not
working as intended, so I performed some more testing in a controlled
environment.
Using a dedicated FreeBSD 8.0-RELEASE-p2 i386 machine with a GENERIC kernel
plus the following additions:

   - options HZ=2000
   - device if_bridge
   - options IPFIREWALL
   - options IPFIREWALL_DEFAULTS_TO_ACCEPT
   - options DUMMYNET

Traffic is routed between the vr0 and em0 interfaces.
Iperf TCP transfers run between a Windows 7 laptop and a Linux virtual server.
Only one variable is changed at a time:

#So let's start with a typical pipe rule using bandwidth and delay
statements:

*Test 1 with 10Mbps 10ms:*

#Only one rule, pushing packets to pipe 1 when they pass between these
two specific interfaces
FreeBSD-Test# ipfw list
0100 pipe 1 ip from any to any recv em0 xmit vr0
65535 allow ip from any to any
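
#The command used to create rule 0100 isn't shown in this transcript; it
#would be something like:
#   ipfw add 100 pipe 1 ip from any to any recv em0 xmit vr0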

#Pipe configured with 10M bandwidth, 10ms delay and 50 slot queue
FreeBSD-Test# ipfw pipe 1 show
00001:  10.000 Mbit/s   10 ms   50 sl. 1 queues (1 buckets) droptail
         burst: 0 Byte
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 icmp  192.168.100.10/0         10.168.0.99/0     112431 154127874  0    0 168
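
#The pipe itself is a plain bandwidth/delay configuration; the exact
#command isn't in the transcript, but it would be something like:
#   ipfw pipe 1 config bw 10Mbit delay 10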

#Traceroute from laptop to server, showing just the one routed hop in between
C:\Users\nuno>tracert -d 10.168.0.99
Tracing route to 10.168.0.99 over a maximum of 30 hops
  1    <1 ms    <1 ms    <1 ms  192.168.100.1
  2    10 ms    10 ms    10 ms  10.168.0.99
Trace complete.

#Ping result for 1470 byte packet
C:\Users\nuno>ping 10.168.0.99 -t -l 1470

Pinging 10.168.0.99 with 1470 bytes of data:
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63


#Iperf performance; as we can see, it utilizes the entire emulated pipe
bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000

------------------------------------------------------------
Client connecting to 10.168.0.99, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[148] local 192.168.100.10 port 49225 connected with 10.168.0.99 port 5001
[ ID] Interval       Transfer     Bandwidth
[148]  0.0- 1.0 sec  1392 KBytes  11403 Kbits/sec
[148]  1.0- 2.0 sec  1184 KBytes  9699 Kbits/sec
[148]  2.0- 3.0 sec  1192 KBytes  9765 Kbits/sec
[148]  3.0- 4.0 sec  1184 KBytes  9699 Kbits/sec
[148]  4.0- 5.0 sec  1184 KBytes  9699 Kbits/sec
[148]  5.0- 6.0 sec  1184 KBytes  9699 Kbits/sec
[148]  6.0- 7.0 sec  1184 KBytes  9699 Kbits/sec
[148]  7.0- 8.0 sec  1176 KBytes  9634 Kbits/sec
[148]  8.0- 9.0 sec  1192 KBytes  9765 Kbits/sec
[148]  9.0-10.0 sec  1200 KBytes  9830 Kbits/sec
[148] 10.0-11.0 sec  1120 KBytes  9175 Kbits/sec
[148] 11.0-12.0 sec  1248 KBytes  10224 Kbits/sec
[148] 12.0-13.0 sec  1184 KBytes  9699 Kbits/sec
[148] 13.0-14.0 sec  1184 KBytes  9699 Kbits/sec
[148] 14.0-15.0 sec  1184 KBytes  9699 Kbits/sec
[148] 15.0-16.0 sec  1184 KBytes  9699 Kbits/sec
[148] 16.0-17.0 sec  1184 KBytes  9699 Kbits/sec
[148] 17.0-18.0 sec  1184 KBytes  9699 Kbits/sec
[148] 18.0-19.0 sec  1184 KBytes  9699 Kbits/sec
[148] 19.0-20.0 sec  1192 KBytes  9765 Kbits/sec



#Now let's configure the same emulation (as I understand it) but with a
profile

FreeBSD-Test# cat ./profile
name Test
samples 100
bw 10M
loss-level 1.0
prob delay
0.00 10
1.00 10

#Pipe 1 configured with the above profile file and no additional bandwidth
or delay parameters

FreeBSD-Test# ipfw pipe 1 show
00001:  10.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
         burst: 0 Byte
         profile: name "Test" loss 1.000000 samples 100
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 icmp  192.168.100.10/0         10.168.0.99/0     131225 181884981  0    0 211


#Ping time for a 1470 byte packet remains the same
C:\Users\nuno>ping 10.168.0.99 -t -l 1470

Pinging 10.168.0.99 with 1470 bytes of data:
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=14ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=11ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63

#Iperf throughput, however, drops considerably!
bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000

------------------------------------------------------------
Client connecting to 10.168.0.99, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[148] local 192.168.100.10 port 49226 connected with 10.168.0.99 port 5001
[ ID] Interval       Transfer     Bandwidth
[148]  0.0- 1.0 sec   248 KBytes  2032 Kbits/sec
[148]  1.0- 2.0 sec  56.0 KBytes   459 Kbits/sec
[148]  2.0- 3.0 sec   176 KBytes  1442 Kbits/sec
[148]  3.0- 4.0 sec   128 KBytes  1049 Kbits/sec
[148]  4.0- 5.0 sec   120 KBytes   983 Kbits/sec
[148]  5.0- 6.0 sec   128 KBytes  1049 Kbits/sec
[148]  6.0- 7.0 sec   128 KBytes  1049 Kbits/sec
[148]  7.0- 8.0 sec  96.0 KBytes   786 Kbits/sec
[148]  8.0- 9.0 sec   144 KBytes  1180 Kbits/sec
[148]  9.0-10.0 sec   128 KBytes  1049 Kbits/sec
[148] 10.0-11.0 sec   128 KBytes  1049 Kbits/sec
[148] 11.0-12.0 sec   120 KBytes   983 Kbits/sec
[148] 12.0-13.0 sec   120 KBytes   983 Kbits/sec
[148] 13.0-14.0 sec   128 KBytes  1049 Kbits/sec
[148] 14.0-15.0 sec   120 KBytes   983 Kbits/sec
[148] 15.0-16.0 sec   128 KBytes  1049 Kbits/sec
[148] 16.0-17.0 sec   120 KBytes   983 Kbits/sec
[148] 17.0-18.0 sec   120 KBytes   983 Kbits/sec
[148] 18.0-19.0 sec   128 KBytes  1049 Kbits/sec
[148] 19.0-20.0 sec  64.0 KBytes   524 Kbits/sec
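
#Side note: if the profile delay is charged per packet as extra air-time
#(as Luigi explains further down in this thread), the numbers above are
#roughly what one would expect. For ~1500-byte packets on the wire:
#   per-packet time = len/bw + overhead = 1500*8/10M + 10ms = 1.2ms + 10ms = 11.2ms
#   max throughput  = 1500*8 bits / 11.2ms = ~1.07 Mbit/s
#which is close to the ~1 Mbit/s Iperf reports (Iperf counts payload only,
#so slightly less).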


Let's do the exact same test, but this time reducing the emulated latency
down to just 2ms.

*Test 2 with 10Mbps 2ms:*

#Pipe 1 configured for 10Mbps bandwidth, 2ms latency and a 50 slot queue

FreeBSD-Test# ipfw pipe 1 show
00001:  10.000 Mbit/s    2 ms   50 sl. 1 queues (1 buckets) droptail
         burst: 0 Byte
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 icmp  192.168.100.10/0         10.168.0.99/0     21020 19358074  0    0 123


#Ping time from laptop to server
C:\Users\nuno>ping 10.168.0.99 -t -l 1470

Pinging 10.168.0.99 with 1470 bytes of data:
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63


#Iperf throughput; again we can use all of the emulated bandwidth
bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000

------------------------------------------------------------
Client connecting to 10.168.0.99, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[148] local 192.168.100.10 port 49196 connected with 10.168.0.99 port 5001
[ ID] Interval       Transfer     Bandwidth
[148]  0.0- 1.0 sec  1264 KBytes  10355 Kbits/sec
[148]  1.0- 2.0 sec  1192 KBytes  9765 Kbits/sec
[148]  2.0- 3.0 sec  1184 KBytes  9699 Kbits/sec
[148]  3.0- 4.0 sec  1184 KBytes  9699 Kbits/sec
[148]  4.0- 5.0 sec  1184 KBytes  9699 Kbits/sec
[148]  5.0- 6.0 sec  1192 KBytes  9765 Kbits/sec
[148]  6.0- 7.0 sec  1184 KBytes  9699 Kbits/sec
[148]  7.0- 8.0 sec  1184 KBytes  9699 Kbits/sec
[148]  8.0- 9.0 sec  1184 KBytes  9699 Kbits/sec
[148]  9.0-10.0 sec  1152 KBytes  9437 Kbits/sec
[148] 10.0-11.0 sec  1240 KBytes  10158 Kbits/sec
[148] 11.0-12.0 sec  1184 KBytes  9699 Kbits/sec
[148] 12.0-13.0 sec  1184 KBytes  9699 Kbits/sec
[148] 13.0-14.0 sec  1176 KBytes  9634 Kbits/sec
[148] 14.0-15.0 sec   984 KBytes  8061 Kbits/sec
[148] 15.0-16.0 sec  1192 KBytes  9765 Kbits/sec
[148] 16.0-17.0 sec  1184 KBytes  9699 Kbits/sec
[148] 17.0-18.0 sec  1184 KBytes  9699 Kbits/sec
[148] 18.0-19.0 sec  1184 KBytes  9699 Kbits/sec
[148] 19.0-20.0 sec  1208 KBytes  9896 Kbits/sec


#Now let's configure the profile file to emulate 10Mbps and 2ms of added
overhead

FreeBSD-Test# cat ./profile
name Test
samples 100
bw 10M
loss-level 1.0
prob delay
0.00 2
1.00 2



#Pipe 1 configured with the above profile file and no additional bandwidth
or delay parameters

FreeBSD-Test# ipfw pipe 1 show
00001:  10.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
         burst: 0 Byte
         profile: name "Test" loss 1.000000 samples 100
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 icmp  192.168.100.10/0         10.168.0.99/0     39570 46750171  0    0 186

#Again, ping remains constant with this configuration
C:\Users\nuno>ping 10.168.0.99 -t -l 1470

Pinging 10.168.0.99 with 1470 bytes of data:
Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63
Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63


#Iperf throughput again takes a big hit, although not as much as when we
were adding 10ms of overhead
bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000

------------------------------------------------------------
Client connecting to 10.168.0.99, TCP port 5001
TCP window size: 63.0 KByte (default)
------------------------------------------------------------
[148] local 192.168.100.10 port 49197 connected with 10.168.0.99 port 5001
[ ID] Interval       Transfer     Bandwidth
[148]  0.0- 1.0 sec   544 KBytes  4456 Kbits/sec
[148]  1.0- 2.0 sec   440 KBytes  3604 Kbits/sec
[148]  2.0- 3.0 sec   440 KBytes  3604 Kbits/sec
[148]  3.0- 4.0 sec   432 KBytes  3539 Kbits/sec
[148]  4.0- 5.0 sec   440 KBytes  3604 Kbits/sec
[148]  5.0- 6.0 sec   448 KBytes  3670 Kbits/sec
[148]  6.0- 7.0 sec   432 KBytes  3539 Kbits/sec
[148]  7.0- 8.0 sec   440 KBytes  3604 Kbits/sec
[148]  8.0- 9.0 sec   440 KBytes  3604 Kbits/sec
[148]  9.0-10.0 sec   448 KBytes  3670 Kbits/sec
[148] 10.0-11.0 sec   440 KBytes  3604 Kbits/sec
[148] 11.0-12.0 sec   440 KBytes  3604 Kbits/sec
[148] 12.0-13.0 sec   392 KBytes  3211 Kbits/sec
[148] 13.0-14.0 sec   488 KBytes  3998 Kbits/sec
[148] 14.0-15.0 sec   440 KBytes  3604 Kbits/sec
[148] 15.0-16.0 sec   440 KBytes  3604 Kbits/sec
[148] 16.0-17.0 sec   440 KBytes  3604 Kbits/sec
[148] 17.0-18.0 sec   440 KBytes  3604 Kbits/sec
[148] 18.0-19.0 sec   440 KBytes  3604 Kbits/sec
[148] 19.0-20.0 sec   448 KBytes  3670 Kbits/sec
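
#Side note: the same per-packet arithmetic as in Test 1 predicts this, too:
#   per-packet time = 1500*8/10M + 2ms = 1.2ms + 2ms = 3.2ms
#   max throughput  = 1500*8 bits / 3.2ms = ~3.75 Mbit/s
#again close to the ~3.6 Mbit/s observed.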


From my understanding, since the emulated RTT of the link remains the same,
Iperf performance should also stay the same.

Regardless of how or why the RTT is present (geographically induced latency,
MAC overhead, congestion, etc.), the effect on a TCP transmission should be
the same (assuming, as in this test, no jitter and no packet loss).


In the first test we see throughput drop from ~9.7Mbps to 980Kbps-1050Kbps
with the addition of just 10ms of overhead in the profile!

In the second test we see throughput drop from ~9.7Mbps to ~3.6Mbps with the
addition of just 2ms of overhead in the profile!

So is this feature not working as intended or am I completely missing
something here?


I (and hopefully others) would highly appreciate any opinions, as this new
feature could really expand the use of dummynet as a WAN emulator, but in
its current implementation it does not seem to allow full utilization of the
emulated bandwidth, regardless of how small or how constant the extra delay
is set to.


Sincerely,

Nuno Diogo

On Tue, May 18, 2010 at 12:12 PM, Nuno Diogo <nuno@diogonet.com> wrote:

>  Hi all,
>
> I'm encountering the same situation, and I'm not quite understanding
> Luigi's explanation.
>
> If a pipe is configured with 10Mbps bandwidth and 25ms delay, it will take
> approximately 26.2ms for a 1470 byte packet to pass through it, as per the
> math below.
>
> IPerf can fully utilize the available emulated bandwidth with that delay.
>
> If we configure a profile with the same characteristics, 10Mbps and 25ms
> overhead/extra-airtime/delay, isn't the end result the same?
>
> A 1470 byte packet should still take ~26.2ms to pass through the pipe, and
> IPerf should still be able to fully utilize the emulated bandwidth, no?
>
> IPerf does not know how that delay is being emulated or configured; it just
> knows that it's taking ~26.2ms to get ACKs back, etc., so I guess I'm
> missing something here?
>
> I use dummynet often for WAN acceleration testing, and have been trying to
> use the new profile method to try and emulate 'jitter'.
>
> With pings it works great, but when trying to use the full configured
> bandwidth, I get the same results as Charles.
>
> Regardless of the delay/overhead/bandwidth configuration, IPerf can't push
> more than a fraction of the configured bandwidth, with lots of packets
> queuing and dropping.
>
> Your patience is appreciated.
>
> Sincerely,
>
> _______________________________________________________________________________
> Nuno Diogo
>
> Luigi Rizzo
> Tue, 24 Nov 2009 21:21:56 -0800
>
> Hi,
>
> there is no bug, the 'pipe profile' code is working correctly.
>
> In your mail below you are comparing two different things.
>
>    "pipe config bw 10Mbit/s delay 25ms"
>         means that _after shaping_ at 10Mbps, all traffic will
>         be subject to an additional delay of 25ms.
>         Each packet (1470 bytes) will take Length/Bandwidth sec
>         to come out, or 1470*8/10M = 1.176ms, but you won't
>         see them until you wait another 25ms (7500km at the speed
>         of light).
>
>    "pipe config bw 10Mbit/s profile "test" ..."
>         means that in addition to the Length/Bandwidth,
>         _each packet transmission_ will consume
>         some additional air-time as specified in the profile
>         (25ms in your case)
>
>         So, in your case with 1470bytes/pkt each transmission
>         will take len/bw (1.176ms) + 25ms (extra air time) = 26.176ms.
>         That is over 20 times more than the previous case and explains
>         the reduced bandwidth you see.
>
> The 'delay profile' is effectively extra air time used for each
> transmission. The name is probably confusing, I should have called
> it 'extra-time' or 'overhead' and not 'delay'.
>
> cheers
> luigi
>
> On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote:
>
> > Hi,
> >
> > I have a simple setup with two computers connected via a FreeBSD bridge
> > running 8.0 RELEASE.
> > I am trying to use dummynet to simulate a wireless network between the
> > two, and for that I wanted to use the pipe profile feature of FreeBSD
> > 8.0. But as I was experimenting with the pipe profile feature I ran
> > into some issues.
> >
> > I have setup ipfw to send traffic coming from either interface of the
> > bridge to a respective pipe as follows:
> >
> > # ipfw show
> > 00100       0          0 allow ip from any to any via lo0
> > 00200       0          0 deny ip from any to 127.0.0.0/8
> > 00300       0          0 deny ip from 127.0.0.0/8 to any
> > 01000       0          0 pipe 1 ip from any to any via vr0 layer2
> > 01100       0          0 pipe 101 ip from any to any via vr4 layer2
> > 65000    7089     716987 allow ip from any to any
> > 65535       0          0 deny ip from any to any
> >
> > When I setup my pipes as follows:
> >
> > # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0
> > # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0
> > # ipfw pipe show
> >
> > 00001:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
> >         burst: 0 Byte
> > 00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
> >         burst: 0 Byte
> >
> > With this setup, when I try to pass traffic through the bridge with
> > iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in
> > UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at
> > the end of this email).
> >
> > The problem arises when I setup pipe 1 (the downlink) with an
> > equivalent profile (I tried to simplify it as much as possible).
> >
> > # ipfw pipe 1 config profile test.pipeconf   mask proto 0
> > # ipfw pipe show
> > 00001:  10.000 Mbit/s    0 ms   50 sl. 0 queues (1 buckets) droptail
> >        burst: 0 Byte
> >        profile: name "test" loss 1.000000 samples 2
> > 00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
> >        burst: 0 Byte
> >
> > # cat test.pipeconf
> > name        test
> > bw          10Mbit
> > loss-level  1.0
> > samples     2
> > prob        delay
> > 0.0         25
> > 1.0         25
> >
> > The same iperf TCP tests then collapse to about 500Kbit/s with the
> > same settings (copied and pasted the output of iperf below).
> >
> > I can't figure out what is going on. There is no visible load on the bridge.
> > I have an unmodified GENERIC kernel with the following sysctls:
> >
> > net.link.bridge.ipfw: 1
> > kern.hz: 1000
> >
> > The bridge configuration is as follows:
> >
> > bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
> > ether 1a:1f:2e:42:74:8d
> > id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
> > maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
> > root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
> > member: vr4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
> >         ifmaxaddr 0 port 6 priority 128 path cost 200000
> > member: vr0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
> >         ifmaxaddr 0 port 2 priority 128 path cost 200000
> >
> > iperf runs without the profile set:
> > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> > ------------------------------------------------------------
> > Client connecting to 10.0.0.254, TCP port 5001
> > Binding to local address 10.1.0.1
> > TCP window size: 16.0 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-15.0 sec  17.0 MBytes  9.49 Mbits/sec
> >
> > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> > ------------------------------------------------------------
> > Client connecting to 10.0.0.254, UDP port 5001
> > Binding to local address 10.1.0.1
> > Sending 1470 byte datagrams
> > UDP buffer size:   110 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> > [  3] Sent 13382 datagrams
> > [  3] Server Report:
> > [  3]  0.0-15.1 sec  17.4 MBytes  9.72 Mbits/sec  0.822 ms  934/13381 (7%)
> > [  3]  0.0-15.1 sec  1 datagrams received out-of-order
> >
> > iperf runs with the profile set:
> > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> > ------------------------------------------------------------
> > Client connecting to 10.0.0.254, TCP port 5001
> > Binding to local address 10.1.0.1
> > TCP window size: 16.0 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-15.7 sec    968 KBytes    505 Kbits/sec
> >
> > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> > ------------------------------------------------------------
> > Client connecting to 10.0.0.254, UDP port 5001
> > Binding to local address 10.1.0.1
> > Sending 1470 byte datagrams
> > UDP buffer size:   110 KByte (default)
> > ------------------------------------------------------------
> > [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> > [ ID] Interval       Transfer     Bandwidth
> > [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> > [  3] Sent 13382 datagrams
> > [  3] Server Report:
> > [  3]  0.0-16.3 sec    893 KBytes    449 Kbits/sec  1.810 ms 12757/13379 (95%)
> >
> > Let me know what other information you would need to help me debug this.
> > In advance, thank you for your help.
> >
> > --
> > Charles-Henri de Boysson



--
-------------------------------------------------------------------------------------

Nuno Diogo


