Date:      Tue, 24 Nov 2009 12:40:31 -0500
From:      Charles Henri de Boysson <ceache@gmail.com>
To:        freebsd-net@freebsd.org, freebsd-ipfw@freebsd.org
Subject:   Performance issue with new pipe profile feature in FreeBSD 8.0-RELEASE
Message-ID:  <184b04b20911240940g36621d69hf3ca160a6d122ecc@mail.gmail.com>

Hi,

I have a simple setup with two computers connected via a FreeBSD bridge
running 8.0-RELEASE. I am trying to use dummynet to simulate a wireless
network between the two, and for that I wanted to use the new pipe profile
feature of FreeBSD 8.0. But as I was experimenting with it, I ran into some
issues.

I have set up ipfw to send traffic coming in on either interface of the
bridge to its respective pipe, as follows:

# ipfw show
00100      0         0 allow ip from any to any via lo0
00200      0         0 deny ip from any to 127.0.0.0/8
00300      0         0 deny ip from 127.0.0.0/8 to any
01000      0         0 pipe 1 ip from any to any via vr0 layer2
01100      0         0 pipe 101 ip from any to any via vr4 layer2
65000   7089    716987 allow ip from any to any
65535      0         0 deny ip from any to any
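
For reference, the two pipe rules were added roughly as follows (the other
rules were already in place):

# ipfw add 1000 pipe 1 ip from any to any via vr0 layer2
# ipfw add 1100 pipe 101 ip from any to any via vr4 layer2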

When I set up my pipes as follows:

# ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0
# ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0
# ipfw pipe show

00001:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
	 burst: 0 Byte
00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
	 burst: 0 Byte

With this setup, when I pass traffic through the bridge with iperf, I
obtain the desired speed: iperf reports about 9.7 Mbits/sec in UDP mode
and 9.5 Mbits/sec in TCP mode (the iperf runs are copied at the end of
this email).

The problem arises when I configure pipe 1 (the downlink) with what should
be an equivalent profile, which I have simplified as much as possible:

# ipfw pipe 1 config profile test.pipeconf   mask proto 0
# ipfw pipe show
00001:  10.000 Mbit/s    0 ms   50 sl. 0 queues (1 buckets) droptail
	 burst: 0 Byte
	 profile: name "test" loss 1.000000 samples 2
00101:  10.000 Mbit/s   25 ms   50 sl. 0 queues (1 buckets) droptail
	 burst: 0 Byte

# cat test.pipeconf
name        test
bw          10Mbit
loss-level  1.0
samples     2
prob        delay
0.0         25
1.0         25
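
My (possibly wrong) reading of ipfw(8) is that the prob/delay pairs describe
the cumulative distribution of the extra access delay, resampled into
"samples" points, so the file above should amount to a constant 25 ms of
overhead on top of the 10 Mbit/s rate. A denser profile that I would expect
to behave the same would be, for example:

name        test
bw          10Mbit
loss-level  1.0
samples     64
prob        delay
0.0         25
1.0         25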

The same iperf TCP test then collapses to about 500 Kbits/sec with the
same settings (the iperf output is copied below).

I can't figure out what is going on. There is no visible load on the bridge.
I am running an unmodified GENERIC kernel with the following sysctls:

net.link.bridge.ipfw: 1
kern.hz: 1000

The bridge configuration is as follows:

bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	ether 1a:1f:2e:42:74:8d
	id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
	maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
	root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
	member: vr4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
	        ifmaxaddr 0 port 6 priority 128 path cost 200000
	member: vr0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
	        ifmaxaddr 0 port 2 priority 128 path cost 200000
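
The bridge itself was created roughly like this (rc.conf details and
addressing omitted):

# ifconfig bridge0 create
# ifconfig bridge0 addm vr0 addm vr4 up
# ifconfig vr0 up
# ifconfig vr4 up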


iperf runs without the profile set:
% iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
------------------------------------------------------------
Client connecting to 10.0.0.254, TCP port 5001
Binding to local address 10.1.0.1
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-15.0 sec  17.0 MBytes  9.49 Mbits/sec

% iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
------------------------------------------------------------
Client connecting to 10.0.0.254, UDP port 5001
Binding to local address 10.1.0.1
Sending 1470 byte datagrams
UDP buffer size:   110 KByte (default)
------------------------------------------------------------
[  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
[  3] Sent 13382 datagrams
[  3] Server Report:
[  3]  0.0-15.1 sec  17.4 MBytes  9.72 Mbits/sec  0.822 ms  934/13381 (7%)
[  3]  0.0-15.1 sec  1 datagrams received out-of-order


iperf runs with the profile set:
% iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
------------------------------------------------------------
Client connecting to 10.0.0.254, TCP port 5001
Binding to local address 10.1.0.1
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-15.7 sec    968 KBytes    505 Kbits/sec

% iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
------------------------------------------------------------
Client connecting to 10.0.0.254, UDP port 5001
Binding to local address 10.1.0.1
Sending 1470 byte datagrams
UDP buffer size:   110 KByte (default)
------------------------------------------------------------
[  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
[  3] Sent 13382 datagrams
[  3] Server Report:
[  3]  0.0-16.3 sec    893 KBytes    449 Kbits/sec  1.810 ms 12757/13379 (95%)


Let me know what other information you would need to help me debug this.
Thank you in advance for your help.

--
Charles-Henri de Boysson


