From owner-freebsd-ipfw@FreeBSD.ORG Mon May 17 11:01:45 2010 Return-Path: Delivered-To: freebsd-ipfw@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D7AFD1065670 for ; Mon, 17 May 2010 11:01:45 +0000 (UTC) (envelope-from jilingshu@gmail.com) Received: from mail-pz0-f174.google.com (mail-pz0-f174.google.com [209.85.222.174]) by mx1.freebsd.org (Postfix) with ESMTP id ABB868FC1F for ; Mon, 17 May 2010 11:01:45 +0000 (UTC) Received: by pzk4 with SMTP id 4so2633620pzk.7 for ; Mon, 17 May 2010 04:01:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:subject :message-id:organization:x-mailer:mime-version:content-type :content-transfer-encoding; bh=jONmJIoQcXGK+EwdyS09vWcuPlvse5W60od1L62wRJs=; b=utZLoBw28SPwRmsb69blTZRNhamlHnqBok8tqqT8QtYwnhQPvBfd7gFTUxxmcBxXU/ +DGLVOLpmoR5NrXRxbFK2ctTzy8NBv81tOvl1iBHwXM7+0L2ZyClKzYZOiujwgNbwBF9 Mf8RJQv3S1lkrMgSkVMZN3h4XpWC77o6+q5yk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:subject:message-id:organization:x-mailer:mime-version :content-type:content-transfer-encoding; b=iueL0G5oPOZegQPWrKtLPdvc7Sgs3ksHw+Ek2rsz7t/keFRU8U0nIV6L69+QPA/FA3 6zY1u9OWeh4q4H2F3aG5oqukEL3AB+1fk+3Q6Q+pxXIYp+dd14RnAsnImNCuV5nlZiO1 aPhj+gZzr8FWgEOWsPOPC0+IUtV+FqzeQRLzw= Received: by 10.141.107.14 with SMTP id j14mr3509019rvm.181.1274092436757; Mon, 17 May 2010 03:33:56 -0700 (PDT) Received: from Bear-Win ([183.32.180.39]) by mx.google.com with ESMTPS id h11sm4084489rvm.9.2010.05.17.03.33.54 (version=SSLv3 cipher=RC4-MD5); Mon, 17 May 2010 03:33:56 -0700 (PDT) Date: Mon, 17 May 2010 18:33:47 +0800 From: Bear To: "freebsd-ipfw" Message-ID: <201005171833458695114@Gmail.com> Organization: Freebear Develop Group X-mailer: Foxmail 6, 15, 201, 22 [cn] Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Subject: Some problems of IP6FW on FreeBSD 8.0-RELEASE X-BeenThere: freebsd-ipfw@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: IPFW Technical Discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 May 2010 11:01:45 -0000 hi all, I am a newbie on IPv6. I wanna to build a IPv6 Gateway on FreeBSD 8.0-RELEASE. Thought this gateway need to do some limitations on its clients(It works in a university networks. At least, User Authentication and Bandwith Limitation must be implemented). I wanna to use a Captive Portal to do user authentication and use ip6fw and dummynet to do bandwith control. But in my GENERIC kernel, it seems that there is no ip6fw available.When I type in ip6fw, it told me this: freebsd6# ip6fw ip6fw: Command not found. freebsd6# So I wanna know is the ip6fw has been compiled into the GENERIC kernel? If not, how can I compile it into kernel or as a loadable kernel module? And what about the dummynet? If dummynet unavailable, can you give me some advises to limit bandwith? thanks! -------------- Bear 2010-05-17 From owner-freebsd-ipfw@FreeBSD.ORG Mon May 17 11:07:01 2010 Return-Path: Delivered-To: freebsd-ipfw@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 3264E1065675 for ; Mon, 17 May 2010 11:07:01 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (unknown [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 206EB8FC08 for ; Mon, 17 May 2010 11:07:01 +0000 (UTC) Received: from freefall.freebsd.org (localhost [127.0.0.1]) by freefall.freebsd.org (8.14.4/8.14.4) with ESMTP id o4HB71KU015772 for ; Mon, 17 May 2010 11:07:01 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.4/8.14.4/Submit) id o4HB70mU015769 for freebsd-ipfw@FreeBSD.org; Mon, 17 May 2010 11:07:00 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 17 May 2010 11:07:00 GMT Message-Id: <201005171107.o4HB70mU015769@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: freebsd-ipfw@FreeBSD.org Cc: Subject: Current problem reports assigned to freebsd-ipfw@FreeBSD.org X-BeenThere: freebsd-ipfw@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: IPFW Technical Discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 May 2010 11:07:01 -0000 Note: to view an individual PR, use: http://www.freebsd.org/cgi/query-pr.cgi?pr=(number). The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. S Tracker Resp. Description -------------------------------------------------------------------------------- o kern/145733 ipfw [ipfw] [patch] ipfw flaws with ipv6 fragments o kern/145305 ipfw [ipfw] ipfw problems, panics, data corruption, ipv6 so o kern/145167 ipfw [ipfw] ipfw nat does not follow its documentation o kern/144869 ipfw [ipfw] [panic] Instant kernel panic when adding NAT ru o kern/144269 ipfw [ipfw] problem with ipfw tables o kern/144187 ipfw [ipfw] deadlock using multiple ipfw nat and multiple l o kern/143973 ipfw [ipfw] [panic] ipfw forward option causes kernel reboo o kern/143653 ipfw [ipfw] [patch] ipfw nat redirect_port "buf is too smal o kern/143621 ipfw [ipfw] [dummynet] [patch] dummynet and vnet use result o kern/143474 ipfw [ipfw] ipfw table contains the same address o kern/139581 ipfw [ipfw] "ipfw pipe" not limiting bandwidth o kern/139226 ipfw [ipfw] install_state: entry already present, done o kern/137346 ipfw [ipfw] ipfw nat redirect_proto is broken o kern/137232 ipfw [ipfw] parser troubles o kern/136695 ipfw [ipfw] [patch] fwd reached after skipto in dynamic rul o kern/135476 ipfw [ipfw] IPFW table breaks after adding a large number o o bin/134975 ipfw [patch] ipfw(8) can't work with set in rule file. o kern/132553 ipfw [ipfw] ipfw doesn't understand ftp-data port o kern/131817 ipfw [ipfw] blocks layer2 packets that should not be blocke o kern/131601 ipfw [ipfw] [panic] 7-STABLE panic in nat_finalise (tcp=0) o kern/131558 ipfw [ipfw] Inconsistent "via" ipfw behavior o bin/130132 ipfw [patch] ipfw(8): no way to get mask from ipfw pipe sho o kern/129103 ipfw [ipfw] IPFW check state does not work =( o kern/129093 ipfw [ipfw] ipfw nat must not drop packets o kern/129036 ipfw [ipfw] 'ipfw fwd' does not change outgoing interface n o kern/128260 ipfw [ipfw] [patch] ipfw_divert damages IPv6 packets o kern/127230 ipfw [ipfw] [patch] Feature request to add UID and/or GID l o kern/127209 ipfw [ipfw] IPFW table become corrupted after many changes o bin/125370 ipfw [ipfw] [patch] increase a line buffer limit o conf/123119 ipfw [patch] rc script for ipfw does not handle IPv6 o kern/122963 ipfw [ipfw] tcpdump does not show packets redirected by 'ip s kern/121807 ipfw [request] TCP and UDP port_table in ipfw o kern/121382 ipfw [dummynet]: 6.3-RELEASE-p1 page fault in dummynet (cor o kern/121122 ipfw [ipfw] [patch] add support to ToS IP PRECEDENCE fields o kern/118993 ipfw [ipfw] page fault - probably it's a locking problem o bin/117214 ipfw ipfw(8) fwd with IPv6 treats input as IPv4 o kern/116009 ipfw [ipfw] [patch] Ignore errors when loading ruleset from o docs/113803 ipfw [patch] ipfw(8) - don't get bitten by the fwd rule p kern/113388 ipfw [ipfw] [patch] Addition actions with rules within spec o kern/112561 ipfw [ipfw] ipfw fwd does not work with some TCP packets o kern/105330 ipfw [ipfw] [patch] ipfw (dummynet) does not allow to set q o bin/104921 ipfw [patch] ipfw(8) sometimes treats ipv6 input as ipv4 (a o kern/104682 ipfw [ipfw] [patch] Some minor language consistency fixes a o kern/103454 ipfw [ipfw] [patch] [request] add a facility to modify DF b o kern/103328 ipfw [ipfw] [request] sugestions about ipfw table o kern/102471 ipfw [ipfw] [patch] add tos and dscp support o kern/98831 ipfw [ipfw] ipfw has UDP hickups o kern/97951 ipfw [ipfw] [patch] ipfw does not tie interface details to o kern/97504 ipfw [ipfw] IPFW Rules bug o kern/95084 ipfw [ipfw] [regression] [patch] IPFW2 ignores "recv/xmit/v o kern/93300 ipfw [ipfw] ipfw pipe lost packets o kern/91847 ipfw [ipfw] ipfw with vlanX as the device o kern/88659 ipfw [modules] ipfw and ip6fw do not work properly as modul o kern/87032 ipfw [ipfw] [patch] ipfw ioctl interface implementation o kern/86957 ipfw [ipfw] [patch] ipfw mac logging o bin/83046 ipfw [ipfw] ipfw2 error: "setup" is allowed for icmp, but s o kern/82724 ipfw [ipfw] [patch] [request] Add setnexthop and defaultrou s kern/80642 ipfw [ipfw] [patch] ipfw small patch - new RULE OPTION o bin/78785 ipfw [patch] ipfw(8) verbosity locks machine if /etc/rc.fir o kern/74104 ipfw [ipfw] ipfw2/1 conflict not detected or reported, manp o kern/73910 ipfw [ipfw] serious bug on forwarding of packets after NAT o kern/72987 ipfw [ipfw] ipfw/dummynet pipe/queue 'queue [BYTES]KBytes ( o kern/71366 ipfw [ipfw] "ipfw fwd" sometimes rewrites destination mac a o kern/69963 ipfw [ipfw] install_state warning about already existing en o kern/60719 ipfw [ipfw] Headerless fragments generate cryptic error mes o kern/55984 ipfw [ipfw] [patch] time based firewalling support for ipfw o kern/51274 ipfw [ipfw] [patch] ipfw2 create dynamic rules with parent o kern/48172 ipfw [ipfw] [patch] ipfw does not log size and flags o kern/46159 ipfw [ipfw] [patch] [request] ipfw dynamic rules lifetime f a kern/26534 ipfw [ipfw] Add an option to ipfw to log gid/uid of who cau 70 problems total. From owner-freebsd-ipfw@FreeBSD.ORG Mon May 17 13:53:43 2010 Return-Path: Delivered-To: freebsd-ipfw@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C9A55106566C for ; Mon, 17 May 2010 13:53:43 +0000 (UTC) (envelope-from dhorn2000@gmail.com) Received: from mail-fx0-f54.google.com (mail-fx0-f54.google.com [209.85.161.54]) by mx1.freebsd.org (Postfix) with ESMTP id 5B5B88FC1A for ; Mon, 17 May 2010 13:53:42 +0000 (UTC) Received: by fxm19 with SMTP id 19so962410fxm.13 for ; Mon, 17 May 2010 06:53:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:received:in-reply-to :references:date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=Watr5J/+CafuOr4F+QttLpTp1yjN9/q/G5YhILs2wIc=; b=b390KblUO5VvBDn67oiwLLAC6S2FEVX/O/e8mzWWcrfh/oVyTfwgqqzH25+WDPjN34 F9gRmLI2Sk8kSBOglnemSdYi6o3KztU2EFIUvti+tscoogJsmQn3jA1S1Qc/gUiu1R/3 gKpMaFiUdjV2KuPz2bnUTIFT6plKGUfLGK/f4= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=KZi/Z8yIksUcDKzmj8ljUiLkmDG2xRtkSYnSWofo4TuCz9Ug4iQ2P9ZDLflqVjjIN+ MxGCfXbwsrLcT3O44IzC8qN3WXNa7EvoPIXVwNutzFkRbiLYEJlYfHtaTCh/WaRN7rQ3 HZAZu1OeCnxbVIkHPj7Y7vgUPUinwuippizKU= MIME-Version: 1.0 Received: by 10.239.187.134 with SMTP id l6mr514165hbh.151.1274104421642; Mon, 17 May 2010 06:53:41 -0700 (PDT) Received: by 10.239.150.79 with HTTP; Mon, 17 May 2010 06:53:41 -0700 (PDT) In-Reply-To: <201005171833458695114@Gmail.com> References: <201005171833458695114@Gmail.com> Date: Mon, 17 May 2010 09:53:41 -0400 Message-ID: From: David Horn To: Bear Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable Cc: freebsd-ipfw Subject: Re: Some problems of IP6FW on FreeBSD 8.0-RELEASE X-BeenThere: freebsd-ipfw@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: IPFW Technical Discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 17 May 2010 13:53:43 -0000 On Mon, May 17, 2010 at 6:33 AM, Bear wrote: > hi all, > I am a newbie on IPv6. I wanna to build a IPv6 Gateway on FreeBSD 8.0-REL= EASE. Thought this gateway need to do some limitations on its clients(It wo= rks in a university networks. At least, User Authentication and Bandwith Li= mitation must be implemented). I wanna to use a Captive Portal to do user a= uthentication and use ip6fw and dummynet to do bandwith control. But in my = GENERIC kernel, it seems that there is no ip6fw available.When I type in ip= 6fw, it told me this: > > freebsd6# ip6fw > ip6fw: Command not found. > freebsd6# ip6fw has been depreciated starting with 7.0. As per the 7.0 Release notes found here: http://www.freebsd.org/releases/7.0R/relnotes.html >>>The ip6fw(8) packet filter has been removed. Since ipfw(4) has gained I= Pv6 support, it should be used >>>instead. Please note that some rules migh= t need to be adjusted. Take a look at the FreeBSD handbook for a start on how to use ipfw: http://www.freebsd.org/doc/en/books/handbook/firewalls-ipfw.html and IPv6 = here: http://www.freebsd.org/doc/en/books/handbook/network-ipv6.html And of course, make sure to: 'man 8 ipfw' for all of the details, or peruse online here: http://www.freebsd.org/cgi/man.cgi?query=3Dipfw Make certain to read the section labeled "TRAFFIC SHAPER (DUMMYNET) CONFIGURATION" Good Luck. -_Dave > > So I wanna know is the ip6fw has been compiled into the GENERIC kernel? I= f not, how can I compile it into kernel or as a loadable kernel module? And= what about the dummynet? If dummynet unavailable, can you give me some adv= ises to limit bandwith? From owner-freebsd-ipfw@FreeBSD.ORG Tue May 18 16:41:46 2010 Return-Path: Delivered-To: freebsd-ipfw@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 386FE1065673 for ; Tue, 18 May 2010 16:41:46 +0000 (UTC) (envelope-from nuno@diogonet.com) Received: from mail-gw0-f54.google.com (mail-gw0-f54.google.com [74.125.83.54]) by mx1.freebsd.org (Postfix) with ESMTP id DFDFF8FC1C for ; Tue, 18 May 2010 16:41:44 +0000 (UTC) Received: by gwb11 with SMTP id 11so1175335gwb.13 for ; Tue, 18 May 2010 09:41:42 -0700 (PDT) Received: by 10.101.200.21 with SMTP id c21mr8332655anq.195.1274199167285; Tue, 18 May 2010 09:12:47 -0700 (PDT) Received: from nunopc (c-65-34-225-233.hsd1.fl.comcast.net [65.34.225.233]) by mx.google.com with ESMTPS id e4sm4514652anb.15.2010.05.18.09.12.44 (version=SSLv3 cipher=RC4-MD5); Tue, 18 May 2010 09:12:44 -0700 (PDT) From: "Nuno Diogo" To: Date: Tue, 18 May 2010 12:12:26 -0400 Message-ID: <005a01caf6a4$e8cf9c70$ba6ed550$@com> MIME-Version: 1.0 X-Mailer: Microsoft Office Outlook 12.0 Thread-Index: Acr2pOgdVyKkKDHjRuaMtT8lrY/8/A== Content-Language: en-us Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Re: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE X-BeenThere: freebsd-ipfw@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: IPFW Technical Discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 18 May 2010 16:41:46 -0000 Hi all, I'm encountering the same situation, and I'm not quite understanding Luigi's explanation. If a pipe is configured with 10Mbps bandwidth and 25ms delay, it will take approximately 26.7ms for a 1470 byte packet to pass through it as per the below math. IPerf can fully utilize the available emulated bandwidth with that delay. If we configure a profile with the same characteristics, 10Mbps and 25ms overhead/extra-airtime/delay isn't the end result the same? A 1470 byte packet should still take ~26.7ms to pass through the pipe and IPerf should still be able to fully utilize the emulated bandwidth, no? IPerf does not know how that delay is being emulated or configured, it just knows that it's taking ~26.7ms to get ACKs back etc, so I guess I'm missing something here? I use dummynet often for WAN acceleration testing, and have been trying to use the new profile method to try and emulate 'jitter'. With pings it works great, but when trying to use the full configured bandwidth, I get the same results as Charles. Regardless of delay/overhead/bandwidth configuration IPerf can't push more than a fraction of the configured bandwidth with lots of packets queuing and dropping. Your patience is appreciated. Sincerely, ____________________________________________________________________________ ___ Nuno Diogo Luigi Rizzo Tue, 24 Nov 2009 21:21:56 -0800 Hi, there is no bug, the 'pipe profile' code is working correctly. In your mail below you are comparing two different things. "pipe config bw 10Mbit/s delay 25ms" means that _after shaping_ at 10Mbps, all traffic will be subject to an additional delay of 25ms. Each packet (1470 bytes) will take Length/Bandwidth sec to come out or 1470*8/10M = 1.176ms , but you won't see them until you wait another 25ms (7500km at the speed of light). "pipe config bw 10Mbit/s profile "test" ..." means that in addition to the Length/Bandwidth, _each packet transmission_ will consume some additional air-time as specified in the profile (25ms in your case) So, in your case with 1470bytes/pkt each transmission will take len/bw (1.176ms) + 25ms (extra air time) = 26.76ms That is 25 times more than the previous case and explains the reduced bandwidth you see. The 'delay profile' is effectively extra air time used for each transmission. The name is probably confusing, i should have called it 'extra-time' or 'overhead' and not 'delay' cheers luigi On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote: > Hi, > > I have a simple setup with two computer connected via a FreeBSD bridge > running 8.0 RELEASE. > I am trying to use dummynet to simulate a wireless network between the > two and for that I wanted to use the pipe profile feature of FreeBSD > 8.0. > But as I was experimenting with the pipe profile feature I ran into some > issues. > > I have setup ipfw to send traffic coming for either interface of the > bridge to a respective pipe as follow: > > # ipfw show > 00100 ?? ?? 0 ?? ?? ?? ?? 0 allow ip from any to any via lo0 > 00200 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to 127.0.0.0/8 > 00300 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from 127.0.0.0/8 to any > 01000 ?? ?? 0 ?? ?? ?? ?? 0 pipe 1 ip from any to any via vr0 layer2 > 01100 ?? ?? 0 ?? ?? ?? ?? 0 pipe 101 ip from any to any via vr4 layer2 > 65000 ??7089 ?? ??716987 allow ip from any to any > 65535 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to any > > When I setup my pipes as follow: > > # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0 > # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0 > # ipfw pipe show > > 00001: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > 00101: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > > With this setup, when I try to pass traffic through the bridge with > iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in > UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at > the end of this email). > > The problem arise when I setup pipe 1 (the downlink) with an > equivalent profile (I tried to simplify it as much as possible). > > # ipfw pipe 1 config profile test.pipeconf mask proto 0 > # ipfw pipe show > 00001: 10.000 Mbit/s 0 ms 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > profile: name "test" loss 1.000000 samples 2 > 00101: 10.000 Mbit/s 25 ms 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > > # cat test.pipeconf > name test > bw 10Mbit > loss-level 1.0 > samples 2 > prob delay > 0.0 25 > 1.0 25 > > The same iperf TCP tests then collapse to about 500Kbit/s with the > same settings (copy and pasted the output of iperf bellow) > > I can't figure out what is going on. There is no visible load on the bridge. > I have an unmodified GENERIC kernel with the following sysctl. > > net.link.bridge.ipfw: 1 > kern.hz: 1000 > > The bridge configuration is as follow: > > bridge0: flags=8843 metric 0 mtu 1500 > ether 1a:1f:2e:42:74:8d > id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15 > maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 > root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0 > member: vr4 flags=143 > ?? ?? ?? ??ifmaxaddr 0 port 6 priority 128 path cost 200000 > member: vr0 flags=143 > ?? ?? ?? ??ifmaxaddr 0 port 2 priority 128 path cost 200000 > > > iperf runs without the profile set: > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > ------------------------------------------------------------ > Client connecting to 10.0.0.254, TCP port 5001 > Binding to local address 10.1.0.1 > TCP window size: 16.0 KByte (default) > ------------------------------------------------------------ > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-15.0 sec 17.0 MBytes 9.49 Mbits/sec > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > ------------------------------------------------------------ > Client connecting to 10.0.0.254, UDP port 5001 > Binding to local address 10.1.0.1 > Sending 1470 byte datagrams > UDP buffer size: 110 KByte (default) > ------------------------------------------------------------ > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > [ 3] Sent 13382 datagrams > [ 3] Server Report: > [ 3] 0.0-15.1 sec 17.4 MBytes 9.72 Mbits/sec 0.822 ms 934/13381 (7%) > [ 3] 0.0-15.1 sec 1 datagrams received out-of-order > > > iperf runs with the profile set: > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > ------------------------------------------------------------ > Client connecting to 10.0.0.254, TCP port 5001 > Binding to local address 10.1.0.1 > TCP window size: 16.0 KByte (default) > ------------------------------------------------------------ > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-15.7 sec 968 KBytes 505 Kbits/sec > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > ------------------------------------------------------------ > Client connecting to 10.0.0.254, UDP port 5001 > Binding to local address 10.1.0.1 > Sending 1470 byte datagrams > UDP buffer size: 110 KByte (default) > ------------------------------------------------------------ > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > [ ID] Interval Transfer Bandwidth > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > [ 3] Sent 13382 datagrams > [ 3] Server Report: > [ 3] 0.0-16.3 sec 893 KBytes 449 Kbits/sec 1.810 ms 12757/13379 (95%) > > > Let me know what other information you would need to help me debugging this. > In advance, thank you for your help > > -- > Charles-Henri de Boysson > _______________________________________________ > freebsd-ipfw@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" _______________________________________________ freebsd-ipfw@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" From owner-freebsd-ipfw@FreeBSD.ORG Thu May 20 22:56:43 2010 Return-Path: Delivered-To: freebsd-ipfw@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 320F0106566C for ; Thu, 20 May 2010 22:56:43 +0000 (UTC) (envelope-from nuno@diogonet.com) Received: from mail-gw0-f54.google.com (mail-gw0-f54.google.com [74.125.83.54]) by mx1.freebsd.org (Postfix) with ESMTP id A5BB08FC1B for ; Thu, 20 May 2010 22:56:42 +0000 (UTC) Received: by gwj16 with SMTP id 16so241903gwj.13 for ; Thu, 20 May 2010 15:56:41 -0700 (PDT) MIME-Version: 1.0 Received: by 10.150.184.18 with SMTP id h18mr1949872ybf.163.1274396201333; Thu, 20 May 2010 15:56:41 -0700 (PDT) Received: by 10.151.85.20 with HTTP; Thu, 20 May 2010 15:56:41 -0700 (PDT) In-Reply-To: <005a01caf6a4$e8cf9c70$ba6ed550$@com> References: <005a01caf6a4$e8cf9c70$ba6ed550$@com> Date: Thu, 20 May 2010 18:56:41 -0400 Message-ID: From: Nuno Diogo To: freebsd-ipfw@freebsd.org Content-Type: text/plain; charset=windows-1252 Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: Re: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE X-BeenThere: freebsd-ipfw@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: IPFW Technical Discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 20 May 2010 22:56:43 -0000 Hi all, Sorry to spam the list with this issue, but I do believe that this is not working as intended so I performed some more testing in a controlled environment. Using a dedicated FreeBSD 8-RELEASE-p2 i386 with GENERIC kernel + the following additions: - options HZ=3D2000 - device if_bridge - options IPFIREWALL - options IPFIREWALL_DEFAULTS_TO_ACCEPT - options DUMMYNET Routing between VR0 and EM0 interfaces. Ipfer TCP transfers between a Win 7 laptop and a Linux virtual server. Only one variable changed at a time: #So lets start with your typical pipe rule using bandwidth and delay statement: *Test 1 with 10Mbps 10ms:* #Only one rule pushing packets to PIPE 1 if they're passing between these two specific interfaces FreeBSD-Test# ipfw list 0100 pipe 1 ip from any to any recv em0 xmit vr0 65535 allow ip from any to any #Pipe configured with 10M bandwidth, 10ms delay and 50 slot queue FreeBSD-Test# ipfw pipe 1 show 00001: 10.000 Mbit/s 10 ms 50 sl. 1 queues (1 buckets) droptail burst: 0 Byte mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp 0 icmp 192.168.100.10/0 10.168.0.99/0 112431 154127874 0 0 168 #Traceroute from laptop to server showing just that one hop C:\Users\nuno>tracert -d 10.168.0.99 Tracing route to 10.168.0.99 over a maximum of 30 hops 1 <1 ms <1 ms <1 ms 192.168.100.1 2 10 ms 10 ms 10 ms 10.168.0.99 Trace complete. #Ping result for 1470 byte packet C:\Users\nuno>ping 10.168.0.99 -t -l 1470 Pinging 10.168.0.99 with 1470 bytes of data: Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 #Iperf performance, as we can see it utilizes the entire emulated pipe bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 ------------------------------------------------------------ Client connecting to 10.168.0.99, TCP port 5001 TCP window size: 63.0 KByte (default) ------------------------------------------------------------ [148] local 192.168.100.10 port 49225 connected with 10.168.0.99 port 5001 [ ID] Interval Transfer Bandwidth [148] 0.0- 1.0 sec 1392 KBytes 11403 Kbits/sec [148] 1.0- 2.0 sec 1184 KBytes 9699 Kbits/sec [148] 2.0- 3.0 sec 1192 KBytes 9765 Kbits/sec [148] 3.0- 4.0 sec 1184 KBytes 9699 Kbits/sec [148] 4.0- 5.0 sec 1184 KBytes 9699 Kbits/sec [148] 5.0- 6.0 sec 1184 KBytes 9699 Kbits/sec [148] 6.0- 7.0 sec 1184 KBytes 9699 Kbits/sec [148] 7.0- 8.0 sec 1176 KBytes 9634 Kbits/sec [148] 8.0- 9.0 sec 1192 KBytes 9765 Kbits/sec [148] 9.0-10.0 sec 1200 KBytes 9830 Kbits/sec [148] 10.0-11.0 sec 1120 KBytes 9175 Kbits/sec [148] 11.0-12.0 sec 1248 KBytes 10224 Kbits/sec [148] 12.0-13.0 sec 1184 KBytes 9699 Kbits/sec [148] 13.0-14.0 sec 1184 KBytes 9699 Kbits/sec [148] 14.0-15.0 sec 1184 KBytes 9699 Kbits/sec [148] 15.0-16.0 sec 1184 KBytes 9699 Kbits/sec [148] 16.0-17.0 sec 1184 KBytes 9699 Kbits/sec [148] 17.0-18.0 sec 1184 KBytes 9699 Kbits/sec [148] 18.0-19.0 sec 1184 KBytes 9699 Kbits/sec [148] 19.0-20.0 sec 1192 KBytes 9765 Kbits/sec #Now let configure the same emulation (from my understanding) but with a profile FreeBSD-Test# cat ./profile name Test samples 100 bw 10M loss-level 1.0 prob delay 0.00 10 1.00 10 #Pipe 1 configured with the above profile file and no additional bandwidth or delay parameters FreeBSD-Test# ipfw pipe 1 show 00001: 10.000 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail burst: 0 Byte profile: name "Test" loss 1.000000 samples 100 mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp 0 icmp 192.168.100.10/0 10.168.0.99/0 131225 181884981 0 = 0 211 #Ping time for a 1470 byte packet remains the same C:\Users\nuno>ping 10.168.0.99 -t -l 1470 Pinging 10.168.0.99 with 1470 bytes of data: Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D14ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D11ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D12ms TTL=3D63 #Iperf transfer however drops considerable! bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 ------------------------------------------------------------ Client connecting to 10.168.0.99, TCP port 5001 TCP window size: 63.0 KByte (default) ------------------------------------------------------------ [148] local 192.168.100.10 port 49226 connected with 10.168.0.99 port 5001 [ ID] Interval Transfer Bandwidth [148] 0.0- 1.0 sec 248 KBytes 2032 Kbits/sec [148] 1.0- 2.0 sec 56.0 KBytes 459 Kbits/sec [148] 2.0- 3.0 sec 176 KBytes 1442 Kbits/sec [148] 3.0- 4.0 sec 128 KBytes 1049 Kbits/sec [148] 4.0- 5.0 sec 120 KBytes 983 Kbits/sec [148] 5.0- 6.0 sec 128 KBytes 1049 Kbits/sec [148] 6.0- 7.0 sec 128 KBytes 1049 Kbits/sec [148] 7.0- 8.0 sec 96.0 KBytes 786 Kbits/sec [148] 8.0- 9.0 sec 144 KBytes 1180 Kbits/sec [148] 9.0-10.0 sec 128 KBytes 1049 Kbits/sec [148] 10.0-11.0 sec 128 KBytes 1049 Kbits/sec [148] 11.0-12.0 sec 120 KBytes 983 Kbits/sec [148] 12.0-13.0 sec 120 KBytes 983 Kbits/sec [148] 13.0-14.0 sec 128 KBytes 1049 Kbits/sec [148] 14.0-15.0 sec 120 KBytes 983 Kbits/sec [148] 15.0-16.0 sec 128 KBytes 1049 Kbits/sec [148] 16.0-17.0 sec 120 KBytes 983 Kbits/sec [148] 17.0-18.0 sec 120 KBytes 983 Kbits/sec [148] 18.0-19.0 sec 128 KBytes 1049 Kbits/sec [148] 19.0-20.0 sec 64.0 KBytes 524 Kbits/sec Lets do the exact same but this time reducing the emulate latency down to just 2ms. *Test 2 with 10Mbps 2ms:* #Pipe 1 configured for 10Mbps bandwidth, 2ms latency and 50 slot queue FreeBSD-Test# ipfw pipe 1 show 00001: 10.000 Mbit/s 2 ms 50 sl. 1 queues (1 buckets) droptail burst: 0 Byte mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp 0 icmp 192.168.100.10/0 10.168.0.99/0 21020 19358074 0 0 123 #Ping time from laptop to server C:\Users\nuno>ping 10.168.0.99 -t -l 1470 Pinging 10.168.0.99 with 1470 bytes of data: Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D3ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D3ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 #Ipfer throughput, again we can use all of the emulated bandwidth bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 ------------------------------------------------------------ Client connecting to 10.168.0.99, TCP port 5001 TCP window size: 63.0 KByte (default) ------------------------------------------------------------ [148] local 192.168.100.10 port 49196 connected with 10.168.0.99 port 5001 [ ID] Interval Transfer Bandwidth [148] 0.0- 1.0 sec 1264 KBytes 10355 Kbits/sec [148] 1.0- 2.0 sec 1192 KBytes 9765 Kbits/sec [148] 2.0- 3.0 sec 1184 KBytes 9699 Kbits/sec [148] 3.0- 4.0 sec 1184 KBytes 9699 Kbits/sec [148] 4.0- 5.0 sec 1184 KBytes 9699 Kbits/sec [148] 5.0- 6.0 sec 1192 KBytes 9765 Kbits/sec [148] 6.0- 7.0 sec 1184 KBytes 9699 Kbits/sec [148] 7.0- 8.0 sec 1184 KBytes 9699 Kbits/sec [148] 8.0- 9.0 sec 1184 KBytes 9699 Kbits/sec [148] 9.0-10.0 sec 1152 KBytes 9437 Kbits/sec [148] 10.0-11.0 sec 1240 KBytes 10158 Kbits/sec [148] 11.0-12.0 sec 1184 KBytes 9699 Kbits/sec [148] 12.0-13.0 sec 1184 KBytes 9699 Kbits/sec [148] 13.0-14.0 sec 1176 KBytes 9634 Kbits/sec [148] 14.0-15.0 sec 984 KBytes 8061 Kbits/sec [148] 15.0-16.0 sec 1192 KBytes 9765 Kbits/sec [148] 16.0-17.0 sec 1184 KBytes 9699 Kbits/sec [148] 17.0-18.0 sec 1184 KBytes 9699 Kbits/sec [148] 18.0-19.0 sec 1184 KBytes 9699 Kbits/sec [148] 19.0-20.0 sec 1208 KBytes 9896 Kbits/sec #Now lets configure the profile file to emulate 10Mbps and 2ms of added overhead FreeBSD-Test# cat ./profile name Test samples 100 bw 10M loss-level 1.0 prob delay 0.00 2 1.00 2 #Pipe 1 configured with the above profile file and no additional bandwidth or delay parameters FreeBSD-Test# ipfw pipe 1 show 00001: 10.000 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail burst: 0 Byte profile: name "Test" loss 1.000000 samples 100 mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp 0 icmp 192.168.100.10/0 10.168.0.99/0 39570 46750171 0 0 186 #Again, ping remains constant with this configuration C:\Users\nuno>ping 10.168.0.99 -t -l 1470 Pinging 10.168.0.99 with 1470 bytes of data: Reply from 10.168.0.99: bytes=3D1470 time=3D3ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D3ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 Reply from 10.168.0.99: bytes=3D1470 time=3D4ms TTL=3D63 #Iperf throughput again takes a big hit, although not as much as when we're adding 10ms or overhead bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 ------------------------------------------------------------ Client connecting to 10.168.0.99, TCP port 5001 TCP window size: 63.0 KByte (default) ------------------------------------------------------------ [148] local 192.168.100.10 port 49197 connected with 10.168.0.99 port 5001 [ ID] Interval Transfer Bandwidth [148] 0.0- 1.0 sec 544 KBytes 4456 Kbits/sec [148] 1.0- 2.0 sec 440 KBytes 3604 Kbits/sec [148] 2.0- 3.0 sec 440 KBytes 3604 Kbits/sec [148] 3.0- 4.0 sec 432 KBytes 3539 Kbits/sec [148] 4.0- 5.0 sec 440 KBytes 3604 Kbits/sec [148] 5.0- 6.0 sec 448 KBytes 3670 Kbits/sec [148] 6.0- 7.0 sec 432 KBytes 3539 Kbits/sec [148] 7.0- 8.0 sec 440 KBytes 3604 Kbits/sec [148] 8.0- 9.0 sec 440 KBytes 3604 Kbits/sec [148] 9.0-10.0 sec 448 KBytes 3670 Kbits/sec [148] 10.0-11.0 sec 440 KBytes 3604 Kbits/sec [148] 11.0-12.0 sec 440 KBytes 3604 Kbits/sec [148] 12.0-13.0 sec 392 KBytes 3211 Kbits/sec [148] 13.0-14.0 sec 488 KBytes 3998 Kbits/sec [148] 14.0-15.0 sec 440 KBytes 3604 Kbits/sec [148] 15.0-16.0 sec 440 KBytes 3604 Kbits/sec [148] 16.0-17.0 sec 440 KBytes 3604 Kbits/sec [148] 17.0-18.0 sec 440 KBytes 3604 Kbits/sec [148] 18.0-19.0 sec 440 KBytes 3604 Kbits/sec [148] 19.0-20.0 sec 448 KBytes 3670 Kbits/sec From my understanding, since the emulated RTT of the link remains the same= , Iperf performance should also stay the same. Regardless of how or why the RTT is present, (geographically induced latency, MAC overhead, congestion etc) the effects on a TCP transmission should be the same (assuming as in this test no jitter and packet loss) On the first test we see throughput drop from ~9.7Mbps to 980Kbps-1050Kbps with the addition of just 10ms of overhead in the profile! On the second test we see throughput drop from ~9.7Mbps to ~3.6Mbps with th= e addition of just 2ms of overhead in the profile! So is this feature not working as intended or am I completely missing something here? I (and hopefully others) would highly appreciate any opinions as this new feature could really expand the use of dummynet as a WAN emulator, but it seems that in it's current implementation it does not allow for the full utilization of the emulated bandwidth regardless of how little or static th= e extra delay is set to. Sincerely, Nuno Diogo On Tue, May 18, 2010 at 12:12 PM, Nuno Diogo wrote: > Hi all, > > I=92m encountering the same situation, and I=92m not quite understanding > Luigi=92s explanation. > > If a pipe is configured with 10Mbps bandwidth and 25ms delay, it will tak= e > approximately 26.7ms for a 1470 byte packet to pass through it as per the > below math. > > IPerf can fully utilize the available emulated bandwidth with that delay. > > > > If we configure a profile with the same characteristics, 10Mbps and 25ms > overhead/extra-airtime/delay isn=92t the end result the same? > > A 1470 byte packet should still take ~26.7ms to pass through the pipe and > IPerf should still be able to fully utilize the emulated bandwidth, no? > > > > IPerf does not know how that delay is being emulated or configured, it ju= st > knows that it=92s taking ~26.7ms to get ACKs back etc, so I guess I=92m m= issing > something here? > > > > I use dummynet often for WAN acceleration testing, and have been trying t= o > use the new profile method to try and emulate =91jitter=92. > > With pings it works great, but when trying to use the full configured > bandwidth, I get the same results as Charles. > > Regardless of delay/overhead/bandwidth configuration IPerf can=92t push m= ore > than a fraction of the configured bandwidth with lots of packets queuing = and > dropping. > > > > Your patience is appreciated. > > > > Sincerely, > > > > > _________________________________________________________________________= ______ > > Nuno Diogo > > > > Luigi Rizzo > Tue, 24 Nov 2009 21:21:56 -0800 > > Hi, > > there is no bug, the 'pipe profile' code is working correctly. > > > > In your mail below you are comparing two different things. > > > > "pipe config bw 10Mbit/s delay 25ms" > > means that _after shaping_ at 10Mbps, all traffic will > > be subject to an additional delay of 25ms. > > Each packet (1470 bytes) will take Length/Bandwidth sec > > to come out or 1470*8/10M =3D 1.176ms , but you won't > > see them until you wait another 25ms (7500km at the speed > > of light). > > > > "pipe config bw 10Mbit/s profile "test" ..." > > means that in addition to the Length/Bandwidth, > > _each packet transmission_ will consume > > some additional air-time as specified in the profile > > (25ms in your case) > > > > So, in your case with 1470bytes/pkt each transmission > > will take len/bw (1.176ms) + 25ms (extra air time) =3D 26.76ms > > That is 25 times more than the previous case and explains > > the reduced bandwidth you see. > > > > The 'delay profile' is effectively extra air time used for each > > transmission. The name is probably confusing, i should have called > > it 'extra-time' or 'overhead' and not 'delay' > > > > cheers > > luigi > > > > On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote: > > > Hi, > > > > > > I have a simple setup with two computer connected via a FreeBSD bridge > > > running 8.0 RELEASE. > > > I am trying to use dummynet to simulate a wireless network between the > > > two and for that I wanted to use the pipe profile feature of FreeBSD > > > 8.0. > > > But as I was experimenting with the pipe profile feature I ran into som= e > > > issues. > > > > > > I have setup ipfw to send traffic coming for either interface of the > > > bridge to a respective pipe as follow: > > > > > > # ipfw show > > > 00100 ?? ?? 0 ?? ?? ?? ?? 0 allow ip from any to any via lo0 > > > 00200 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to 127.0.0.0/8 > > > 00300 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from 127.0.0.0/8 to any > > > 01000 ?? ?? 0 ?? ?? ?? ?? 0 pipe 1 ip from any to any via vr0 layer2 > > > 01100 ?? ?? 0 ?? ?? ?? ?? 0 pipe 101 ip from any to any via vr4 layer2 > > > 65000 ??7089 ?? ??716987 allow ip from any to any > > > 65535 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to any > > > > > > When I setup my pipes as follow: > > > > > > # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0 > > > # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0 > > > # ipfw pipe show > > > > > > 00001: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > > > burst: 0 Byte > > > 00101: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > > > burst: 0 Byte > > > > > > With this setup, when I try to pass traffic through the bridge with > > > iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in > > > UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at > > > the end of this email). > > > > > > The problem arise when I setup pipe 1 (the downlink) with an > > > equivalent profile (I tried to simplify it as much as possible). > > > > > > # ipfw pipe 1 config profile test.pipeconf mask proto 0 > > > # ipfw pipe show > > > 00001: 10.000 Mbit/s 0 ms 50 sl. 0 queues (1 buckets) droptail > > > burst: 0 Byte > > > profile: name "test" loss 1.000000 samples 2 > > > 00101: 10.000 Mbit/s 25 ms 50 sl. 0 queues (1 buckets) droptail > > > burst: 0 Byte > > > > > > # cat test.pipeconf > > > name test > > > bw 10Mbit > > > loss-level 1.0 > > > samples 2 > > > prob delay > > > 0.0 25 > > > 1.0 25 > > > > > > The same iperf TCP tests then collapse to about 500Kbit/s with the > > > same settings (copy and pasted the output of iperf bellow) > > > > > > I can't figure out what is going on. There is no visible load on the br= idge. > > > I have an unmodified GENERIC kernel with the following sysctl. > > > > > > net.link.bridge.ipfw: 1 > > > kern.hz: 1000 > > > > > > The bridge configuration is as follow: > > > > > > bridge0: flags=3D8843 metric 0 = mtu 1500 > > > ether 1a:1f:2e:42:74:8d > > > id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15 > > > maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 > > > root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0 > > > member: vr4 flags=3D143 > > > ?? ?? ?? ??ifmaxaddr 0 port 6 priority 128 path cost 200000 > > > member: vr0 flags=3D143 > > > ?? ?? ?? ??ifmaxaddr 0 port 2 priority 128 path cost 200000 > > > > > > > > > iperf runs without the profile set: > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > > > ------------------------------------------------------------ > > > Client connecting to 10.0.0.254, TCP port 5001 > > > Binding to local address 10.1.0.1 > > > TCP window size: 16.0 KByte (default) > > > ------------------------------------------------------------ > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > [ ID] Interval Transfer Bandwidth > > > [ 3] 0.0-15.0 sec 17.0 MBytes 9.49 Mbits/sec > > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > > > ------------------------------------------------------------ > > > Client connecting to 10.0.0.254, UDP port 5001 > > > Binding to local address 10.1.0.1 > > > Sending 1470 byte datagrams > > > UDP buffer size: 110 KByte (default) > > > ------------------------------------------------------------ > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > [ ID] Interval Transfer Bandwidth > > > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > > > [ 3] Sent 13382 datagrams > > > [ 3] Server Report: > > > [ 3] 0.0-15.1 sec 17.4 MBytes 9.72 Mbits/sec 0.822 ms 934/13381 (= 7%) > > > [ 3] 0.0-15.1 sec 1 datagrams received out-of-order > > > > > > > > > iperf runs with the profile set: > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > > > ------------------------------------------------------------ > > > Client connecting to 10.0.0.254, TCP port 5001 > > > Binding to local address 10.1.0.1 > > > TCP window size: 16.0 KByte (default) > > > ------------------------------------------------------------ > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > [ ID] Interval Transfer Bandwidth > > > [ 3] 0.0-15.7 sec 968 KBytes 505 Kbits/sec > > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > > > ------------------------------------------------------------ > > > Client connecting to 10.0.0.254, UDP port 5001 > > > Binding to local address 10.1.0.1 > > > Sending 1470 byte datagrams > > > UDP buffer size: 110 KByte (default) > > > ------------------------------------------------------------ > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > [ ID] Interval Transfer Bandwidth > > > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > > > [ 3] Sent 13382 datagrams > > > [ 3] Server Report: > > > [ 3] 0.0-16.3 sec 893 KBytes 449 Kbits/sec 1.810 ms 12757/1337= 9 (95%) > > > > > > > > > Let me know what other information you would need to help me debugging = this. > > > In advance, thank you for your help > > > > > > -- > > > Charles-Henri de Boysson > > > _______________________________________________ > > > freebsd-ipfw@freebsd.org mailing list > > > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > > > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" > > _______________________________________________ > > freebsd-ipfw@freebsd.org mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" > > > --=20 ---------------------------------------------------------------------------= ---------------------- Nuno Diogo From owner-freebsd-ipfw@FreeBSD.ORG Fri May 21 07:24:42 2010 Return-Path: Delivered-To: freebsd-ipfw@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C1CD7106566C for ; Fri, 21 May 2010 07:24:42 +0000 (UTC) (envelope-from luigi@onelab2.iet.unipi.it) Received: from onelab2.iet.unipi.it (onelab2.iet.unipi.it [131.114.59.238]) by mx1.freebsd.org (Postfix) with ESMTP id 98E428FC17 for ; Fri, 21 May 2010 07:24:41 +0000 (UTC) Received: by onelab2.iet.unipi.it (Postfix, from userid 275) id B689873098; Fri, 21 May 2010 09:36:01 +0200 (CEST) Date: Fri, 21 May 2010 09:36:01 +0200 From: Luigi Rizzo To: Nuno Diogo Message-ID: <20100521073601.GA58353@onelab2.iet.unipi.it> References: <005a01caf6a4$e8cf9c70$ba6ed550$@com> Mime-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: User-Agent: Mutt/1.4.2.3i Cc: freebsd-ipfw@freebsd.org Subject: Re: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE X-BeenThere: freebsd-ipfw@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: IPFW Technical Discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 May 2010 07:24:42 -0000 top post for convenience: you are making a common mistake -- "delay" and "profile" are not the same thing. + With "delay" you set the propagation delay of the link: once a packet is outside of the bottleneck, it takes some extra time to reach its destination. However, during this time, other traffic will flow through the bottleneck; + with "profile" you specify a distribution of the extra time that the packet will take to go through the bottleneck link (e.g. due to preambles, crc, framing and other stuff). The bottleneck is effectively unavailable for other traffic during this time. So the throughput you measure with a "profile" of X ms is usually much lower than the one you see with a "delay" of X ms. cheers luigi On Thu, May 20, 2010 at 06:56:41PM -0400, Nuno Diogo wrote: > Hi all, > Sorry to spam the list with this issue, but I do believe that this is not > working as intended so I performed some more testing in a controlled > environment. > Using a dedicated FreeBSD 8-RELEASE-p2 i386 with GENERIC kernel + the > following additions: > > - options HZ=2000 > - device if_bridge > - options IPFIREWALL > - options IPFIREWALL_DEFAULTS_TO_ACCEPT > - options DUMMYNET > > Routing between VR0 and EM0 interfaces. > Ipfer TCP transfers between a Win 7 laptop and a Linux virtual server. > Only one variable changed at a time: > > #So lets start with your typical pipe rule using bandwidth and delay > statement: > > *Test 1 with 10Mbps 10ms:* > > #Only one rule pushing packets to PIPE 1 if they're passing between these > two specific interfaces > FreeBSD-Test# ipfw list > 0100 pipe 1 ip from any to any recv em0 xmit vr0 > 65535 allow ip from any to any > > #Pipe configured with 10M bandwidth, 10ms delay and 50 slot queue > FreeBSD-Test# ipfw pipe 1 show > 00001: 10.000 Mbit/s 10 ms 50 sl. 1 queues (1 buckets) droptail > burst: 0 Byte > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > 0 icmp 192.168.100.10/0 10.168.0.99/0 112431 154127874 0 0 > 168 > > #Traceroute from laptop to server showing just that one hop > C:\Users\nuno>tracert -d 10.168.0.99 > Tracing route to 10.168.0.99 over a maximum of 30 hops > 1 <1 ms <1 ms <1 ms 192.168.100.1 > 2 10 ms 10 ms 10 ms 10.168.0.99 > Trace complete. > > #Ping result for 1470 byte packet > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > > #Iperf performance, as we can see it utilizes the entire emulated pipe > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49225 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 1392 KBytes 11403 Kbits/sec > > [148] 1.0- 2.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 2.0- 3.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 3.0- 4.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 4.0- 5.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 5.0- 6.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 6.0- 7.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 7.0- 8.0 sec 1176 KBytes 9634 Kbits/sec > > [148] 8.0- 9.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 9.0-10.0 sec 1200 KBytes 9830 Kbits/sec > > [148] 10.0-11.0 sec 1120 KBytes 9175 Kbits/sec > > [148] 11.0-12.0 sec 1248 KBytes 10224 Kbits/sec > > [148] 12.0-13.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 13.0-14.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 14.0-15.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 15.0-16.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 16.0-17.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 17.0-18.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 18.0-19.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 19.0-20.0 sec 1192 KBytes 9765 Kbits/sec > > > > #Now let configure the same emulation (from my understanding) but with a > profile > > FreeBSD-Test# cat ./profile > > name Test > > samples 100 > > bw 10M > > loss-level 1.0 > > prob delay > > 0.00 10 > > 1.00 10 > > > #Pipe 1 configured with the above profile file and no additional bandwidth > or delay parameters > > FreeBSD-Test# ipfw pipe 1 show > > 00001: 10.000 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail > > burst: 0 Byte > > profile: name "Test" loss 1.000000 samples 100 > > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > > 0 icmp 192.168.100.10/0 10.168.0.99/0 131225 181884981 0 0 > 211 > > > #Ping time for a 1470 byte packet remains the same > > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=14ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=11ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > #Iperf transfer however drops considerable! > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49226 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 248 KBytes 2032 Kbits/sec > > [148] 1.0- 2.0 sec 56.0 KBytes 459 Kbits/sec > > [148] 2.0- 3.0 sec 176 KBytes 1442 Kbits/sec > > [148] 3.0- 4.0 sec 128 KBytes 1049 Kbits/sec > > [148] 4.0- 5.0 sec 120 KBytes 983 Kbits/sec > > [148] 5.0- 6.0 sec 128 KBytes 1049 Kbits/sec > > [148] 6.0- 7.0 sec 128 KBytes 1049 Kbits/sec > > [148] 7.0- 8.0 sec 96.0 KBytes 786 Kbits/sec > > [148] 8.0- 9.0 sec 144 KBytes 1180 Kbits/sec > > [148] 9.0-10.0 sec 128 KBytes 1049 Kbits/sec > > [148] 10.0-11.0 sec 128 KBytes 1049 Kbits/sec > > [148] 11.0-12.0 sec 120 KBytes 983 Kbits/sec > > [148] 12.0-13.0 sec 120 KBytes 983 Kbits/sec > > [148] 13.0-14.0 sec 128 KBytes 1049 Kbits/sec > > [148] 14.0-15.0 sec 120 KBytes 983 Kbits/sec > > [148] 15.0-16.0 sec 128 KBytes 1049 Kbits/sec > > [148] 16.0-17.0 sec 120 KBytes 983 Kbits/sec > > [148] 17.0-18.0 sec 120 KBytes 983 Kbits/sec > > [148] 18.0-19.0 sec 128 KBytes 1049 Kbits/sec > > [148] 19.0-20.0 sec 64.0 KBytes 524 Kbits/sec > > > Lets do the exact same but this time reducing the emulate latency down to > just 2ms. > *Test 2 with 10Mbps 2ms:* > #Pipe 1 configured for 10Mbps bandwidth, 2ms latency and 50 slot queue > > FreeBSD-Test# ipfw pipe 1 show > > 00001: 10.000 Mbit/s 2 ms 50 sl. 1 queues (1 buckets) droptail > > burst: 0 Byte > > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > > 0 icmp 192.168.100.10/0 10.168.0.99/0 21020 19358074 0 0 > 123 > > > #Ping time from laptop to server > > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > > #Ipfer throughput, again we can use all of the emulated bandwidth > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49196 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 1264 KBytes 10355 Kbits/sec > > [148] 1.0- 2.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 2.0- 3.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 3.0- 4.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 4.0- 5.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 5.0- 6.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 6.0- 7.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 7.0- 8.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 8.0- 9.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 9.0-10.0 sec 1152 KBytes 9437 Kbits/sec > > [148] 10.0-11.0 sec 1240 KBytes 10158 Kbits/sec > > [148] 11.0-12.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 12.0-13.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 13.0-14.0 sec 1176 KBytes 9634 Kbits/sec > > [148] 14.0-15.0 sec 984 KBytes 8061 Kbits/sec > > [148] 15.0-16.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 16.0-17.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 17.0-18.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 18.0-19.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 19.0-20.0 sec 1208 KBytes 9896 Kbits/sec > > > #Now lets configure the profile file to emulate 10Mbps and 2ms of added > overhead > > FreeBSD-Test# cat ./profile > > name Test > > samples 100 > > bw 10M > > loss-level 1.0 > > prob delay > > 0.00 2 > 1.00 2 > > > > #Pipe 1 configured with the above profile file and no additional bandwidth > or delay parameters > > FreeBSD-Test# ipfw pipe 1 show > > 00001: 10.000 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail > > burst: 0 Byte > > profile: name "Test" loss 1.000000 samples 100 > > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > 0 icmp 192.168.100.10/0 10.168.0.99/0 39570 46750171 0 0 > 186 > > #Again, ping remains constant with this configuration > > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > > #Iperf throughput again takes a big hit, although not as much as when we're > adding 10ms or overhead > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49197 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 544 KBytes 4456 Kbits/sec > > [148] 1.0- 2.0 sec 440 KBytes 3604 Kbits/sec > > [148] 2.0- 3.0 sec 440 KBytes 3604 Kbits/sec > > [148] 3.0- 4.0 sec 432 KBytes 3539 Kbits/sec > > [148] 4.0- 5.0 sec 440 KBytes 3604 Kbits/sec > > [148] 5.0- 6.0 sec 448 KBytes 3670 Kbits/sec > > [148] 6.0- 7.0 sec 432 KBytes 3539 Kbits/sec > > [148] 7.0- 8.0 sec 440 KBytes 3604 Kbits/sec > > [148] 8.0- 9.0 sec 440 KBytes 3604 Kbits/sec > > [148] 9.0-10.0 sec 448 KBytes 3670 Kbits/sec > > [148] 10.0-11.0 sec 440 KBytes 3604 Kbits/sec > > [148] 11.0-12.0 sec 440 KBytes 3604 Kbits/sec > > [148] 12.0-13.0 sec 392 KBytes 3211 Kbits/sec > > [148] 13.0-14.0 sec 488 KBytes 3998 Kbits/sec > > [148] 14.0-15.0 sec 440 KBytes 3604 Kbits/sec > > [148] 15.0-16.0 sec 440 KBytes 3604 Kbits/sec > > [148] 16.0-17.0 sec 440 KBytes 3604 Kbits/sec > > [148] 17.0-18.0 sec 440 KBytes 3604 Kbits/sec > > [148] 18.0-19.0 sec 440 KBytes 3604 Kbits/sec > > [148] 19.0-20.0 sec 448 KBytes 3670 Kbits/sec > > > From my understanding, since the emulated RTT of the link remains the same, > Iperf performance should also stay the same. > > Regardless of how or why the RTT is present, (geographically induced > latency, MAC overhead, congestion etc) the effects on a TCP transmission > should be the same (assuming as in this test no jitter and packet loss) > > > On the first test we see throughput drop from ~9.7Mbps to 980Kbps-1050Kbps > with the addition of just 10ms of overhead in the profile! > > On the second test we see throughput drop from ~9.7Mbps to ~3.6Mbps with the > addition of just 2ms of overhead in the profile! > > So is this feature not working as intended or am I completely missing > something here? > > > I (and hopefully others) would highly appreciate any opinions as this new > feature could really expand the use of dummynet as a WAN emulator, but it > seems that in it's current implementation it does not allow for the full > utilization of the emulated bandwidth regardless of how little or static the > extra delay is set to. > > > Sincerely, > > Nuno Diogo > > On Tue, May 18, 2010 at 12:12 PM, Nuno Diogo wrote: > > > Hi all, > > > > I?m encountering the same situation, and I?m not quite understanding > > Luigi?s explanation. > > > > If a pipe is configured with 10Mbps bandwidth and 25ms delay, it will take > > approximately 26.7ms for a 1470 byte packet to pass through it as per the > > below math. > > > > IPerf can fully utilize the available emulated bandwidth with that delay. > > > > > > > > If we configure a profile with the same characteristics, 10Mbps and 25ms > > overhead/extra-airtime/delay isn?t the end result the same? > > > > A 1470 byte packet should still take ~26.7ms to pass through the pipe and > > IPerf should still be able to fully utilize the emulated bandwidth, no? > > > > > > > > IPerf does not know how that delay is being emulated or configured, it just > > knows that it?s taking ~26.7ms to get ACKs back etc, so I guess I?m missing > > something here? > > > > > > > > I use dummynet often for WAN acceleration testing, and have been trying to > > use the new profile method to try and emulate ?jitter?. > > > > With pings it works great, but when trying to use the full configured > > bandwidth, I get the same results as Charles. > > > > Regardless of delay/overhead/bandwidth configuration IPerf can?t push more > > than a fraction of the configured bandwidth with lots of packets queuing and > > dropping. > > > > > > > > Your patience is appreciated. > > > > > > > > Sincerely, > > > > > > > > > > _______________________________________________________________________________ > > > > Nuno Diogo > > > > > > > > Luigi Rizzo > > Tue, 24 Nov 2009 21:21:56 -0800 > > > > Hi, > > > > there is no bug, the 'pipe profile' code is working correctly. > > > > > > > > In your mail below you are comparing two different things. > > > > > > > > "pipe config bw 10Mbit/s delay 25ms" > > > > means that _after shaping_ at 10Mbps, all traffic will > > > > be subject to an additional delay of 25ms. > > > > Each packet (1470 bytes) will take Length/Bandwidth sec > > > > to come out or 1470*8/10M = 1.176ms , but you won't > > > > see them until you wait another 25ms (7500km at the speed > > > > of light). > > > > > > > > "pipe config bw 10Mbit/s profile "test" ..." > > > > means that in addition to the Length/Bandwidth, > > > > _each packet transmission_ will consume > > > > some additional air-time as specified in the profile > > > > (25ms in your case) > > > > > > > > So, in your case with 1470bytes/pkt each transmission > > > > will take len/bw (1.176ms) + 25ms (extra air time) = 26.76ms > > > > That is 25 times more than the previous case and explains > > > > the reduced bandwidth you see. > > > > > > > > The 'delay profile' is effectively extra air time used for each > > > > transmission. The name is probably confusing, i should have called > > > > it 'extra-time' or 'overhead' and not 'delay' > > > > > > > > cheers > > > > luigi > > > > > > > > On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote: > > > > > Hi, > > > > > > > > > > I have a simple setup with two computer connected via a FreeBSD bridge > > > > > running 8.0 RELEASE. > > > > > I am trying to use dummynet to simulate a wireless network between the > > > > > two and for that I wanted to use the pipe profile feature of FreeBSD > > > > > 8.0. > > > > > But as I was experimenting with the pipe profile feature I ran into some > > > > > issues. > > > > > > > > > > I have setup ipfw to send traffic coming for either interface of the > > > > > bridge to a respective pipe as follow: > > > > > > > > > > # ipfw show > > > > > 00100 ?? ?? 0 ?? ?? ?? ?? 0 allow ip from any to any via lo0 > > > > > 00200 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to 127.0.0.0/8 > > > > > 00300 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from 127.0.0.0/8 to any > > > > > 01000 ?? ?? 0 ?? ?? ?? ?? 0 pipe 1 ip from any to any via vr0 layer2 > > > > > 01100 ?? ?? 0 ?? ?? ?? ?? 0 pipe 101 ip from any to any via vr4 layer2 > > > > > 65000 ??7089 ?? ??716987 allow ip from any to any > > > > > 65535 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to any > > > > > > > > > > When I setup my pipes as follow: > > > > > > > > > > # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0 > > > > > # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0 > > > > > # ipfw pipe show > > > > > > > > > > 00001: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > 00101: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > > > > > > With this setup, when I try to pass traffic through the bridge with > > > > > iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in > > > > > UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at > > > > > the end of this email). > > > > > > > > > > The problem arise when I setup pipe 1 (the downlink) with an > > > > > equivalent profile (I tried to simplify it as much as possible). > > > > > > > > > > # ipfw pipe 1 config profile test.pipeconf mask proto 0 > > > > > # ipfw pipe show > > > > > 00001: 10.000 Mbit/s 0 ms 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > profile: name "test" loss 1.000000 samples 2 > > > > > 00101: 10.000 Mbit/s 25 ms 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > > > > > > # cat test.pipeconf > > > > > name test > > > > > bw 10Mbit > > > > > loss-level 1.0 > > > > > samples 2 > > > > > prob delay > > > > > 0.0 25 > > > > > 1.0 25 > > > > > > > > > > The same iperf TCP tests then collapse to about 500Kbit/s with the > > > > > same settings (copy and pasted the output of iperf bellow) > > > > > > > > > > I can't figure out what is going on. There is no visible load on the bridge. > > > > > I have an unmodified GENERIC kernel with the following sysctl. > > > > > > > > > > net.link.bridge.ipfw: 1 > > > > > kern.hz: 1000 > > > > > > > > > > The bridge configuration is as follow: > > > > > > > > > > bridge0: flags=8843 metric 0 mtu 1500 > > > > > ether 1a:1f:2e:42:74:8d > > > > > id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15 > > > > > maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 > > > > > root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0 > > > > > member: vr4 flags=143 > > > > > ?? ?? ?? ??ifmaxaddr 0 port 6 priority 128 path cost 200000 > > > > > member: vr0 flags=143 > > > > > ?? ?? ?? ??ifmaxaddr 0 port 2 priority 128 path cost 200000 > > > > > > > > > > > > > > > iperf runs without the profile set: > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, TCP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > TCP window size: 16.0 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.0 sec 17.0 MBytes 9.49 Mbits/sec > > > > > > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, UDP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > Sending 1470 byte datagrams > > > > > UDP buffer size: 110 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > > > > > [ 3] Sent 13382 datagrams > > > > > [ 3] Server Report: > > > > > [ 3] 0.0-15.1 sec 17.4 MBytes 9.72 Mbits/sec 0.822 ms 934/13381 (7%) > > > > > [ 3] 0.0-15.1 sec 1 datagrams received out-of-order > > > > > > > > > > > > > > > iperf runs with the profile set: > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, TCP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > TCP window size: 16.0 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.7 sec 968 KBytes 505 Kbits/sec > > > > > > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, UDP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > Sending 1470 byte datagrams > > > > > UDP buffer size: 110 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > > > > > [ 3] Sent 13382 datagrams > > > > > [ 3] Server Report: > > > > > [ 3] 0.0-16.3 sec 893 KBytes 449 Kbits/sec 1.810 ms 12757/13379 (95%) > > > > > > > > > > > > > > > Let me know what other information you would need to help me debugging this. > > > > > In advance, thank you for your help > > > > > > > > > > -- > > > > > Charles-Henri de Boysson > > > > > _______________________________________________ > > > > > freebsd-ipfw@freebsd.org mailing list > > > > > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > > > > > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" > > > > _______________________________________________ > > > > freebsd-ipfw@freebsd.org mailing list > > > > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > > > > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" > > > > > > > > > > -- > ------------------------------------------------------------------------------------------------- > > Nuno Diogo > _______________________________________________ > freebsd-ipfw@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > To unsubscribe, send any mail to "freebsd-ipfw-unsubscribe@freebsd.org" From owner-freebsd-ipfw@FreeBSD.ORG Fri May 21 14:18:46 2010 Return-Path: Delivered-To: freebsd-ipfw@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2B039106566C for ; Fri, 21 May 2010 14:18:46 +0000 (UTC) (envelope-from nuno@diogonet.com) Received: from mail-gx0-f226.google.com (mail-gx0-f226.google.com [209.85.217.226]) by mx1.freebsd.org (Postfix) with ESMTP id D30A78FC08 for ; Fri, 21 May 2010 14:18:45 +0000 (UTC) Received: by gxk26 with SMTP id 26so550019gxk.13 for ; Fri, 21 May 2010 07:18:45 -0700 (PDT) Received: by 10.101.105.22 with SMTP id h22mr2287076anm.35.1274451524626; Fri, 21 May 2010 07:18:44 -0700 (PDT) Received: from nunopc (c-65-34-225-233.hsd1.fl.comcast.net [65.34.225.233]) by mx.google.com with ESMTPS id e4sm720331anb.5.2010.05.21.07.18.41 (version=SSLv3 cipher=RC4-MD5); Fri, 21 May 2010 07:18:43 -0700 (PDT) From: "Nuno Diogo" To: "'Luigi Rizzo'" References: <005a01caf6a4$e8cf9c70$ba6ed550$@com> <20100521073601.GA58353@onelab2.iet.unipi.it> In-Reply-To: <20100521073601.GA58353@onelab2.iet.unipi.it> Date: Fri, 21 May 2010 10:18:33 -0400 Message-ID: <004b01caf8f0$82f78720$88e69560$@com> MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook 12.0 thread-index: Acr4tq1gLZn6d+u9Rn2HpmF73ut1ywANtTBA Content-Language: en-us Cc: freebsd-ipfw@freebsd.org Subject: RE: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE X-BeenThere: freebsd-ipfw@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: IPFW Technical Discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 21 May 2010 14:18:46 -0000 Thank you for the break down, I get it now I hope. Delay is applied AFTER the bandwidth bottleneck, therefore emulating other hops the packet may have to traverse. Profile 'delay' is applied IN the bandwidth bottleneck, emulating overhead and unavailability for that one hop. You are right, that parameter should be called something besides 'delay'. Also the diagram in "Dummynet Revisited" page three, shows "delay" being applied within the "bw" bottleneck instead of after, so that threw me off as well. So unfortunately utilizing the profile delay distribution to emulate a typical internet connection's fluctuating latency such as my ping to yahoo below will not achieve accurate throughput emulation. Since you already have the code that varies the overhead based on empirical curve, how hard would it be to extend that mechanism to the delay so that these fluctuating latencies can be emulated with dummynet? Can you point me to the source code that handles that? I'm not a developer by any stretch of the imagination but maybe I can learn something while trying to hack at it? Thank you for your reply and your time. C:\Users\nuno>ping www.yahoo.com -t -l 1470 Pinging any-fp.wa1.b.yahoo.com [69.147.125.65] with 1470 bytes of data: Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=48ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=44ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=46ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=42ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=50ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=72ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=46ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=44ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=46ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=59ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=43ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=45ms TTL=49 Reply from 69.147.125.65: bytes=1470 time=42ms TTL=49 Ping statistics for 69.147.125.65: Packets: Sent = 22, Received = 22, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 42ms, Maximum = 72ms, Average = 46ms ____________________________________________________________________________ ___ Nuno Diogo -----Original Message----- From: Luigi Rizzo [mailto:rizzo@iet.unipi.it] Sent: Friday, May 21, 2010 3:36 AM To: Nuno Diogo Cc: freebsd-ipfw@freebsd.org Subject: Re: Performance issue with new pipe profile feature in FreeBSD 8.0 RELEASE top post for convenience: you are making a common mistake -- "delay" and "profile" are not the same thing. + With "delay" you set the propagation delay of the link: once a packet is outside of the bottleneck, it takes some extra time to reach its destination. However, during this time, other traffic will flow through the bottleneck; + with "profile" you specify a distribution of the extra time that the packet will take to go through the bottleneck link (e.g. due to preambles, crc, framing and other stuff). The bottleneck is effectively unavailable for other traffic during this time. So the throughput you measure with a "profile" of X ms is usually much lower than the one you see with a "delay" of X ms. cheers luigi On Thu, May 20, 2010 at 06:56:41PM -0400, Nuno Diogo wrote: > Hi all, > Sorry to spam the list with this issue, but I do believe that this is not > working as intended so I performed some more testing in a controlled > environment. > Using a dedicated FreeBSD 8-RELEASE-p2 i386 with GENERIC kernel + the > following additions: > > - options HZ=2000 > - device if_bridge > - options IPFIREWALL > - options IPFIREWALL_DEFAULTS_TO_ACCEPT > - options DUMMYNET > > Routing between VR0 and EM0 interfaces. > Ipfer TCP transfers between a Win 7 laptop and a Linux virtual server. > Only one variable changed at a time: > > #So lets start with your typical pipe rule using bandwidth and delay > statement: > > *Test 1 with 10Mbps 10ms:* > > #Only one rule pushing packets to PIPE 1 if they're passing between these > two specific interfaces > FreeBSD-Test# ipfw list > 0100 pipe 1 ip from any to any recv em0 xmit vr0 > 65535 allow ip from any to any > > #Pipe configured with 10M bandwidth, 10ms delay and 50 slot queue > FreeBSD-Test# ipfw pipe 1 show > 00001: 10.000 Mbit/s 10 ms 50 sl. 1 queues (1 buckets) droptail > burst: 0 Byte > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > 0 icmp 192.168.100.10/0 10.168.0.99/0 112431 154127874 0 0 > 168 > > #Traceroute from laptop to server showing just that one hop > C:\Users\nuno>tracert -d 10.168.0.99 > Tracing route to 10.168.0.99 over a maximum of 30 hops > 1 <1 ms <1 ms <1 ms 192.168.100.1 > 2 10 ms 10 ms 10 ms 10.168.0.99 > Trace complete. > > #Ping result for 1470 byte packet > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > > #Iperf performance, as we can see it utilizes the entire emulated pipe > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49225 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 1392 KBytes 11403 Kbits/sec > > [148] 1.0- 2.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 2.0- 3.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 3.0- 4.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 4.0- 5.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 5.0- 6.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 6.0- 7.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 7.0- 8.0 sec 1176 KBytes 9634 Kbits/sec > > [148] 8.0- 9.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 9.0-10.0 sec 1200 KBytes 9830 Kbits/sec > > [148] 10.0-11.0 sec 1120 KBytes 9175 Kbits/sec > > [148] 11.0-12.0 sec 1248 KBytes 10224 Kbits/sec > > [148] 12.0-13.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 13.0-14.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 14.0-15.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 15.0-16.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 16.0-17.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 17.0-18.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 18.0-19.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 19.0-20.0 sec 1192 KBytes 9765 Kbits/sec > > > > #Now let configure the same emulation (from my understanding) but with a > profile > > FreeBSD-Test# cat ./profile > > name Test > > samples 100 > > bw 10M > > loss-level 1.0 > > prob delay > > 0.00 10 > > 1.00 10 > > > #Pipe 1 configured with the above profile file and no additional bandwidth > or delay parameters > > FreeBSD-Test# ipfw pipe 1 show > > 00001: 10.000 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail > > burst: 0 Byte > > profile: name "Test" loss 1.000000 samples 100 > > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > > 0 icmp 192.168.100.10/0 10.168.0.99/0 131225 181884981 0 0 > 211 > > > #Ping time for a 1470 byte packet remains the same > > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=14ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=11ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=12ms TTL=63 > > #Iperf transfer however drops considerable! > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49226 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 248 KBytes 2032 Kbits/sec > > [148] 1.0- 2.0 sec 56.0 KBytes 459 Kbits/sec > > [148] 2.0- 3.0 sec 176 KBytes 1442 Kbits/sec > > [148] 3.0- 4.0 sec 128 KBytes 1049 Kbits/sec > > [148] 4.0- 5.0 sec 120 KBytes 983 Kbits/sec > > [148] 5.0- 6.0 sec 128 KBytes 1049 Kbits/sec > > [148] 6.0- 7.0 sec 128 KBytes 1049 Kbits/sec > > [148] 7.0- 8.0 sec 96.0 KBytes 786 Kbits/sec > > [148] 8.0- 9.0 sec 144 KBytes 1180 Kbits/sec > > [148] 9.0-10.0 sec 128 KBytes 1049 Kbits/sec > > [148] 10.0-11.0 sec 128 KBytes 1049 Kbits/sec > > [148] 11.0-12.0 sec 120 KBytes 983 Kbits/sec > > [148] 12.0-13.0 sec 120 KBytes 983 Kbits/sec > > [148] 13.0-14.0 sec 128 KBytes 1049 Kbits/sec > > [148] 14.0-15.0 sec 120 KBytes 983 Kbits/sec > > [148] 15.0-16.0 sec 128 KBytes 1049 Kbits/sec > > [148] 16.0-17.0 sec 120 KBytes 983 Kbits/sec > > [148] 17.0-18.0 sec 120 KBytes 983 Kbits/sec > > [148] 18.0-19.0 sec 128 KBytes 1049 Kbits/sec > > [148] 19.0-20.0 sec 64.0 KBytes 524 Kbits/sec > > > Lets do the exact same but this time reducing the emulate latency down to > just 2ms. > *Test 2 with 10Mbps 2ms:* > #Pipe 1 configured for 10Mbps bandwidth, 2ms latency and 50 slot queue > > FreeBSD-Test# ipfw pipe 1 show > > 00001: 10.000 Mbit/s 2 ms 50 sl. 1 queues (1 buckets) droptail > > burst: 0 Byte > > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > > 0 icmp 192.168.100.10/0 10.168.0.99/0 21020 19358074 0 0 > 123 > > > #Ping time from laptop to server > > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > > #Ipfer throughput, again we can use all of the emulated bandwidth > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49196 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 1264 KBytes 10355 Kbits/sec > > [148] 1.0- 2.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 2.0- 3.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 3.0- 4.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 4.0- 5.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 5.0- 6.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 6.0- 7.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 7.0- 8.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 8.0- 9.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 9.0-10.0 sec 1152 KBytes 9437 Kbits/sec > > [148] 10.0-11.0 sec 1240 KBytes 10158 Kbits/sec > > [148] 11.0-12.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 12.0-13.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 13.0-14.0 sec 1176 KBytes 9634 Kbits/sec > > [148] 14.0-15.0 sec 984 KBytes 8061 Kbits/sec > > [148] 15.0-16.0 sec 1192 KBytes 9765 Kbits/sec > > [148] 16.0-17.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 17.0-18.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 18.0-19.0 sec 1184 KBytes 9699 Kbits/sec > > [148] 19.0-20.0 sec 1208 KBytes 9896 Kbits/sec > > > #Now lets configure the profile file to emulate 10Mbps and 2ms of added > overhead > > FreeBSD-Test# cat ./profile > > name Test > > samples 100 > > bw 10M > > loss-level 1.0 > > prob delay > > 0.00 2 > 1.00 2 > > > > #Pipe 1 configured with the above profile file and no additional bandwidth > or delay parameters > > FreeBSD-Test# ipfw pipe 1 show > > 00001: 10.000 Mbit/s 0 ms 50 sl. 1 queues (1 buckets) droptail > > burst: 0 Byte > > profile: name "Test" loss 1.000000 samples 100 > > mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000 > > BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte > Drp > 0 icmp 192.168.100.10/0 10.168.0.99/0 39570 46750171 0 0 > 186 > > #Again, ping remains constant with this configuration > > C:\Users\nuno>ping 10.168.0.99 -t -l 1470 > > > > Pinging 10.168.0.99 with 1470 bytes of data: > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=3ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > Reply from 10.168.0.99: bytes=1470 time=4ms TTL=63 > > > #Iperf throughput again takes a big hit, although not as much as when we're > adding 10ms or overhead > > bin/iperf.exe -c 10.168.0.99 -P 1 -i 1 -p 5001 -f k -t 10000 > > ------------------------------------------------------------ > > Client connecting to 10.168.0.99, TCP port 5001 > > TCP window size: 63.0 KByte (default) > > ------------------------------------------------------------ > > [148] local 192.168.100.10 port 49197 connected with 10.168.0.99 port 5001 > > [ ID] Interval Transfer Bandwidth > > [148] 0.0- 1.0 sec 544 KBytes 4456 Kbits/sec > > [148] 1.0- 2.0 sec 440 KBytes 3604 Kbits/sec > > [148] 2.0- 3.0 sec 440 KBytes 3604 Kbits/sec > > [148] 3.0- 4.0 sec 432 KBytes 3539 Kbits/sec > > [148] 4.0- 5.0 sec 440 KBytes 3604 Kbits/sec > > [148] 5.0- 6.0 sec 448 KBytes 3670 Kbits/sec > > [148] 6.0- 7.0 sec 432 KBytes 3539 Kbits/sec > > [148] 7.0- 8.0 sec 440 KBytes 3604 Kbits/sec > > [148] 8.0- 9.0 sec 440 KBytes 3604 Kbits/sec > > [148] 9.0-10.0 sec 448 KBytes 3670 Kbits/sec > > [148] 10.0-11.0 sec 440 KBytes 3604 Kbits/sec > > [148] 11.0-12.0 sec 440 KBytes 3604 Kbits/sec > > [148] 12.0-13.0 sec 392 KBytes 3211 Kbits/sec > > [148] 13.0-14.0 sec 488 KBytes 3998 Kbits/sec > > [148] 14.0-15.0 sec 440 KBytes 3604 Kbits/sec > > [148] 15.0-16.0 sec 440 KBytes 3604 Kbits/sec > > [148] 16.0-17.0 sec 440 KBytes 3604 Kbits/sec > > [148] 17.0-18.0 sec 440 KBytes 3604 Kbits/sec > > [148] 18.0-19.0 sec 440 KBytes 3604 Kbits/sec > > [148] 19.0-20.0 sec 448 KBytes 3670 Kbits/sec > > > From my understanding, since the emulated RTT of the link remains the same, > Iperf performance should also stay the same. > > Regardless of how or why the RTT is present, (geographically induced > latency, MAC overhead, congestion etc) the effects on a TCP transmission > should be the same (assuming as in this test no jitter and packet loss) > > > On the first test we see throughput drop from ~9.7Mbps to 980Kbps-1050Kbps > with the addition of just 10ms of overhead in the profile! > > On the second test we see throughput drop from ~9.7Mbps to ~3.6Mbps with the > addition of just 2ms of overhead in the profile! > > So is this feature not working as intended or am I completely missing > something here? > > > I (and hopefully others) would highly appreciate any opinions as this new > feature could really expand the use of dummynet as a WAN emulator, but it > seems that in it's current implementation it does not allow for the full > utilization of the emulated bandwidth regardless of how little or static the > extra delay is set to. > > > Sincerely, > > Nuno Diogo > > On Tue, May 18, 2010 at 12:12 PM, Nuno Diogo wrote: > > > Hi all, > > > > I?m encountering the same situation, and I?m not quite understanding > > Luigi?s explanation. > > > > If a pipe is configured with 10Mbps bandwidth and 25ms delay, it will take > > approximately 26.7ms for a 1470 byte packet to pass through it as per the > > below math. > > > > IPerf can fully utilize the available emulated bandwidth with that delay. > > > > > > > > If we configure a profile with the same characteristics, 10Mbps and 25ms > > overhead/extra-airtime/delay isn?t the end result the same? > > > > A 1470 byte packet should still take ~26.7ms to pass through the pipe and > > IPerf should still be able to fully utilize the emulated bandwidth, no? > > > > > > > > IPerf does not know how that delay is being emulated or configured, it just > > knows that it?s taking ~26.7ms to get ACKs back etc, so I guess I?m missing > > something here? > > > > > > > > I use dummynet often for WAN acceleration testing, and have been trying to > > use the new profile method to try and emulate ?jitter?. > > > > With pings it works great, but when trying to use the full configured > > bandwidth, I get the same results as Charles. > > > > Regardless of delay/overhead/bandwidth configuration IPerf can?t push more > > than a fraction of the configured bandwidth with lots of packets queuing and > > dropping. > > > > > > > > Your patience is appreciated. > > > > > > > > Sincerely, > > > > > > > > > > ____________________________________________________________________________ ___ > > > > Nuno Diogo > > > > > > > > Luigi Rizzo > > Tue, 24 Nov 2009 21:21:56 -0800 > > > > Hi, > > > > there is no bug, the 'pipe profile' code is working correctly. > > > > > > > > In your mail below you are comparing two different things. > > > > > > > > "pipe config bw 10Mbit/s delay 25ms" > > > > means that _after shaping_ at 10Mbps, all traffic will > > > > be subject to an additional delay of 25ms. > > > > Each packet (1470 bytes) will take Length/Bandwidth sec > > > > to come out or 1470*8/10M = 1.176ms , but you won't > > > > see them until you wait another 25ms (7500km at the speed > > > > of light). > > > > > > > > "pipe config bw 10Mbit/s profile "test" ..." > > > > means that in addition to the Length/Bandwidth, > > > > _each packet transmission_ will consume > > > > some additional air-time as specified in the profile > > > > (25ms in your case) > > > > > > > > So, in your case with 1470bytes/pkt each transmission > > > > will take len/bw (1.176ms) + 25ms (extra air time) = 26.76ms > > > > That is 25 times more than the previous case and explains > > > > the reduced bandwidth you see. > > > > > > > > The 'delay profile' is effectively extra air time used for each > > > > transmission. The name is probably confusing, i should have called > > > > it 'extra-time' or 'overhead' and not 'delay' > > > > > > > > cheers > > > > luigi > > > > > > > > On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote: > > > > > Hi, > > > > > > > > > > I have a simple setup with two computer connected via a FreeBSD bridge > > > > > running 8.0 RELEASE. > > > > > I am trying to use dummynet to simulate a wireless network between the > > > > > two and for that I wanted to use the pipe profile feature of FreeBSD > > > > > 8.0. > > > > > But as I was experimenting with the pipe profile feature I ran into some > > > > > issues. > > > > > > > > > > I have setup ipfw to send traffic coming for either interface of the > > > > > bridge to a respective pipe as follow: > > > > > > > > > > # ipfw show > > > > > 00100 ?? ?? 0 ?? ?? ?? ?? 0 allow ip from any to any via lo0 > > > > > 00200 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to 127.0.0.0/8 > > > > > 00300 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from 127.0.0.0/8 to any > > > > > 01000 ?? ?? 0 ?? ?? ?? ?? 0 pipe 1 ip from any to any via vr0 layer2 > > > > > 01100 ?? ?? 0 ?? ?? ?? ?? 0 pipe 101 ip from any to any via vr4 layer2 > > > > > 65000 ??7089 ?? ??716987 allow ip from any to any > > > > > 65535 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to any > > > > > > > > > > When I setup my pipes as follow: > > > > > > > > > > # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0 > > > > > # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0 > > > > > # ipfw pipe show > > > > > > > > > > 00001: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > 00101: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > > > > > > With this setup, when I try to pass traffic through the bridge with > > > > > iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in > > > > > UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at > > > > > the end of this email). > > > > > > > > > > The problem arise when I setup pipe 1 (the downlink) with an > > > > > equivalent profile (I tried to simplify it as much as possible). > > > > > > > > > > # ipfw pipe 1 config profile test.pipeconf mask proto 0 > > > > > # ipfw pipe show > > > > > 00001: 10.000 Mbit/s 0 ms 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > profile: name "test" loss 1.000000 samples 2 > > > > > 00101: 10.000 Mbit/s 25 ms 50 sl. 0 queues (1 buckets) droptail > > > > > burst: 0 Byte > > > > > > > > > > # cat test.pipeconf > > > > > name test > > > > > bw 10Mbit > > > > > loss-level 1.0 > > > > > samples 2 > > > > > prob delay > > > > > 0.0 25 > > > > > 1.0 25 > > > > > > > > > > The same iperf TCP tests then collapse to about 500Kbit/s with the > > > > > same settings (copy and pasted the output of iperf bellow) > > > > > > > > > > I can't figure out what is going on. There is no visible load on the bridge. > > > > > I have an unmodified GENERIC kernel with the following sysctl. > > > > > > > > > > net.link.bridge.ipfw: 1 > > > > > kern.hz: 1000 > > > > > > > > > > The bridge configuration is as follow: > > > > > > > > > > bridge0: flags=8843 metric 0 mtu 1500 > > > > > ether 1a:1f:2e:42:74:8d > > > > > id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15 > > > > > maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200 > > > > > root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0 > > > > > member: vr4 flags=143 > > > > > ?? ?? ?? ??ifmaxaddr 0 port 6 priority 128 path cost 200000 > > > > > member: vr0 flags=143 > > > > > ?? ?? ?? ??ifmaxaddr 0 port 2 priority 128 path cost 200000 > > > > > > > > > > > > > > > iperf runs without the profile set: > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, TCP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > TCP window size: 16.0 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.0 sec 17.0 MBytes 9.49 Mbits/sec > > > > > > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, UDP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > Sending 1470 byte datagrams > > > > > UDP buffer size: 110 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > > > > > [ 3] Sent 13382 datagrams > > > > > [ 3] Server Report: > > > > > [ 3] 0.0-15.1 sec 17.4 MBytes 9.72 Mbits/sec 0.822 ms 934/13381 (7%) > > > > > [ 3] 0.0-15.1 sec 1 datagrams received out-of-order > > > > > > > > > > > > > > > iperf runs with the profile set: > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, TCP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > TCP window size: 16.0 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.7 sec 968 KBytes 505 Kbits/sec > > > > > > > > > > % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit > > > > > ------------------------------------------------------------ > > > > > Client connecting to 10.0.0.254, UDP port 5001 > > > > > Binding to local address 10.1.0.1 > > > > > Sending 1470 byte datagrams > > > > > UDP buffer size: 110 KByte (default) > > > > > ------------------------------------------------------------ > > > > > [ 3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001 > > > > > [ ID] Interval Transfer Bandwidth > > > > > [ 3] 0.0-15.0 sec 18.8 MBytes 10.5 Mbits/sec > > > > > [ 3] Sent 13382 datagrams > > > > > [ 3] Server Report: > > > > > [ 3] 0.0-16.3 sec 893 KBytes 449 Kbits/sec 1.810 ms 12757/13379 (95%) > > > > > > > > > > > > > > > Let me know what other information you would need to help me debugging this. > > > > > In advance, thank you for your help > > > > > > > > > > -- > > > > > Charles-Henri de Boysson > > > > > _______________________________________________ > > > > > freebsd-ipfw@freebsd.org mailing list > > > > > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > > > > > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" > > > > _______________________________________________ > > > > freebsd-ipfw@freebsd.org mailing list > > > > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > > > > To unsubscribe, send any mail to "freebsd-ipfw-unsubscr...@freebsd.org" > > > > > > > > > > -- > ---------------------------------------------------------------------------- --------------------- > > Nuno Diogo > _______________________________________________ > freebsd-ipfw@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-ipfw > To unsubscribe, send any mail to "freebsd-ipfw-unsubscribe@freebsd.org"