Date: Tue, 14 Jun 2011 10:15:07 -0700
From: Jack Vogel <jfvogel@gmail.com>
To: Mario Spinthiras <spinthiras.mario@gmail.com>
Cc: freebsd-net@freebsd.org
Subject: Re: strange igb interface performance problems
Message-ID: <BANLkTi=XA0rq0c1hUEiwGDdG6HhOiGExUw@mail.gmail.com>
In-Reply-To: <BANLkTimEVx9bj0eB7mXPRmbAT-D3T-dxDw@mail.gmail.com>
References: <BANLkTikH6PiXU9PUF9nS-jA8jhM0ozyHfw@mail.gmail.com>
        <BANLkTi=OuCv19w8hwGQ7%2BcYLrQ%2BZxAoM2A@mail.gmail.com>
        <BANLkTimEVx9bj0eB7mXPRmbAT-D3T-dxDw@mail.gmail.com>
Why do you have TSO off? If you run the same connection without the vlan, what do you see?

Jack

On Tue, Jun 14, 2011 at 7:38 AM, Mario Spinthiras <spinthiras.mario@gmail.com> wrote:
> Hello everyone,
>
> I'm back with more updates on the issue. Forgive the lack of detail in
> my previous post; out of frustration I set the problem aside for a few
> days so I could revisit it with a clear head.
>
> As I mentioned in a previous post, the problem is between two nodes
> running pfSense 2.0 RC1 over a point-to-point link. The uname output
> from the machines:
>
> Endpoint A:
>
> [2.0-RC1][root@pfSense.localdomain]/root(3): uname -a
> FreeBSD pfSense.localdomain 8.1-RELEASE-p2 FreeBSD 8.1-RELEASE-p2 #0:
> Sat Feb 26 18:05:58 EST 2011
> root@FreeBSD_8.0_pfSense_2.0-AMD64.snaps.pfsense.org:
> /usr/obj.pfSense/usr/pfSensesrc/src/sys/pfSense_SMP.8 amd64
> [2.0-RC1][root@pfSense.localdomain]/root(4):
>
> Endpoint B:
>
> [2.0-RC1][root@pfsense.localdomain]/root(1): uname -a
> FreeBSD pfsense.localdomain 8.1-RELEASE-p3 FreeBSD 8.1-RELEASE-p3 #1:
> Tue Apr 26 20:56:16 EDT 2011
> sullrich@FreeBSD_8.0_pfSense_2.0-snaps.pfsense.org:
> /usr/obj.pfSense/usr/pfSensesrc/src/sys/pfSense_SMP.8 i386
> [2.0-RC1][root@pfsense.localdomain]/root(2):
>
> The link has an RTT of 130 ms and carries mostly TCP traffic; UDP is
> not used much. Performance in one direction is fine (the expected
> 160-odd Mbps). In the other direction the link struggles to climb even
> with a 3 MB window size and all sorts of tuning on the stack.
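As a sanity check on that 3 MB window choice: the bandwidth-delay product of a 130 ms path at the ~180 Mbit/s the fast direction achieves works out to just under 3 MB, so the window itself is sized sensibly. A quick sketch (the RTT and rate figures are the ones quoted in the thread):

```shell
# Bandwidth-delay product check: window needed to keep a 130 ms,
# ~180 Mbit/s path full. BDP (bytes) = rate (bits/s) * RTT (s) / 8.
rtt_s=0.130
rate_bps=180000000   # 180 Mbit/s

awk -v r="$rate_bps" -v t="$rtt_s" \
    'BEGIN { printf "BDP: %.2f MB\n", r * t / 8 / 1048576 }'
# -> BDP: 2.79 MB
```

So a 3 MB window should be ample for this path; the asymmetric throughput points at something other than window sizing.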
> An iperf test run for 20 seconds produces the following results:
>
> Node A:
>
> [2.0-RC1][root@pfSense.localdomain]/root(24): iperf -c b.b.b.b -w 3M -t 20 -i 5
> ------------------------------------------------------------
> Client connecting to b.b.b.b, TCP port 5001
> TCP window size: 3.00 MByte (WARNING: requested 3.00 MByte)
> ------------------------------------------------------------
> [  3] local a.a.a.a port 22995 connected with b.b.b.b port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0- 5.0 sec  75.3 MBytes   126 Mbits/sec
> [  3]  5.0-10.0 sec   107 MBytes   180 Mbits/sec
> [  3] 10.0-15.0 sec   107 MBytes   180 Mbits/sec
> [  3] 15.0-20.0 sec   108 MBytes   181 Mbits/sec
> [  3]  0.0-20.2 sec   404 MBytes   167 Mbits/sec
> [2.0-RC1][root@pfSense.localdomain]/root(25):
>
> Node B:
>
> [2.0-RC1][root@pfsense.localdomain]/root(2): iperf -c a.a.a.a -t 20 -i 10 -w 3M
> ------------------------------------------------------------
> Client connecting to a.a.a.a, TCP port 5001
> TCP window size: 3.00 MByte (WARNING: requested 3.00 MByte)
> ------------------------------------------------------------
> [  3] local b.b.b.b port 44160 connected with a.a.a.a port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  7.73 MBytes  6.49 Mbits/sec
> [  3] 10.0-20.0 sec  9.95 MBytes  8.34 Mbits/sec
> [  3]  0.0-20.3 sec  18.1 MBytes  7.47 Mbits/sec
> [2.0-RC1][root@pfsense.localdomain]/root(3):
>
> The performance is terrible on Node B.
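A big throughput asymmetry on a long-RTT path (167 Mbit/s one way, ~7.5 Mbit/s the other) is often the signature of packet loss in one direction. As a rough back-of-the-envelope check, not something from the thread itself, the Mathis et al. approximation BW = (MSS/RTT) * (C/sqrt(p)), with C around 1.22, lets us estimate how little loss would cap a single stream at the observed rate:

```shell
# Estimate the loss rate p that would limit one TCP stream to the
# observed 7.47 Mbit/s, via the Mathis approximation:
#   BW = (MSS/RTT) * (C/sqrt(p)),  C ~= 1.22
# MSS, RTT, and throughput figures are taken from the thread.
mss_bits=11680        # 1460-byte MSS
rtt_s=0.130
bw_bps=7470000        # 7.47 Mbit/s, the slow direction

awk -v m="$mss_bits" -v t="$rtt_s" -v b="$bw_bps" \
    'BEGIN { p = (1.22 * m / (t * b)) ^ 2; printf "loss ~ %.4f%%\n", p * 100 }'
# -> loss ~ 0.0215%
```

Only a couple of hundredths of a percent of loss is enough to flatten a single flow at this RTT, so testing the raw path with UDP (iperf -u) in the slow direction would help separate path loss from anything the igb driver or offloads are doing.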
>
> Their corresponding interfaces look like this:
>
> Node A:
>
> [2.0-RC1][root@pfSense.localdomain]/root(26): ifconfig igb1
> igb1: flags=28943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,PPROMISC>
>         metric 0 mtu 1500
>         options=100bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,VLAN_HWFILTER>
>         ether 00:1b:21:9b:6e:ab
>         inet6 fe80::21b:21ff:fe9b:6eab%igb1 prefixlen 64 scopeid 0x4
>         nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
>         media: Ethernet autoselect (1000baseT <full-duplex>)
>         status: active
> [2.0-RC1][root@pfSense.localdomain]/root(27):
>
> [2.0-RC1][root@pfSense.localdomain]/root(28): ifconfig igb1_593
> igb1_593: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>         options=3<RXCSUM,TXCSUM>
>         ether 00:1b:21:9b:6e:ab
>         inet6 fe80::225:90ff:fe20:5510%igb1_593 prefixlen 64 scopeid 0xa
>         inet a.a.a.a netmask 0xfffffff8 broadcast x.x.x.x
>         nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
>         media: Ethernet autoselect (1000baseT <full-duplex>)
>         status: active
>         vlan: 593 parent interface: igb1
> [2.0-RC1][root@pfSense.localdomain]/root(29):
>
> Node B:
>
> [2.0-RC1][root@pfsense.localdomain]/root(48): ifconfig igb3
> igb3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>         options=101bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,VLAN_HWFILTER>
>         ether 00:1b:21:8e:2b:5f
>         inet6 fe80::21b:21ff:fe8e:2b5f%igb3 prefixlen 64 scopeid 0x4
>         nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
>         media: Ethernet autoselect (1000baseT <full-duplex>)
>         status: active
> [2.0-RC1][root@pfsense.localdomain]/root(49): ifconfig igb3_vlan593
> igb3_vlan593: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>         options=3<RXCSUM,TXCSUM>
>         ether 00:1b:21:8e:2b:5f
>         inet6 fe80::21b:21ff:fe8e:2b5c%igb3_vlan593 prefixlen 64 scopeid 0xb
>         inet b.b.b.b netmask 0xfffffff8 broadcast x.x.x.x
>         nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
>         media: Ethernet autoselect (1000baseT <full-duplex>)
>         status: active
>         vlan: 593 parent interface: igb3
> [2.0-RC1][root@pfsense.localdomain]/root(50):
>
> As you can see from the ifconfig output, the link runs dot1q with a VID
> of 593.
>
> The netstat -m counters:
>
> Node A:
>
> [2.0-RC1][root@pfSense.localdomain]/root(29): netstat -m
> 10261/10606/20867 mbufs in use (current/cache/total)
> 10241/5957/16198/25600 mbuf clusters in use (current/cache/total/max)
> 10240/5120 mbuf+clusters out of packet secondary zone in use (current/cache)
> 0/3989/3989/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
> 25612K/33173K/58785K bytes allocated to network (current/cache/total)
> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0/0/0 sfbufs in use (current/peak/max)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 0 calls to protocol drain routines
> [2.0-RC1][root@pfSense.localdomain]/root(30):
>
> Node B:
>
> [2.0-RC1][root@pfsense.localdomain]/root(50): netstat -m
> 9691/4274/13965 mbufs in use (current/cache/total)
> 6355/1891/8246/25600 mbuf clusters in use (current/cache/total/max)
> 6354/1070 mbuf+clusters out of packet secondary zone in use (current/cache)
> 1216/1271/2487/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
> 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
> 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
> 20091K/9934K/30026K bytes allocated to network (current/cache/total)
> 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
> 0/0/0 requests for jumbo clusters denied (4k/9k/16k)
> 0/8/6656 sfbufs in use (current/peak/max)
> 0 requests for sfbufs denied
> 0 requests for sfbufs delayed
> 0 requests for I/O initiated by sendfile
> 0 calls to protocol drain routines
> [2.0-RC1][root@pfsense.localdomain]/root(51):
>
> dmesg output on both nodes:
>
> Node A:
>
> igb1: <Intel(R) PRO/1000 Network Connection version - 2.0.7> port
> 0xe880-0xe89f mem
> 0xf97e0000-0xf97fffff,0xf9c00000-0xf9ffffff,0xf97dc000-0xf97dffff
> irq 17 at device 0.1 on pci3
> igb1: Using MSIX interrupts with 5 vectors
> igb1: [ITHREAD]
> igb1: [ITHREAD]
> igb1: [ITHREAD]
> igb1: [ITHREAD]
> igb1: [ITHREAD]
>
> Node B:
>
> igb3: <Intel(R) PRO/1000 Network Connection version - 2.1.7> port
> 0xc880-0xc89f mem
> 0xf9fe0000-0xf9ffffff,0xfa000000-0xfa3fffff,0xf9fdc000-0xf9fdffff
> irq 37 at device 0.1 on pci6
> igb3: Using MSIX interrupts with 5 vectors
> igb3: [ITHREAD]
> igb3: [ITHREAD]
> igb3: [ITHREAD]
> igb3: [ITHREAD]
> igb3: [ITHREAD]
> igb3: link state changed to UP
> igb3_vlan593: link state changed to UP
>
> pciconf output:
>
> Node A:
>
> igb1@pci0:3:0:1: class=0x020000 card=0xa03c8086 chip=0x10c98086 rev=0x01 hdr=0x00
>     class      = network
>     subclass   = ethernet
>     bar   [10] = type Memory, range 32, base 0xf97e0000, size 131072, enabled
>     bar   [14] = type Memory, range 32, base 0xf9c00000, size 4194304, enabled
>     bar   [18] = type I/O Port, range 32, base 0xe880, size 32, enabled
>     bar   [1c] = type Memory, range 32, base 0xf97dc000, size 16384, enabled
>
> Node B:
>
> igb3@pci0:6:0:1: class=0x020000 card=0xa02b8086 chip=0x10e88086 rev=0x01 hdr=0x00
>     class      = network
>     subclass   = ethernet
>     bar   [10] = type Memory, range 32, base 0xf9fe0000, size 131072, enabled
>     bar   [14] = type Memory, range 32, base 0xfa000000, size 4194304, enabled
>     bar   [18] = type I/O Port, range 32, base 0xc880, size 32, enabled
>     bar   [1c] = type Memory, range 32, base 0xf9fdc000, size 16384, enabled
>
> I've been looking at the igb performance issues reported a few years
> back in this thread:
> http://lists.freebsd.org/pipermail/freebsd-doc/2009-June/015983.html.
> Those are very similar problems to what I'm seeing; however, disabling
> LRO and TSO does not help in my case.
>
> I'm quite frustrated with this, because either no error is being
> reported or I'm not looking in the right place. Can someone point me in
> the right direction? Any help will be much appreciated.
>
> Warm regards,
> Mario Spinthiras
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
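For anyone replaying Jack's two suggestions on Node B, they translate into something like the following sketch. The interface names, VLAN tag, and the a.a.a.a / b.b.b.b placeholders come from the thread; the /29 matches the 0xfffffff8 netmask shown in ifconfig, and these commands must of course run on the actual box during a maintenance window:

```shell
# 1) Re-enable TSO (and LRO) on the parent interface, then rerun iperf.
#    FreeBSD's ifconfig toggles offloads with the tso/-tso and lro/-lro flags.
ifconfig igb3 tso lro

# 2) Take the vlan out of the picture: move the address onto the parent
#    interface temporarily and test the same connection without dot1q.
ifconfig igb3_vlan593 down
ifconfig igb3 inet b.b.b.b/29
iperf -c a.a.a.a -w 3M -t 20 -i 5
```

If throughput recovers without the vlan, the problem is in the VLAN tagging path (e.g. VLAN_HWTAGGING/VLAN_HWFILTER interaction) rather than the link itself.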