Date:      Thu, 28 Apr 2011 17:02:28 -0700
From:      Matt Connor <bsd@xerq.net>
To:        <freebsd-net@freebsd.org>
Subject:   Re: em0 performance subpar
Message-ID:  <2290ea840693c579fef9ac04ab58ee3c@www1.xerq.net>
In-Reply-To: <20110428072946.GA11391@zephyr.adamsnet>
References:  <20110428072946.GA11391@zephyr.adamsnet>

On Thu, 28 Apr 2011 03:29:46 -0400, Adam Stylinski wrote:
> Hello,
>
> I have an Intel gigabit network adapter (the 1000 GT, with the
> 82541PI chipset) which performs poorly in FreeBSD compared to the
> same card in Linux.  I've tried this card in two different FreeBSD
> boxes and for whatever reason I get poor transmit performance.  I've
> done all of the tweaking specified in just about every guide out
> there (the usual TCP window scaling, larger nmbclusters, delayed
> ACKs, etc.) and still I get only around 600 Mbit/s.  I'm using jumbo
> frames with an MTU of 9000, and I'm testing with iperf.  While I
> realize this may not be the most realistic test, Linux hosts with
> the same card can achieve 995 Mbit/s to another host running it.
> When the FreeBSD box is the server, Linux hosts can transmit to it
> at around 800-something Mbit/s.  I've increased the transmit
> descriptors as specified in the if_em man page, and while that gave
> me 20 or 30 more Mbit/s, transmit performance is still below normal.
>
> sysctl stats report that the card is triggering a lot of
> tx_desc_fail2 events:
> 	dev.em.0.tx_desc_fail2: 3431
>
> A comment in the source code indicates this means the card was not
> able to obtain enough transmit descriptors (but I've given the card
> the maximum of 4096 via the loader.conf tunable).  Is this a bug or
> a performance regression of some kind?  Does anybody have a fix for
> this?  I tried another card with the same chip in a different box on
> 8-STABLE, to no avail (the box I'm trying to improve performance on
> runs 8.2-RELEASE-p1).
>
> Has anybody managed to make this card push above 600 Mbit/s in ideal
> network benchmarks?  Any help would be greatly appreciated.
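
One thing worth double-checking on the descriptor side: hw.em.txd and
hw.em.rxd are loader tunables, so they only take effect when set in
/boot/loader.conf before boot, not from sysctl.conf at runtime.  A
minimal sketch of what I mean (tunable names as documented in em(4);
4096 is just the value you said you're already using):

# /boot/loader.conf
# Descriptor ring sizes for em(4); a reboot is needed for these to apply.
hw.em.txd="4096"
hw.em.rxd="4096"

You can confirm the loader actually picked them up after boot with
"kenv hw.em.txd".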


We've had this issue with Intel cards in the past, and these are the
changes we made. Let me know if this helps or makes things worse.


# dmesg | grep Intel
CPU: Intel(R) Xeon(R) CPU E31270 @ 3.40GHz (3392.31-MHz K8-class CPU)
   Origin = "GenuineIntel"  Id = 0x206a7  Family = 6  Model = 2a  
Stepping = 7
em0: <Intel(R) PRO/1000 Network Connection 7.1.9> port 0xe000-0xe01f 
mem 0xfbc00000-0xfbc1ffff,0xfbc20000-0xfbc23fff irq 16 at device 0.0 on 
pci4
em1: <Intel(R) PRO/1000 Network Connection 7.1.9> port 0xd000-0xd01f 
mem 0xfbb00000-0xfbb1ffff,0xfbb20000-0xfbb23fff irq 17 at device 0.0 on 
pci5
em2: <Intel(R) PRO/1000 Network Connection 7.1.9> port 0xc000-0xc01f 
mem 0xfba00000-0xfba1ffff,0xfba20000-0xfba23fff irq 18 at device 0.0 on 
pci6
em3: <Intel(R) PRO/1000 Network Connection 7.1.9> port 0xb000-0xb01f 
mem 0xfb900000-0xfb91ffff,0xfb920000-0xfb923fff irq 19 at device 0.0 on 
pci7


# vmstat -z | egrep 'ITEM|mbuf'
ITEM                     SIZE     LIMIT      USED      FREE  REQUESTS  FAILURES
mbuf_packet:              256,        0,     2048,     1409, 19787794,        0
mbuf:                     256,        0,        2,     2166, 45417342,        0
mbuf_cluster:            2048,    32768,     3457,     1397,    14464,        0
mbuf_jumbo_page:         4096,    12800,        0,      715,   466607,        0
mbuf_jumbo_9k:           9216,     6400,        0,        0,        0,        0
mbuf_jumbo_16k:         16384,     3200,        0,        0,        0,        0
mbuf_ext_refcnt:            4,        0,        0,     1680,  7596019,        0
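
With a 9000-byte MTU it's the mbuf_jumbo_9k zone above that matters,
and its FAILURES column is the first thing I'd watch while iperf is
running.  A quick way to spot cluster denials without digging through
vmstat -z (plain netstat, nothing driver-specific):

# netstat -m | grep -iE 'denied|delayed'

If requests for 9k jumbo clusters are being denied there,
kern.ipc.nmbjumbo9 would be the limit to raise (assuming your release
exposes it; it's a separate knob from kern.ipc.nmbclusters).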


# pciconf -lvc

em1@pci0:5:0:0: class=0x020000 card=0x10d315d9 chip=0x10d38086 rev=0x00 hdr=0x00
     vendor     = 'Intel Corporation'
     device     = 'Intel 82574L Gigabit Ethernet Controller (82574L)'
     class      = network
     subclass   = ethernet
     cap 01[c8] = powerspec 2  supports D0 D3  current D0
     cap 05[d0] = MSI supports 1 message, 64 bit
     cap 10[e0] = PCI-Express 1 endpoint max data 128(256) link x1(x1)
     cap 11[a0] = MSI-X supports 5 messages in map 0x1c enabled
     ecap 0001[100] = AER 1 0 fatal 0 non-fatal 1 corrected
     ecap 0003[140] = Serial 1 002590ffff247fef


# vmstat -i
interrupt                          total       rate
irq1: atkbd0                           6          0
irq0:                                  1          0
stray irq0                             1          0
irq16: ehci0                      292820          1
irq19: atapci0                   3473449         17
irq23: ehci1                      293594          1
cpu0: timer                    389382023       1999
irq256: em0:rx 0                  150420          0
irq257: em0:tx 0                  131374          0
irq258: em0:link                       2          0
irq259: em1:rx 0                17554401         90
irq260: em1:tx 0                13141176         67
irq261: em1:link                       3          0
cpu1: timer                    389381624       1999
cpu4: timer                    389381747       1999
cpu5: timer                    389381781       1999
cpu3: timer                    389381685       1999
cpu2: timer                    389381724       1999
cpu6: timer                    389381781       1999
cpu7: timer                    389381587       1999
Total                         3150091199      16179
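
If the transmit path turns out to be interrupt-bound rather than
descriptor-bound, the em(4) interrupt-delay tunables are another place
to experiment.  Purely as a sketch (these also go in /boot/loader.conf;
the values are illustrative only, not something I'm claiming is right
for an 82541PI, so check em(4) for the documented defaults and caveats):

# /boot/loader.conf
# Interrupt moderation for em(4); see em(4) before changing these.
hw.em.tx_int_delay="66"
hw.em.tx_abs_int_delay="66"
hw.em.rx_abs_int_delay="66"

If your driver version exposes dev.em.0.tx_int_delay and friends as
sysctls, you can also poke the same values at runtime instead of
rebooting between tests.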




/etc/sysctl.conf:

net.inet.tcp.msl=7500

##added by SoftLayer
kern.ipc.maxsockbuf=16777216
net.inet.tcp.rfc1323=1

# Disk Speed tweaks
#vfs.ufs.dirhash_maxmem=64777216
vfs.write_behind=1

# Kernel Tuning
kern.ipc.somaxconn=2048
kern.ipc.nmbclusters=32768

# Experimental
kern.maxfilesperproc=32768
kern.maxvnodes=400000
net.local.stream.recvspace=65536
kern.maxfiles=65536
net.inet.udp.maxdgram=57344
net.inet.tcp.mssdflt=1460

net.inet.tcp.sendbuf_max=67108864
net.inet.tcp.recvbuf_max=67108864
net.inet.tcp.inflight.enable=0
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.udp.recvspace=262144
net.inet.tcp.sack.enable=1
net.inet.tcp.path_mtu_discovery=1
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.sendbuf_inc=16384
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.recvbuf_inc=524288
net.inet.tcp.hostcache.expire=1
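
Once you've applied whichever of the above make sense, I'd re-run the
same iperf test with an explicit socket buffer so the TCP window isn't
what's limiting you.  Roughly (standard iperf 2 options; <server-ip>
is obviously a placeholder for the FreeBSD box):

server# iperf -s -w 256k
client# iperf -c <server-ip> -w 256k -t 30 -i 5

-w sets the socket buffer size, -t the test length in seconds, and -i
the interval between throughput reports.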


