Date:      Fri, 4 May 2012 13:06:21 -0400
From:      Arnaud Lacombe <lacombar@gmail.com>
To:        FreeBSD Net <freebsd-net@freebsd.org>
Subject:   High interrupt load on idle igb interfaces
Message-ID:  <CACqU3MWNPmMQq8HU7LFSbkLs%2B=Yu8N=MbHB7XeJOAZw6qT3ajg@mail.gmail.com>

Hi,

We are currently evaluating a new hardware platform using 4 Intel
82580 interfaces. System is running FreeBSD 7.1 with backported igb(4)
from HEAD (v2.2.5). After about 12h routing around 155Mbps of traffic,
vmstat(1) shows a high constant interrupt rate on igb0 and igb3:

# vmstat -i
irq260: igb0                   672410485      11084
irq261: igb0                    86304338       1422
irq262: igb0                    86262625       1421
irq263: igb0                    86356261       1423
irq264: igb0                    86374226       1423
irq265: igb0                    86202637       1421
irq266: igb0                    86290286       1422
irq267: igb0                    86415203       1424
irq268: igb0                          10          0
[...]
irq287: igb3                   294639203       4856
irq288: igb3                   112026336       1846
irq289: igb3                   112503183       1854
irq290: igb3                   112154972       1848
irq291: igb3                   112441454       1853
irq292: igb3                   112224418       1849
irq293: igb3                   112485042       1854
irq294: igb3                   112290634       1851
irq295: igb3                           9          0
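Note that the rate column in vmstat -i is averaged over the whole uptime; to confirm the interrupts are ongoing rather than historical, I take two samples and diff the cumulative counts. A rough sketch (the /tmp paths and the 10 s window are arbitrary choices):

```shell
# vmstat -i's rate column is total count / uptime, so sample the
# cumulative counters twice and compute the delta per second.
vmstat -i | awk '/igb/ { print $1, $(NF-1) }' > /tmp/irq.before
sleep 10
vmstat -i | awk '/igb/ { print $1, $(NF-1) }' > /tmp/irq.after
# First pass fills before[]; second pass prints the per-second delta.
awk 'NR==FNR { before[$1] = $2; next }
     { printf "%-12s %d intr/s\n", $1, ($2 - before[$1]) / 10 }' \
    /tmp/irq.before /tmp/irq.after
```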

Despite the relatively high rate, top(1) does not show anything
relevant taking CPU time, and no traffic seems to actually flow:

# netstat -i | grep 'igb[03]'; sleep 10; netstat -i | grep 'igb[03]'
igb0   1500 <Link#5> 00:90:0b:1e:df:f0 807511568 4325295 872538069 0 0
igb0   1500 10.4.4.0 10.4.4.4 0 - 864121185 - -
igb3   1500 <Link#8> 00:90:0b:1e:df:f3 906072418 0 616715810 0 0
igb3   1500 10.3.3.0 10.3.3.4 0     - 609718187 - -
[...]
igb0   1500 <Link#5> 00:90:0b:1e:df:f0 807511568 4325295 872538069 0 0
igb0   1500 10.4.4.0 10.4.4.4 0 - 864121185 - -
igb3   1500 <Link#8> 00:90:0b:1e:df:f3 906072418 0 616715810 0 0
igb3   1500 10.3.3.0 10.3.3.4 0 - 609718187 - -

Here are some device statistics for igb0:

# sysctl dev.igb.0 | grep -v ' 0$'
dev.igb.0.%desc: Intel(R) PRO/1000 Network Connection version - 2.2.5
dev.igb.0.%driver: igb
dev.igb.0.%location: slot=0 function=0
dev.igb.0.%pnpinfo: vendor=0x8086 device=0x150e subvendor=0x8086
subdevice=0x0000 class=0x020000
dev.igb.0.%parent: pci6
dev.igb.0.nvm: -1
dev.igb.0.enable_aim: 1
dev.igb.0.fc: 65536003
dev.igb.0.rx_processing_limit: 100
dev.igb.0.link_irq: 10
dev.igb.0.device_control: 1087373889
dev.igb.0.rx_control: 67141634
dev.igb.0.interrupt_mask: 4
dev.igb.0.extended_int_mask: 2147484159
dev.igb.0.fc_high_water: 33168
dev.igb.0.fc_low_water: 33152
dev.igb.0.queue0.interrupt_rate: 111111
dev.igb.0.queue0.txd_head: 454
dev.igb.0.queue0.txd_tail: 454
dev.igb.0.queue0.tx_packets: 872538069
dev.igb.0.queue0.rxd_head: 349
dev.igb.0.queue0.rxd_tail: 348
dev.igb.0.queue0.rx_packets: 100975965
dev.igb.0.queue0.rx_bytes: 106639501004
dev.igb.0.queue1.interrupt_rate: 100000
dev.igb.0.queue1.rxd_head: 176
dev.igb.0.queue1.rxd_tail: 175
dev.igb.0.queue1.rx_packets: 101202096
dev.igb.0.queue1.rx_bytes: 106788129111
dev.igb.0.queue2.interrupt_rate: 100000
dev.igb.0.queue2.rxd_head: 1010
dev.igb.0.queue2.rxd_tail: 1009
dev.igb.0.queue2.rx_packets: 100999154
dev.igb.0.queue2.rx_bytes: 106597010381
dev.igb.0.queue3.interrupt_rate: 100000
dev.igb.0.queue3.rxd_head: 918
dev.igb.0.queue3.rxd_tail: 917
dev.igb.0.queue3.rx_packets: 101185430
dev.igb.0.queue3.rx_bytes: 106764061444
dev.igb.0.queue4.interrupt_rate: 100000
dev.igb.0.queue4.rxd_head: 504
dev.igb.0.queue4.rxd_tail: 503
dev.igb.0.queue4.rx_packets: 101162488
dev.igb.0.queue4.rx_bytes: 106772530472
dev.igb.0.queue5.interrupt_rate: 100000
dev.igb.0.queue5.rxd_head: 967
dev.igb.0.queue5.rxd_tail: 966
dev.igb.0.queue5.rx_packets: 100960199
dev.igb.0.queue5.rx_bytes: 106548050559
dev.igb.0.queue6.interrupt_rate: 100000
dev.igb.0.queue6.rxd_head: 457
dev.igb.0.queue6.rxd_tail: 456
dev.igb.0.queue6.rx_packets: 101045705
dev.igb.0.queue6.rx_bytes: 106654181280
dev.igb.0.queue7.interrupt_rate: 100000
dev.igb.0.queue7.rxd_head: 409
dev.igb.0.queue7.rxd_tail: 408
dev.igb.0.queue7.rx_packets: 101220761
dev.igb.0.queue7.rx_bytes: 106820232038
dev.igb.0.mac_stats.missed_packets: 4325192
dev.igb.0.mac_stats.recv_no_buff: 963
dev.igb.0.mac_stats.recv_jabber: 15
dev.igb.0.mac_stats.recv_errs: 18
dev.igb.0.mac_stats.crc_errs: 85
dev.igb.0.mac_stats.total_pkts_recvd: 813077091
dev.igb.0.mac_stats.good_pkts_recvd: 808751798
dev.igb.0.mac_stats.bcast_pkts_recvd: 428
dev.igb.0.mac_stats.rx_frames_64: 221567998
dev.igb.0.mac_stats.rx_frames_65_127: 17124186
dev.igb.0.mac_stats.rx_frames_128_255: 5804200
dev.igb.0.mac_stats.rx_frames_256_511: 7252691
dev.igb.0.mac_stats.rx_frames_512_1023: 6689701
dev.igb.0.mac_stats.rx_frames_1024_1522: 550313022
dev.igb.0.mac_stats.good_octets_recvd: 856818703481
dev.igb.0.mac_stats.good_octets_txd: 59929870623
dev.igb.0.mac_stats.total_pkts_txd: 872538069
dev.igb.0.mac_stats.good_pkts_txd: 872538069
dev.igb.0.mac_stats.bcast_pkts_txd: 3913
dev.igb.0.mac_stats.tx_frames_64: 834225325
dev.igb.0.mac_stats.tx_frames_65_127: 25282920
dev.igb.0.mac_stats.tx_frames_256_511: 13029824
dev.igb.0.interrupts.asserts: 1276620451
dev.igb.0.interrupts.rx_pkt_timer: 808744060
dev.igb.0.interrupts.tx_queue_empty: 872532141
dev.igb.0.interrupts.tx_queue_min_thresh: 809033732
dev.igb.0.host.rx_pkt: 7738
dev.igb.0.host.tx_good_pkt: 5928
dev.igb.0.host.rx_good_bytes: 856818710661
dev.igb.0.host.tx_good_bytes: 59929870623

Over a 10s period, the following changes happen:

 dev.igb.0.mac_stats.tx_frames_64: 834225325
 dev.igb.0.mac_stats.tx_frames_65_127: 25282920
 dev.igb.0.mac_stats.tx_frames_256_511: 13029824
-dev.igb.0.interrupts.asserts: 1276621739
+dev.igb.0.interrupts.asserts: 1276621819
 dev.igb.0.interrupts.rx_pkt_timer: 808744060
 dev.igb.0.interrupts.tx_queue_empty: 872532141
-dev.igb.0.interrupts.tx_queue_min_thresh: 809034668
+dev.igb.0.interrupts.tx_queue_min_thresh: 809034719
 dev.igb.0.host.rx_pkt: 7738
 dev.igb.0.host.tx_good_pkt: 5928
 dev.igb.0.host.rx_good_bytes: 856818710661

Moreover, igb3 is not showing any `missed_packets'.

Here is a relevant dmesg(8) excerpt:

igb0: <Intel(R) PRO/1000 Network Connection version - 2.2.5> port
0xb880-0xb89f mem 0xfb980000-0xfb9fffff,0xfba78000-0xfba7bfff irq 16
at device 0.0 on pci6
igb0: Using MSIX interrupts with 9 vectors
igb0: [ITHREAD]
[...]
igb0: Ethernet address: 00:90:0b:1e:df:f0
igb1: <Intel(R) PRO/1000 Network Connection version - 2.2.5> port
0xbc00-0xbc1f mem 0xfba80000-0xfbafffff,0xfba7c000-0xfba7ffff irq 17
at device 0.1 on pci6
igb1: Using MSIX interrupts with 9 vectors
igb1: [ITHREAD]
[...]
igb1: Ethernet address: 00:90:0b:1e:df:f1
igb2: <Intel(R) PRO/1000 Network Connection version - 2.2.5> port
0xc880-0xc89f mem 0xfbb80000-0xfbbfffff,0xfbc78000-0xfbc7bfff irq 16
at device 0.0 on pci7
igb2: Using MSIX interrupts with 9 vectors
igb2: [ITHREAD]
[...]
igb2: Ethernet address: 00:90:0b:1e:df:f2
igb3: <Intel(R) PRO/1000 Network Connection version - 2.2.5> port
0xcc00-0xcc1f mem 0xfbc80000-0xfbcfffff,0xfbc7c000-0xfbc7ffff irq 17
at device 0.1 on pci7
igb3: Using MSIX interrupts with 9 vectors
igb3: [ITHREAD]
[...]

as well as pciconf(8) output:

igb0@pci0:6:0:0:        class=0x020000 card=0x00008086 chip=0x150e8086
rev=0x01 hdr=0x00
igb1@pci0:6:0:1:        class=0x020000 card=0x00008086 chip=0x150e8086
rev=0x01 hdr=0x00
igb2@pci0:7:0:0:        class=0x020000 card=0x00008086 chip=0x150e8086
rev=0x01 hdr=0x00
igb3@pci0:7:0:1:        class=0x020000 card=0x00008086 chip=0x150e8086
rev=0x01 hdr=0x00

So why would vmstat show such a high interrupt rate on idle interfaces?
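One knob that may be relevant is adaptive interrupt moderation, which the dev.igb.0.enable_aim OID above shows enabled; toggling it off should pin the ITR and make the per-queue interrupt_rate readings easier to interpret (untested here with the backported 2.2.5 driver):

```shell
# Disable adaptive interrupt moderation on igb0 (the OID exists per
# the sysctl dump above; behavior with this driver version untested).
sysctl dev.igb.0.enable_aim=0
# Then re-check whether the load on irq260 changes:
vmstat -i | grep igb0
```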

Thanks,
 - Arnaud


