Date:      Thu, 19 Nov 2020 22:50:17 +0100
From:      Vincenzo Maffione <vmaffione@freebsd.org>
To:        Rajesh Kumar <rajfbsd@gmail.com>
Cc:        "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>, FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject:   Re: Netmap bridge not working with 10G Ethernet ports
Message-ID:  <CA+_eA9ixySZtvJ8C9mwPj6q2fAQmZJicVgLHKkpb398-u_PaJw@mail.gmail.com>
In-Reply-To: <CAAO+ANP3PcqKo1nUfZTB92uKoJ40VA9YLo9MMqSN9AkMhq55tw@mail.gmail.com>
References:  <CAAO+ANOg5MEfHf9bV5x4L_QXNY2O9vQk0s+JrD7yzeXCQfHt8w@mail.gmail.com> <CA+_eA9hR8ysiFGj-iriMpqXcDbc4X_h_C1sgNoO05KoLy5orCA@mail.gmail.com> <CAAO+ANP3PcqKo1nUfZTB92uKoJ40VA9YLo9MMqSN9AkMhq55tw@mail.gmail.com>

On Thu, Nov 19, 2020 at 12:28 PM Rajesh Kumar <rajfbsd@gmail.com>
wrote:

> Hi Vincenzo,
>
> Thanks for your reply.
>
> On Thu, Nov 19, 2020 at 3:16 AM Vincenzo Maffione <vmaffione@freebsd.org>
> wrote:
>
>>
>> This looks like the if_axe(4) driver, which has no native netmap
>> support, meaning you are falling back on the emulated netmap adapter.
>> Are these USB dongles? If so, how can they be 10G?
>>
>
> The driver I am working with is "if_axp" (sys/dev/axgbe), the AMD
> 10-Gigabit Ethernet driver, which was recently committed upstream. Yes,
> it doesn't have native netmap support, but it uses the already existing
> netmap stack. These are built-in SFP ports on our test board, not USB
> dongles.
>

Ok, now it makes sense. Thanks for clarifying. I see that if_axp(4) uses
iflib(4). This means that if_axp(4) actually does have native netmap
support, because iflib(4) has native netmap support.


> Does "native netmap" mean a hardware capability that needs to be
> programmed appropriately from the driver side? Is there any general
> documentation regarding this?
>

It means that the driver has some modifications that allow netmap to
directly program the NIC rings. These modifications are mostly the
per-driver txsync and rxsync routines.
In the case of iflib(4) drivers, these modifications are provided
directly within the iflib(4) code, and therefore any driver using iflib
has native netmap support.
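
To make that concrete, here is a minimal userspace sketch (illustrative
only, not code from this thread; the helper name send_one() and the
choice of TX ring 0 are made up). The NIOCTXSYNC ioctl is the point
where the kernel runs the driver's txsync routine, which for any
iflib(4) driver is the one supplied by iflib itself:

    /*
     * Sketch: transmit one frame on a native netmap port.
     * NIOCTXSYNC invokes the per-driver txsync routine in the kernel
     * (supplied by iflib(4) for ix(4), igb(4), if_axp(4), ...).
     */
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <sys/ioctl.h>
    #include <string.h>

    static int
    send_one(const char *port, const void *frame, size_t len)
    {
        struct nm_desc *d = nm_open(port, NULL, 0, NULL);
        struct netmap_ring *ring;
        struct netmap_slot *slot;

        if (d == NULL)
            return (-1);
        ring = NETMAP_TXRING(d->nifp, 0);       /* TX ring 0 */
        if (nm_ring_empty(ring)) {              /* no free TX slots */
            nm_close(d);
            return (-1);
        }
        slot = &ring->slot[ring->cur];
        memcpy(NETMAP_BUF(ring, slot->buf_idx), frame, len);
        slot->len = (uint16_t)len;
        ring->head = ring->cur = nm_ring_next(ring, ring->cur);
        ioctl(NETMAP_FD(d), NIOCTXSYNC, NULL);  /* driver txsync runs here */
        nm_close(d);
        return (0);
    }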


>
>> In this kind of configuration it is mandatory to disable all the NIC
>> offloads, because netmap does not program the NIC
>> to honor them, e.g.:
>>
>> # ifconfig ax0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>> # ifconfig ax1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>>
>
> Earlier, I hadn't tried disabling the offload capabilities. I have
> tried now, but it still behaves the same way: ARP replies don't seem to
> reach the bridge (or are dropped) to be forwarded. I will collect the
> details for the AMD driver. The same test with another 10G card (Intel
> "ix" driver) also exhibits similar behavior. Details below.
>

Ok, this makes sense, because ix(4) also uses iflib, and therefore you
are basically hitting the same issue as with if_axp(4).
At this point I have to think that there is still some issue in the
interaction between iflib(4) and netmap(4).
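
(A side note that may help narrow this down: if I remember correctly,
netmap exposes a dev.netmap.admode sysctl (0 = best effort, 1 = native
only, 2 = emulated only), so forcing the emulated adapter and re-running
the bridge would tell whether the problem is specific to the native
iflib(4) path. Below is a sketch of the equivalent of
`sysctl dev.netmap.admode=2`; the sysctl name and values are from
memory, so double-check them against your netmap(4) version.)

    /*
     * Sketch: force the emulated (generic) netmap adapter.
     * Assumes dev.netmap.admode exists with 2 = emulated only.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>

    int
    main(void)
    {
        int mode = 2;   /* emulated adapter only */

        if (sysctlbyname("dev.netmap.admode", NULL, NULL,
            &mode, sizeof(mode)) == -1)
            err(1, "sysctlbyname(dev.netmap.admode)");
        return (0);
    }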


>
>
>>> a) I tried with another vendor's 10G NIC card. It behaves the same way.
>>> So this issue seems to be generic rather than hardware specific.
>>>
>>
>> Which driver are those NICs using? That makes the difference. I guess
>> it's still a driver with no native netmap support, and hence you are
>> using the same emulated adapter.
>>
>
> I am using the "ix" driver (Intel 10G NIC adapter).  I guess this driver
> also doesn't support Native Netmap.  Please correct me if I am wrong.  I
> tried disabling the offload capabilities with this device/driver and tested
> and still observed the netmap bridging fails.
>

As I stated above, ix(4) has native netmap support, like any iflib(4) driver.
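
For what it's worth, a quick sanity check that netmap can attach to the
port at all is to query it with the NIOCGINFO ioctl (sketch below, using
the legacy nmreq API from memory; this reports ring geometry but does
not by itself tell native from emulated mode):

    /*
     * Sketch: query netmap ring geometry for "ix0" via the legacy
     * NIOCGINFO ioctl; a failure here means netmap cannot attach.
     */
    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/netmap_user.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <err.h>

    int
    main(void)
    {
        struct nmreq req;
        int fd = open("/dev/netmap", O_RDWR);

        if (fd < 0)
            err(1, "/dev/netmap");
        memset(&req, 0, sizeof(req));
        req.nr_version = NETMAP_API;
        strlcpy(req.nr_name, "ix0", sizeof(req.nr_name));
        if (ioctl(fd, NIOCGINFO, &req) == -1)
            err(1, "NIOCGINFO(ix0)");
        printf("ix0: %u TX rings, %u RX rings\n",
            (unsigned)req.nr_tx_rings, (unsigned)req.nr_rx_rings);
        return (0);
    }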


> root@fbsd_cur# sysctl dev.ix.0 | grep tx_packets
> dev.ix.0.queue7.tx_packets: 0
> dev.ix.0.queue6.tx_packets: 0
> dev.ix.0.queue5.tx_packets: 0
> dev.ix.0.queue4.tx_packets: 0
> dev.ix.0.queue3.tx_packets: 0
> dev.ix.0.queue2.tx_packets: 0
> dev.ix.0.queue1.tx_packets: 0
> *dev.ix.0.queue0.tx_packets: 3*
> root@fbsd_cur# sysctl dev.ix.0 | grep rx_packets
> dev.ix.0.queue7.rx_packets: 0
> dev.ix.0.queue6.rx_packets: 0
> dev.ix.0.queue5.rx_packets: 0
> dev.ix.0.queue4.rx_packets: 0
> dev.ix.0.queue3.rx_packets: 0
> dev.ix.0.queue2.rx_packets: 0
> dev.ix.0.queue1.rx_packets: 0
> dev.ix.0.queue0.rx_packets: 0
> root@fbsd_cur # sysctl dev.ix.1 | grep tx_packets
> dev.ix.1.queue7.tx_packets: 0
> dev.ix.1.queue6.tx_packets: 0
> dev.ix.1.queue5.tx_packets: 0
> dev.ix.1.queue4.tx_packets: 0
> dev.ix.1.queue3.tx_packets: 0
> dev.ix.1.queue2.tx_packets: 0
> dev.ix.1.queue1.tx_packets: 0
> dev.ix.1.queue0.tx_packets: 0
> root@fbsd_cur # sysctl dev.ix.1 | grep rx_packets
> dev.ix.1.queue7.rx_packets: 0
> dev.ix.1.queue6.rx_packets: 0
> dev.ix.1.queue5.rx_packets: 0
> dev.ix.1.queue4.rx_packets: 0
> dev.ix.1.queue3.rx_packets: 0
> dev.ix.1.queue2.rx_packets: 0
> dev.ix.1.queue1.rx_packets: 0
> *dev.ix.1.queue0.rx_packets: 3*
>
> You can see "ix1" received 3 packets (ARP requests) from system 1 and
> transmitted 3 packets to system 2 via "ix0". But ARP reply from system 2 is
> not captured or forwared properly.
>

I see. This info may be useful. Have you tried looking at interrupts
(e.g. `vmstat -i`) to see if "ix0" gets any RX interrupts (for the
missing ARP replies)?
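
For reference, the heart of the bridge tool is essentially the step
sketched below (simplified; not the literal tools/tools/netmap/bridge.c
source, and forward_one() is a made-up helper name): the tool blocks in
poll() until the kernel reports a ring ready, which on the RX side
depends on the NIC raising an RX interrupt, and then forwards zero-copy
by swapping buffer indices between the two rings. If "ix0" never gets an
RX interrupt for the ARP reply, poll() never wakes up for it and the
reply is never forwarded.

    /*
     * Sketch: zero-copy forwarding step between two netmap rings,
     * simplified from the bridge tool.
     */
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    static void
    forward_one(struct netmap_ring *rx, struct netmap_ring *tx)
    {
        struct netmap_slot *rs = &rx->slot[rx->cur];
        struct netmap_slot *ts = &tx->slot[tx->cur];
        uint32_t tmp = ts->buf_idx;

        /* Swap buffers instead of copying the payload. */
        ts->buf_idx = rs->buf_idx;
        ts->len = rs->len;
        ts->flags |= NS_BUF_CHANGED;    /* tell netmap the buffer moved */
        rs->buf_idx = tmp;
        rs->flags |= NS_BUF_CHANGED;

        rx->head = rx->cur = nm_ring_next(rx, rx->cur);
        tx->head = tx->cur = nm_ring_next(tx, tx->cur);
    }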


>
> You can see the checksum features are disabled (except VLAN_HWCSUM) on
> both interfaces, and both interfaces are active with link up.
>
> root@fbsd_cur # ifconfig -a
> ix0: flags=8862<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>
> options=48538b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
>         ether a0:36:9f:a5:49:90
>         media: Ethernet autoselect (100baseTX <full-duplex>)
>         status: active
>         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
>
> ix1: flags=8862<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>
> options=48538b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
>         ether a0:36:9f:a5:49:92
>         media: Ethernet autoselect (1000baseT
> <full-duplex,rxpause,txpause>)
>         status: active
>         nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
>
>>
>>> b) Trying with another vendor's 1G NIC card, things are working. So I
>>> am not sure what makes the difference here. The ports in system 1 and
>>> system 2 are USB-attached Ethernet, capable of a maximum speed of 1G.
>>> So does connecting 1G ports to 10G bridge ports have any impact?
>>>
>>
>> I don't think so. On each p2p link the NICs will negotiate 1G speed.
>> In any case, which driver was this one?
>>
>
> This is "igb" driver. Intel 1G NIC Card.
>

The igb(4) driver also uses iflib(4), so the involved netmap code is the
same as for ix(4) and if_axp(4).
This is something that I'm not able to understand right now.
It does not look like something related to offloads.

Next week I will try to see if I can reproduce your issue with em(4), and
report back. That's still an Intel driver using iflib(4).

Thanks,
  Vincenzo


>
> Thanks,
> Rajesh.
>


