Date:      Thu, 19 Nov 2020 16:58:42 +0530
From:      Rajesh Kumar <rajfbsd@gmail.com>
To:        Vincenzo Maffione <vmaffione@freebsd.org>
Cc:        "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>, FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject:   Re: Netmap bridge not working with 10G Ethernet ports
Message-ID:  <CAAO+ANP3PcqKo1nUfZTB92uKoJ40VA9YLo9MMqSN9AkMhq55tw@mail.gmail.com>
In-Reply-To: <CA+_eA9hR8ysiFGj-iriMpqXcDbc4X_h_C1sgNoO05KoLy5orCA@mail.gmail.com>
References:  <CAAO+ANOg5MEfHf9bV5x4L_QXNY2O9vQk0s+JrD7yzeXCQfHt8w@mail.gmail.com> <CA+_eA9hR8ysiFGj-iriMpqXcDbc4X_h_C1sgNoO05KoLy5orCA@mail.gmail.com>

Hi Vincenzo,

Thanks for your reply.

On Thu, Nov 19, 2020 at 3:16 AM Vincenzo Maffione <vmaffione@freebsd.org>
wrote:

>
> This looks like if_axe(4) driver, and therefore there's no native netmap
> support, which means you are falling back on
> the emulated netmap adapter. Are these USB dongles? If so, how can they be
> 10G?
>

The driver I am working with is "if_axp" (sys/dev/axgbe), the AMD 10 Gigabit
Ethernet driver that was recently committed upstream. You are right that it
has no native netmap support; it relies on the existing emulated netmap path.
The ports are built-in SFP ports on our test board, not USB dongles.

Does "native netmap" mean a hardware capability that needs to be programmed
appropriately from the driver side? Is there any general documentation on
this?
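For what it's worth, from a quick read of sys/dev/netmap and the way other
in-tree drivers hook into it, my (possibly wrong) understanding is that
"native" support means the driver registers its own ring-sync callbacks with
netmap at attach time, roughly like the sketch below. The axp_* names and
softc fields are just placeholders of mine, not actual axgbe code:

#include <net/netmap.h>
#include <dev/netmap/netmap_kern.h>

static void
axp_netmap_attach(struct axp_softc *sc)   /* hypothetical hook, run once at attach */
{
        struct netmap_adapter na;

        bzero(&na, sizeof(na));
        na.ifp = sc->ifp;                     /* ifnet exposed to netmap */
        na.num_tx_desc = sc->tx_ring_size;    /* placeholder softc fields */
        na.num_rx_desc = sc->rx_ring_size;
        na.num_tx_rings = sc->num_queues;
        na.num_rx_rings = sc->num_queues;
        na.nm_register = axp_netmap_reg;      /* switch rings between kernel and netmap mode */
        na.nm_txsync = axp_netmap_txsync;     /* reconcile netmap TX ring with the HW ring */
        na.nm_rxsync = axp_netmap_rxsync;     /* reconcile netmap RX ring with the HW ring */
        netmap_attach(&na);
}

Is that roughly the idea, or does native support involve programming
additional hardware capabilities beyond what the driver already sets up?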


> In this kind of configuration it is mandatory to disable all the NIC
> offloads, because netmap does not program the NIC
> to honor them, e.g.:
>
> # ifconfig ax0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
> # ifconfig ax1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>

Earlier I had not tried disabling the offload capabilities. I have tried it
now, but the behavior is the same: ARP replies do not seem to reach the
bridge (or are dropped) and are never forwarded. I will collect the details
for the AMD driver. I also ran the same test with another 10G card (Intel
"ix" driver), and it exhibits similar behavior. Details below.


> a) I tried with another vendor 10G NIC card. It behaves the same way. So
>> this issue seems to be generic and not hardware specific.
>>
>
> Which driver are those NICs using? That makes the difference. I guess it's
> still a driver with no native netmap support, hence
> you are using the same emulated adapter
>

I am using the "ix" driver (Intel 10G NIC adapter). I guess this driver also
lacks native netmap support; please correct me if I am wrong. I disabled the
offload capabilities on this device and tested again, and netmap bridging
still fails.
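For reference, I cleared the same flags you listed above, i.e. something
along the lines of:

# ifconfig ix0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
# ifconfig ix1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6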

root@fbsd_cur# sysctl dev.ix.0 | grep tx_packets
dev.ix.0.queue7.tx_packets: 0
dev.ix.0.queue6.tx_packets: 0
dev.ix.0.queue5.tx_packets: 0
dev.ix.0.queue4.tx_packets: 0
dev.ix.0.queue3.tx_packets: 0
dev.ix.0.queue2.tx_packets: 0
dev.ix.0.queue1.tx_packets: 0
*dev.ix.0.queue0.tx_packets: 3*
root@fbsd_cur# sysctl dev.ix.0 | grep rx_packets
dev.ix.0.queue7.rx_packets: 0
dev.ix.0.queue6.rx_packets: 0
dev.ix.0.queue5.rx_packets: 0
dev.ix.0.queue4.rx_packets: 0
dev.ix.0.queue3.rx_packets: 0
dev.ix.0.queue2.rx_packets: 0
dev.ix.0.queue1.rx_packets: 0
dev.ix.0.queue0.rx_packets: 0
root@fbsd_cur # sysctl dev.ix.1 | grep tx_packets
dev.ix.1.queue7.tx_packets: 0
dev.ix.1.queue6.tx_packets: 0
dev.ix.1.queue5.tx_packets: 0
dev.ix.1.queue4.tx_packets: 0
dev.ix.1.queue3.tx_packets: 0
dev.ix.1.queue2.tx_packets: 0
dev.ix.1.queue1.tx_packets: 0
dev.ix.1.queue0.tx_packets: 0
root@fbsd_cur # sysctl dev.ix.1 | grep rx_packets
dev.ix.1.queue7.rx_packets: 0
dev.ix.1.queue6.rx_packets: 0
dev.ix.1.queue5.rx_packets: 0
dev.ix.1.queue4.rx_packets: 0
dev.ix.1.queue3.rx_packets: 0
dev.ix.1.queue2.rx_packets: 0
dev.ix.1.queue1.rx_packets: 0
*dev.ix.1.queue0.rx_packets: 3*

You can see "ix1" received 3 packets (ARP requests) from system 1 and
transmitted 3 packets to system 2 via "ix0". But ARP reply from system 2 is
not captured or forwared properly.
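For completeness, I am invoking the netmap bridge tool (from
tools/tools/netmap) roughly as below; I can rerun it with -v for verbose
output if that helps:

# bridge -i netmap:ix0 -i netmap:ix1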

You can see that the checksum features are disabled (except VLAN_HWCSUM) on
both interfaces, and that both interfaces are active with link up.

root@fbsd_cur # ifconfig -a
ix0: flags=8862<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=48538b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
        ether a0:36:9f:a5:49:90
        media: Ethernet autoselect (100baseTX <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>

ix1: flags=8862<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=48538b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
        ether a0:36:9f:a5:49:92
        media: Ethernet autoselect (1000baseT <full-duplex,rxpause,txpause>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
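
Since VLAN_HWCSUM and VLAN_HWTSO are still enabled above, I can also try
clearing those on the next run (assuming the driver lets me toggle them),
e.g.:

# ifconfig ix0 -vlanhwcsum -vlanhwtso
# ifconfig ix1 -vlanhwcsum -vlanhwtso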

>
> b) Trying with another vendor 1G NIC card, things are working.  So I am not
>> sure what makes the difference here.  The ports in System 1 and System 2
>> are USB-attached Ethernet, capable of a maximum speed of 1G.  Does
>> connecting 1G ports to 10G bridge ports have any impact?
>>
>
> I don't think so. On each p2p link the NICs will negotiate 1G speed.
> In any case, what driver was this one?
>

This is "igb" driver. Intel 1G NIC Card.

Thanks,
Rajesh.


