From: Vincenzo Maffione <vmaffione@freebsd.org>
Date: Thu, 19 Nov 2020 22:50:17 +0100
Subject: Re: Netmap bridge not working with 10G Ethernet ports
To: Rajesh Kumar
Cc: "freebsd-net@freebsd.org", FreeBSD Hackers
On Thu, Nov 19, 2020 at 12:28 PM Rajesh Kumar wrote:

> Hi Vincenzo,
>
> Thanks for your reply.
>
> On Thu, Nov 19, 2020 at 3:16 AM Vincenzo Maffione wrote:
>
>> This looks like the if_axe(4) driver, and therefore there is no native
>> netmap support, which means you are falling back on the emulated netmap
>> adapter. Are these USB dongles? If so, how can they be 10G?
>
> The driver I am working with is "if_axp" (sys/dev/axgbe). This is the
> AMD 10 Gigabit Ethernet driver, recently committed upstream. It does not
> have native netmap support of its own, but uses the already existing
> netmap stack. These are built-in SFP ports on our test board, not USB
> dongles.

Ok, now it makes sense. Thanks for clarifying.
I see that if_axp(4) uses iflib(4). This means that if_axp(4) actually
has native netmap support, because iflib(4) has native netmap support.

> Does native netmap mean a hardware capability which needs to be
> programmed appropriately from the driver side? Is there any generic
> documentation about this?

It means that the driver has some modifications to allow netmap to
directly program the NIC rings. These modifications are mostly the
per-driver txsync and rxsync routines. In the case of iflib(4) drivers,
these modifications are provided directly within the iflib(4) code, and
therefore any driver using iflib will have native netmap support.
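As a quick sanity check before digging into driver internals, one can
confirm on the box that netmap is loaded and which adapter mode it will
pick. A minimal sketch: the commands are printed rather than executed,
since they only make sense on the FreeBSD test machine; the sysctl and
kldstat names are the standard ones from netmap(4) and kldstat(8).

```shell
# Print (not run) the commands to verify netmap availability and mode.
netmap_checks() {
  printf 'kldstat -q -m netmap && echo netmap loaded\n'
  printf 'sysctl dev.netmap.admode  # 0=prefer native, 1=native only, 2=emulated only\n'
}
netmap_checks
```

With `dev.netmap.admode` left at 0, an iflib(4) driver such as if_axp(4)
or ix(4) should come up in native mode automatically.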
>> In this kind of configuration it is mandatory to disable all the NIC
>> offloads, because netmap does not program the NIC to honor them, e.g.:
>>
>> # ifconfig ax0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>> # ifconfig ax1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6
>
> Earlier I had not tried disabling the offload capabilities. I tried it
> now, but it still behaves the same way: ARP replies do not seem to reach
> the bridge (or are dropped) to be forwarded. I will collect the details
> for the AMD driver. The same test with another 10G card (Intel "ix"
> driver) also exhibits similar behavior. Details below.

Ok, this makes sense, because ix(4) also uses iflib, and therefore you
are basically hitting the same issue as with if_axp(4). At this point I
must think that there is still some issue in the interaction between
iflib(4) and netmap(4).

>>> a) I tried with another vendor 10G NIC card. It behaves the same way.
>>> So this issue seems to be generic, not hardware-specific.
>>
>> Which driver are those NICs using? That makes the difference. I guess
>> it's still a driver with no native netmap support, hence you are using
>> the same emulated adapter.
>
> I am using the "ix" driver (Intel 10G NIC adapter). I guess this driver
> also doesn't support native netmap. Please correct me if I am wrong. I
> tried disabling the offload capabilities with this device/driver and
> still observed that the netmap bridging fails.

As I stated above, ix(4) has native netmap support, like any iflib(4)
driver.
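The two ifconfig lines above have to be repeated for every port of the
bridge, so a tiny helper avoids typos. A sketch, with the helper name
hypothetical and the interface names taken from the thread; it prints
the commands rather than running them, since ifconfig changes only make
sense on the test box itself.

```shell
# Hypothetical helper: print the ifconfig command that strips all the
# offloads netmap does not honor, for each interface given.
offloads_off() {
  for ifc in "$@"; do
    printf 'ifconfig %s -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6\n' "$ifc"
  done
}
offloads_off ax0 ax1
```

Piping the output to `sh` (as root, on the box under test) applies the
settings; `ifconfig ax0` afterwards should show the csum/tso/lro options
gone.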
> root@fbsd_cur # sysctl dev.ix.0 | grep tx_packets
> dev.ix.0.queue7.tx_packets: 0
> dev.ix.0.queue6.tx_packets: 0
> dev.ix.0.queue5.tx_packets: 0
> dev.ix.0.queue4.tx_packets: 0
> dev.ix.0.queue3.tx_packets: 0
> dev.ix.0.queue2.tx_packets: 0
> dev.ix.0.queue1.tx_packets: 0
> dev.ix.0.queue0.tx_packets: 3
> root@fbsd_cur # sysctl dev.ix.0 | grep rx_packets
> dev.ix.0.queue7.rx_packets: 0
> dev.ix.0.queue6.rx_packets: 0
> dev.ix.0.queue5.rx_packets: 0
> dev.ix.0.queue4.rx_packets: 0
> dev.ix.0.queue3.rx_packets: 0
> dev.ix.0.queue2.rx_packets: 0
> dev.ix.0.queue1.rx_packets: 0
> dev.ix.0.queue0.rx_packets: 0
> root@fbsd_cur # sysctl dev.ix.1 | grep tx_packets
> dev.ix.1.queue7.tx_packets: 0
> dev.ix.1.queue6.tx_packets: 0
> dev.ix.1.queue5.tx_packets: 0
> dev.ix.1.queue4.tx_packets: 0
> dev.ix.1.queue3.tx_packets: 0
> dev.ix.1.queue2.tx_packets: 0
> dev.ix.1.queue1.tx_packets: 0
> dev.ix.1.queue0.tx_packets: 0
> root@fbsd_cur # sysctl dev.ix.1 | grep rx_packets
> dev.ix.1.queue7.rx_packets: 0
> dev.ix.1.queue6.rx_packets: 0
> dev.ix.1.queue5.rx_packets: 0
> dev.ix.1.queue4.rx_packets: 0
> dev.ix.1.queue3.rx_packets: 0
> dev.ix.1.queue2.rx_packets: 0
> dev.ix.1.queue1.rx_packets: 0
> dev.ix.1.queue0.rx_packets: 3
>
> You can see "ix1" received 3 packets (ARP requests) from system 1 and
> transmitted 3 packets to system 2 via "ix0". But the ARP reply from
> system 2 is not captured or forwarded properly.

I see. This info may be useful. Have you tried to look at interrupts
(e.g. `vmstat -i`), to see if "ix0" gets any RX interrupts (for the
missing ARP replies)?

> You can see the checksum features disabled (except VLAN_HWCSUM) on both
> interfaces. And you can see both interfaces active and link up.
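With eight queues per port, the zero rows drown out the interesting
ones; a small filter makes the asymmetry jump out at a glance. A sketch
run here against canned input copied from the sysctl output above; on
the real box one would pipe `sysctl dev.ix.0 dev.ix.1` in instead.

```shell
# Keep only the per-queue counters that are non-zero.
# Canned sample input, abridged from the sysctl listing in the thread.
sysctl_out='dev.ix.0.queue1.tx_packets: 0
dev.ix.0.queue0.tx_packets: 3
dev.ix.1.queue1.rx_packets: 0
dev.ix.1.queue0.rx_packets: 3'
printf '%s\n' "$sysctl_out" | awk -F': ' '$2 != 0 { print $1, $2 }'
# -> dev.ix.0.queue0.tx_packets 3
# -> dev.ix.1.queue0.rx_packets 3
```

The same filter applied after each test run makes it easy to see whether
the missing ARP replies ever increment any rx counter on ix0.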
> root@fbsd_cur # ifconfig -a
> ix0: flags=8862 metric 0 mtu 1500
>         options=48538b8
>         ether a0:36:9f:a5:49:90
>         media: Ethernet autoselect (100baseTX)
>         status: active
>         nd6 options=29
>
> ix1: flags=8862 metric 0 mtu 1500
>         options=48538b8
>         ether a0:36:9f:a5:49:92
>         media: Ethernet autoselect (1000baseT)
>         status: active
>         nd6 options=29

>>> b) Trying with another vendor 1G NIC card, things are working. So I
>>> am not sure what makes the difference here. The ports in systems 1
>>> and 2 are USB-attached Ethernet, capable of a maximum speed of 1G.
>>> Does connecting 1G ports to 10G bridge ports have any impact?
>>
>> I don't think so. On each p2p link the NICs will negotiate 1G speed.
>> In any case, which driver was this one?
>
> This is the "igb" driver (Intel 1G NIC card).

Also the igb(4) driver uses iflib(4), so the involved netmap code is the
same as for ix(4) and if_axp(4). This is something that I'm not able to
understand right now. It does not look like something related to
offloads.
Next week I will try to see if I can reproduce your issue with em(4),
and report back. That's still an Intel driver using iflib(4).

Thanks,
  Vincenzo

> Thanks,
> Rajesh.
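For reference, the whole reproduction recipe discussed in this thread
can be condensed to three commands. A sketch as it would look with
em(4): `bridge` here is the netmap example tool from
src/tools/tools/netmap, and the em0/em1 interface names are assumptions
for whatever the test box exposes. The commands are printed, not
executed.

```shell
# Print the em(4) reproduction steps: disable offloads on both ports,
# then bridge them in netmap mode.  `bridge` is the netmap example tool
# (tools/tools/netmap/bridge); em0/em1 are assumed interface names.
recipe='ifconfig em0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6 up
ifconfig em1 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6 up
bridge -i netmap:em0 -i netmap:em1'
printf '%s\n' "$recipe"
```

If ARP then flows in both directions with em(4), the problem is specific
to the 10G drivers' paths; setting `dev.netmap.admode=2` and repeating
the test would additionally show whether the emulated adapter behaves
differently from the native one.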