From: Vincenzo Maffione <vmaffione@freebsd.org>
Date: Tue, 18 May 2021 22:17:27 +0200
Subject: Re: Vector Packet Processing (VPP) portability on FreeBSD
To: Marko Zec
Cc: Francois ten Krooden, freebsd-net@freebsd.org, Jacques Fourie
List-Id: Networking and TCP/IP with FreeBSD

+1

Thanks,
Vincenzo

On Mon, 17 May 2021 at 19:20, Marko Zec wrote:

> On Mon, 17 May 2021 09:53:25 +0000
> Francois ten Krooden wrote:
>
> > On 2021/05/16 09:22, Vincenzo Maffione wrote:
> > >
> > > Hi,
> > > Yes, you are not using emulated netmap mode.
> > >
> > > In the test setup depicted here
> > > https://github.com/ftk-ntq/vpp/wiki/VPP-throughput-using-netmap-interfaces#test-setup
> > > I think you should really try to replace VPP with the netmap
> > > "bridge" application (tools/tools/netmap/bridge.c), and see what
> > > numbers you get.
> > >
> > > You would run the application this way
> > >   # bridge -i ix0 -i ix1
> > > and this will forward any traffic between ix0 and ix1 (in both
> > > directions).
> > >
> > > These numbers would give you a better idea of where to look next
> > > (e.g. VPP code improvements or system tuning such as NIC
> > > interrupts, CPU binding, etc.).
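> > >
> > > (A minimal sketch of driving such a test with netmap's pkt-gen,
> > > which lives in the same tools/tools/netmap directory; the
> > > generator-side interface names, addresses and MACs below are
> > > placeholders, so adjust them to the actual setup:)
> > >
> > >   # transmit 64-byte frames at maximum rate into one bridge port
> > >   pkt-gen -i netmap:em0 -f tx -l 64 \
> > >       -s 10.0.0.1:1234 -d 10.0.0.2:1234 \
> > >       -S 00:00:00:00:00:01 -D 00:00:00:00:00:02
> > >   # count what comes back out of the other bridge port
> > >   pkt-gen -i netmap:em1 -f rx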
> >
> > Thank you for the suggestion.
> > I did run a test with the bridge this morning, and updated the
> > results as well.
> >
> >   +-------------+--------------+
> >   | Packet Size | Throughput   |
> >   +-------------+--------------+
> >   | 64 bytes    | 7.197 Mpps   |
> >   | 128 bytes   | 7.638 Mpps   |
> >   | 512 bytes   | 2.358 Mpps   |
> >   | 1280 bytes  | 964.915 kpps |
> >   | 1518 bytes  | 815.239 kpps |
> >   +-------------+--------------+
>
> I assume you're on 13.0, where netmap throughput is lower compared to
> 11.x due to the migration of most drivers to iflib (apparently
> increased overhead) and different driver defaults. On 11.x I could
> move 10G line rate from one ix to another at low CPU freqs, whereas
> on 13.x the CPU must be set to max speed, and still can't do 14.88
> Mpps.
>
> The #1 thing which changed: the default number of packets per ring
> dropped from 2048 (11.x) to 1024 (13.x). Try changing this in
> /boot/loader.conf:
>
>   dev.ixl.0.iflib.override_nrxds=2048
>   dev.ixl.0.iflib.override_ntxds=2048
>   dev.ixl.1.iflib.override_nrxds=2048
>   dev.ixl.1.iflib.override_ntxds=2048
>   etc.
>
> For me this increases the throughput of
>   bridge -i netmap:ixl0 -i netmap:ixl1
> from 9.3 Mpps to 11.4 Mpps.
>
> #2: the default interrupt moderation delays seem to be too long.
> Combined with increasing the ring sizes, reducing dev.ixl.0.rx_itr
> from 62 (the default) to 40 increases the throughput further, from
> 11.4 to 14.5 Mpps.
>
> Hope this helps,
>
> Marko
>
> > Except for the 64-byte and 128-byte packets, the other sizes were
> > matching the maximum rates possible on 10 Gbps. This was when the
> > bridge application was running on a single core, and the CPU core
> > was maxing out at 100%.
> >
> > I think there might be a bit of system tuning needed, but I suspect
> > most of the improvement would be needed in VPP.
> >
> > Regards
> > Francois
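(Side note: Francois's test box uses ix(4) rather than Marko's ixl(4)
interfaces. The override_nrxds/override_ntxds knobs are generic
iflib(4) device tunables, so presumably the equivalent
/boot/loader.conf entries for that setup would be the ones below;
the interrupt moderation sysctls, by contrast, are driver-specific,
so check `sysctl -d dev.ix.0` for the ix(4) counterpart of rx_itr.)

  dev.ix.0.iflib.override_nrxds=2048
  dev.ix.0.iflib.override_ntxds=2048
  dev.ix.1.iflib.override_nrxds=2048
  dev.ix.1.iflib.override_ntxds=2048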