Date:      Sun, 3 May 2015 04:09:33 -0300
From:      Raimundo Santos <raitech@gmail.com>
To:        "freebsd-net@freebsd.org" <freebsd-net@freebsd.org>
Subject:   Fwd: netmap-ipfw on em0 em1
Message-ID:  <CAGQ6iC8NZgNW%2BE1wtap-A7ihchDQQ5L3w=VdRCDFXy9%2BtgExWg@mail.gmail.com>
In-Reply-To: <CAGQ6iC9g9pgw25P0AWNBx_0K_m8TUd0rZp1y55RcTk6KtyrY3g@mail.gmail.com>
References:  <CABfVBTktfLGacJ3PerR%2BgTewbS%2B52Vmno9mcT-XQBNktPFw5%2Bw@mail.gmail.com> <CAG4HiT7qery5wEevFUS2bb=91tyF77ZmTdZL0WUi3APCcCYT4Q@mail.gmail.com> <9C799778-79DC-4D5F-BA5C-EA94A573ED10@freebsdbrasil.com.br> <CAG4HiT4UK2tyj%2B0ggjNAfY35oG=zHPW5%2BKXtCyUBn-vPPpCWhg@mail.gmail.com> <CAG4HiT7_3p2f=XLqzr0DYyRsL2R8S0opXKkBHAPH%2B9c8kcw_Jg@mail.gmail.com> <CA%2BhQ2%2Bivy2XaddtQMQ=fr5CHt4_cnejt%2BjFZHTcGkyQ8zS25gw@mail.gmail.com> <CAG4HiT4stMEo9BjFqfCNmd9oHgHfdtmfduaAihX4kqwoCow9hA@mail.gmail.com> <CAG4HiT4%2BJeLHMjrGd=egcB%2B-67PBJCq5vqG7S5sUgXzD9tc1kg@mail.gmail.com> <CAGQ6iC9g9pgw25P0AWNBx_0K_m8TUd0rZp1y55RcTk6KtyrY3g@mail.gmail.com>

Clarifying things for the sake of documentation:

To use the host stack, append a ^ character to the name of the interface
you want to use. (Information from netmap(4) as shipped with FreeBSD
10.1-RELEASE.)

Examples:

"kipfw em0" does nothing useful.
"kipfw netmap:em0" disconnects the NIC from the usual data path, i.e.,
there are no host communications.
"kipfw netmap:em0 netmap:em0^" or "kipfw netmap:em0+" places the
netmap-ipfw rules between the NIC and the host stack entry point associated
(the IP addresses configured on it with ifconfig, ARP and RARP, etc...)
with the same NIC.
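The examples above can be sketched as shell invocations. This is only a
sketch: the kipfw binary location and the em0/em1 interface names are
assumptions taken from this thread, so adjust them for your system.

```shell
# Does nothing useful: without the netmap: prefix the argument is not
# opened in netmap mode.
./kipfw em0

# NIC detached from the normal data path; the host stack on em0 sees
# no traffic at all.
./kipfw netmap:em0

# Filter between the NIC and its own host stack: the addresses
# configured with ifconfig, ARP, etc. keep working, but every packet
# in either direction passes through the netmap-ipfw rules first.
./kipfw netmap:em0 netmap:em0^

# Equivalent shorthand for the previous invocation.
./kipfw netmap:em0+
```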

On 10 November 2014 at 18:29, Evandro Nunes <evandronunes12@gmail.com>
wrote:

> dear professor luigi,
> i have some numbers, I am filtering 773Kpps with kipfw using 60% of CPU and
> system using the rest, this system is a 8core at 2.4Ghz, but only one core
> is in use
> in this next round of tests, my NIC is now an avoton with igb(4) driver,
> currently with 4 queues per NIC (total 8 queues for kipfw bridge)
> i have read in your papers we should expect something similar to 1.48Mpps
> how can I benefit from the other CPUs which are completely idle? I tried
> CPU Affinity (cpuset) kipfw but system CPU usage follows userland kipfw so
> I could not set one CPU to userland while other for system
>

All the papers talk about *generating* lots of packets, not *processing*
lots of packets, and what this netmap example does is processing. If you
really want to use the host stack, expect worse performance - what is
the point of using a host-stack-bypassing tool/framework if you end up
going through the host stack anyway?

And by "generating", the papers usually mean: minimum-sized UDP packets.
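For reference, that benchmark setup is typically built with pkt-gen, the
traffic generator shipped with netmap itself. A sketch (interface names
are placeholders, and exact option sets vary between pkt-gen versions):

```shell
# Transmit minimum-sized frames (60 bytes + 4-byte CRC = 64 bytes on
# the wire) as fast as the NIC allows. At 64 bytes, 1 GigE line rate
# is about 1.488 Mpps - the figure quoted from the papers.
pkt-gen -i em0 -f tx -l 60

# On the receiving host, count what actually arrives:
pkt-gen -i em1 -f rx
```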


>
> can you please enlighten?
>

For everyone: read the manuals, read the related and referenced materials
(papers, web sites, etc.), and, as a last resort, read the code. Within
netmap's code, that is easier than it sounds.


