Date:      Thu, 8 Dec 2016 12:39:40 +0100
From:      Vincenzo Maffione <v.maffione@gmail.com>
To:        Xiaoye Sun <Xiaoye.Sun@rice.edu>
Cc:        FreeBSD Net <freebsd-net@freebsd.org>
Subject:   Fwd: Can netmap be more efficient when it just does bridging between NIC and Linux kernal?
Message-ID:  <CA+_eA9hPmDjz7VZZ2BUeRpoZG7U2at_pLk40p0c7SR_XNgBXBQ@mail.gmail.com>
In-Reply-To: <CA+_eA9iROAt3qWmpmHj=05Cfz+tBizSSvPWB9eEcw4+cFmaT-g@mail.gmail.com>
References:  <CAJnByzh8ypkWYfXd8U5ACLKp1d_KcJjHBY740wUFnS1WKiEdfw@mail.gmail.com> <CA+_eA9iROAt3qWmpmHj=05Cfz+tBizSSvPWB9eEcw4+cFmaT-g@mail.gmail.com>

Hi,

2016-12-07 2:36 GMT+01:00 Xiaoye Sun <Xiaoye.Sun@rice.edu>:

> Hi,
>
> I am wondering if there is a way to reduce the CPU usage of a netmap program
> similar to the bridge.c example.
>
> In my use case, I have a distributed application/framework (e.g. Spark or
> Hadoop) running on a cluster of machines (each of the machines runs Linux
> and has an Intel 10Gbps NIC). The application is both computation and
> network intensive. So there is a lot of data transfers between machines. I
> divide different data into two types (type 1 and type 2). Packets of type 1
> data are sent through netmap (these packets don't go through Linux network
> stack). Packets of type 2 data are sent through Linux network stack. Both
> type 1 and type 2 data could be small or large.
>
> My netmap program runs on all the machines in the cluster and processes the
> packets of type 1 data (create, send, receive the packets) and forwards
> packets of type 2 data between the NIC and the kernel by swapping the
> pointer to the NIC slot and the pointer to the kernel stack slot (similar
> to the bridge.c example in the netmap repository).
>
> With my netmap program running on the machines, for an application having
> no type 1 data (netmap program behaves like a bridge which only does slot
> pointer swapping), the total running time of the application is longer than
> the case where no netmap program runs on the machines.
>

Yes, but this is not surprising. If the only thing your netmap application
is doing is forwarding all the traffic between the network stack and the
NIC, then your netmap application is a process doing a useless job: netmap
is intercepting packets from the network stack and reinjecting them back
into the network stack (where they go on as if they had never been
intercepted). It's just wasting resources. Netmap is designed to let netmap
applications use the NICs efficiently and/or talk efficiently to each other
(e.g. using the VALE switch or the virtualization extensions).
The "host rings" are instead useful in some use cases, for example: (1) you
want to implement a high-performance input packet filter for your network
stack, one that is able to withstand DDoS attacks: your netmap application
would receive something like 10 Mpps from the NIC, drop 99% of it (once it
realizes it is not legitimate traffic) and forward the remaining packets to
the network stack; (2) you want to handle (forward, drop, modify, etc.)
most of the traffic in your netmap application, but there are some
low-bandwidth protocols (e.g. SSH) that you want to handle with standard
socket applications.
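
Incidentally, the forwarding work itself is cheap: a bridge.c-style program
never copies payloads, it only swaps buffer indices between the RX and TX
slots. A rough sketch of that inner loop (single ring pair, no error
handling; the function name is mine), in case it is useful for comparison:

    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    /* Move a batch of packets from rx to tx by swapping the netmap
     * buffers attached to the slots (zero-copy). */
    static void
    forward(struct netmap_ring *rx, struct netmap_ring *tx)
    {
        u_int n = nm_ring_space(rx);

        if (n > nm_ring_space(tx))
            n = nm_ring_space(tx);
        while (n-- > 0) {
            struct netmap_slot *rs = &rx->slot[rx->cur];
            struct netmap_slot *ts = &tx->slot[tx->cur];
            uint32_t idx = ts->buf_idx;

            ts->buf_idx = rs->buf_idx;    /* swap the buffers, not the data */
            rs->buf_idx = idx;
            ts->len = rs->len;
            ts->flags |= NS_BUF_CHANGED;  /* tell netmap the buffer changed */
            rs->flags |= NS_BUF_CHANGED;
            rx->cur = nm_ring_next(rx, rx->cur);
            tx->cur = nm_ring_next(tx, tx->cur);
        }
        rx->head = rx->cur;               /* release the consumed RX slots */
        tx->head = tx->cur;               /* expose the new TX slots */
    }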


>
> It seems to me that the netmap program either slows down the network
> transfer for type 2 data, or it eats up too many CPU cycles and competes
> with the application process. However, with my netmap program running,
> iperf can reach 10Gbps bandwidth with 40-50% CPU usage on the netmap
> program (the netmap program is doing pointer swapping for iperf packets). I
> also found that after each poll returns, most of the time, the program
> might just swap one pointer, so there is a lot of system call overhead.
>

This is also not surprising: iperf is probably generating large packets
(1500 bytes or more). As a consequence, the packet rate is something like
800 Kpps, which is not extremely high (netmap applications can work with
workloads of 5, 10, 20 or more Mpps). Since the packet rate is not high,
the interval between two packet arrivals is greater than the time needed to
do a poll()/ioctl() syscall and process the packet, and so the batches
don't get formed.
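
To put a number on it: with the usual 38 bytes of Ethernet framing overhead
per packet (preamble, header, FCS and inter-frame gap), 10 Gbit/s divided
by (1500 + 38) * 8 bits is roughly 810 Kpps, i.e. about one packet every
1.2 microseconds.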


> Can anybody help me diagnose the source of the problem or is there a better
> way to write such a program?



> I am wondering if there is a way to tune the configuration so that the
> netmap program won't take up too much extra CPU when it runs like the
> bridge.c program.
>

The point is that when you have only type 2 data you shouldn't use netmap,
as it does not make sense. Unfortunately, whether packet batches (with more
than one packet) get formed or not depends on the external traffic input
patterns: it's basically a producer/consumer problem, and there are no
tunables for this. One thing you can do is rate-limit the calls to
poll()/ioctl() in order to artificially create the batches; in this way you
would trade off a bit of latency for the sake of energy efficiency.
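
A minimal sketch of that idea (the 50 microsecond pause, the single RX ring
and the function name are just illustrative assumptions, not tuned values):

    #include <unistd.h>
    #include <sys/ioctl.h>
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    /* Rate-limited receive loop: sleep a little between syscalls so that
     * several packets can accumulate in the ring, then drain them all
     * after a single NIOCRXSYNC. */
    static void
    rx_loop(struct nm_desc *d)
    {
        struct netmap_ring *ring = NETMAP_RXRING(d->nifp, d->first_rx_ring);

        for (;;) {
            usleep(50);                       /* let a batch build up */
            ioctl(d->fd, NIOCRXSYNC, NULL);   /* one syscall per batch */
            while (!nm_ring_empty(ring)) {
                /* process or forward ring->slot[ring->cur] here */
                ring->head = ring->cur = nm_ring_next(ring, ring->cur);
            }
        }
    }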

Another approach that you may be interested in is using NIC hardware
features like "flow director" or receive flow steering to classify input
packets and steer the different classes to specific NIC queues. In this way
you could open with netmap just a subset of the NIC queues (the type 1 data
traffic), and let the network stack directly process the traffic on the
other queues (type 2 data). There are some blog posts about this kind of
setup, here is one:
https://blog.cloudflare.com/single-rx-queue-kernel-bypass-with-netmap/
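
For example (the interface name, queue number and flow rule below are just
placeholders; on Linux the steering rule would typically be installed with
ethtool's ntuple/flow-director support, something along the lines of
"ethtool -N eth0 flow-type tcp4 dst-port 9000 action 2"), netmap can then
attach to just that one hardware ring pair:

    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>

    /* Open only hardware ring pair 2 of eth0 in netmap mode; the other
     * queues keep being served by the regular driver/network stack. */
    struct nm_desc *d = nm_open("netmap:eth0-2", NULL, 0, NULL);
    if (d == NULL) {
        /* interface missing, netmap module not loaded, ... */
    }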

Cheers,
  Vincenzo

>
>
> Best,
> Xiaoye
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>



-- 
Vincenzo Maffione


