Date:      Fri, 18 Nov 2016 11:02:47 +0100
From:      Vincenzo Maffione <v.maffione@gmail.com>
To:        Paras Jha <dreadiscool@gmail.com>
Cc:        FreeBSD Net <freebsd-net@freebsd.org>
Subject:   Re: Advantages of Netmap NM_OPEN_NO_MMAP
Message-ID:  <CA+_eA9go4HA9_N32FQibZCbyd9njTM2OM2+r6Xubm3vXeXapXQ@mail.gmail.com>
In-Reply-To: <CAMs8r4NUxH_2OR5xQ-fZyEFTvTzG9x=f7bzt=qjEc+Zy9ei+Sw@mail.gmail.com>
References:  <CAMs8r4PZGUY=iOHm3P1XEZ7+cLEya+_hFCw-LGvL=50S47hwXQ@mail.gmail.com> <CA+_eA9gcsrcEHDvjctsRJMd0+sf-McRPsGkeMNKAJ+ZaxP5c2Q@mail.gmail.com> <CAMs8r4NUxH_2OR5xQ-fZyEFTvTzG9x=f7bzt=qjEc+Zy9ei+Sw@mail.gmail.com>

(It is not unusual to let a thread read from multiple rings (e.g. all the
rings) of a given interface.)
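
For instance, the per-ring loop would look like this (a sketch; the function
name is just illustrative, and "d" is an already-open descriptor):

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

/* drain all RX rings of one open interface from a single thread */
static void drain_all_rx_rings(struct nm_desc *d)
{
    int i;

    for (i = d->first_rx_ring; i <= d->last_rx_ring; i++) {
        struct netmap_ring *ring = NETMAP_RXRING(d->nifp, i);

        while (!nm_ring_empty(ring)) {
            struct netmap_slot *slot = &ring->slot[ring->head];
            char *payload = NETMAP_BUF(ring, slot->buf_idx);

            (void)payload;  /* ... process slot->len bytes ... */
            ring->head = ring->cur = nm_ring_next(ring, ring->head);
        }
    }
}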

If I understand your scheme correctly, you want one thread reading from
interface A that splits the packets across *x* TX threads (on interfaces
B, C, ...). To operate locklessly (and with zero-copy) in this case you
need an additional lockless queue between the RX thread and each of the
TX threads. Netmap pipes (included in netmap) can serve as these
intermediate queues. You can take a look at the "lb" application (in the
netmap github repository), which is an example of how to load-balance RX
traffic across multiple pipes; the pipes can then be drained by other
processes/threads (in your case, the latter threads would transmit on the
other interfaces).
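
Something along these lines (a minimal sketch, not the real lb logic: the
pipe name "fwd", the interface name "eth0" and NTX are made up for
illustration, pipe naming can differ across netmap versions (see netmap(4)),
and nm_inject() copies each packet, whereas lb swaps netmap buffers to stay
zero-copy):

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

#include <poll.h>
#include <pthread.h>
#include <stdio.h>

#define NTX 2                          /* number of TX threads (assumed) */

static void *tx_worker(void *arg)      /* one per output interface */
{
    char name[32];
    struct nm_desc *p;
    struct nm_pkthdr h;
    struct pollfd pfd;

    /* read from the slave end of this thread's pipe */
    snprintf(name, sizeof(name), "netmap:fwd}%ld", (long)arg);
    if ((p = nm_open(name, NULL, 0, NULL)) == NULL)
        return NULL;
    pfd.fd = NETMAP_FD(p);
    pfd.events = POLLIN;
    for (;;) {
        poll(&pfd, 1, -1);
        while (nm_nextpkt(p, &h) != NULL) {
            /* ... transmit h.len bytes on interface B, C, ... */
        }
    }
}

int main(void)
{
    struct nm_desc *rx, *m[NTX];
    struct nm_pkthdr h;
    const u_char *buf;
    struct pollfd pfd;
    pthread_t t;
    char name[32];
    long i, n = 0;

    if ((rx = nm_open("netmap:eth0", NULL, 0, NULL)) == NULL)  /* iface A */
        return 1;
    for (i = 0; i < NTX; i++) {
        snprintf(name, sizeof(name), "netmap:fwd{%ld", i);  /* master end */
        if ((m[i] = nm_open(name, NULL, 0, NULL)) == NULL)
            return 1;
        pthread_create(&t, NULL, tx_worker, (void *)i);
    }
    pfd.fd = NETMAP_FD(rx);
    pfd.events = POLLIN;
    for (;;) {                                    /* RX loop */
        poll(&pfd, 1, -1);
        while ((buf = nm_nextpkt(rx, &h)) != NULL)
            nm_inject(m[n++ % NTX], buf, h.len);  /* round-robin; lb hashes */
    }
}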

You need to use locks on the netmap rings if and only if two threads
access the same netmap RX or TX ring concurrently.
So if you want to design your system to be lockless and zero-copy, you
have two options that I can think of:

1) Statically associate each thread with 1 RX ring (or a set of them) of
one interface and with 1 TX ring (or a set of them) of another interface,
and let each thread zero-copy forward from its RX rings to its TX rings.
This means that each thread is both receiving and transmitting. The bridge
application (netmap github repository) is a simple example of this
strategy; a condensed sketch of its forwarding core follows this list.
2) Have threads that only receive, and threads that only transmit (on
physical interfaces), using lockless queues to let the RX threads pass
packets to the TX threads.
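
The core of option 1 is a buffer-index swap between RX and TX slots; a
condensed sketch (assuming both rings come from descriptors that share the
same memory region, with error handling and the surrounding poll() loop
omitted):

#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>

/*
 * Move up to min(RX, TX) available packets from rxring to txring
 * without copying: swap the buffer indices and flag both slots.
 */
static u_int
zcopy_forward(struct netmap_ring *rxring, struct netmap_ring *txring)
{
    u_int n = nm_ring_space(rxring);
    u_int rxh = rxring->head, txh = txring->head, done;

    if (n > nm_ring_space(txring))
        n = nm_ring_space(txring);
    for (done = 0; done < n; done++) {
        struct netmap_slot *rs = &rxring->slot[rxh];
        struct netmap_slot *ts = &txring->slot[txh];
        uint32_t idx = ts->buf_idx;

        ts->buf_idx = rs->buf_idx;    /* hand the RX buffer to TX */
        rs->buf_idx = idx;            /* recycle the old TX buffer */
        ts->len = rs->len;
        ts->flags |= NS_BUF_CHANGED;  /* tell the kernel about the swap */
        rs->flags |= NS_BUF_CHANGED;
        rxh = nm_ring_next(rxring, rxh);
        txh = nm_ring_next(txring, txh);
    }
    rxring->head = rxring->cur = rxh;
    txring->head = txring->cur = txh;
    return done;
}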

Cheers,
  Vincenzo

2016-11-18 0:26 GMT+01:00 Paras Jha <dreadiscool@gmail.com>:

> Hi,
>
> Thanks for the quick reply, it clarified things for me. Regarding the idea
> of independent TX-RX ring couples per thread, would a possible variation of
> this methodology be 1 RX ring per interface (only one thread reads), but
> *x* TX rings, where *x* is the number of threads? Am I correct
> in assuming that such a structure would allow lockless zero-copy between an
> arbitrary number of interfaces (3 or greater)?
>
> Regards
>
> On Thu, Nov 17, 2016 at 5:11 PM, Vincenzo Maffione <v.maffione@gmail.com>
> wrote:
>
>> Hi,
>>   No, each interface opened in netmap mode has its own netmap buffers (for
>> packet data) and netmap rings, regardless of which netmap memory region
>> those buffers/rings reside in.
>> Two applications using different interfaces can work independently of
>> each other (no locks needed); it does not matter whether they are in the
>> same netmap memory region or not, because the data structures (rings and
>> buffers) are separate.
>> It does matter in terms of isolation, of course: two applications using
>> the same shared memory must trust each other.
>>
>> Typically you want two interfaces to be in the same netmap memory region
>> because you want an application to do zerocopy packet forwarding between
>> the two interfaces.
>> Also in this case locks are not usually needed, since you would have a
>> single thread accessing both interfaces. (Or you could have a separate
>> thread for each TX-RX ring couple, but always without using locks.)
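>>
>> For example (a minimal sketch; "em0" and "em1" are placeholder interface
>> names), the second nm_open() can reuse the mapping of the first descriptor
>> by passing NM_OPEN_NO_MMAP, so both interfaces are accessed through the
>> same mmap():
>>
>> #define NETMAP_WITH_LIBS
>> #include <net/netmap_user.h>
>>
>> int main(void)
>> {
>>     struct nm_desc *a, *b;
>>
>>     a = nm_open("netmap:em0", NULL, 0, NULL);
>>     if (a == NULL)
>>         return 1;
>>     /* do not mmap() again: inherit the memory mapping from a */
>>     b = nm_open("netmap:em1", NULL, NM_OPEN_NO_MMAP, a);
>>     if (b == NULL)
>>         return 1;
>>     /* ... zero-copy forward between a and b (swap slot buf_idx) ... */
>>     nm_close(b);
>>     nm_close(a);
>>     return 0;
>> }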
>>
>> You can imagine a netmap memory region as a pool of buffers and rings,
>> where one or more interfaces can allocate their buffers/rings when they
>> enter netmap mode (deallocation is on exit from netmap mode).
>> Your system can have many memory regions. By default all the physical
>> interfaces share the same memory region, while each VALE port uses a
>> private memory region.
>> This is however being changed to ease arbitrary association between
>> interfaces and memory regions.
>>
>> Cheers,
>>   Vincenzo
>>
>> 2016-11-17 22:50 GMT+01:00 Paras Jha <dreadiscool@gmail.com>:
>>
>>> Hi all,
>>>
>>> I had a quick question about some of the implications of sharing packet
>>> buffer memory between multiple interfaces. Assuming an arbitrary number
>>> of interfaces (> 2) are linked together with NM_OPEN_NO_MMAP and share
>>> the same memory, would this have any issues with lock contention?
>>>
>>> Sorry in advance if this is the wrong place to post, I had seen several
>>> other archives about Netmap on this mailing list and I thought it was the
>>> most appropriate place.
>>>
>>> Regards
>>>
>>
>>
>>
>> --
>> Vincenzo Maffione
>>
>
>


-- 
Vincenzo Maffione


