Date:      Sun, 29 Mar 2015 02:41:16 +0300
From:      Slawa Olhovchenkov <slw@zxy.spb.ru>
To:        Adrian Chadd <adrian@freebsd.org>
Cc:        "freebsd-hackers@freebsd.org" <freebsd-hackers@freebsd.org>
Subject:   Re: irq cpu binding
Message-ID:  <20150328234116.GJ23643@zxy.spb.ru>
In-Reply-To: <CAJ-VmongWE_z7Rod8-SoFmyiLqiTbHtSaAwjgAs05L_Z3jrWXA@mail.gmail.com>
References:  <20150328194959.GE23643@zxy.spb.ru> <CAJ-Vmo=1rzB+YNGNuV9s=FnSse7FL7S42OSS4u-PzUs74b850A@mail.gmail.com> <20150328201219.GF23643@zxy.spb.ru> <CAJ-Vmo=wecgoVYcS14gsOnT86p=HEMdao65aXTi7jLfVVyOELg@mail.gmail.com> <20150328221621.GG23643@zxy.spb.ru> <CAJ-Vmomd6Z5Ou7cvV1Kg4m=X2907507hqKMWiz6ssZ45Pi_-Dg@mail.gmail.com> <20150328224634.GH23643@zxy.spb.ru> <CAJ-VmokwGgHGP6AjBcGbyJShBPX6dyJjjNeCBcjxLi1obaiRtQ@mail.gmail.com> <20150328230533.GI23643@zxy.spb.ru> <CAJ-VmongWE_z7Rod8-SoFmyiLqiTbHtSaAwjgAs05L_Z3jrWXA@mail.gmail.com>

On Sat, Mar 28, 2015 at 04:23:54PM -0700, Adrian Chadd wrote:

> On 28 March 2015 at 16:05, Slawa Olhovchenkov <slw@zxy.spb.ru> wrote:
> > On Sat, Mar 28, 2015 at 03:49:48PM -0700, Adrian Chadd wrote:
> >
> >> You should totally join #bsdcode on efnet and ask me about it. :)
> >
> > I totally don't use IRC (haven't for the last 20 years).
> > Maybe Skype?
> 
> Heh, IRC is better. There are more FreeBSD people in the channel. :)

I don't really understand IRC. I don't know how I would get the chat
history back after a long disconnect.

> >> on RSS, this is what would happen:
> >>
> >> * ALL NICs RSS BUCKET 0 -> core 0
> >> * ...
> >> * ALL NICs RSS BUCKET 7 -> core 7
> >
> > My experience: this is worse than a dedicated core (one core handling only
> > one bucket of one NIC).
> 
> The only reason(s) this becomes problematic is if things preempt other
> things on that CPU.
> Hopefully enough work gets done in each interrupt run - but, maybe the
> scheduler is doing something odd and interleaving all the
> supposedly-equivalent-ithreads based on what's blocking in locks and
> what isn't. It's worth digging into.

Sorry, I misunderstood you there.
Or maybe my explanation was unclear.

"Dedicated core" does not mean a core that serves only the NIC and does
no other work.
"Dedicated core" means: this NIC bucket is served only by this core, and
no other NIC buckets are handled on this core.
Other tasks may still use this core.
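
To make the distinction concrete, here is a rough sketch (the NIC count,
bucket count and core numbers are invented, only for illustration):

#include <stdio.h>

#define NNICS    2	/* e.g. two ports, cxl0 and cxl1 */
#define NBUCKETS 8	/* RSS buckets per NIC */

/* Shared layout: bucket i of every NIC lands on core i. */
static int
core_shared(int nic, int bucket)
{
	(void)nic;
	return (bucket);
}

/*
 * "Dedicated" layout: every (NIC, bucket) pair gets its own core.
 * Other (non-NIC) work may still run on that core.
 */
static int
core_dedicated(int nic, int bucket)
{
	return (nic * NBUCKETS + bucket);
}

int
main(void)
{
	for (int nic = 0; nic < NNICS; nic++)
		for (int bucket = 0; bucket < NBUCKETS; bucket++)
			printf("nic%d bucket%d: shared -> core %d, "
			    "dedicated -> core %d\n", nic, bucket,
			    core_shared(nic, bucket),
			    core_dedicated(nic, bucket));
	return (0);
}

In the shared layout bucket 3 of both NICs competes for core 3; in the
dedicated layout they end up on cores 3 and 11.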

What do you mean? Can you explain?

> Not only that, but I also do handle the case of fragments going to the
> "wrong" queue - then getting reassembled and reinjected back into the
> right RSS CPU. That way things are correctly in-order.

Fragment reassembly is done in-kernel and (for TCP) the reassembled
packets go to the right queue automatically.

> >
> >> Now, that's not really 100% optimal for NUMA and multiple PCIe
> >> controllers, but we're not there yet.
> >>
> >> Hopefully I can twist/cajole navdeep @ chelsio to continue doing a
> >> little more RSS work so I can teach cxgbe/cxl about RSS configuration,
> >> but ixgbe, igb and ixl all do the above when RSS is enabled.
> >
> > Most part of my setup use cxgbe.
> 
> Ok.
> 
> Well, that (and other stuff) will happen at the speed of "adrian's
> doing this for fun as his home project", so if you/others would like
> to help out then please do. I'd like to get this stuff very much done
> and in -11 before it's released next year.

I understand that this is your home project.
I can't demand anything from you.
I only think that what I am proposing is a lighter and simpler solution (it
needs no driver modifications and doesn't break the API/ABI/KBI).
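
For example, something along these lines (only a sketch: the IRQ number
and the core below are invented; in practice they come from vmstat -i and
the box's topology). It is the same binding that cpuset(1) -x does, and it
needs no driver changes:

/*
 * Bind one already-allocated NIC interrupt vector to one core from
 * userland.  Example values only: IRQ 270, core 3.
 */
#include <sys/param.h>
#include <sys/cpuset.h>

#include <err.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	int irq = 270;		/* e.g. one NIC queue vector, see vmstat -i */
	int core = 3;		/* core that should service this vector */
	cpuset_t mask;

	if (argc == 3) {
		irq = atoi(argv[1]);
		core = atoi(argv[2]);
	}

	CPU_ZERO(&mask);
	CPU_SET(core, &mask);

	if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_IRQ, irq,
	    sizeof(mask), &mask) != 0)
		err(1, "cpuset_setaffinity(irq %d)", irq);

	return (0);
}

The same thing can be scripted per queue vector at boot with
cpuset -l <core> -x <irq>.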


