Date: Fri, 6 Dec 2013 11:04:29 +0800
From: Sepherosa Ziehau <sepherosa@gmail.com>
To: Adrian Chadd <adrian@freebsd.org>
Cc: Ermal Luçi <eri@freebsd.org>, freebsd-net <freebsd-net@freebsd.org>,
    Oleg Moskalenko <mom040267@gmail.com>, Tim Kientzle <kientzle@freebsd.org>,
    "freebsd-current@freebsd.org" <freebsd-current@freebsd.org>
Subject: Re: [PATCH] SO_REUSEADDR and SO_REUSEPORT behaviour
Message-ID: <CAMOc5cznTT-0qOUrG1F55=yPpbWfn3EqWT4+V_RDqG6+DCckOQ@mail.gmail.com>
In-Reply-To: <CAJ-VmokQ_C_t=pZF5QnWMzjzw6YVqTD4ny3hv_cLDch-m2EOmg@mail.gmail.com>
References: <CAPBZQG29BEJJ8BK=gn+g_n5o7JSnPbsKQ-=3=6AkFOxzt+=wGQ@mail.gmail.com>
    <4053E074-EDC5-49AB-91A7-E50ABE36602E@freebsd.org>
    <CALDtMrKvwXW-ou8X7zsKx2ST=dKD7FqHvvnQtGo30znTWU+VQQ@mail.gmail.com>
    <CAPBZQG0=bcHyv7aZse=WKfjk5=6D2-+6EQHiAaDZqGtaodhMMA@mail.gmail.com>
    <CAMOc5cwFGwk0dS5VT-YxfP3Yt38R8aO-KJTX6W832uOFEdavgA@mail.gmail.com>
    <CAJ-Vmonc7SVxndmVN1jphFRa5svD5BdnMrCudSbYkx4djHXW0A@mail.gmail.com>
    <CAMOc5cyM-+vau7BsZQ5F5L95EQgN=pJqru=9aK_0aJ+VUk=gxQ@mail.gmail.com>
    <CAJ-VmokQ_C_t=pZF5QnWMzjzw6YVqTD4ny3hv_cLDch-m2EOmg@mail.gmail.com>
On Tue, Dec 3, 2013 at 5:41 AM, Adrian Chadd <adrian@freebsd.org> wrote:
>
> On 2 December 2013 03:45, Sepherosa Ziehau <sepherosa@gmail.com> wrote:
> >
> > On Mon, Dec 2, 2013 at 1:02 PM, Adrian Chadd <adrian@freebsd.org> wrote:
> >
> >> Ok, so given this, how do you guarantee the UTHREAD stays on the given
> >> CPU? You assume it stays on the CPU that the initial listen socket was
> >> created on, right? If it's migrated to another CPU core, then the
> >> listen queue still stays in the original hash group, which is in a
> >> netisr on a different CPU?
> >
> > As I wrote in the brief introduction above, Dfly currently relies on
> > the scheduler doing the proper thing (the scheduler does a very good
> > job during my tests). I need to export a certain kind of socket option
> > to make that information available to user-space programs. Forcing
> > UTHREAD binding in the kernel is not helpful; in a reverse-proxy
> > application, things are different. And even if that kind of binding
> > information were exported to user space, a user-space program would
> > still have to poll it periodically (in Dfly at least), since other
> > programs binding to the same addr/port could come and go, which causes
> > the inp localgroup to be reorganized in the current Dfly
> > implementation.
>
> Right, I kind of gathered that. It's fine; I was conceptually thinking
> of adding some thread pinning to this anyway.
>
> How do you see this scaling on massively multi-core machines? Like 32,
> 48, 64, 128 cores? I had some vague, hand-wavy notion of maybe limiting

We do have a 48-core box. It is mainly used for package building and
other things, and I haven't run network stress tests on it. However,
we did address some message-passing problems on it that would not have
surfaced on 8-CPU boxes.

> the concept of pcbgroup hash / netisr threads to a subset of CPUs, or
> have them be able to float between sockets but only have 1 (or n,

Floating around may be good, but by pinning each netisr to a specific
CPU you can use lockless per-CPU data.

> maybe) per socket. Or just have a fixed, smaller pool. The idea then

We used to have dedicated threads for UDP and TCP processing, but it
turns out that one netisr per CPU works best in Dfly. You probably need
to experiment and measure before deciding between 1 and N netisrs per
CPU.

Best Regards,
sephe

> is that the scheduler would need to be told that a given userland
> thread/process belongs to a given netisr thread, and to schedule them
> on the same CPU when possible.
>
> Anyway, thanks for doing this work. I only wish that you'd do it for
> FreeBSD. :-)
>
>
> -adrian

--
Tomorrow Will Never Die
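As a concrete sketch of the pattern the thread is discussing: each worker
pins itself to one CPU and opens its own SO_REUSEPORT listen socket on the
shared addr/port, so connections hashed to that CPU's pcbgroup/netisr are
accepted without crossing CPUs. The pinning shown below uses FreeBSD's
cpuset_setaffinity(2); DragonFly instead relies on its scheduler, as sephe
describes above. The function name, backlog, and error handling are
illustrative assumptions, not code from either kernel or from the patch in
the Subject line.

    /*
     * Illustrative sketch only: one SO_REUSEPORT listener per CPU,
     * with the worker thread pinned via FreeBSD's cpuset_setaffinity(2).
     * DragonFly relies on its scheduler for the CPU affinity instead.
     */
    #include <sys/param.h>
    #include <sys/socket.h>
    #include <sys/cpuset.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <err.h>
    #include <string.h>

    static int
    reuseport_listener(int cpu, uint16_t port)
    {
            struct sockaddr_in sin;
            cpuset_t mask;
            int s, on = 1;

            /* Pin the calling thread to the given CPU (FreeBSD cpuset API). */
            CPU_ZERO(&mask);
            CPU_SET(cpu, &mask);
            if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1,
                sizeof(mask), &mask) == -1)
                    err(1, "cpuset_setaffinity");

            if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
                    err(1, "socket");

            /* Every worker binds the same addr/port; SO_REUSEPORT allows it. */
            if (setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) == -1)
                    err(1, "setsockopt(SO_REUSEPORT)");

            memset(&sin, 0, sizeof(sin));
            sin.sin_len = sizeof(sin);
            sin.sin_family = AF_INET;
            sin.sin_addr.s_addr = htonl(INADDR_ANY);
            sin.sin_port = htons(port);
            if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
                    err(1, "bind");
            if (listen(s, 128) == -1)
                    err(1, "listen");

            /*
             * The caller accept(2)s on this socket from the thread pinned
             * above; the kernel's per-CPU hashing steers new connections
             * to the listener whose queue lives on this CPU.
             */
            return (s);
    }

The polling concern sephe raises applies on top of this: if other programs
bind the same addr/port, the kernel's grouping can be reorganized, so any
exported CPU-binding information can go stale and would need re-checking.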