Date: Mon, 22 Jul 2013 14:26:18 -0700
From: trafdev <trafdev@mail.ru>
To: John-Mark Gurney <jmg@funkthat.com>
Cc: Sepherosa Ziehau <sepherosa@gmail.com>, freebsd-net@freebsd.org, Adrian Chadd <adrian@freebsd.org>
Subject: Re: SO_REUSEPORT: strange kernel balancer behaviour
Message-ID: <51EDA37A.9040200@mail.ru>
In-Reply-To: <20130722200205.GO26412@funkthat.com>
References: <51E0E2AF.7090404@mail.ru> <CAMOc5cz6gP2N62T4QhbTdVar94O4FSdPDsqktD_9vJ0mYVqt_Q@mail.gmail.com> <51E44E2F.8060700@mail.ru> <CAJ-VmomHHfhExa4g63tT_sf0hTPa2T7jPKQGHrD0fchq=-k+=g@mail.gmail.com> <51E455D5.2090403@mail.ru> <20130722200205.GO26412@funkthat.com>
Actually the overhead is almost zero; the real problem is the uneven load
distribution between processes. As https://lwn.net/Articles/542629/ notes:
"At Google, they have seen a factor-of-three difference between the thread
accepting the most connections and the thread accepting the fewest
connections." I'm getting almost the same results.

On Mon Jul 22 13:02:05 2013, John-Mark Gurney wrote:
> trafdev wrote this message on Mon, Jul 15, 2013 at 13:04 -0700:
>> Yep, I think it's a waste of resources; the poll manager should somehow
>> be configured to wake up only one process/thread.
>> Does anyone know how to do that?
>
> This isn't currently possible w/o a shared kqueue, since the event is
> level triggered, not edge triggered.. You could do it w/ a shared kqueue
> using _ONESHOT (but then you'd also have a shared listen fd, which
> obviously isn't what the OP wants)...
>
> I guess it wouldn't be too hard to do a wake-one style thing, where
> kqueue only delivers the event once per "item/level", but right now
> kqueue doesn't know anything about the format of the data (which would be
> the number of listeners waiting)... Even if it did, there would be this
> dangerous contract that if an event is returned, the userland
> process would handle it... How is kqueue supposed to handle a server
> that crashes/dies between getting the event and accepting a connection?
> How is userland supposed to know that an event wasn't handled, or is
> just taking a long time?
>
> Sadly, if you want to avoid the thundering herd problem, I think
> blocking on accept is the best method, or using an fd-passing scheme
> where only one process accepts connections...
>
>> On Mon Jul 15 12:53:55 2013, Adrian Chadd wrote:
>>> I've noticed this when doing this stuff in a threaded program with
>>> each thread listening on the same port.
>>>
>>> All threads wake up on each accepted connection; one thread wins and
>>> the other threads get EAGAIN.
>>>
>>>
>>>
>>> -adrian
>>>
>>> On 15 July 2013 12:31, trafdev <trafdev@mail.ru> wrote:
>>>> Thanks for the reply.
>>>>
>>>> This approach produces a lot of "resource temporarily unavailable"
>>>> (EAGAIN) errors when accept-ing connections in N-1 processes.
>>>> Is it possible to avoid this by e.g. tweaking kqueue?
>>>>
>>>>
>>>> On Sun Jul 14 19:37:59 2013, Sepherosa Ziehau wrote:
>>>>>
>>>>> On Sat, Jul 13, 2013 at 1:16 PM, trafdev <trafdev@mail.ru> wrote:
>>>>>>
>>>>>> Hello.
>>>>>>
>>>>>> Could someone help with the following problem with SO_REUSEPORT.
>>>>>
>>>>>
>>>>> The most portable "load balance" between processes listening on the
>>>>> same TCP addr/port is probably:
>>>>>
>>>>> s = socket();
>>>>> bind(s);
>>>>> listen(s);
>>>>> /* various setsockopt and fcntl calls as you need */
>>>>> pid = fork();
>>>>> if (pid == 0) {
>>>>>     server_loop(s);
>>>>>     exit(1);
>>>>> }
>>>>> server_loop(s);
>>>>> exit(1);
>>>>>
>>>>> Even in Linux and DragonFly, SO_REUSEPORT "load balancing" between
>>>>> processes listening on the same TCP addr/port was introduced only
>>>>> recently, so you probably won't want to rely on it.
>
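For reference, a minimal self-contained sketch of the fork-after-listen pattern Sepherosa outlines above, where every worker blocks in accept() on the single shared listen socket so the kernel hands each connection to exactly one of them. The port number, worker count, and trivial handler here are arbitrary illustration values, not anything taken from the thread:

    /*
     * Sketch: one listen socket, several forked workers blocking in accept().
     * PORT and NWORKERS are made-up values for illustration only.
     */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <err.h>
    #include <string.h>
    #include <unistd.h>

    #define PORT     8080
    #define NWORKERS 4

    static void
    server_loop(int s)
    {
        for (;;) {
            int c = accept(s, NULL, NULL);  /* blocks; kernel wakes one worker */
            if (c < 0) {
                warn("accept");
                continue;
            }
            const char msg[] = "hello\n";
            (void)write(c, msg, sizeof(msg) - 1);
            close(c);
        }
    }

    int
    main(void)
    {
        struct sockaddr_in sin;
        int s, i, on = 1;

        if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            err(1, "socket");
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(PORT);
        if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0)
            err(1, "bind");
        if (listen(s, 128) < 0)
            err(1, "listen");

        /* Fork the workers; every child inherits the same listen fd. */
        for (i = 0; i < NWORKERS - 1; i++) {
            pid_t pid = fork();
            if (pid < 0)
                err(1, "fork");
            if (pid == 0) {
                server_loop(s);
                _exit(1);
            }
        }
        server_loop(s);  /* the parent serves connections too */
        return 1;
    }

With blocking accept() there is no EAGAIN storm, since only the process that actually receives the connection returns from accept(); the trade-off, as noted above, is that you give up per-worker readiness notification on the listen socket via kqueue.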