Date:      Fri, 4 Apr 1997 10:39:06 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        james@wgold.demon.co.uk (James Mansion)
Cc:        FreeBSD-hackers@freebsd.org
Subject:   Re: apache like preforking apps and high loads
Message-ID:  <199704041739.KAA19497@phaeton.artisoft.com>
In-Reply-To: <3344D3D3.618A@wgold.demon.co.uk> from "James Mansion" at Apr 4, 97 11:11:31 am

> Bakul Shah wrote:
> > Fairness is probably not an issue when an app. consists of a number
> > of anonymous servers but in general one would want to make sure that
> > if N processes are waiting on accept() on the same socket, no one
> > process is starved of accepting.  How do you ensure that?
> > 
> 
> Why would you necessarily want this, apart from aesthetics?
> 
> I don't think this behaviour is mandated anywhere.
> 
> In any case, presumably (hopefully) the wakeup goes to the
> process with the highest scheduling priority.  If the scheduler
> does dynamic adjustment based on CPU time consumed 'recently' then
> waiters will be favoured over processes that ran recently, and you'll
> get the effect that you want.

In point of fact, you often *do* want this, unless you are attempting
to load balance work-to-do between processors.

One of the most clever things I came up with for the NetWare for
UNIX product (Steve and Marty implemented it, not me) was the idea
that the recv requests to the streams NCP MUX should be LIFO'ed.

This increases the probability that the next work-to-do engine to
which you give work will have all of its pages already in core and
in its map (the biggest danger here is the loss of transaction data
pages, which must be, by definition, per engine).

If you were attempting to balance between processors, then you really
want two effective queues, and you select the "hot" queue head by
managing the insertion order per processor and per time (the top is the
hottest engine on processor A, the next is the hottest on processor
B, the next is the second hottest on processor A, and so on; this
assumes a high or fixed CPU affinity for the engines).

Because you can control whose call returns immediately and whose
doesn't, you can effect a private scheduling policy using a MUX in a
work-to-do process model.

This is one of the reasons NetWare for UNIX beat native NetWare by
up to 12% on most benchmarks when running on the exact same hardware,
even after you take UnixWare's high-latency stack (mostly a result of
the ODI drivers) into account.


					Regards,
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


