From owner-freebsd-hackers Fri Apr  4 06:43:29 1997
Return-Path:
Received: (from root@localhost) by freefall.freebsd.org (8.8.5/8.8.5) id GAA10751 for hackers-outgoing; Fri, 4 Apr 1997 06:43:29 -0800 (PST)
Received: from chai.plexuscom.com (chai.plexuscom.com [207.87.46.100]) by freefall.freebsd.org (8.8.5/8.8.5) with ESMTP id GAA10744 for ; Fri, 4 Apr 1997 06:43:26 -0800 (PST)
Received: from chai.plexuscom.com (localhost [127.0.0.1]) by chai.plexuscom.com (8.8.5/8.8.5) with ESMTP id JAA23469; Fri, 4 Apr 1997 09:44:16 -0500 (EST)
Message-Id: <199704041444.JAA23469@chai.plexuscom.com>
To: Marc Slemko , David Greenman
Cc: FreeBSD-hackers
Subject: Re: apache like preforking apps and high loads
In-reply-to: Your message of "Thu, 03 Apr 1997 10:50:19 MST."
Date: Fri, 04 Apr 1997 09:44:16 -0500
From: Bakul Shah
Sender: owner-hackers@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

David Greenman writes:
> The processes blocked on accept are handled in a round-robin fashion,
> oldest first.

Thanks!  In response to

> > Fairness is probably not an issue when an app. consists of a number
> > of anonymous servers but in general one would want to make sure that
> > if N processes are waiting on accept() on the same socket, no one
> > process is starved of accepting.  How do you ensure that?

Marc Slemko writes:
> For something like Apache you want the _least_ equal distribution
> possible, ie. a stack of most recently used processes from which you
> pop a server to process a request.  Right now, if you have 100 servers
> running and get requests at a rate that one could handle, all of them
> will still be used.  This is bad; it hurts locality of reference a lot.
> On some architectures, it has a significant impact WRT caching, and
> also means that if you don't have enough RAM for all server processes
> then they will each end up being swapped out and in again.
Granted, Apache-like apps don't care about fairness to _server_ processes
and may even perform worse with fair scheduling, which is why such apps
should do their own scheduling if at all possible.  But without some hints
(and a mechanism to specify such hints) about *how* some processes are
cooperating, a generic kernel must treat all processes equally.

If providing equal service to all _accepted_ connections is one's goal,
handling accepts in a stack fashion is not ideal either: clients served by
a process deep in the stack will get worse service than ones handled by a
process near the top of the stack.

> The reason why Apache doesn't do this is the implementation details.  It
> is difficult to get a good implementation that is efficient (so you don't
> create more overhead than you remove) and portable.

I don't know the Apache details, but handling one client at a time per
process does not seem like the most efficient use of resources, given
Unix's heavyweight processes.  As for portability *and* performance, you
pretty much need _some_ OS/version/processor-specific code.

> > To guido: For apache like apps one idea is to have one process be
> > the acceptor and have it pass a new socket to individual server
> > processes (including doing some initial processing to determine
> > which process should get this socket if not all server processes are
> > equivalent).
>
> This has a lot of portability issues and can lead to running into things
> such as per process fd limits more quickly on some architectures.

Even Linux handles passing file descriptors via sendmsg().  Running out of
fds should not be a problem if the acceptor process simply passes the fd on
to another process and closes its own copy.  For initial processing things
get a bit trickier and you may need more acceptor processes (but
N/FD_SETSIZE cuts down your accept() load considerably).

-- bakul