Date:      Sat, 18 Dec 1999 13:14:25 -0800
From:      "Ronald F. Guilmette" <rfg@monkeys.com>
To:        Kevin Day <toasty@dragondata.com>
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Practical limit for number of TCP connections? 
Message-ID:  <43033.945551665@monkeys.com>
In-Reply-To: Your message of Sat, 18 Dec 1999 14:58:09 -0600. <199912182058.OAA42531@celery.dragondata.com> 

In message <199912182058.OAA42531@celery.dragondata.com>, you wrote:

>Speaking of accepting... What's the upper limit on listen queues? Something
>around 64, correct?

I don't know, but why do you ask?  Do you have some reason to believe that
the length of listen queues is going to be an issue?
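
For what it's worth, whatever backlog you hand to listen() gets silently
clamped to the kernel's own ceiling anyway (SOMAXCONN from <sys/socket.h>;
if memory serves there is also a kern.ipc.somaxconn sysctl on reasonably
recent FreeBSD).  Purely as an illustration, this is the sort of thing I
mean if you just want the deepest queue the kernel will give you (the
function name is made up):

    /* Illustrative only: create a listener and ask for the largest
     * backlog the kernel allows.  The kernel clamps the listen()
     * argument to its own maximum regardless of what you pass. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <err.h>

    int
    make_listener(unsigned short port)
    {
        struct sockaddr_in sin;
        int fd;

        if ((fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            err(1, "socket");

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
            err(1, "bind");
        if (listen(fd, SOMAXCONN) < 0)
            err(1, "listen");
        return (fd);
    }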

>> Quite a lot of memory (either virtual or real) will also get sucked up
>> *if* you have a separate and independent process handling each separate
>> connection.  A simple experiment I did awhile back indicated that on
>> recent-vintage versions of FreeBSD, the absolute minimum per-process
>> overhead was 12KB.  That is a *minimum*, e.g. for a process that contains
>> essentially no code and no data.  But you will probably never see that in
>> practice, which is to say your minimum per-process overhead is going to
>> be bigger than that.
>
>Yeah, I don't plan on doing things the apache way. :) One process per client
>seems silly here, since nearly every client will be getting the exact same
>data.

I think that you mean that you don't plan on doing things the Sendmail way.
(1/2 :-)

Seriously, Sendmail forks a child for every connection... two, actually,
assuming that the client actually DOES SOMETHING (e.g. sending mail), and
those children hang around until the _client_ finishes whatever it is
doing... which can sometimes take a long, long time if the client itself
is slow.  But unless I'm mistaken... which I very well might be... Apache
just gets a request, services it, and that's that.  That particular
instance of Apache then goes immediately back to the free servers pool.

That's what I was told anyway.
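
Anyway, for the sake of concreteness, the fork-per-connection model I'm
describing looks roughly like this.  This is only a generic sketch (it is
NOT Sendmail's actual code), and handle_client() is a made-up stand-in
for whatever the child does with its one client:

    /* Generic fork-per-connection server skeleton.  The parent does
     * nothing but accept(); each child is tied up for however long its
     * one client takes, which is exactly the cost being discussed. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <signal.h>
    #include <unistd.h>

    extern void handle_client(int fd);      /* hypothetical */

    void
    serve_forking(int listen_fd)
    {
        signal(SIGCHLD, SIG_IGN);           /* let the kernel reap children */

        for (;;) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd < 0)
                continue;
            switch (fork()) {
            case 0:                         /* child: serve this client */
                close(listen_fd);
                handle_client(fd);
                _exit(0);
            case -1:                        /* fork failed: drop the client */
                close(fd);
                break;
            default:                        /* parent: back to accept() */
                close(fd);
                break;
            }
        }
    }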

>> The _clean_ way of doing it would be to write your multi-user server using
>> threads, and to assign one thread to each connection.  If you can do that,
>> then the logic in the program becomes quite simple.  Each thread just sits
>> there, blocked on a call to read(), until something comes in, and then it
>> just parses the command, does whatever it is supposed to do in response to
>> that command, and then goes back to the read() again.
>> 
>> But as I understand it, there is not yet sufficient threads support in the
>> FreeBSD kernel to make this work well/properly.  (I may perhaps be
>> misinformed about that, but that's what I have been told anyway.)
>
>I believe this is how ConferenceRoom works, so it seems ok, but I remember
>the comments that FreeBSD was their least preferred platform because of
>thread problems.

Yes.

As I say, my understanding is that FreeBSD still doesn't have real and/or
complete thread support in the kernel.  So if you have a multi-threaded
application and one thread blocks (e.g. on I/O) then the whole thing is
blocked.
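
Just so the shape of it is clear, the thread-per-connection logic from
the quoted paragraph above would look something like the sketch below,
using POSIX threads.  handle_command() is a made-up stand-in for your
protocol parsing, and whether that blocking read() stalls only the one
thread or the whole process is precisely the kernel-support question:

    /* One of these runs per connection: block in read(), handle the
     * command, go back to read().  Sketch only. */
    #include <sys/types.h>
    #include <pthread.h>
    #include <unistd.h>

    extern void handle_command(int fd, const char *buf, ssize_t len);  /* hypothetical */

    void *
    connection_thread(void *arg)
    {
        int fd = (int)(long)arg;        /* fd smuggled through the pointer */
        char buf[512];
        ssize_t n;

        while ((n = read(fd, buf, sizeof(buf))) > 0)
            handle_command(fd, buf, n); /* parse it, do it, reply */

        close(fd);                      /* EOF or error: client went away */
        return (NULL);
    }

    /* In the accept loop, something like:
     *
     *     pthread_t tid;
     *     pthread_create(&tid, NULL, connection_thread, (void *)(long)fd);
     *     pthread_detach(tid);
     */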

>> The other way is just have your server be a single thread/process, and to
>> have it keep one big list of all of the connections (i.e. socket fds) that
>> it has open at present.  Then it just executes its main loop, over and over
>> again.  At the top of the main loop is one big honkin' call to select()
>> in which you find out which of your connections is ready to be read or
>> written.  Then you go off and read/write those as appropriate, and then
>> just come back and do the big select() again.  (You could do this using
>> calls to poll() instead of calls to select(), and that might be a bit
>> more efficient.)
>
>This is how traditional ircd's handle it, and how I was planning to. It
>seems the most efficient, especially since 99.999% of the clients will be
>receiving the same data, it'll make transmission pretty easy.

Not really.

To be safe, you really shouldn't assume that you can just blast out the
same output data to all of the connections at the same time.  You really
should check (using select() or poll()) to see which ones are actually and
currently in a state where you _can_ write to them.  Some may not be, in
which case writing to them is a Bad Idea... either the write will fail or
come up short (if you set the socket to non-blocking mode), leaving you to
buffer and retry the leftover data yourself, or else, worse, your whole
server will block on that one socket and that one call to write() or
send().
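
To make that concrete, here is a rough sketch of the kind of main loop I
have in mind, with the write-readiness check folded in.  Everything in it
besides select() itself is made up for the illustration: conns[]/nconns
is your connection list, pending_output() says whether a connection has
queued data, read_from() reads and parses, and flush_to() writes out as
much of the queued data as will fit:

    /* Single-process select() loop: ask for write-readiness only on
     * connections that actually have output queued, and write only to
     * the ones select() says are ready. */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define MAXCONNS 256                    /* must not exceed FD_SETSIZE */

    extern int  conns[MAXCONNS];            /* open socket fds */
    extern int  nconns;
    extern int  pending_output(int fd);
    extern void read_from(int fd);
    extern void flush_to(int fd);

    void
    main_loop(void)
    {
        fd_set rset, wset;
        int i, maxfd;

        for (;;) {
            FD_ZERO(&rset);
            FD_ZERO(&wset);
            maxfd = -1;

            for (i = 0; i < nconns; i++) {
                FD_SET(conns[i], &rset);
                if (pending_output(conns[i]))
                    FD_SET(conns[i], &wset);
                if (conns[i] > maxfd)
                    maxfd = conns[i];
            }

            if (select(maxfd + 1, &rset, &wset, NULL, NULL) < 0)
                continue;                   /* EINTR and friends: go around */

            for (i = 0; i < nconns; i++) {
                if (FD_ISSET(conns[i], &rset))
                    read_from(conns[i]);    /* data (or EOF) waiting */
                if (FD_ISSET(conns[i], &wset))
                    flush_to(conns[i]);     /* safe: select() said so */
            }
        }
    }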

>> If you want a lot of connections, start by increasing the values of
>> "maxusers" and NMBCLUSTERS in your kernel config file.  Then build and
>> install a new kernel.  How much is enough for these parameters?  Beats
>> me.  If you find that you are running out of resources, increase them
>> some more and then try again.
>
>I really wish more of those options were dynamically tunable, instead of a
>magic formula that maxusers controls. :)

I can only agree.

The system I was running that had more than 8,000 connections open at a
time in fact had only a small handful of ``users'', i.e. me and a few
daemon processes, and the big server process I described.

It therefore seemed kinda weird to have to set maxusers to something in
excess of 200, and doing so probably caused a bunch of kernel tables to
be a lot bigger than they really had to be.
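
Concretely, the lines in question in the kernel config file look like the
following.  The numbers here are purely illustrative; what counts as
"enough" depends entirely on your load (and older versions of config(8)
may want the option value quoted):

    # Illustrative values only.
    maxusers        256
    options         NMBCLUSTERS=8192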


