Date:      Sat, 14 Oct 95 23:49:24 -0700
From:      Bakul Shah <bakul@netcom.com>
To:        Terry Lambert <terry@lambert.org>
Cc:        bde@zeta.org.au (Bruce Evans), hackers@freefall.freebsd.org, rdm@ic.net, current@freefall.freebsd.org
Subject:   Re: getdtablesize() broken? 
Message-ID:  <199510150649.XAA15664@netcom15.netcom.com>
In-Reply-To: Your message of "Sat, 14 Oct 95 19:14:32 PDT." <199510150214.TAA22230@phaeton.artisoft.com> 

> The correct limit on the largest number is FD_SETSIZE, as defined in
> sys/types.h.

IMHO, limiting the fdset bit-array size like this *within* the
kernel is a mistake.  I have an application where I run into
this and am forced to use a multi-process solution.  Imagine
a server handling more than FD_SETSIZE (i.e. 256) TCP
connections to clients -- requests are not all that frequent
and each takes just a little bit of time to service, so they
*can* all easily be handled by one process.  A multi-process
solution gets complicated (shared state has to go into shared
memory, locking is needed, etc.) and slower (extra context
switches, lock/unlock time).
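
Here is a minimal sketch of the failure mode, assuming the
default FD_SETSIZE of 256; the descriptor value is hypothetical
and socket setup is elided.  FD_SET() on a descriptor >=
FD_SETSIZE scribbles past the end of a default-sized fd_set, so
the server cannot even build the bitmask, never mind pass it to
select():

    #include <sys/types.h>
    #include <sys/time.h>
    #include <stdio.h>

    int
    main(void)
    {
            fd_set rfds;
            int fd = 300;   /* hypothetical client socket, > FD_SETSIZE */

            FD_ZERO(&rfds);
            if (fd >= FD_SETSIZE) {
                    /* FD_SET(fd, &rfds) here would write past the
                     * end of the bitmask -- forced into the
                     * multi-process dance instead. */
                    fprintf(stderr, "fd %d does not fit in fd_set "
                        "(FD_SETSIZE %d)\n", fd, FD_SETSIZE);
                    return 1;
            }
            FD_SET(fd, &rfds);
            return 0;
    }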

Using a limit of FD_SETSIZE does not buy you any extra
protection.  RLIMIT_NOFILE is the right limit to check
against in kern/sys_generic.c:select().  Mercifully, that
limit is changeable via sysctl, so server machines can raise
it.  NetBSD, FreeBSD, Linux, and maybe even BSDI (I haven't
checked recently) are all guilty here.  Small upper limits
are another thing that separates PeeCees from serious server
machines.

Let me say this another way.  If I can create N files, I
should damn well be able to select() on any one of them.

-- bakul


