Date:      Sat, 31 Jan 1998 08:00:52 -0500
From:      Geoffrey Robinson <grobin@accessv.com>
To:        questions@FreeBSD.ORG
Subject:   Setting Max open files for database CGI
Message-ID:  <34D32084.7BF2D563@accessv.com>

This is a question about setting the maximum number of files that can be
open by a process, but I'm going to explain a bit about what I need it
for, because I'm also interested in hearing some opinions about what I
plan to do.

I'm working on a CGI in C that has to be fast enough to run at *least*
40 times a second on a P-200, so it has to be coded for speed above all
else. The program is basically a database that deals with client data.
Originally the design was centered around two data files: a fixed-width-
record binary hash file and a fixed-width-record unsorted binary file.
The hash file was to contain client data that would be looked up by the
query-string. The client records in the hash file would contain
pointers to one or more secondary records in the unsorted data file.
This was the fastest system I could think of, but when I brought
file locking into the equation it created a bottleneck (only one
instance of the CGI can access the file at a time). I'm aware that I can
lock specific ranges of bytes in a file, but my experience is limited,
so I can't let this get too complicated or it'll never get done.
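For what it's worth, the kind of byte-range locking I mean looks roughly
like this; the record length and the helper function are made up for
illustration, not code from the actual program:

/*
 * Sketch of byte-range locking with fcntl(), assuming fixed-width
 * records of RECLEN bytes.  Only the record being touched is locked,
 * so other CGI instances could work on other records concurrently.
 * RECLEN and lock_record() are hypothetical names.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define RECLEN 128  /* hypothetical fixed record width */

int
lock_record(int fd, long recno, short type)  /* F_RDLCK, F_WRLCK or F_UNLCK */
{
    struct flock fl;

    memset(&fl, 0, sizeof(fl));
    fl.l_type   = type;
    fl.l_whence = SEEK_SET;
    fl.l_start  = recno * RECLEN;   /* start of this record           */
    fl.l_len    = RECLEN;           /* lock only this record's bytes  */

    return fcntl(fd, F_SETLKW, &fl);  /* F_SETLKW blocks until granted */
}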

I happened to stumble on a train of thought that led me to the idea of
using a separate file for each record instead of two files containing
all the records. That would further speed things up (though not by much)
by eliminating the hash search: the query-string becomes the file name
of the main client record, and the client record contains the file names
of the secondary records. The files would still have to be locked while
in use, but other instances could operate on other records at the same
time. There will never be more than a few hundred records/files at a
time in the entire database.
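Roughly, the per-record-file idea would look something like this; the
directory, record layout and function name are placeholders rather than
the real code (and the query-string would of course have to be sanitized
before being used as a file name):

/*
 * Sketch of the one-file-per-record idea: the query-string names the
 * client record file, and flock() serializes access to that one file
 * only.  "/var/db/clients" and struct client are placeholders.
 */
#include <sys/file.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

struct client {            /* hypothetical fixed-width client record */
    char name[64];
    char secondary[8][32]; /* file names of the secondary records    */
};

int
load_client(const char *key, struct client *c)
{
    char path[256];
    int  fd;

    snprintf(path, sizeof(path), "/var/db/clients/%s", key);
    if ((fd = open(path, O_RDWR)) == -1)
        return -1;
    if (flock(fd, LOCK_EX) == -1) {   /* blocks other instances on   */
        close(fd);                    /* this one record only        */
        return -1;
    }
    if (read(fd, c, sizeof(*c)) != sizeof(*c)) {
        flock(fd, LOCK_UN);
        close(fd);
        return -1;
    }
    /* ... modify c, lseek(fd, 0, SEEK_SET), write it back ... */
    flock(fd, LOCK_UN);
    close(fd);
    return 0;
}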

I think this would run faster than the other system and would be easier
to write. What I want to know is: do the experienced programmers in this
group agree with me, or am I making a big mistake? Are there any hidden
performance penalties in this model?


Anyway, about open files. I wrote a test that attempts to open the same
file 1000 times. When run as root the program gets to about 800 opens
before I get a kernel error, which is more than I will ever need. But
when I run the same program as a regular user it gets to 60 before the
open fails. I spent about an hour in /etc/login.conf trying to get it to
go higher (at least 100), but I can't. How do I set it higher?
Also, does the openfiles=n line in /etc/login.conf refer to the maximum
number of files a single instance of a program can have open, or is it
the maximum number of files all the processes running under a particular
user can have open?
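
The test is basically along these lines, with a getrlimit() call added
to show what the process thinks its descriptor limit is (the file name
is arbitrary):

/*
 * Sketch of the open-files test: report the per-process descriptor
 * limit with getrlimit(), then open the same file until open() fails.
 * "/etc/motd" is just an arbitrary existing file.
 */
#include <sys/resource.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    struct rlimit rl;
    int i, fd;

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("RLIMIT_NOFILE: soft %lld, hard %lld\n",
               (long long)rl.rlim_cur, (long long)rl.rlim_max);

    for (i = 0; i < 1000; i++) {
        if ((fd = open("/etc/motd", O_RDONLY)) == -1) {
            printf("open #%d failed: %s\n", i + 1, strerror(errno));
            break;
        }
    }
    printf("opened %d descriptors\n", i);
    return 0;
}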


Thanks for any help, opinions or recommendations.
-- 
Geoffrey Robinson
grobin@accessv.com
Oakville, Ontario, Canada.


