Date:      Sat, 24 Apr 2004 10:12:31 +1000
From:      Peter Jeremy <PeterJeremy@optushome.com.au>
To:        Eric Anderson <anderson@centtech.com>
Cc:        freebsd-current@freebsd.org
Subject:   Re: Directories with 2million files
Message-ID:  <20040424001231.GH53327@cirb503493.alcatel.com.au>
In-Reply-To: <408919BA.5070702@centtech.com>
References:  <40867A5D.9010600@centtech.com> <40887100.3040606@kientzle.com> <408919BA.5070702@centtech.com>

On Fri, Apr 23, 2004 at 08:27:22AM -0500, Eric Anderson wrote:
>Resource limits (current):
> datasize           524288 kb
>
>Ouch!  That seems pretty low to me.  1gb would be closer to reasonable 
>if you ask me, but I'm nobody, so take it with a grain of salt.

Why do you feel this is low?  The intent of this limit is to stop a
runaway process from eating all of RAM and swap.  It is probably a
reasonable default for a workstation (X, KDE, Mozilla & OpenOffice
are each unlikely to exceed 512MB during normal use) or a server.
People with atypical requirements will need to do some tuning.
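
To make the mechanism concrete, here is a minimal sketch (not taken
from any real tool; the 64MB figure is arbitrary) of a process that
imposes the limit on itself and then runs away.  It assumes malloc()
draws its pages from the brk-managed data segment, as phkmalloc does;
an allocator that uses mmap() would not be constrained by datasize:

/*
 * Once RLIMIT_DATA is reached, further sbrk()-backed allocations
 * fail with ENOMEM instead of the process dragging the whole
 * machine into swap.
 */
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
	size_t total = 0;

	if (setrlimit(RLIMIT_DATA, &rl) != 0) {
		perror("setrlimit");
		return (1);
	}
	/* Runaway loop: grab 1MB chunks until the limit bites. */
	while (malloc(1024 * 1024) != NULL)
		total += 1024 * 1024;
	printf("malloc failed after ~%zu MB\n", total / (1024 * 1024));
	return (0);
}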

I agree that the default limit means you can't run 'ls' on a
directory with 2e6 files, but that isn't a typical requirement.

Upping the default limit to 1GB increases the risk that a runaway process
will make the machine unusable (think how your machine with 768MB RAM
would behave if you increased datasize to 1GB and tried to run ls on a
directory with just under 4e6 files).
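
(Back-of-envelope, extrapolating from the figures earlier in this
thread: if 'ls' on 2e6 files blows through the 512MB datasize limit,
then at roughly linear scaling just under 4e6 files needs just under
1GB, comfortably more than 768MB of physical RAM, so ls alone would
push the machine deep into swap.)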

As for ls(1), its theoretical memory requirement for 'ls -lsio' is
on the order of 32 bytes per entry plus the size of the directory
itself.  It should be reasonably easy to remove the need to store
anything if you don't require sorting or column alignment, but
beyond that the code complexity starts to increase significantly.
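
For the no-sort case, the streaming approach would look roughly like
the sketch below.  This is illustrative only, not ls(1)'s actual
code ('ls -f', which skips sorting, behaves much like this):

/*
 * Stream directory entries straight to stdout without storing them,
 * so memory use stays O(1) in the number of entries.
 */
#include <dirent.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
	DIR *dirp;
	struct dirent *dp;

	if (argc != 2) {
		fprintf(stderr, "usage: %s directory\n", argv[0]);
		return (1);
	}
	if ((dirp = opendir(argv[1])) == NULL) {
		perror(argv[1]);
		return (1);
	}
	/* One entry in memory at a time, regardless of directory size. */
	while ((dp = readdir(dirp)) != NULL)
		puts(dp->d_name);
	closedir(dirp);
	return (0);
}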

-- 
Peter Jeremy


