Date:        Wed, 21 Apr 2004 18:17:15 +0200 (CEST)
From:        Oliver Fromme <olli@lurza.secnetix.de>
To:          freebsd-current@FreeBSD.ORG
Subject:     Re: Directories with 2million files
Message-ID:  <200404211617.i3LGHFN9046352@lurza.secnetix.de>
In-Reply-To: <20040421152233.GA23501@cat.robbins.dropbear.id.au>
Tim Robbins <tjr@freebsd.org> wrote:
> On Wed, Apr 21, 2004 at 08:42:53AM -0500, Eric Anderson wrote:
> > First, let me say that I am impressed (but not shocked) - FreeBSD
> > quietly handled my building of a directory with 2055476 files in it.
> > I'm not sure if there is a limit to this number, but at least we know
> > it works to 2 million.  I'm running 5.2.1-RELEASE.
> >
> > However, several tools seem to choke on that many files - mainly ls
> > and du.  Find works just fine.  Here's what my directory looks like
> > (from the parent):
> >
> > drwxr-xr-x  2 anderson  anderson  50919936 Apr 21 08:25 data
> >
> > and when I cd into that directory, and do an ls:
> >
> > $ ls -al | wc -l
> > ls: fts_read: Cannot allocate memory
> > 0
>
> The problem here is likely to be that ls is trying to store all the
> filenames in memory in order to sort them.  Try using the -f option
> to disable sorting.  If you really do need a sorted list of filenames,
> pipe the output through 'sort'.

I think it will still try to read everything into memory first, in
order to calculate the column widths for "ls -l" output.

I would try something like this:

$ ls -f | xargs ls -ld

Or, if that still fails:

$ find . -maxdepth 1 | xargs ls -ld

Regards
   Oliver

-- 
Oliver Fromme, secnetix GmbH & Co KG, Oettingenstr. 2, 80538 München

Any opinions expressed in this message may be personal to the author
and may not necessarily reflect the opinions of secnetix in any way.

Python is executable pseudocode.  Perl is executable line noise.
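
For reference, here is a minimal sketch (not code from the thread, only
assuming the POSIX opendir/readdir interface) of the streaming approach
that lets find(1) cope where ls(1) runs out of memory: each entry is
printed as soon as it is read, so memory use stays constant no matter
how many files the directory holds.

/*
 * Illustrative sketch: list a huge directory with constant memory,
 * in the same streaming style as find(1).  Compile with cc, pass the
 * directory as the first argument (defaults to ".").
 */
#include <dirent.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
	const char *path = (argc > 1) ? argv[1] : ".";
	DIR *dirp = opendir(path);
	struct dirent *dp;

	if (dirp == NULL) {
		perror(path);
		return (1);
	}
	/* Print each name as soon as readdir(3) returns it; nothing is
	 * buffered or sorted, so two million entries are no problem. */
	while ((dp = readdir(dirp)) != NULL)
		printf("%s\n", dp->d_name);
	closedir(dirp);
	return (0);
}

Its output can be piped through sort(1) or "xargs ls -ld" in the same
way as the work-arounds suggested above.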