From owner-freebsd-current@FreeBSD.ORG Wed Apr 21 08:11:17 2004
Message-ID: <40868F08.20301@centtech.com>
Date: Wed, 21 Apr 2004 10:11:04 -0500
From: Eric Anderson <anderson@centtech.com>
User-Agent: Mozilla Thunderbird 0.5 (X11/20040406)
To: Tim Robbins
Cc: freebsd-current@freebsd.org
References: <40867A5D.9010600@centtech.com> <20040421152233.GA23501@cat.robbins.dropbear.id.au>
In-Reply-To: <20040421152233.GA23501@cat.robbins.dropbear.id.au>
Subject: Re: Directories with 2million files
List-Id: Discussions about the use of FreeBSD-current

Tim Robbins wrote:
> On Wed, Apr 21, 2004 at 08:42:53AM -0500, Eric Anderson wrote:
>> First, let me say that I am impressed (but not shocked) - FreeBSD
>> quietly handled my building of a directory with 2055476 files in it.
>> I'm not sure if there is a limit to this number, but at least we know
>> it works to 2 million. I'm running 5.2.1-RELEASE.
>>
>> However, several tools seem to choke on that many files - mainly ls
>> and du. Find works just fine.
>> Here's what my directory looks like (from the parent):
>>
>> drwxr-xr-x  2 anderson anderson 50919936 Apr 21 08:25 data
>>
>> and when I cd into that directory and do an ls:
>>
>> $ ls -al | wc -l
>> ls: fts_read: Cannot allocate memory
>>        0
>
> The problem here is likely to be that ls is trying to store all the
> filenames in memory in order to sort them. Try using the -f option
> to disable sorting. If you really do need a sorted list of filenames,
> pipe the output through 'sort'.

Doing 'ls -f' works, but it still munches up about 260MB of RAM, which
works since I have enough, but otherwise would not. An 'ls -alf' does
not work (I assume because it is trying to sum the total bytes of all
files before printing the data). I just noticed that find also eats up
the same amount of memory before it prints the list.

This Perl script does it in about 2.5 seconds, with minimal memory:

    opendir(INDEX_PATH, "./") or die "opendir: $!";
    # defined() guards against a file literally named "0"
    # ending the loop early.
    while (defined($file = readdir(INDEX_PATH))) {
        $count++;
    }
    closedir(INDEX_PATH);
    print "$count\n";

Eric

--
------------------------------------------------------------------
Eric Anderson    Sr. Systems Administrator    Centaur Technology
Today is the tomorrow you worried about yesterday.
------------------------------------------------------------------
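[Archive note: a minimal sketch of the unsorted-listing workaround Tim
describes, run against a small scratch directory rather than the
2-million-file case; the paths here are made up for illustration.]

```shell
# Scratch directory standing in for the huge one.
dir=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$dir/file$i"; done

# -f disables sorting, so ls can stream entries out in readdir()
# order instead of buffering every filename in memory for the sort.
ls -f "$dir" | wc -l

# If a sorted listing is really needed, pipe through sort(1), which
# can spill to temporary files instead of holding it all in RAM.
ls -f "$dir" | sort | tail -5

rm -r "$dir"
```

Note the exact count printed varies by platform: some ls
implementations include "." and ".." when -f is given, others do not.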