From: Eric Anderson <anderson@centtech.com>
Date: Wed, 21 Apr 2004 15:09:55 -0500
To: Garance A Drosihn
cc: freebsd-current@freebsd.org
Subject: Re: Directories with 2million files
Message-ID: <4086D513.9010605@centtech.com>
References: <40867A5D.9010600@centtech.com>

Garance A Drosihn wrote:

> At 8:42 AM -0500 4/21/04, Eric Anderson wrote:
>
>> ... I'm not sure if there is a limit to this number, but at
>> least we know it works to 2million. I'm running 5.2.1-RELEASE.
>>
>> However, several tools seem to choke on that many files - mainly
>> ls and du. Find works just fine.
>> Here's what my directory looks like (from the parent):
>>
>> drwxr-xr-x  2 anderson anderson 50919936 Apr 21 08:25 data
>>
>> and when I cd into that directory, and do an ls:
>>
>> $ ls -al | wc -l
>> ls: fts_read: Cannot allocate memory
>> 0
>>
>> Watching memory usage, it goes up to about 515MB, and runs out
>> of memory (can't swap it), and then dies. (I only have 768MB in
>> this machine.)
>
> An `ls -al' is going to be doing a lot of work, most of which you
> probably do not care about. (Certainly not if you're just piping
> it to `wc'!) Depending on what you are looking for, an `ls -1Af'
> might work better. If you really do want the -l (lowercase L)
> instead of -1 (digit one), it *might* help to add the -h option.
> I probably should look at the source code to see if that's really
> true, but it's so much easier to just have you type in the command
> and see what happens...

Used 263MB before returning the correct number. It's functional,
but only if you have a lot of RAM.

> Another option is to use the `stat' command instead of `ls'.
> (I don't know if `stat' will work any better, I'm just saying
> it's another option you might want to try...) One advantage
> is that you'd have much better control over what information is
> printed.

I'm not sure how to use stat to get that same info. It's not so
much that I have to have this option; it's that I believe it should
work without gobbling hundreds of MBs of memory. Also, just for
information's sake:

>> du does the exact same thing.
>
> Just a plain `du'? If all you want is the total, did you
> try `du -s'? I would not expect any problem from `du -s'.

$ du -s
du: fts_read: Cannot allocate memory

>> I'd work on some patches, but I'm not worth much when it comes
>> to C/C++. If someone has some patches, or code to try, let me
>> know - I'd be more than willing to test, possibly even give out
>> an account on the machine.
> It is probably possible to make `ls' behave better in this
> situation, though I don't know how much of a special-case
> we would need to make it.

I suppose this is one of those "who needs files bigger than 2GB?"
things...

Eric

-- 
------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator   Centaur Technology
Today is the tomorrow you worried about yesterday.
------------------------------------------------------------------
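Picking up Garance's stat(1) suggestion, here is a rough sketch (not commands from the thread) of getting the file count and total size with find(1), which Eric reports does cope with the 2-million-entry directory where ls and du died in fts_read. The `-f '%z'` format is FreeBSD stat(1)'s st_size; the snippet probes for GNU coreutils' `-c '%s'` spelling first so it also runs elsewhere — the assumption is that one of the two flavors is installed.

```shell
#!/bin/sh
# Sketch (not from the thread): count and total-size a huge directory
# with find(1) instead of ls/du, which both died in fts_read here.
dir=${1:-.}    # e.g. the 2M-entry "data" directory from the thread

# find streams one pathname at a time to stdout, so memory use
# stays flat even with millions of entries:
find "$dir" -type f | wc -l

# Total bytes via stat(1), as Garance suggested. FreeBSD's stat(1)
# prints st_size with -f '%z'; GNU coreutils spells it -c '%s', so
# probe which flavor is installed (assumption: it is one of the two).
if stat -c '%s' /dev/null >/dev/null 2>&1; then fmt="-c %s"; else fmt="-f %z"; fi
find "$dir" -type f -print0 | xargs -0 stat $fmt | awk '{sum += $1} END {print sum}'
```

For a bare count, Garance's `ls -1Af | wc -l` should also be much cheaper than `ls -al`, since it skips the sort and the per-file detail formatting, though it still walks the directory via fts.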