Date: Wed, 21 Apr 2004 15:46:25 -0500
From: masta <diz@linuxpowered.com>
To: Garance A Drosihn <drosih@rpi.edu>
Cc: Eric Anderson <anderson@centtech.com>
Subject: Re: Directories with 2million files
Message-ID: <4086DDA1.3080401@linuxpowered.com>
In-Reply-To: <p0602041abcac87487694@[128.113.24.47]>
References: <40867A5D.9010600@centtech.com> <p06020415bcac7bcec60c@[128.113.24.47]> <4086D513.9010605@centtech.com> <p0602041abcac87487694@[128.113.24.47]>
Garance A Drosihn wrote:
> At 3:09 PM -0500 4/21/04, Eric Anderson wrote:
>
>> Garance A Drosihn wrote:
>>
>> I suppose this is one of those "who needs files bigger than 2gb?"
>> things..
>
> Perhaps, but as a general rule we'd like our system utilities to
> at least *work* in extreme situations. This is something I'd
> love to dig into if I had the time, but I'm not sure I have the
> time right now.

I'm not sure how we can improve this situation, considering that an
`ls -l` is forced to stat every file and store that info until the
time comes to dump it to the tty for the human operator. The problem
seems somewhat geometric, and unfixable unless you want to find a way
to page out the stat information of each file to a dump file of some
sort, then cat that info back to the operator upon conclusion of the
main loop. Even then, listing 2 million files will be excessive just
storing the file names for display.

-Jon
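For what it's worth, here is a minimal sketch in plain C of the streaming
alternative described above: stat each entry and print it immediately
instead of buffering millions of records. This is not the actual ls(1)
sources, and the output format is made up for illustration; the point is
only that nothing is retained per entry.

/*
 * Minimal sketch: walk a directory, stat each entry, and print one
 * line per file straight to stdout. Memory use stays constant no
 * matter how many files the directory holds.
 */
#include <sys/stat.h>
#include <dirent.h>
#include <stdint.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
	const char *dirpath = (argc > 1) ? argv[1] : ".";
	DIR *dp;
	struct dirent *de;
	struct stat sb;
	char path[4096];

	if ((dp = opendir(dirpath)) == NULL) {
		perror("opendir");
		return (1);
	}
	while ((de = readdir(dp)) != NULL) {
		snprintf(path, sizeof(path), "%s/%s", dirpath, de->d_name);
		if (stat(path, &sb) == -1) {
			perror(path);
			continue;
		}
		/* Print immediately; nothing is stored per entry. */
		printf("%8ju %12jd %s\n",
		    (uintmax_t)sb.st_ino, (intmax_t)sb.st_size, de->d_name);
	}
	closedir(dp);
	return (0);
}

The trade-off is that sorting and column alignment are lost, since those
require seeing every entry before printing the first line, which is
exactly why `ls -l` ends up holding everything in memory.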