Date: Thu, 22 Apr 2004 11:27:15 -0500
From: Eric Anderson <anderson@centtech.com>
To: freebsd-current@freebsd.org
Subject: Re: Directories with 2million files
Message-ID: <4087F263.2000609@centtech.com>
In-Reply-To: <20040422150120.GB78422@dragon.nuxi.com>
References: <40867A5D.9010600@centtech.com> <20040421152233.GA23501@cat.robbins.dropbear.id.au> <40868F08.20301@centtech.com> <20040422150120.GB78422@dragon.nuxi.com>
David O'Brien wrote:

>On Wed, Apr 21, 2004 at 10:11:04AM -0500, Eric Anderson wrote:
>
>>Doing 'ls -f' works, but still manages to munch up about 260MB of RAM,
>>which works since I have enough, but otherwise would not.
>>
>
>It used 260MB of VM, not physical RAM. Even with less in your machine, it
>would have worked fine -- no one is going to have less than that much
>virtual memory (i.e., swap) if they run Netscape on the same machine.
>

Ok - here's the snippet from 'top':

Mem: 268M Active, 147M Inact, 155M Wired, 32M Cache, 86M Buf, 144M Free
Swap: 1024M Total, 2356K Used, 1022M Free

  PID USERNAME  PRI NICE   SIZE    RES STATE    TIME   WCPU    CPU COMMAND
36102 anderson  132    0   263M   263M RUN      0:02 68.63%  9.57% ls
36103 anderson  119    0  1180K   560K RUN      0:00  5.60%  0.78% wc

However, I'm not sure about the Netscape comment - I don't really know
what you are referring to, but I'd guess a person with 2 million files in
one directory isn't going to be running Netscape on that machine anyhow.

Anyway, I believe the real issue is du. Not being able to du a directory
with 2 million files seems like a bad thing.

Eric

--
------------------------------------------------------------------
Eric Anderson     Sr. Systems Administrator     Centaur Technology
Today is the tomorrow you worried about yesterday.
------------------------------------------------------------------
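[For context on the memory figures above: a minimal sketch of a streaming
directory scan using plain readdir(3) and lstat(2), one entry at a time, so
memory use stays flat no matter how many files the directory holds. This is
not the FreeBSD ls or du source, just an illustration of the alternative
approach; it handles a single directory only, with no recursion, and the
path argument is whatever directory you point it at.]

    /* scanone.c: count files and total bytes in one directory,
     * processing one entry at a time instead of buffering them all. */
    #include <sys/stat.h>
    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(int argc, char *argv[])
    {
            const char *path = (argc > 1) ? argv[1] : ".";
            DIR *dp = opendir(path);
            struct dirent *de;
            struct stat sb;
            char buf[PATH_MAX];
            unsigned long long nfiles = 0, bytes = 0;

            if (dp == NULL) {
                    perror(path);
                    return (1);
            }
            /* One entry at a time: nothing accumulates in memory. */
            while ((de = readdir(dp)) != NULL) {
                    if (strcmp(de->d_name, ".") == 0 ||
                        strcmp(de->d_name, "..") == 0)
                            continue;
                    snprintf(buf, sizeof(buf), "%s/%s", path, de->d_name);
                    if (lstat(buf, &sb) == 0) {
                            nfiles++;
                            bytes += (unsigned long long)sb.st_size;
                    }
            }
            closedir(dp);
            printf("%llu files, %llu bytes\n", nfiles, bytes);
            return (0);
    }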