Date: Wed, 21 Apr 2004 16:06:14 -0500
From: Dan Nelson <dnelson@allantgroup.com>
To: Garance A Drosihn <drosih@rpi.edu>
Cc: Eric Anderson <anderson@centtech.com>
Subject: Re: Directories with 2million files
Message-ID: <20040421210613.GC61380@dan.emsphone.com>
In-Reply-To: <p0602041abcac87487694@[128.113.24.47]>
References: <40867A5D.9010600@centtech.com> <p06020415bcac7bcec60c@[128.113.24.47]> <4086D513.9010605@centtech.com> <p0602041abcac87487694@[128.113.24.47]>
In the last episode (Apr 21), Garance A Drosihn said:
> At 3:09 PM -0500 4/21/04, Eric Anderson wrote:
> > $ du -s
> > du: fts_read: Cannot allocate memory
>
> Huh.  Well, that seems pretty broken...

The only allocation du does is for its hardlink cache, and it only
stores inodes with a link count >1 in it, so no amount of regular files
should make a difference.  I think it's the fts code that's at fault.
See the fts_build function in src/lib/libc/gen/fts.c:

	/*
	 * This is the tricky part -- do not casually change *anything* in
	 * here.  The idea is to build the linked list of entries that are
	 * used by fts_children and fts_read.  There are lots of special
	 * cases.

I know building the list is required for fts_children(), but I don't
know how feasible it would be to rewrite it so a plain
fts_open()/fts_read() loop doesn't create the list internally.

-- 
	Dan Nelson
	dnelson@allantgroup.com