Date: Thu, 20 Jan 2005 20:45:52 +1100
From: Peter Jeremy <PeterJeremy@optushome.com.au>
To: Phillip Salzman <phill@sysctl.net>
Cc: stable@freebsd.org
Subject: Re: Very large directory
Message-ID: <20050120094551.GK79646@cirb503493.alcatel.com.au>
In-Reply-To: <00b001c4fea0$7533d490$6745a8c0@MESE>
References: <00b001c4fea0$7533d490$6745a8c0@MESE>
On Wed, 2005-Jan-19 21:30:53 -0600, Phillip Salzman wrote:
>They've been running for a little while now - and recently we've noticed a
>lot of disk space disappearing.  Shortly after that, a simple du into our
>/var/spool returned a not so nice error:
>
>    du: fts_read: Cannot allocate memory
>
>No matter what command I run on that directory, I just don't seem to have
>enough available resources to show the files, let alone delete them (echo *,
>ls, find, rm -rf, etc.)

I suspect you will need to write something that uses readdir(3) to scan
the offending directory and delete (or otherwise handle) the files one by
one.  Skeleton code (in Perl) would look like:

    chdir $some_dir or die "Can't cd to $some_dir: $!";
    opendir(DIR, ".") or die "Can't opendir: $!";
    # defined() guard: a file literally named "0" would otherwise end the loop
    while (defined(my $file = readdir(DIR))) {
        next if ($file eq '.' || $file eq '..');
        next if (this_file_is_still_needed($file));
        unlink $file or warn "Unable to delete $file: $!";
    }
    closedir(DIR);

If you've reached the point where you can't even read the entire directory
listing into user memory, expect the cleanup to take quite a while.

Once you've finished the cleanup, you should confirm that the directory
itself has shrunk back to a sensible size.  If it hasn't, you will need to
re-create the directory and move the remaining files into the new one.

-- 
Peter Jeremy
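[Not part of the original mail: the same streaming approach sketched in C, since readdir(3) is ultimately a C interface.  The scratch directory name and the 100 stand-in files are invented for the demonstration; memory use stays constant no matter how many entries the directory holds, which is the whole point versus fts(3)-based tools like du and rm -rf.]

```c
#include <dirent.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *dir = "scratch_spool";      /* hypothetical scratch dir */

    if (mkdir(dir, 0755) == -1 && errno != EEXIST) {
        perror("mkdir");
        return 1;
    }

    /* Create 100 files to stand in for the runaway spool entries. */
    for (int i = 0; i < 100; i++) {
        char path[64];
        snprintf(path, sizeof path, "%s/file%03d", dir, i);
        int fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd == -1) { perror("open"); return 1; }
        close(fd);
    }

    if (chdir(dir) == -1) { perror("chdir"); return 1; }
    DIR *dp = opendir(".");
    if (dp == NULL) { perror("opendir"); return 1; }

    /* Stream the directory one entry at a time.  Unlinking during
     * readdir() may hide later entries on some filesystems, so rewind
     * and repeat until a full pass deletes nothing. */
    long total = 0, pass;
    struct dirent *de;
    do {
        pass = 0;
        rewinddir(dp);
        while ((de = readdir(dp)) != NULL) {
            if (strcmp(de->d_name, ".") == 0 ||
                strcmp(de->d_name, "..") == 0)
                continue;
            if (unlink(de->d_name) == 0)
                pass++;
        }
        total += pass;
    } while (pass > 0);

    closedir(dp);
    printf("deleted %ld entries\n", total);  /* prints: deleted 100 entries */
    return 0;
}
```

In a real cleanup you would add a keep/delete test before the unlink(), as in the Perl skeleton above.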
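[Not part of the original mail: a sketch of the final re-create step.  The demo_spool paths are placeholders, not anything from the thread; the glob-based mv is only safe after the cleanup pass, when few enough files remain to fit within the argument-length limit.]

```shell
#!/bin/sh
# Recreate a bloated directory so its on-disk size shrinks back to normal.
set -e
old=demo_spool
new=demo_spool.new                      # stand-ins for the real spool path
mkdir -p "$old"
touch "$old/keep1" "$old/keep2"         # the few files that survived cleanup
mkdir "$new"
mv "$old"/* "$new"/                     # few files left, so no ARG_MAX risk
rmdir "$old"                            # only succeeds if truly empty
mv "$new" "$old"
```

The rmdir acts as a safety check: if anything was missed, it fails and the old directory is left in place.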