Date: Wed, 16 Feb 2011 17:10:32 -0800
From: Jeremy Chadwick <freebsd@jdc.parodius.com>
To: Ivan Voras <ivoras@freebsd.org>
Cc: freebsd-fs@freebsd.org
Subject: Re: zfs directory listing
Message-ID: <20110217011032.GA15027@icarus.home.lan>
In-Reply-To: <ijhrmj$var$1@dough.gmane.org>
References: <AANLkTinwRumkvSn7wfh4a+eNJyFoFDyMMKjk7GOSLAXc@mail.gmail.com> <ijhrmj$var$1@dough.gmane.org>
On Thu, Feb 17, 2011 at 01:55:46AM +0100, Ivan Voras wrote:
> On 16/02/2011 23:52, Andrew Thompson wrote:
> >Hi,
> >
> >I have a zfs file system on 8.1-RELEASE amd64 which has a large number
> >of files in /var/spool/mqueue. loader.conf has vfs.zfs.arc_max=2G for
> >an 8G box.
> >
> >mqueue has a link count of 522824. I can not list the contents of this
> >directory; when I do, the number of read IOPS sits > 100 and it will
> >never complete.
>
> Your problem is probably not FreeBSD-specific and possibly not even
> ZFS-specific. That is a fairly large number of files in a directory
> for any file system; the rule of thumb is usually to start sharding
> as soon as the number of files gets even two orders of magnitude
> lower than what you have there.
>
> As others said, try "cd /var/spool/mqueue && find ." - the find
> utility just reads the directory; it doesn't try to gather the other
> metadata which "ls" uses, and it is usually one of the rare utilities
> that can work with gigantic directories (usually in the form
> "find . -delete" :) ).
>
> The reason for this is that filenames and file metadata are separate
> objects on the drives, and the drives need to seek between them to
> get both if they are not cached.

Simple version, to the OP: your mail server has an immense number of
queued mails in it. You need to find out why and put an end to it. I
also recommend you stop sendmail, "rm -r mqueue ; chown root:daemon
mqueue ; chmod 755 mqueue", then start sendmail again.

I'll also point out that things like "ls -l" and related methods take
significantly longer than something like "echo *", given the number of
stat() calls that have to be made.

Regarding the focus on ZFS in this situation: you may want to look at
vfs.numvnodes (sysctl) and, if it's approaching kern.maxvnodes,
increase kern.maxvnodes (via sysctl.conf). Be aware that more vnodes
mean more memory usage, and that this memory is separate from and
unrelated to the ARC.

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP 4BD6C0CB  |
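For the directory-listing advice above, a minimal sketch of the stat()-free
approaches being compared, assuming the default sendmail queue path:

    cd /var/spool/mqueue
    # find(1) only reads directory entries; no per-file stat() is made
    find . | wc -l
    # shell globbing also avoids stat(); "ls -l" would add one stat() per entry
    echo * | tr ' ' '\n' | head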
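A minimal sketch of the queue reset suggested above, assuming the stock
sendmail rc script and the default /var/spool/mqueue location; note that the
directory has to be recreated after the recursive remove, a step not spelled
out in the one-liner above:

    /etc/rc.d/sendmail stop
    cd /var/spool
    rm -rf mqueue
    mkdir mqueue                 # recreate the directory removed above
    chown root:daemon mqueue
    chmod 755 mqueue
    /etc/rc.d/sendmail start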
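And a sketch of the vnode check mentioned at the end; the new kern.maxvnodes
value is purely illustrative and depends on available memory:

    # compare current vnode usage against the limit
    sysctl vfs.numvnodes kern.maxvnodes
    # raise the limit at runtime (value is illustrative only)
    sysctl kern.maxvnodes=400000
    # make the change persistent across reboots
    echo 'kern.maxvnodes=400000' >> /etc/sysctl.conf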