Date: Mon, 2 Nov 2015 20:06:31 +1100 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: fs@freebsd.org
Subject: an easy (?) question on namecache sizing
Message-ID: <20151102193756.L1475@besplex.bde.org>
At least in old versions before cache_changesize() (should be nc_chsize()) existed, the name cache is supposed to have a size of about 2 * desiredvnodes, but its effective size seems to be only about desiredvnodes / 4. Why is this?

This shows up in du -s on a large directory like /usr. Whenever the directory has more than about desiredvnodes / 4 entries under it, the namecache thrashes. The number of cached vnodes is also limited to about desiredvnodes / 4. The problem might actually be in vnode caching. Indeed, if all the data in the directory is read using tar cf /dev/zero, then, at least if it all fits in the data and vnode caches, the vnode cache starts working and caches more than desiredvnodes / 4 files. The name cache then starts working and caches more than desiredvnodes / 4 files too.

The test directory had 6896 directories and 49643 files under it. With desiredvnodes = 123141, du -s caches only about 34000 vnodes. This is less than 49643, and the namecache thrashed with repeated du -s runs. The vnode cache probably thrashed too, but this was not so easy to see.

This was on an nfs client, where it gave a slowdown of 20-30 times. On the server, desiredvnodes was only 70240 and only about 17000 vnodes were cached, and the problem was not so evident (I think because the VMIO cache actually works when the data + metadata is not too large to fit in it; reconstituting vnodes from it wastes a lot of CPU but is not as slow as fetching the metadata again from a disk or network).

Bruce
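
Appended is a minimal user-level sketch (not part of the original mail) for watching the relevant counters while the du -s runs in another terminal. kern.maxvnodes (desiredvnodes) and vfs.numvnodes are standard sysctls; vfs.cache.numcache is assumed to be the name cache entry counter exported by vfs_cache.c on kernels of this vintage, so check the sysctl vfs.cache output first and substitute whatever counter the kernel actually exports. If the ceiling described above is real, numcache should plateau near desiredvnodes / 4 instead of climbing toward 2 * desiredvnodes.

/*
 * nccount.c: poll vnode and namecache counters once a second.
 * Illustrative sketch only; vfs.cache.numcache is an assumed
 * counter name -- substitute the one your kernel exports.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Counter widths differ across versions; accept 32- or 64-bit values. */
static int64_t
get_counter(const char *name)
{
	union { int32_t v32; int64_t v64; } v;
	size_t len = sizeof(v);

	memset(&v, 0, sizeof(v));
	if (sysctlbyname(name, &v, &len, NULL, 0) == -1)
		err(1, "sysctlbyname(%s)", name);
	return (len == sizeof(v.v32) ? v.v32 : v.v64);
}

int
main(void)
{
	for (;;) {
		int64_t maxvnodes = get_counter("kern.maxvnodes");
		int64_t numvnodes = get_counter("vfs.numvnodes");
		int64_t numcache = get_counter("vfs.cache.numcache");

		printf("maxvnodes %jd numvnodes %jd numcache %jd "
		    "numcache/maxvnodes %.2f\n",
		    (intmax_t)maxvnodes, (intmax_t)numvnodes,
		    (intmax_t)numcache, (double)numcache / maxvnodes);
		sleep(1);
	}
}

Build it with cc -o nccount nccount.c and leave it running while repeating the du -s; if the observation above holds, numcache stalls somewhere around desiredvnodes / 4 (about 30000 with desiredvnodes = 123141) rather than approaching 2 * desiredvnodes.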