From owner-freebsd-fs Thu Apr 24 13:57:55 1997
Return-Path:
Received: (from root@localhost)
	by hub.freebsd.org (8.8.5/8.8.5) id NAA00821
	for fs-outgoing; Thu, 24 Apr 1997 13:57:55 -0700 (PDT)
Received: from root.com (implode.root.com [198.145.90.17])
	by hub.freebsd.org (8.8.5/8.8.5) with ESMTP id NAA00814
	for ; Thu, 24 Apr 1997 13:57:53 -0700 (PDT)
Received: from localhost (localhost [127.0.0.1])
	by root.com (8.8.5/8.6.5) with SMTP id NAA11021;
	Thu, 24 Apr 1997 13:59:30 -0700 (PDT)
Message-Id: <199704242059.NAA11021@root.com>
X-Authentication-Warning: implode.root.com: localhost [127.0.0.1] didn't use HELO protocol
To: Poul-Henning Kamp
cc: fs@freebsd.org
Subject: Re: the namei cache...
In-reply-to: Your message of "Thu, 24 Apr 1997 22:38:54 +0200." <1420.861914334@critter>
From: David Greenman
Reply-To: dg@root.com
Date: Thu, 24 Apr 1997 13:59:29 -0700
Sender: owner-fs@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

>This is what I thought, but apparently not so. Part of the problem seems
>to be that there are multiple names pointing at the same directory ("foo"
>+ N * "..") and that depletes your name cache. With the current design it
>should probably be 1.5 to 2 times bigger than desiredvnodes.
>
>I'm very reluctant to increase it when entries cost 64 bytes each, and
>since the data seem to indicate that 10% of the entries are stale (but we
>don't know how to find them), we keep recycling valid entries instead.
>
>Another thing that bothers me is the size. The reason for the current
>size of 64 is the way malloc works. In reality we would get very close
>to the same efficiency from 48 bytes per entry. I may look into changing
>the way we allocate them. It would buy us 33% more entries in the same
>space.
>
>David:
>	Can I get you to try one (mostly harmless) experiment on
>	wcarchive? In vfs_cache.c, where it checks the number of
>	namecache entries against "desiredvnodes", could you try using
>	2*desiredvnodes (or something similar) instead, and measure
>	the difference in cache hit rate?

   I've increased kern.maxvnodes, which I think should have the same effect.
This actually made things worse - a few percent lower cache hit rate. Very
odd. It might have just been a statistical anomaly. In any case, it
definitely didn't improve the hit rate.

-DG

David Greenman
Core-team/Principal Architect, The FreeBSD Project
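
For readers following the thread, the experiment suggested above amounts to
raising the cap at which the namecache stops growing and starts recycling
entries. The stand-alone sketch below only models that cap check; it is not
the actual vfs_cache.c code, and the desiredvnodes and numcache values are
made-up examples (the real desiredvnodes is derived from maxusers at boot):

	#include <stdio.h>

	static int desiredvnodes = 4458;	/* hypothetical example value */
	static int numcache = 5000;		/* current number of namecache entries */

	/* Return 1 if a new entry may be added, 0 if an old one must be recycled. */
	static int
	may_add_entry(int scale)
	{
		return (numcache < scale * desiredvnodes);
	}

	int
	main(void)
	{
		printf("cap = desiredvnodes:   %s\n",
		    may_add_entry(1) ? "add new entry" : "recycle LRU entry");
		printf("cap = 2*desiredvnodes: %s\n",
		    may_add_entry(2) ? "add new entry" : "recycle LRU entry");
		return (0);
	}

With the cap at desiredvnodes the cache in this example is already full and
every new name evicts a valid entry; doubling the cap is what the proposed
experiment changes.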
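
Separately, the 33% figure in the sizing discussion follows from a
power-of-two allocator rounding a 48-byte request up to a 64-byte chunk, so
packing entries at their true size fits a third more of them into the same
memory. The snippet below is a simplified model of that rounding, not the
kernel allocator, and the 1 MB pool size is just an illustrative assumption:

	#include <stdio.h>

	/* Round a small request up to the next power of two, as a
	 * power-of-two bucket allocator would.  Simplified model only. */
	static size_t
	bucket_size(size_t n)
	{
		size_t b = 16;

		while (b < n)
			b <<= 1;
		return (b);
	}

	int
	main(void)
	{
		size_t pool = 1024 * 1024;	/* assume 1MB spent on namecache entries */

		printf("entries at %zu bytes (rounded up): %zu\n",
		    bucket_size(48), pool / bucket_size(48));
		printf("entries at exactly 48 bytes:       %zu\n", pool / 48);
		return (0);
	}

For a 1 MB pool this gives 16384 entries at 64 bytes versus 21845 at 48
bytes, i.e. roughly a third more entries in the same space.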