Date: Mon, 4 Apr 2005 17:48:47 -0500 (CDT)
From: Mike Silbersack <silby@silby.com>
To: Jeff Roberson
cc: cvs-src@FreeBSD.org, Alfred Perlstein, cvs-all@FreeBSD.org, src-committers@FreeBSD.org
Subject: Re: cvs commit: src/sys/kern vfs_subr.c
Message-ID: <20050404174244.W922@odysseus.silby.com>
In-Reply-To: <20050404173257.R54623@mail.chesapeake.net>
References: <200504041143.j34Bhjar031386@repoman.freebsd.org> <20050404173257.R54623@mail.chesapeake.net>
List-Id: CVS commit messages for the src tree

On Mon, 4 Apr 2005, Jeff Roberson wrote:

> Well, the vnlruproc will try to vgone vnodes when we reach 9/10th of our
> limit. However, it skips directories that have valid children. Perhaps
> it shouldn't. I think that we need to be able to fail from getnewvnode(),
> which we weren't doing before. We should also try to find more ways to
> deal with resource starvation to make this failure less likely, but there
> will always be cases where it must happen.
If I'm not mistaken, the biggest consumer of vnodes on my (desktop) system
is locate.updatedb - it takes my numvnodes from ~5000 to ~28000, and
desiredvnodes is 34177 here. Is there some sort of MRU scheme that could
preemptively free these useless vnode allocations? I know that doesn't
address how to handle real OOM situations, but perhaps reducing the number
of vnodes allocated for useless purposes would help the overall situation.

Mike "Silby" Silbersack