From owner-freebsd-stable@FreeBSD.ORG Sun Jun 8 21:19:56 2003
Date: Sun, 8 Jun 2003 21:19:42 -0700
From: David Schultz <das@freebsd.org>
To: Masachika ISHIZUKA
Cc: stable@freebsd.org
Subject: Re: system slowdown - vnode related
Message-ID: <20030609041942.GA4029@HAL9000.homeunix.com>
In-Reply-To: <20030609.114033.74731601.ishizuka@ish.org>
References: <200305280102.LAA00949@lightning.itga.com.au> <20030609.114033.74731601.ishizuka@ish.org>
List-Id: Production branch of FreeBSD source code

On Mon, Jun 09, 2003, Masachika ISHIZUKA wrote:
> Hi, David-san.
> I have still vnodes problem in 4.8-stable with /sys/kern/vfs_subr.c
> 1.249.2.30.
>
> 310.locate of weekly cron make slow down or panic. Values of sysctl
> are shown as follows when they reached slow down.
> (1) #1 machine (Celeron 466 with 256 mega byte rams)
> % sysctl kern.maxvnodes
> kern.maxvnodes: 17979
> % sysctl vm.zone | grep VNODE
> VNODE: 192, 0, 18004, 122, 18004

This looks pretty normal to me for a quiescent system. Ordinarily I would actually suggest raising maxvnodes if you have lots of little files. Does the number of vnodes shoot up when 310.locate runs? Did you get a backtrace from the panics? Perhaps the VM page cache is still interfering...
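One way to answer the "does the vnode count shoot up" question would be to sample the zone stats while 310.locate runs. A minimal sketch (the sysctl names are the FreeBSD 4.x ones used in this thread; the sampled VNODE line is the one quoted above, and I'm assuming the usual 4.x `vm.zone` column order of name, entry size, limit, in-use, free, requests):

```shell
#!/bin/sh
# On the affected 4.8-STABLE box, something like this could be left running
# across the weekly cron window (FreeBSD-specific, shown as a comment):
#   while :; do date; sysctl vm.zone | grep VNODE; sleep 10; done
#
# Here we just parse one captured line to pull out the interesting columns.
line="VNODE: 192, 0, 18004, 122, 18004"   # sample from the report above
set -- $(printf '%s\n' "$line" | tr -d ',')
# $1=zone name, $2=entry size, $3=limit, $4=in-use, $5=free, $6=requests
echo "in-use vnodes: $4 (free: $5, entry size: $2 bytes)"
```

If the in-use column climbs toward (or past) kern.maxvnodes during the locate run and never drops back, that would point at vnode reclamation rather than the VM page cache.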