From owner-freebsd-arch Sun Feb 24 19:26:08 2002
Message-Id: <200202250325.g1P3PVN9092431@silver.carrots.uucp.r.dl.itc.u-tokyo.ac.jp>
Date: Mon, 25 Feb 2002 12:25:30 +0900
From: Seigo Tanimura
To: Matthew Dillon
Cc: Seigo Tanimura, arch@FreeBSD.ORG
Subject: Re: reclaiming v_data of free vnodes
In-Reply-To: <200202242041.g1OKfXt95731@apollo.backplane.com>
References: <200202231556.g1NFu9N9040749@silver.carrots.uucp.r.dl.itc.u-tokyo.ac.jp> <200202242041.g1OKfXt95731@apollo.backplane.com>
Organization: Digital Library Research Division, Information Technology Centre, The University of Tokyo

On Sun, 24 Feb 2002 12:41:33 -0800 (PST), Matthew Dillon said:

Matthew> cache). 330,000 vnodes and/or inodes is pushing what a kernel
Matthew> with only 1G of KVM can handle.
Matthew> For these machines you may want
Matthew> to change the kernel start address from 0xc0000000 (1G of KVM) to
Matthew> 0x80000000 (2G of KVM). I forget exactly how that is done.

Increasing KVM is not likely to help. The panic message on Friday night was something like this:

kmem_malloc(256): kmem_map too small: (~200M) total allocated

in kmem_malloc() called by ffs_vget(). Expanding kmem_map to 512M may help me, but that scales the number of vnodes/inodes only up to about twice the present number.

Matthew> Did kern.maxvnodes auto-size to 330,000 or did you set it up
Matthew> there manually? Or is kern.maxvnodes set lower and it blew it out
Matthew> on its own due to load?

It is set automatically by the kernel.

-- Seigo Tanimura