From: John Baldwin <jhb@freebsd.org>
To: freebsd-arch@freebsd.org, alc@freebsd.org
Cc: Matthew Fleming, Andriy Gapon
Date: Tue, 27 Jul 2010 09:35:52 -0400
Message-Id: <201007270935.52082.jhb@freebsd.org>
References: <4C4DB2B8.9080404@freebsd.org> <4C4DD1AA.3050906@freebsd.org>
Subject: Re: amd64: change VM_KMEM_SIZE_SCALE to 1?
On Monday, July 26, 2010 3:30:59 pm Alan Cox wrote:
> As far as eliminating or reducing the manual tuning that many ZFS users do,
> I would love to see someone tackle the overly conservative hard limit that
> we place on the number of vnode structures.  The current hard limit was put
> in place when we had just introduced mutexes into many structures and a
> mutex was much larger than it is today.

I have a strawman of that (relative to 7).  It simply adjusts the hardcoded
maximum to instead be a function of the amount of physical memory.

Index: vfs_subr.c
===================================================================
--- vfs_subr.c	(revision 210934)
+++ vfs_subr.c	(working copy)
@@ -288,6 +288,7 @@
 static void
 vntblinit(void *dummy __unused)
 {
+	int vnodes;
 
 	/*
 	 * Desiredvnodes is a function of the physical memory size and
@@ -299,10 +300,19 @@
 	desiredvnodes = min(maxproc + cnt.v_page_count / 4, 2 * vm_kmem_size /
 	    (5 * (sizeof(struct vm_object) + sizeof(struct vnode))));
 	if (desiredvnodes > MAXVNODES_MAX) {
+
+		/*
+		 * If there is a lot of physical memory, allow the cap
+		 * on vnodes to expand to using a little under 1% of
+		 * available RAM.
+		 */
+		vnodes = max(MAXVNODES_MAX, cnt.v_page_count * (PAGE_SIZE /
+		    128) / (sizeof(struct vm_object) + sizeof(struct vnode)));
+		KASSERT(vnodes < desiredvnodes, ("capped vnodes too big"));
 		if (bootverbose)
 			printf("Reducing kern.maxvnodes %d -> %d\n",
-			    desiredvnodes, MAXVNODES_MAX);
-		desiredvnodes = MAXVNODES_MAX;
+			    desiredvnodes, vnodes);
+		desiredvnodes = vnodes;
 	}
 	wantfreevnodes = desiredvnodes / 4;
 	mtx_init(&mntid_mtx, "mntid", NULL, MTX_DEF);

-- 
John Baldwin