Date: Fri, 11 Jun 2004 15:22:06 +0000
From: Bosko Milekic <bmilekic@FreeBSD.org>
To: othermark
Cc: freebsd-current@freebsd.org
Message-ID: <20040611152206.GA11917@freefall.freebsd.org>
Subject: Re: Today's -current panics

othermark wrote:

> I get a very similar stack trace traversing through sosend() under heavy
> NFS load on a 1 GB machine.  Note the panic message here, and the
> peculiarity that previous incarnations of -current did not panic under
> similar load.  It is highly reproducible by running 'make installworld'
> over NFS with /usr/src and /usr/obj mounted.  The NFS server, running a
> vanilla GENERIC kernel, will always panic:
>
> [root@pippin root]$ panic: kmem_malloc(4096): kmem_map too small: 40894464 total allocated

Do you have the kern.ipc.nmbclusters boot-time tunable set to 0?  I just
noticed that if it is set to zero, kmem_map is not scaled up to
accommodate mbufs and clusters.

If that is the case, I recommend increasing the vm.kmem_size boot-time
tunable to ~300,000,000 or ~400,000,000 (bytes).  Right now your kmem_map
is far too small (it looks like only ~40 MB).  Be careful not to overdo
it, though, because you may then also have to increase the available KVA
(KVA_PAGES).

-Bosko
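
For illustration, a minimal /boot/loader.conf sketch of the tuning
described above.  The exact value is an example chosen inside the
suggested ~300-400 MB range, not taken from the original message, and
should be adjusted to the machine at hand:

    # /boot/loader.conf -- illustrative sketch only
    # Grow the kernel memory map; vm.kmem_size is in bytes.
    # 335544320 = 320 MB, inside the ~300,000,000-400,000,000 range suggested.
    vm.kmem_size="335544320"

    # Leaving kern.ipc.nmbclusters unset (or nonzero) lets the kernel scale
    # kmem_map for mbufs and clusters; setting it to 0 disables that scaling.
    #kern.ipc.nmbclusters="0"

If KVA_PAGES also needs to grow, note that it is a kernel configuration
option (set in the kernel config file and requiring a kernel rebuild),
not a loader tunable.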