From owner-freebsd-i386@FreeBSD.ORG Tue Sep 26 21:40:28 2006
Date: Tue, 26 Sep 2006 21:40:27 GMT
Message-Id: <200609262140.k8QLeRTp095327@freefall.freebsd.org>
To: freebsd-i386@FreeBSD.org
From: Ed Maste
Subject: Re: i386/103664: kmem_map_too_small panic after about 7d uptime on 6.1-release

The following reply was made to PR i386/103664; it has been noted by GNATS.

From: Ed Maste
To: Igor Soumenkov <2igosha@gmail.com>
Cc: freebsd-gnats-submit@FreeBSD.org
Subject: Re: i386/103664: kmem_map_too_small panic after about 7d uptime on 6.1-release
Date: Tue, 26 Sep 2006 17:37:50 -0400

> I am running 6.1-RELEASE on a two-Xeon machine with 2GB RAM, ACPI and HT disabled. The system is running the SMP kernel and each week it panics with a "kmem_map too small" error.
You can look at vmstat -m output over time to see whether one particular allocation type grows steadily; if so, that points to the likely location of a leak. You can also run vmstat -m against a core file to examine the malloc statistics at the time of the panic.
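The periodic sampling above can be sketched as a small shell helper. This is only an illustration, not part of the original reply: `vmstat_m_diff` is a hypothetical name, and the script assumes that `vmstat -m` prints one allocation type per line with the type name in column 1 and the in-use count in column 2 (check vmstat(8) on your release; field positions can differ).

```shell
#!/bin/sh
# Hypothetical helper: compare two saved `vmstat -m` snapshots and
# report allocation types whose in-use count grew between them.
# Assumed column layout (may differ by FreeBSD release):
#   Type  InUse  MemUse  Requests ...
vmstat_m_diff() {
    # $1 = earlier snapshot file, $2 = later snapshot file
    awk 'NR==FNR { inuse[$1] = $2; next }          # remember first snapshot
         ($1 in inuse) && $2 > inuse[$1] {         # count grew in second
             printf "%-16s %8d -> %8d\n", $1, inuse[$1], $2
         }' "$1" "$2"
}
```

Typical use would be something like `vmstat -m > snap1`, wait an hour or a day, `vmstat -m > snap2`, then `vmstat_m_diff snap1 snap2`; a type that keeps appearing in the output across successive intervals is a leak candidate. For the post-panic case, vmstat's `-M`/`-N` options let it read the statistics from a crash dump and matching kernel image instead of the live system.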