From owner-freebsd-fs@FreeBSD.ORG Thu Apr 26 07:35:10 2007
From: Bakul Shah <bakul@bitblocks.com>
To: Pawel Jakub Dawidek
Cc: freebsd-fs@freebsd.org
Date: Thu, 26 Apr 2007 00:35:09 -0700 (PDT)
Message-Id: <20070426073509.804E05B29@mail.bitblocks.com>
Subject: ZFS: kmem_map too small panic again

The system first panicked during a restore operation to zfs.  Now I get
this "kmem_map too small" panic by running a couple of du commands in
the partially restored dir.  This bug seems to have come back as of
yesterday (about the time the FreeBSD namecache started to be used?) --
prior to that I restored many more files and did several make
buildworlds on the same filesystem with no problems.  Sources were
cvsupped about two hours back (as of approx 10:30pm PDT Apr 25).

I ran sysctl -a on a freshly booted machine and again after one du,
then diffed the two outputs.  The most glaring diffs are shown below
(I can supply both sysctl outputs).  vmstat shows the solaris pool is
using about 127MB.

-debug.numcachehv: 267
-debug.numcache: 2166
+debug.numcachehv: 14517
+debug.numcache: 86100

-kstat.zfs.misc.arcstats.deleted: 16
-kstat.zfs.misc.arcstats.recycle_miss: 0
-kstat.zfs.misc.arcstats.mutex_miss: 0
-kstat.zfs.misc.arcstats.evict_skip: 0
-kstat.zfs.misc.arcstats.hash_elements: 92
-kstat.zfs.misc.arcstats.hash_elements_max: 94
-kstat.zfs.misc.arcstats.hash_collisions: 1
-kstat.zfs.misc.arcstats.hash_chains: 0
-kstat.zfs.misc.arcstats.hash_chain_max: 1
-kstat.zfs.misc.arcstats.p: 83886080
-kstat.zfs.misc.arcstats.c: 167772160
+kstat.zfs.misc.arcstats.deleted: 50263
+kstat.zfs.misc.arcstats.recycle_miss: 7242
+kstat.zfs.misc.arcstats.mutex_miss: 6701
+kstat.zfs.misc.arcstats.evict_skip: 9294733
+kstat.zfs.misc.arcstats.hash_elements: 3514
+kstat.zfs.misc.arcstats.hash_elements_max: 18588
+kstat.zfs.misc.arcstats.hash_collisions: 9805
+kstat.zfs.misc.arcstats.hash_chains: 160
+kstat.zfs.misc.arcstats.hash_chain_max: 4
+kstat.zfs.misc.arcstats.p: 15810418
+kstat.zfs.misc.arcstats.c: 16777216

-kstat.zfs.misc.arcstats.size: 963072
+kstat.zfs.misc.arcstats.size: 57576448
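
For reference, the comparison above was produced with something along
these lines -- a sketch only; the scratch file names and the
/tank/restored path are made up for illustration:

    # capture kernel state on the freshly booted machine
    sysctl -a > /tmp/sysctl.boot
    # walk the partially restored directory to trigger the growth
    du /tank/restored > /dev/null
    # capture again and compare; the namecache and ARC counters
    # show the biggest movement
    sysctl -a > /tmp/sysctl.after-du
    diff /tmp/sysctl.boot /tmp/sysctl.after-du | grep -E 'numcache|arcstats'
    # kernel memory charged to the ZFS port's "solaris" malloc type
    vmstat -m | grep solaris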