From: Bakul Shah <bakul@bitblocks.com>
To: Pawel Jakub Dawidek
Cc: freebsd-fs@freebsd.org
Date: Thu, 26 Apr 2007 14:33:47 -0700
Subject: Re: ZFS: kmem_map too small panic again
In-reply-to: Your message of "Thu, 26 Apr 2007 00:35:09 PDT."
Message-Id: <20070426213347.C3A7C5B58@mail.bitblocks.com>

An update: I reverted sources to Apr 24 16:49 UTC and rebuilt the
kernel, and the bug goes away -- I was able to restore 53GB (840K+
inodes) and run a bunch of du with no problems.  But the bug remains
on a kernel with the latest ZFS changes: all I have to do is run du
a couple of times in the restored tree to crash the system.  There is
no crash with multiple du runs on a similarly sized UFS2 filesystem,
only on ZFS.  This is on an Athlon64 X2 Dual Core Processor 3800+
running in 32-bit mode.  The exact message is:

  panic: kmem_malloc(98304): kmem_map too small: 335478784 total allocated

Am I the only one seeing this problem?  I will attempt to grab a core
dump -- so far the machine seems to hang during the dump after the
panic.

> The system first paniced during a restore operation to zfs.
> Now I get this kmem_map too small panic by doing a couple of
> du in the partially restored dir.  This bug seems to have
> come back as of yesterday (about the time freebsd namecache
> started to be used?) -- prior to that I restored many more
> files and did several make buildworlds on the same filesystem
> with no problems.
>
> Sources were cvsupped about two hours back (as of approx
> 10:30pm PDT Apr 25).  I did a sysctl -a on a freshly booted
> machine and after one du and diffed them.  The most glaring
> diffs seem to be the one shown below (I can supply both
> sysctl outputs).  vmstat shows the solaris pool is using
> about 127MB.
>
> -debug.numcachehv: 267
> -debug.numcache: 2166
> +debug.numcachehv: 14517
> +debug.numcache: 86100
>
> -kstat.zfs.misc.arcstats.deleted: 16
> -kstat.zfs.misc.arcstats.recycle_miss: 0
> -kstat.zfs.misc.arcstats.mutex_miss: 0
> -kstat.zfs.misc.arcstats.evict_skip: 0
> -kstat.zfs.misc.arcstats.hash_elements: 92
> -kstat.zfs.misc.arcstats.hash_elements_max: 94
> -kstat.zfs.misc.arcstats.hash_collisions: 1
> -kstat.zfs.misc.arcstats.hash_chains: 0
> -kstat.zfs.misc.arcstats.hash_chain_max: 1
> -kstat.zfs.misc.arcstats.p: 83886080
> -kstat.zfs.misc.arcstats.c: 167772160
> +kstat.zfs.misc.arcstats.deleted: 50263
> +kstat.zfs.misc.arcstats.recycle_miss: 7242
> +kstat.zfs.misc.arcstats.mutex_miss: 6701
> +kstat.zfs.misc.arcstats.evict_skip: 9294733
> +kstat.zfs.misc.arcstats.hash_elements: 3514
> +kstat.zfs.misc.arcstats.hash_elements_max: 18588
> +kstat.zfs.misc.arcstats.hash_collisions: 9805
> +kstat.zfs.misc.arcstats.hash_chains: 160
> +kstat.zfs.misc.arcstats.hash_chain_max: 4
> +kstat.zfs.misc.arcstats.p: 15810418
> +kstat.zfs.misc.arcstats.c: 16777216
>
> -kstat.zfs.misc.arcstats.size: 963072
> +kstat.zfs.misc.arcstats.size: 57576448
>
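(For reference, the before/after comparison above was gathered roughly
along these lines; the restored tree path and snapshot file names are
only illustrative:)

  # snapshot sysctl state on the freshly booted machine
  sysctl -a > /tmp/sysctl.boot
  # walk the partially restored tree once or twice
  du -s /tank/restore > /dev/null
  # snapshot again and compare the two
  sysctl -a > /tmp/sysctl.after-du
  diff -u /tmp/sysctl.boot /tmp/sysctl.after-du | grep -E 'numcache|arcstats'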