Date:      Thu, 26 Apr 2007 14:33:47 -0700
From:      Bakul Shah <bakul@bitblocks.com>
To:        Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS: kmem_map too small panic again 
Message-ID:  <20070426213347.C3A7C5B58@mail.bitblocks.com>
In-Reply-To: Your message of "Thu, 26 Apr 2007 00:35:09 PDT."

An update:

I reverted sources to Apr 24 16:49 UTC, rebuilt the
kernel, and the bug went away -- I was able to restore 53GB
(840K+ inodes) and run a bunch of du passes with no problems.
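(For anyone wanting to reproduce the revert, a supfile pinned
to that date looks roughly like this -- host, base and prefix
are placeholders, not necessarily my actual settings:

    *default host=cvsup.FreeBSD.org
    *default base=/var/db
    *default prefix=/usr
    *default release=cvs tag=.
    *default date=2007.04.24.16.49.00
    src-all

)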

But the bug remains on a kernel with the latest zfs changes.
All I have to do is run du a couple of times in the restored
tree to crash the system.  There is no crash with multiple du
runs on a similarly sized UFS2 filesystem, only on ZFS.  This
is on an Athlon64 X2 Dual Core Processor 3800+ running in
32-bit mode.
The exact message is:

panic: kmem_malloc(98304): kmem_map too small: 335478784 total allocated
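
A possible mitigation (untested here) is to raise the kernel
VM limits and cap the ARC via /boot/loader.conf.  The values
below are illustrative only, and vfs.zfs.arc_max assumes this
ZFS snapshot honors that tunable:

    vm.kmem_size="512M"       # raise kmem_map; the i386 default is much smaller
    vm.kmem_size_max="512M"
    vfs.zfs.arc_max="128M"    # cap the ARC, if the tunable is present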

Am I the only one seeing this problem?  I will attempt to
grab a crash dump -- so far the machine seems to hang while
dumping after the panic.
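
(Dumps here are configured the standard way via rc.conf; the
device name below is a placeholder for the actual swap
partition:

    dumpdev="/dev/ad0s1b"    # placeholder; a swap device at least as big as RAM
    dumpdir="/var/crash"     # where savecore(8) writes the core at next boot

)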

> The system first panicked during a restore operation to zfs.
> Now I get this kmem_map too small panic by doing a couple of
> du runs in the partially restored directory.  This bug seems
> to have come back as of yesterday (about the time the FreeBSD
> namecache started to be used?) -- prior to that I restored
> many more files and did several make buildworlds on the same
> filesystem with no problems.
> 
> Sources were cvsupped about two hours back (as of approx
> 10:30pm PDT Apr 25).  I ran sysctl -a on a freshly booted
> machine, ran it again after one du, and diffed the two
> outputs.  The most glaring diffs are shown below (I can
> supply both full sysctl outputs).  vmstat shows the solaris
> pool is using about 127MB.
> 
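For concreteness, the comparison amounted to the following;
the mount point is a placeholder for the restored tree:

    sysctl -a > /tmp/sysctl.boot           # right after boot
    du -sk /pool/restored > /dev/null      # one walk of the restored tree
    sysctl -a > /tmp/sysctl.after-du
    diff /tmp/sysctl.boot /tmp/sysctl.after-du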
> -debug.numcachehv: 267
> -debug.numcache: 2166
> +debug.numcachehv: 14517
> +debug.numcache: 86100
> 
> -kstat.zfs.misc.arcstats.deleted: 16
> -kstat.zfs.misc.arcstats.recycle_miss: 0
> -kstat.zfs.misc.arcstats.mutex_miss: 0
> -kstat.zfs.misc.arcstats.evict_skip: 0
> -kstat.zfs.misc.arcstats.hash_elements: 92
> -kstat.zfs.misc.arcstats.hash_elements_max: 94
> -kstat.zfs.misc.arcstats.hash_collisions: 1
> -kstat.zfs.misc.arcstats.hash_chains: 0
> -kstat.zfs.misc.arcstats.hash_chain_max: 1
> -kstat.zfs.misc.arcstats.p: 83886080
> -kstat.zfs.misc.arcstats.c: 167772160
> +kstat.zfs.misc.arcstats.deleted: 50263
> +kstat.zfs.misc.arcstats.recycle_miss: 7242
> +kstat.zfs.misc.arcstats.mutex_miss: 6701
> +kstat.zfs.misc.arcstats.evict_skip: 9294733
> +kstat.zfs.misc.arcstats.hash_elements: 3514
> +kstat.zfs.misc.arcstats.hash_elements_max: 18588
> +kstat.zfs.misc.arcstats.hash_collisions: 9805
> +kstat.zfs.misc.arcstats.hash_chains: 160
> +kstat.zfs.misc.arcstats.hash_chain_max: 4
> +kstat.zfs.misc.arcstats.p: 15810418
> +kstat.zfs.misc.arcstats.c: 16777216
> 
> -kstat.zfs.misc.arcstats.size: 963072
> +kstat.zfs.misc.arcstats.size: 57576448
> 


