Date:      Thu, 03 May 2007 12:59:43 -0700
From:      Bakul Shah <bakul@bitblocks.com>
To:        Pawel Jakub Dawidek <pjd@FreeBSD.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS: kmem_map too small panic again 
Message-ID:  <20070503195943.47C3A5B2E@mail.bitblocks.com>
In-Reply-To: Your message of "Fri, 27 Apr 2007 16:26:06 +0200." <20070427142606.GK49413@garage.freebsd.pl> 

> On Fri, Apr 27, 2007 at 12:35:35AM +0200, Pawel Jakub Dawidek wrote:
> > On Thu, Apr 26, 2007 at 02:33:47PM -0700, Bakul Shah wrote:
> > > An update:
> > >
> > > I reverted sources to Apr 24 16:49 UTC and rebuilt the
> > > kernel and the bug goes away -- I was able to restore 53GB
> > > (840K+ inodes) and do a bunch of du with no problems.
> > >
> > > But the bug remains on a kernel with the latest zfs changes.
> > > All I have to do is run du a couple of times in the restored
> > > tree to crash the system.  There is no crash with multiple du
> > > on a similarly sized UFS2, only on ZFS.  This is on a
> > > Athlon64 X2 Dual Core Processor 3800+ running in 32 bit mode.
> > > The exact message is:
> > >
> > > panic: kmem_malloc(98304): kmem_map too small: 335478784 total allocated
> >
> > I can reproduce it and I'm working on it.
> 
> The problem is that kern.maxvnodes is tuned based on the vnode+UFS_inode
> size. In the case of ZFS, the size of vnode+ZFS_znode_dnode+dmu_buf is
> larger. As a work-around, just decrease kern.maxvnodes to something like
> 3/4 of the current value.

Pawel, thank you for this fix; I have been running -current
with it for a few days, but as others have reported, it does
not fix the problem, only makes it much less likely -- or
maybe there is another problem.  At least now I can get a
crash dump!
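
(For reference, the work-around is just a sysctl tweak; as root,
something like the following, where 3/4 is the factor suggested
above:

$ sysctl kern.maxvnodes=$(( $(sysctl -n kern.maxvnodes) * 3 / 4 ))

or put the reduced value in /etc/sysctl.conf so it survives a
reboot.)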

I have two filesystems in one pool with about 1.28M inodes
altogether.  Based on a few trials it seems necessary to walk
them both before triggering this panic (or maybe it is a
function of how many inodes are statted).
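
Roughly, this is the kind of thing I run to trigger it (the
mount points below are placeholders, not the real dataset
names):

$ for i in 1 2 3; do du -sx /tank/fs1 /tank/fs2 > /dev/null; done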

During testing I sent the output of vmstat -z to another
machine every second.  Nothing pops out as obviously wrong,
but here are some things worth looking at:

$ grep ^512 vmstream | firstlast
512:                      512,        0,      466,       38,
512:                      512,        0,   131491,     3669,
$ grep ' Slab' vmstream | firstlast
UMA Slabs:                 64,        0,     2516,       21,
UMA Slabs:                 64,        0,    23083,      222,
$ grep dmu vmstream | firstlast   
dmu_buf_impl_t:           140,        0,      912,       68,
dmu_buf_impl_t:           140,        0,   136034,     3938,
$ grep znode vmstream | firstlast
zfs_znode_cache:          236,        0,      261,       43,
zfs_znode_cache:          236,        0,    64900,     5596,

# firstlast displays the first and last line of its input.
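# Roughly, firstlast and the collection loop look like this (the
# transfer via nc and the host/port are placeholders for however
# the output actually gets shipped over):
#
#   firstlast:
#     awk 'NR == 1 { print } { last = $0 } END { if (NR > 1) print last }'
#
#   on the test box:
#     while :; do vmstat -z; sleep 1; done | nc otherhost 9999
#   on the other machine:
#     nc -l 9999 > vmstream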


