Date: Tue, 15 Jun 2010 22:34:20 -0400 (EDT)
From: Charles Sprickman <spork@bway.net>
To: freebsd-fs@freebsd.org
Subject: zfs panic "evicting znode"

Howdy,

I have a box running 8.0-RELEASE that recently started panicking every few
hours with the following message:

panic: evicting znode 0xa1eafe80
cpuid = 0
Uptime: 30m56s
Physical memory: 2034 MB
Dumping 569 MB (counts down to 458, then the box freezes hard)

The dump doesn't finish, so there's nothing for savecore to grab.

It's a very basic zfs config - two SCSI drives in a mirror:

[root@h21 /usr/local/etc/pdns]# zpool status
  pool: zroot
 state: ONLINE
 scrub: scrub completed after 0h5m with 0 errors on Tue Jun 15 21:32:11 2010
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0

errors: No known data errors

This is running on an older dual-Xeon (1.8GHz/32-bit) Supermicro 1U server
w/2GB of RAM.
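In case it helps to capture a usable dump on the next panic, this is roughly the standard setup - a sketch only, assuming swap is large enough to hold the dump (the swap device name below is a placeholder, not taken from this box):

```shell
# /etc/rc.conf - have the kernel dump to swap and let savecore
# collect the dump into /var/crash at the next boot
dumpdev="AUTO"          # AUTO picks the configured swap device
dumpdir="/var/crash"    # where savecore writes the core

# To set the dump device right now without rebooting
# (replace /dev/gpt/swap0 with the real swap device - a placeholder here):
# dumpon /dev/gpt/swap0
```

Whether savecore can recover anything still depends on the dump completing, which is the part that's failing here.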
The following zfs tunables are set in loader.conf:

[root@h21 /usr/local/etc/pdns]# cat /boot/loader.conf
zfs_load="YES"
vm.kmem_size_max="1000M"
vm.kmem_size="1000M"
vfs.zfs.arc_max="400M"
vfs.root.mountfrom="zfs:zroot"

Google is showing me nothing on this panic except for hits on the source
code that actually contains the panic message. Any hints as to what this
means?

Here's zdb output:

[root@h21 /usr/local/adm/bin]# zdb zroot
    version=13
    name='zroot'
    state=0
    txg=267691
    pool_guid=12059945251392529754
    hostid=1898595607
    hostname='h21.biglist.com'
    vdev_tree
        type='root'
        id=0
        guid=12059945251392529754
        children[0]
                type='mirror'
                id=0
                guid=14682767316808875040
                metaslab_array=23
                metaslab_shift=30
                ashift=9
                asize=142515896320
                is_log=0
                children[0]
                        type='disk'
                        id=0
                        guid=11600930948623447097
                        path='/dev/gpt/disk0'
                        whole_disk=0
                children[1]
                        type='disk'
                        id=1
                        guid=4279842263738814989
                        path='/dev/gpt/disk1'
                        whole_disk=0
Assertion failed: (rwlp->rw_count == 0), file
/usr/src/cddl/lib/libzpool/../../../cddl/contrib/opensolaris/lib/libzpool/common/kernel.c,
line 203.
Abort trap: 6 (core dumped)

Note: zdb only dumps core if the pool's name is specified; otherwise the
output is identical. I'm guessing this is just a bug in zdb, since it
happens on every 8.0 box I've got.

Any help is appreciated - we're really looking to get zfs into production,
but this panic is a bit odd, and it's not like we can fsck to fix any
possible problems with the filesystem.

Thanks,

Charles
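For what it's worth, lacking an fsck the closest on-line integrity check ZFS offers is a scrub plus a verbose status report - a sketch, using the pool name from the output above:

```shell
# Re-read and verify every block's checksum on the pool
zpool scrub zroot

# Check progress/results; -v additionally lists any files
# affected by unrecoverable errors
zpool status -v zroot
```

A clean scrub (as in the status output above) only says the on-disk blocks checksum correctly, so it wouldn't necessarily catch whatever inconsistency is triggering the "evicting znode" panic.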