From: Ståle Kristoffersen <staalebk@ifi.uio.no>
To: Pawel Jakub Dawidek
Cc: freebsd-fs@freebsd.org
Date: Mon, 17 May 2010 03:27:52 +0200
Subject: Re: Bad hardware + zfs = panic

On 2010-05-12 at 13:08, Pawel Jakub Dawidek wrote:
> On Wed, May 12, 2010 at 12:21:56PM +0200, Pawel Jakub Dawidek wrote:
> > Well, I don't think it should be possible for vdev to be NULL.
> > But if you still have this panic, can you try this patch:
> >
> > http://people.freebsd.org/~pjd/patches/vdev_mirror.c.patch

Yeah, it shouldn't be possible, but I had something in my system that corrupted data in memory, and that can lead to all sorts of problems.
I'm not blaming ZFS for not handling 'impossible' situations, but this seemed like something that could be avoided. I'm actually impressed with how well ZFS handled it; my UFS root-fs went ballistic a few times.

> It looks like:
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6435666
>
> The work-around is to remove /boot/zfs/zpool.cache and import the pool
> again.

Fortunately I found out which file was problematic, and after removing and recreating it, I've had no more panics. I'm not sure whether the tip on that page would have solved the problem, because zpool.cache is removed on my system when exporting the pool. Thanks for the help though, everything looks to be working now :)

-- 
Ståle Kristoffersen
staalebk@ifi.uio.no
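[For reference, the work-around quoted above amounts to something like the following sketch. The pool name "tank" is a placeholder for illustration; substitute your own pool name.]

```shell
# Work-around from the referenced OpenSolaris bug (6435666), as a sketch.
# The pool name "tank" is a placeholder -- use your actual pool name.

# Remove the stale cache file so the next import rebuilds vdev state
# from the on-disk labels instead of the cached (possibly bad) config:
rm /boot/zfs/zpool.cache

# Re-import the pool; -f may be needed if it was not cleanly exported:
zpool import -f tank
```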