From: Martin <nakal@web.de>
To: Richard Todd
Cc: freebsd-current@freebsd.org, Kip Macy
Date: Fri, 8 May 2009 09:20:33 +0200
Subject: Re: ZFS panic space_map.c line 110
Message-ID: <20090508092033.299daab6@zelda.local>

Hi Richard and Kip,

@Richard:

> This panic wouldn't have anything to do with zpool.cache (that's just
> a file to help the system find which devices it should expect to find
> zpools on during boot). This is a problem with the free space map,
> which is part of the filesystem metadata. If you're lucky, it's just
> the in-core copy of the free space map that was bogus and there's a
> valid map on disk. If you're unlucky, the map on disk is trashed,
> and there's no really easy way to recover that pool.

I really cannot tell. I thought it would be nice to have ZFS for jail
management, so I could create one file system per jail; that's why I
installed -CURRENT with version 13 of ZFS on a server in production.

> > One more piece of information I can give is that every hour the ZFS
> > file systems create snapshots. Maybe it triggered some
> > inconsistency between the writes to a file system and the snapshot,
> > I cannot tell, because I don't understand the condition.
>
> I doubt this had anything to do with the problem.

Well, you said you provoked the panic by mounting and unmounting very
often. The zfs-snapshot-mgmt port that I used shows similar behavior in
certain situations.

@Kip:

> This could be a locking bug or a space map corruption (depressing).
> There really isn't enough context here for me to go on. If you can't
> get a core, please at least provide us with a backtrace from ddb.

It does not look like a locking bug to me. I tried several times to get
the pool running, also with an older kernel, and it panicked in the same
way. I could get past the panic the first time only after I removed
zfs_enable="YES" from rc.conf.

ZFS really made me worried, so I have removed the pools now, created a
UFS partition, and restored all data from backup.
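Just to make the setup clearer, what I had in mind was roughly the
following: one dataset per jail, hourly snapshots via zfs-snapshot-mgmt,
and ZFS enabled in rc.conf. The pool, device, jail, and snapshot names
below are only placeholders for illustration, not my exact
configuration:

  # one pool, then one file system per jail
  zpool create tank da1
  zfs create tank/jails
  zfs create tank/jails/www
  zfs create tank/jails/db

  # the kind of snapshot zfs-snapshot-mgmt created every hour
  zfs snapshot tank/jails/www@2009-05-08_09.00

  # /etc/rc.conf
  zfs_enable="YES"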
Sorry, I did not investigate the problem more deeply, because I wanted
to get the file server running again and thought that the exact panic
line number and a description of the situation (it happened while
importing the pool) would be enough to make the problem clear.

Nothing was lost; this data corruption just ended my ZFS experiment for
now. I will use the good old UFS2 for the time being and try ZFS again
at a later time.

Thanks to you both for your advice.

--
Martin