Date: Fri, 8 May 2009 09:20:33 +0200
From: Martin <nakal@web.de>
To: Richard Todd <rmtodd@ichotolot.servalan.com>
Cc: freebsd-current@freebsd.org, Kip Macy <kmacy@freebsd.org>
Subject: Re: ZFS panic space_map.c line 110
Message-ID: <20090508092033.299daab6@zelda.local>
In-Reply-To: <x7my9oz42a.fsf@ichotolot.servalan.com>
References: <20090507210516.06331fb2@zelda.local> <x7my9oz42a.fsf@ichotolot.servalan.com>
Hi Richard and Kip,

@Richard:

> This panic wouldn't have anything to do with zpool.cache (that's just
> a file to help the system find which devices it should expect to find
> zpools on during boot). This is a problem with the free space map,
> which is part of the filesystem metadata. If you're lucky, it's just
> the in-core copy of the free space map that was bogus and there's a
> valid map on disk. If you're unlucky, the map on disk is trashed,
> and there's no really easy way to recover that pool.

I really cannot tell. I thought it would be nice to have ZFS for jail
management, so that I could create one file system per jail (roughly
the layout sketched in the P.S. below). That is why I installed
-CURRENT with version 13 of ZFS on a server in production.

> > One more piece of information I can give is that every hour the ZFS
> > file systems create snapshots. Maybe it triggered some
> > inconsistency between the writes to a file system and the snapshot,
> > I cannot tell, because I don't understand the condition.
>
> I doubt this had anything to do with the problem.

Well, you said you provoked the panic by mounting and unmounting very
often. The zfs-snapshot-mgmt port that I used shows similar behavior
in certain situations.

@Kip:

> This could be a locking bug or a space map corruption (depressing).
> There really isn't enough context here for me to go on. If you can't
> get a core, please at least provide us with a backtrace from ddb.

It does not look like a locking bug to me. I tried several times to
get the pool running, also with an older kernel, and it panicked in
the same way each time. The first time, I only got past the panic by
removing zfs_enable="YES" from rc.conf.

ZFS really made me worried, so I have now destroyed the pools, created
a UFS partition, and restored all the data from backup. Sorry, I did
not investigate the problem more deeply; I wanted to get the file
server running again, and I thought that the exact panic line number,
together with the situation in which it happened (during the import of
the pool), would be enough to make the problem clear. (For the next
time, I have noted in the P.P.S. below what I understand you would
need from ddb.)

Nothing was lost; this ZFS data corruption just ended my ZFS
experiment for now. I will use the good old UFS2 and check ZFS again
at a later time.

Thanks to you both for your advice.

--
Martin
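P.S. For reference, the per-jail layout I had in mind was roughly the
following. This is only a sketch; "tank" and the jail names are
placeholders, not the pool or jails I actually used:

    zfs create tank/jails              # container for all jail file systems
    zfs create tank/jails/www          # one file system per jail
    zfs create tank/jails/db
    zfs set quota=10G tank/jails/www   # per-jail space limit

    # the hourly snapshots were taken by the zfs-snapshot-mgmt port;
    # done by hand, each one would be something like:
    zfs snapshot tank/jails/www@2009-05-08_09.00

Each jail then gets its own quota and its own snapshot history, which
is what made ZFS attractive for this server in the first place.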
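P.P.S. Should the panic happen again, this is what I understand you
would want captured at the ddb prompt, assuming I remember the
commands correctly and that a dump device is configured (dumpdev in
rc.conf):

    db> bt              # stack backtrace of the panicking thread
    db> ps              # process list, in case it helps
    db> call doadump    # try to write a crash dump for later analysis
    db> reset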