From: Martin <nakal@web.de>
To: freebsd-current@FreeBSD.org
Date: Thu, 7 May 2009 21:05:16 +0200
Message-ID: <20090507210516.06331fb2@zelda.local>
Subject: ZFS panic space_map.c line 110

Hi,

I have a file server running ZFS on -CURRENT. Someone tried to transfer
a file of several gigabytes onto the system. The kernel crashed with a
panic and froze while printing the panic message, so I only managed to
write down the most important lines:

  solaris assert: ss == NULL
  zfs/space_map.c, line 110
  process: 160 spa_zio

I've heard that I can try moving the zpool cache away and then
importing the zpool again with force. Will this help? I am asking
because I don't know whether the panic was caused by a corrupt cache
or by corrupt file system metadata. Maybe someone can explain it. (I
had to switch the server off very ungently and the underlying RAID is
rebuilding, so I can only try it out later.)

Is this issue with inconsistent zpools well known? I've seen some
posts from 2007 and from January 2009 reporting similar problems.
Apparently some people have already lost entire zpools more than once,
as far as I understood it.

One more piece of information I can give: the ZFS file systems create
snapshots every hour. Maybe that triggered some inconsistency between
writes to a file system and the snapshot; I cannot tell, because I
don't understand the condition that the assertion checks.

--
Martin
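
P.S. For the archives, this is the recovery procedure as I understood
it; I have not tried it yet, and "tank" below is only a placeholder
for the real pool name:

  # move the cache file aside so the pool is not auto-imported at boot
  mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bak

  # force the import; this rebuilds the cache from the on-disk labels
  zpool import -f tank

If the problem really is a stale or corrupt cache, the forced import
should succeed; if the space map metadata itself is damaged, I assume
the same assertion would fire again.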
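
P.P.S. In case it matters for reproducing this, the hourly snapshots
are driven by a cron job roughly like the following (the dataset name
and naming scheme are simplified here, not my exact setup):

  # /etc/crontab entry: snapshot tank/data at the top of every hour
  # (note that % must be escaped as \% in crontab syntax)
  0  *  *  *  *  root  /sbin/zfs snapshot tank/data@auto-$(date +\%Y\%m\%d\%H)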