From owner-freebsd-questions@freebsd.org Wed Apr 12 15:52:48 2017
Subject: Re: ZFS kernel panic after power outage
From: CyberLeo Kitsana <cyberleo@cyberleo.net>
To: "Jason W. Barnes", freebsd-questions@freebsd.org
Date: Wed, 12 Apr 2017 10:46:49 -0500

On 04/11/2017 05:38 PM, Jason W. Barnes wrote:
> FreeBSDers:
>
> After a recent power outage here I have run into trouble whenever I
> try to mount, export, list, or do anything at all to one of my ZFS
> file systems (see attached image). It instantly goes all "panic:
> solaris assert: rs == NULL". This is my data drive, not my system
> drive, so I commented out "zfs_enable=YES" in my rc.conf and it will
> now boot up after I set readonly to "off" on the system drive. But
> I'd now like to try to recover the damaged file system on those
> drives.
>     The only thing I can think of, and hope for, at this point is to
> start unplugging the drives of this three-drive RAID-5 one at a time,
> to see whether that might cleanse the RAID of something that was in
> mid-write when the power went out. Or that could metastasize the
> cancer. Which would be bad.
>     I've seen occasional posts in the past with similar issues, but
> without any firm solutions, so I thought I'd ask whether anyone has
> insights as to how I might attempt to recover this drive. Thanks in
> advance if you have any ideas,

ZFS is self-healing - this makes it extremely robust; unfortunately,
when a problem arises that its self-healing cannot correct, the
situation tends to be extremely dire indeed.

Having had a look at the code surrounding the above-mentioned assert,
it doesn't appear to be easily circumventable for recovery.

After making a copy of the disks involved, just in case, you may
attempt to import the pool in ZFS-on-Linux or Illumos to sidestep
possible implementation bugs, or import from an earlier,
hopefully-consistent txg; however, I suspect you'll have better luck
recreating the pool from backups.

-- 
Fuzzy love,
-CyberLeo
Technical Administrator
CyberLeo.Net Webhosting
http://www.CyberLeo.Net

Element9 Communications
http://www.Element9.net

Furry Peace! - http://www.fur.com/peace/
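
For reference, the recovery path described above (duplicate the member
disks first, then attempt a read-only import, then a rewound import
from an earlier txg) would look roughly like the sketch below. The
device names (ada1-ada3), the target disks for the copies (da0-da2),
and the pool name "tank" are placeholders, not details from this
thread; substitute the real ones.

  # Sketch only: device names and pool name are assumed placeholders.

  # 1. Duplicate each member disk before touching anything else, so a
  #    failed recovery attempt can be retried from the originals.
  dd if=/dev/ada1 of=/dev/da0 bs=1m conv=noerror,sync
  dd if=/dev/ada2 of=/dev/da1 bs=1m conv=noerror,sync
  dd if=/dev/ada3 of=/dev/da2 bs=1m conv=noerror,sync

  # 2. Try a read-only import first, so nothing is written to the pool.
  zpool import -o readonly=on -f tank

  # 3. If that still panics, dry-run a rewind to an earlier txg (-F -n),
  #    then perform the rewind import if the dry run reports success.
  zpool import -F -n tank
  zpool import -F -o readonly=on tank

Running these attempts against the copies, or under a different
implementation such as ZFS-on-Linux or Illumos as suggested above,
keeps the original disks untouched in case the rewind makes matters
worse.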