From owner-freebsd-fs@FreeBSD.ORG Mon Jun 16 14:11:45 2014
Date: Mon, 16 Jun 2014 08:11:42 -0600 (MDT)
From: Warren Block <wblock@wonkity.com>
To: Anders Jensen-Waud
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS pool permanent error question -- errors: Permanent errors have been detected in the following files: storage: <0x0>
In-Reply-To: <20140616024942.GA13697@koodekoo.local>
References: <20140615211052.GA63247@neutralgood.org> <20140616024942.GA13697@koodekoo.local>
List-Id: Filesystems
On Mon, 16 Jun 2014, Anders Jensen-Waud wrote:

> This disk is not the ``storage'' zpool -- it is my ``backup'' pool,
> which is on a different drive:
>
> NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
> backup   464G   235G   229G  50%  1.00x  ONLINE  -
> storage  928G   841G  87.1G  90%  1.00x  ONLINE  -

What does 'zpool status' say about the device names of that pool?

> Running 'gpt recover /dev/da1' fixes the error above but after a reboot
> it reappears. Would it be better to completely wipe the disk and
> reinitialise it with zfs?

Most likely the problem is that the disk was GPT-partitioned, but when
the pool was created, ZFS was told to use the whole disk (ada0) rather
than just a partition (ada0p1).  One of the partition tables was
overwritten by ZFS information.  That space was probably mostly unused
by ZFS, because otherwise the 'gpart recover' would have damaged it.

This could also happen if GPT partitioning was not cleared from the
disk before it was used for ZFS.  ZFS leaves some unused space at the
end of the disk, enough that it does not overwrite a backup GPT table.
GEOM would still detect that backup table, notice that it does not
match the primary (which ZFS overwrote), and report an error.  The
error would be spurious, but attempting a recovery could overwrite
actual ZFS data.

ZFS works fine on whole disks or in partitions.  But yes, in this
case I'd back up, destroy the pool, destroy the partition information
on the drives, then recreate the pool.

A handy way to make sure a backup GPT table is not left on a disk is
to create and then destroy GPT partitioning:

  gpart destroy -F adaN
  gpart create -s gpt adaN
  gpart destroy adaN
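Put together, the whole back-up/destroy/recreate procedure might look
something like the following sketch.  The device name (ada1), the
label (backup0), and the destination for the backup stream are all
illustrative -- substitute your own, and double-check the device
before destroying anything:

  # Snapshot and save the pool's data somewhere safe first.
  zfs snapshot -r backup@pre-rebuild
  zfs send -R backup@pre-rebuild > /storage/backup-pool.zfs

  # Destroy the pool and clear both GPT tables from the disk.
  zpool destroy backup
  gpart destroy -F ada1
  gpart create -s gpt ada1
  gpart destroy ada1

  # Recreate the pool, this time inside a partition.
  gpart create -s gpt ada1
  gpart add -t freebsd-zfs -l backup0 ada1
  zpool create backup gpt/backup0

  # Restore the saved data.
  zfs receive -F backup < /storage/backup-pool.zfs

Creating the pool on a labeled partition (gpt/backup0) rather than the
raw disk keeps ZFS and the GPT tables out of each other's way, which
is exactly the conflict that caused the spurious error here.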