From owner-freebsd-fs@FreeBSD.ORG Mon Jan 31 19:23:10 2011
Date: Mon, 31 Jan 2011 13:23:08 -0600
From: Adam Vande More <amvandemore@gmail.com>
To: Mike Tancsa
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS help!
In-Reply-To: <4D470A65.4050000@sentex.net>
References: <4D43475D.5050008@sentex.net> <4D44D775.50507@jrv.org>
 <4D470A65.4050000@sentex.net>

On Mon, Jan 31, 2011 at 1:15 PM, Mike Tancsa wrote:

> On 1/29/2011 10:13 PM, James R. Van Artsdalen wrote:
> > On 1/28/2011 4:46 PM, Mike Tancsa wrote:
> >>
> >> I had just added another set of disks to my ZFS array. It looks like
> >> the drive cage with the new drives is faulty. I had added a couple of
> >> files to the main pool, but not much. Is there any way to restore the
> >> pool below? I have a lot of files on ad0, ad1, ad4, ad6 and on ada4,
> >> ada5, ada6, ada7, and perhaps one file on the new drives in the bad
> >> cage.
> >
> > Get another enclosure and verify it works OK. Then move the disks from
> > the suspect enclosure to the tested enclosure and try to import the
> > pool.
> >
> > The problem may be cabling or the controller instead - you didn't
> > specify how the disks were attached or which version of FreeBSD you're
> > using.
>
> OK, good news (for me) it seems. New cage, and everything seems to be
> recognized correctly. The history is
>
> ...
> 2010-04-22.14:27:38 zpool add tank1 raidz /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7
> 2010-06-11.13:49:33 zfs create tank1/argus-data
> 2010-06-11.13:49:41 zfs create tank1/argus-data/previous
> 2010-06-11.13:50:38 zfs set compression=off tank1/argus-data
> 2010-08-06.12:20:59 zpool replace tank1 ad1 ad1
> 2010-09-16.10:17:51 zpool upgrade -a
> 2011-01-28.11:45:43 zpool add tank1 raidz /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3
>
> FreeBSD RELENG_8 from last week, 8G of RAM, amd64.
>
> zpool status -v
>   pool: tank1
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>         corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
>         entire pool from backup.
>    see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank1       ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ad0     ONLINE       0     0     0
>             ad1     ONLINE       0     0     0
>             ad4     ONLINE       0     0     0
>             ad6     ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ada0    ONLINE       0     0     0
>             ada1    ONLINE       0     0     0
>             ada2    ONLINE       0     0     0
>             ada3    ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ada5    ONLINE       0     0     0
>             ada8    ONLINE       0     0     0
>             ada7    ONLINE       0     0     0
>             ada6    ONLINE       0     0     0
>
> errors: Permanent errors have been detected in the following files:
>
>         /tank1/argus-data/previous/argus-sites-radium.2011.01.28.16.00
>         tank1/argus-data:<0xc6>
>         /tank1/argus-data/argus-sites-radium
>
> 0(offsite)# zpool get all tank1
> NAME   PROPERTY       VALUE                SOURCE
> tank1  size           14.5T                -
> tank1  used           7.56T                -
> tank1  available      6.94T                -
> tank1  capacity       52%                  -
> tank1  altroot        -                    default
> tank1  health         ONLINE               -
> tank1  guid           7336939736750289319  default
> tank1  version        15                   default
> tank1  bootfs         -                    default
> tank1  delegation     on                   default
> tank1  autoreplace    off                  default
> tank1  cachefile      -                    default
> tank1  failmode       wait                 default
> tank1  listsnapshots  on                   local
>
> Do I just want to do a scrub?
>
> Unfortunately, http://www.sun.com/msg/ZFS-8000-8A returns a 503.

A scrub will not repair those files, but if it were me I'd run one anyway
to make sure the rest of the pool is consistent:

http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbbwl.html

I've seen similar corruption on ZFS when using devices that don't honor
cache flush requests. Perhaps this can help provide some understanding:

http://blogs.digitar.com/jjww/2006/12/shenanigans-with-zfs-flushing-and-intelligent-arrays/
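Concretely, the recovery would look something like this (a sketch only;
the paths are the ones from your status output above):

  # restore the two named files from backup if you can, otherwise delete
  # them -- raidz parity cannot reconstruct them at this point
  rm /tank1/argus-data/previous/argus-sites-radium.2011.01.28.16.00
  rm /tank1/argus-data/argus-sites-radium

  # then scrub and re-check the error list
  zpool scrub tank1
  zpool status -v tank1

  # once the scrub completes cleanly, reset the error counters
  zpool clear tank1

The tank1/argus-data:<0xc6> entry is a damaged object that no longer maps
to a filename; it should drop off the error list once the affected data is
gone and a scrub or two has completed.

--
Adam Vande More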