Date: Mon, 31 Jan 2011 13:23:08 -0600
From: Adam Vande More <amvandemore@gmail.com>
To: Mike Tancsa <mike@sentex.net>
Cc: freebsd-fs@freebsd.org
Subject: Re: ZFS help!
Message-ID: <AANLkTi=Z=Onduz9uMuoRgJNXEUJeNKU%2BWw=Rgi8TP2tP@mail.gmail.com>
In-Reply-To: <4D470A65.4050000@sentex.net>
References: <4D43475D.5050008@sentex.net> <4D44D775.50507@jrv.org> <4D470A65.4050000@sentex.net>
On Mon, Jan 31, 2011 at 1:15 PM, Mike Tancsa <mike@sentex.net> wrote:

> On 1/29/2011 10:13 PM, James R. Van Artsdalen wrote:
> > On 1/28/2011 4:46 PM, Mike Tancsa wrote:
> >>
> >> I had just added another set of disks to my zfs array. It looks like the
> >> drive cage with the new drives is faulty. I had added a couple of files
> >> to the main pool, but not much. Is there any way to restore the pool
> >> below? I have a lot of files on ad0,1,4,6 and ada4,5,6,7 and perhaps
> >> one file on the new drives in the bad cage.
> >
> > Get another enclosure and verify it works OK. Then move the disks from
> > the suspect enclosure to the tested enclosure and try to import the pool.
> >
> > The problem may be cabling or the controller instead - you didn't
> > specify how the disks were attached or which version of FreeBSD you're
> > using.
>
> OK, good news (for me) it seems. New cage and all seems to be recognized
> correctly. The history is
>
> ...
> 2010-04-22.14:27:38 zpool add tank1 raidz /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7
> 2010-06-11.13:49:33 zfs create tank1/argus-data
> 2010-06-11.13:49:41 zfs create tank1/argus-data/previous
> 2010-06-11.13:50:38 zfs set compression=off tank1/argus-data
> 2010-08-06.12:20:59 zpool replace tank1 ad1 ad1
> 2010-09-16.10:17:51 zpool upgrade -a
> 2011-01-28.11:45:43 zpool add tank1 raidz /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3
>
> FreeBSD RELENG_8 from last week, 8G of RAM, amd64.
>
> zpool status -v
>   pool: tank1
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
>         corruption. Applications may be affected.
> action: Restore the file in question if possible. Otherwise restore the
>         entire pool from backup.
>    see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         tank1       ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ad0     ONLINE       0     0     0
>             ad1     ONLINE       0     0     0
>             ad4     ONLINE       0     0     0
>             ad6     ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ada0    ONLINE       0     0     0
>             ada1    ONLINE       0     0     0
>             ada2    ONLINE       0     0     0
>             ada3    ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             ada5    ONLINE       0     0     0
>             ada8    ONLINE       0     0     0
>             ada7    ONLINE       0     0     0
>             ada6    ONLINE       0     0     0
>
> errors: Permanent errors have been detected in the following files:
>
>         /tank1/argus-data/previous/argus-sites-radium.2011.01.28.16.00
>         tank1/argus-data:<0xc6>
>         /tank1/argus-data/argus-sites-radium
>
> 0(offsite)# zpool get all tank1
> NAME   PROPERTY       VALUE                SOURCE
> tank1  size           14.5T                -
> tank1  used           7.56T                -
> tank1  available      6.94T                -
> tank1  capacity       52%                  -
> tank1  altroot        -                    default
> tank1  health         ONLINE               -
> tank1  guid           7336939736750289319  default
> tank1  version        15                   default
> tank1  bootfs         -                    default
> tank1  delegation     on                   default
> tank1  autoreplace    off                  default
> tank1  cachefile      -                    default
> tank1  failmode       wait                 default
> tank1  listsnapshots  on                   local
>
> Do I just want to do a scrub?
>
> Unfortunately, http://www.sun.com/msg/ZFS-8000-8A gives a 503
>

A scrub will not help fix those files, but if it was me I'd do it anyway
to ensure consistency.

http://dlc.sun.com/osol/docs/content/ZFSADMIN/gbbwl.html

I've seen similar types of corruption on ZFS when using devices that
don't obey cache flush. Perhaps this can help provide some understanding.

http://blogs.digitar.com/jjww/2006/12/shenanigans-with-zfs-flushing-and-intelligent-arrays/

--
Adam Vande More
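[For reference, the recovery sequence discussed above would look roughly like
the following. This is only a sketch: it assumes the pool name tank1 from the
status output, and that the two damaged files listed by name can either be
restored from another copy or simply deleted.]

# Scrub the pool to verify every block; with permanent errors already
# logged, this confirms overall consistency but cannot repair those files.
zpool scrub tank1
zpool status -v tank1        # re-check once the scrub has completed

# After restoring (or removing) the damaged files, clear the pool's
# error counters so the old errors stop being reported.
rm /tank1/argus-data/previous/argus-sites-radium.2011.01.28.16.00
rm /tank1/argus-data/argus-sites-radium
zpool clear tank1

[The tank1/argus-data:<0xc6> entry refers to a pool-internal object rather
than a file path; entries of that kind generally only drop out of the error
log after the affected data has been freed and subsequent scrubs complete
cleanly.]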
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?AANLkTi=Z=Onduz9uMuoRgJNXEUJeNKU%2BWw=Rgi8TP2tP>