From owner-freebsd-fs@FreeBSD.ORG Wed Apr 25 15:36:48 2012
Message-ID: <4F981A0D.40507@brockmann-consult.de>
Date: Wed, 25 Apr 2012 17:36:45 +0200
From: Peter Maloney <peter.maloney@brockmann-consult.de>
To: Andrew Reilly
Cc: freebsd-fs@freebsd.org
Subject: Re: Odd file system corruption in ZFS pool
In-Reply-To: <20120424232136.GA1441@johnny.reilly.home>
References: <20120424143014.GA2865@johnny.reilly.home> <4F96BAB9.9080303@brockmann-consult.de> <20120424232136.GA1441@johnny.reilly.home>

On 04/25/2012 01:21 AM, Andrew Reilly wrote:
> On Tue, Apr 24, 2012 at 04:37:45PM +0200, Peter Maloney wrote:
>> So far the only corruption I had was the result of installing FreeBSD on
>> a 4 GB USB flash stick. It had no redundancy, and within a few months,
>> some files were spontaneously broken.
>>
>> And in that one instance I found that move, copy, etc. on broken files
>> reported by zpool status -v will always fail. Only "rm" worked for me.
>> So I suggest you try rmdir or rm -r.
> Rm and rm -r doesn't work. Even as root, rm -rf Maildir.bad
> returns a lot of messages of the form: foo/bar: no such file
> or directory. The result is that I now have a directory that
> contains no "good" files, but a concentrated collection of
> breakage.

That sucks. But there is one thing I forgot: you need to run the "rm"
command immediately after the scrub, with no export, reboot, etc. in
between. And it probably only applies to the files listed in the -v
output of "zpool status -v", so since yours aren't listed, what you are
seeing may be something different.

Is your broken stuff limited to a single dataset, or does it affect the
whole pool? You could try making a second dataset, copying the good
files to it, and destroying the old one (losing all your snapshots on
that dataset, of course).
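
Roughly, that could look like the sketch below. This is only a sketch:
I am assuming the broken dataset is tank/home (guessing from the
tank/home:<0x0> line in your zpool status output), and the file paths
and the "tank/home.new" name are placeholders you would have to adjust
to your layout.

    # 1) scrub, then rm the listed files right away, with no export or
    #    reboot in between
    zpool scrub tank
    zpool status -v tank                  # note the file paths it lists
    rm /home/user/Maildir.bad/somefile    # placeholder path

    # 2) or: copy the good files into a fresh dataset and destroy the
    #    broken one (this also throws away its snapshots)
    zfs create tank/home.new
    cp -Rp /home/user/good-stuff /tank/home.new/   # or rsync/tar; placeholder paths
    zfs destroy -r tank/home
    zfs rename tank/home.new tank/home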

Here is another thread about it:
http://lists.freebsd.org/pipermail/freebsd-current/2011-October/027902.html

And this message looks interesting: "but if you search on the lists for
up to a year or so, you'll find some useful commands to inspect and
destroy corrupted objects."
http://lists.freebsd.org/pipermail/freebsd-current/2011-October/027926.html

And: "I tried your suggestion and ran the command "zdb -ccv backups" to
try and check the consistency of the troublesome "backups" pool. This
is what I ended up with:"

But they don't say what the solution is, other than destroying the
pool. I would think destroying just the dataset could be enough, since
it is the filesystem that is corrupt rather than the whole pool, but
maybe not.

> I have another zpool scrub running at the moment. We'll see if
> that is able to clean it up, but it hasn't had much luck in the
> past.
>
> Note that none of these broken files or directories show up in
> the zpool status -v error list. That just contains the one
> entry for the zfs root directory: tank/home:<0x0>
>
> Cheers,
>

I doubt that scrubbing more than once (repeating the same thing and
expecting different results) will fix anything. But if you scrubbed on
OpenIndiana, it would at least be different; and if that worked, you
could file a PR about it.
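
If you want to try the zdb check from that thread, or a scrub under a
different implementation, it would be roughly the following. Again just
a sketch: it assumes the pool is simply called "tank", and the
OpenIndiana side assumes that box can actually import your pool
version.

    # offline consistency check, like the "zdb -ccv backups" command
    # quoted above; it only reads, but it can take a long time
    zdb -ccv tank

    # to scrub from another ZFS implementation: export here, then
    # import and scrub on the other system
    zpool export tank
    # ... on the OpenIndiana machine ...
    zpool import tank
    zpool scrub tank
    zpool status -v tank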