Date: Sun, 28 Nov 2004 20:15:22 -0700
From: Scott Long <scottl@freebsd.org>
To: Michael Nottebrock <michaelnottebrock@gmx.net>
Cc: freebsd-current@freebsd.org
Subject: Re: fsck shortcomings
Message-ID: <41AA944A.5090109@freebsd.org>
In-Reply-To: <41AA8E00.2050401@gmx.net>
Michael Nottebrock wrote:
> I recently had a filesystem go bad on me in such a way that it was
> recognized way bigger than it actually was, causing fsck to fail while
> trying to allocate an equally astronomic amount of memory (and my
> machine already had 1 Gig of mem + 2 Gig swap available).
> I just newfs'd and I'm now in the process of restoring data, however, I
> googled a bit on this and it seems that this kind of fs corruption is
> occurring quite often, in particular due to power failures.

Yes, very troubling.  You said that the alternate superblocks didn't help?

> Is there really no way that fsck could be made smarter about dealing
> with seemingly huge filesystems? Also, what kind of memory would be
> required to fsck a _real_ 11TB filesystem?

More than you can address in 32 bits.  Reducing the RAM footprint of
fsck_ufs is something that desperately needs to be done, especially since
it's now easy to trash crashdumps that are saved in swap because fsck is
consuming so much memory.

Scott
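[Archive note: the following is a rough back-of-envelope sketch, not a
calculation from the original thread. It assumes default newfs parameters
(16 KB blocks, 2 KB fragments, one inode per 8 KB of data) and approximates
fsck_ffs's main in-core tables (a per-fragment block-usage bitmap, a
one-byte state map per inode, and a 16-bit link count per inode); the
actual memory use of fsck differs in detail.]

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	/* Hypothetical 11 TB UFS filesystem, default-ish newfs layout. */
	uint64_t fs_bytes  = 11ULL * 1024 * 1024 * 1024 * 1024;
	uint64_t frag_size = 2048;
	uint64_t nfrags    = fs_bytes / frag_size;
	uint64_t ninodes   = fs_bytes / (4 * frag_size);

	/* Approximate per-object costs of fsck's tables. */
	uint64_t blockmap   = nfrags / 8;	/* 1 bit per fragment */
	uint64_t statemap   = ninodes * 1;	/* 1 byte per inode */
	uint64_t linkcounts = ninodes * 2;	/* 16-bit count per inode */

	printf("fragments: %ju, inodes: %ju\n",
	    (uintmax_t)nfrags, (uintmax_t)ninodes);
	printf("block map:   %ju MB\n", (uintmax_t)(blockmap >> 20));
	printf("inode state: %ju MB\n", (uintmax_t)(statemap >> 20));
	printf("link counts: %ju MB\n", (uintmax_t)(linkcounts >> 20));
	printf("total:       %ju MB\n",
	    (uintmax_t)((blockmap + statemap + linkcounts) >> 20));
	return (0);
}

[Under these assumptions the totals come to roughly 700 MB of block map
and over 4 GB of per-inode state, i.e. past what a 32-bit address space
can hold, which is consistent with the "more than you can address in
32 bits" remark above.]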
