Date:      Wed, 8 May 2019 08:48:14 +0200
From:      Borja Marcos <borjam@sarenet.es>
To:        Walter Parker <walterp@gmail.com>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: ZFS...
Message-ID:  <C0A831F9-4D65-4054-A19C-F5DD17AFA0A7@sarenet.es>
In-Reply-To: <CAMPTd_A7RYJ12pFyY4TzbXct82kWfr1hcEkSpDg7bjP25xjJGA@mail.gmail.com>
References:  <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <CAOtMX2gf3AZr1-QOX_6yYQoqE-H%2B8MjOWc=eK1tcwt5M3dCzdw@mail.gmail.com> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <CAGMYy3tYqvrKgk2c==WTwrH03uTN1xQifPRNxXccMsRE1spaRA@mail.gmail.com> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <d0118f7e-7cfc-8bf1-308c-823bce088039@denninger.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <A535026E-F9F6-4BBA-8287-87EFD02CF207@sorbs.net> <a82bfabe-a8c3-fd9a-55ec-52530d4eafff@denninger.net> <a1b78a63-0ef1-af51-4e33-a9a97a257c8b@sorbs.net> <CAMPTd_A7RYJ12pFyY4TzbXct82kWfr1hcEkSpDg7bjP25xjJGA@mail.gmail.com>



> On 8 May 2019, at 05:09, Walter Parker <walterp@gmail.com> wrote:
> Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
> of a disk recovery program stop you from using ZFS? No. If you think so, I
> suggest that you have your data integrity priorities in the wrong order
> (focusing on small, rare events rather than the common base case).

ZFS is certainly different from other filesystems. Its self-healing capabilities
help it survive problems that would destroy others. But if the damage goes past
that "tolerable" threshold, consider yourself dead.
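
How close you are to that threshold is at least visible. A hedged example,
assuming a hypothetical pool called "tank":

  # zpool status -v tank

The per-vdev READ/WRITE/CKSUM counters show which drives are taking damage,
and any permanent (unrepairable) errors are listed at the end of the output.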

Is it possible at all to write an effective repair tool? It would be really
complicated.

By the way, ddrescue can help in a multiple drive failure scenario with ZFS.
If some of the drives are showing the typical problem of "flaky" sectors, with
a lot of retries slowing down the whole pool, you can shut down the system or
at least export the pool, copy the affected drive(s) to fresh ones, replace the
flaky drives and try to import the pool. I would first do the experiment to
make sure it's harmless, but ZFS relies on labels written on the disks to
import a pool regardless of disk controller topology, device names, uuids, or
whatever. So a full disk copy should work.
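
A rough outline of that procedure, with made-up device and pool names (and,
as I said, please try it on scratch disks first):

  # zpool export tank
  # ddrescue -f /dev/ada1 /dev/ada4 /root/ada1.map
  (physically swap the flaky ada1 for the fresh ada4 clone)
  # zpool import tank

The mapfile lets ddrescue resume and retry the unreadable areas later; whatever
it could not copy will simply show up as checksum errors for ZFS to repair from
the pool's remaining redundancy.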

Michelle, were you doing periodic scrubs? I'm not sure you mentioned it.
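
(For the archives: a scrub is just

  # zpool scrub tank

for a hypothetical pool "tank", and on FreeBSD the daily periodic(8) run can
start one automatically with daily_scrub_zfs_enable="YES" in
/etc/periodic.conf, if I remember the knob correctly.)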





Borja.



