Date:      Thu, 09 May 2019 08:55:28 +1000
From:      Michelle Sullivan <michelle@sorbs.net>
To:        Walter Parker <walterp@gmail.com>
Cc:        freebsd-stable@freebsd.org
Subject:   Re: ZFS...
Message-ID:  <7D18A234-E7BF-4855-BD51-4AE2253DB1E4@sorbs.net>
In-Reply-To: <CAMPTd_CYxFNmtFyxBU3=OZ6K1JgmoX-CTP-+ne92r-zoFy5DsA@mail.gmail.com>
References:  <30506b3d-64fb-b327-94ae-d9da522f3a48@sorbs.net> <56833732-2945-4BD3-95A6-7AF55AB87674@sorbs.net> <3d0f6436-f3d7-6fee-ed81-a24d44223f2f@netfence.it> <17B373DA-4AFC-4D25-B776-0D0DED98B320@sorbs.net> <70fac2fe3f23f85dd442d93ffea368e1@ultra-secure.de> <70C87D93-D1F9-458E-9723-19F9777E6F12@sorbs.net> <CAGMYy3tYqvrKgk2c==WTwrH03uTN1xQifPRNxXccMsRE1spaRA@mail.gmail.com> <5ED8BADE-7B2C-4B73-93BC-70739911C5E3@sorbs.net> <d0118f7e-7cfc-8bf1-308c-823bce088039@denninger.net> <2e4941bf-999a-7f16-f4fe-1a520f2187c0@sorbs.net> <20190430102024.E84286@mulder.mintsol.com> <41FA461B-40AE-4D34-B280-214B5C5868B5@punkt.de> <20190506080804.Y87441@mulder.mintsol.com> <08E46EBF-154F-4670-B411-482DCE6F395D@sorbs.net> <33D7EFC4-5C15-4FE0-970B-E6034EF80BEF@gromit.dlib.vt.edu> <A535026E-F9F6-4BBA-8287-87EFD02CF207@sorbs.net> <a82bfabe-a8c3-fd9a-55ec-52530d4eafff@denninger.net> <a1b78a63-0ef1-af51-4e33-a9a97a257c8b@sorbs.net> <CAMPTd_A7RYJ12pFyY4TzbXct82kWfr1hcEkSpDg7bjP25xjJGA@mail.gmail.com> <d91cf5@sorbs.net>



Michelle Sullivan
http://www.mhix.org/
Sent from my iPad

On 09 May 2019, at 01:55, Walter Parker <walterp@gmail.com> wrote:

>>
>>
>> ZDB (unless I'm misreading it) is able to find all 34m+ files and
>> verifies the checksums.  The problem is in the ZFS data structures: one
>> definitely, maybe two, metaslabs fail checksums, preventing the mounting
>> (even read-only) of the volumes.
>>
>>>  Especially, how do you know
>>> before you recovered the data from the drive?
>> See above.
>>
>>> As ZFS metadata is stored
>>> redundantly on the drive and never in an inconsistent form (that is what
>>> fsck does, it fixes the inconsistent data that most other filesystems store
>>> when they crash/have disk issues).
>> The problem - unless I'm reading zdb incorrectly - is limited to the
>> structure rather than the data.  This fits with the fact that the pool was
>> isolated from user changes while the drive was being resilvered, so the
>> data itself was not being altered... that said, I am no expert so I
>> could easily be completely wrong.
>>
> What it sounds like you need is a metadata fixer, not a file recovery
> tool.

This is true, but my thinking is in alignment with the ZFS devs: this
might not be a good idea... if ZFS can't work it out already, the best
thing to do will probably be to get everything off it and reformat...
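
For completeness, the sort of thing the evacuate-and-reformat route
involves, assuming the pool can be coerced into a read-only import at
all (the pool name 'storage' and the rescue path are placeholders, not
my actual layout):

    # force a read-only import; -N skips auto-mounting so datasets can
    # be mounted selectively afterwards
    zpool import -f -N -o readonly=on storage
    # if the plain import refuses, a dry-run rewind shows whether going
    # back to an earlier txg would make the pool importable
    zpool import -nF storage
    # once the datasets mount, any file-level tool will do for copying off
    rsync -aH /storage/ /mnt/rescue/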

> Assuming the metadata can be fixed, that would be the easy route.

That's the thing... I don't know if it can be easily fixed... more that
I think the metadata can probably be easily fixed, but I suspect the
spacemap can't, and if it can't there is going to be one of two
things... either a big hole (or multiple little ones), or the
likelihood of new data partially or fully overwriting old data, and
this would not be good...
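
For anyone wanting to poke at the same structures, the zdb invocations
would be roughly as follows (the pool name is a placeholder; zdb -e
works against the exported, unimportable pool, and I'm quoting the
FreeBSD tunable name from memory):

    # traverse every block and verify checksums (-cc checks file data
    # as well as metadata)
    zdb -e -bcc storage
    # dump metaslab and space map detail; extra -m flags add verbosity
    zdb -e -mmm storage
    # last-resort recovery knob, set in /boot/loader.conf before an
    # import attempt (verify the name against your FreeBSD version)
    vfs.zfs.recover=1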

> That should not be hard to write if everything else on the disk has no
> issues. Didn't you say in another message that the system is now returning
> 100's of drive errors?

No, one disk in the 16-disk RAID-Z2... previously unseen, but it could
be that the errors have occurred in the last 6 weeks... every time I
reboot it starts resilvering, gets to 761M resilvered and then stops.
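
(The stall itself is easy enough to watch; the pool name below is a
placeholder:)

    # shows resilver progress plus per-device read/write/checksum
    # error counters
    zpool status -v storage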


> How does that relate to the statement => Everything on
> the disk is fine except for a little bit of corruption in the freespace map?

Well, I think it goes through until it hits that little bit of
corruption, and that stops it mounting... then stops again...

I'm seeing 100s of hard errors at the beginning of one of the drives...
they were reported in syslog but only just, so it could be a new thing.
Could be previously undetected... no way to know.
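
For anyone curious, checking that drive amounts to something like this
(the device name da5 is just a placeholder, and smartctl comes from the
smartmontools port):

    # CAM/disk errors for the suspect device in the kernel log
    grep -i 'da5' /var/log/messages
    # SMART health summary and the drive's own error log
    smartctl -a /dev/da5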

>
>
>>
>>>
>>> I have a friend/business partner that doesn't want to move to ZFS because
>>> his recovery method is wait for a single drive (no-redundancy, sometimes no
>>> backup) to fail and then use ddrescue to image the broken drive to a new
>>> drive (ignoring any file corruption because you can't really tell without
>>> ZFS). He's been using disk rescue programs for so long that he will not
>>> move to ZFS, because it doesn't have a disk rescue program.
>>
>> The first part is rather cavalier... the second part I kind of
>> understand... it's why I'm now looking at alternatives... particularly
>> having been bitten as badly as I have with an unmountable volume.
>>
> On the system I managed for him, we had a system with ZFS crap out. I
> restored it from a backup. I continue to believe that people running
> systems without backups are living on borrowed time. The idea of relying on
> a disk recovery tool is too risky for my taste.
>
>
>>> He has systems
>>> on Linux with ext3 and no mirroring or backups. I've asked about moving
>>> them to a mirrored ZFS system and he has told me that the customer doesn't
>>> want to pay for a second drive (but will pay for hours of his time to fix
>>> the problem when it happens). You kind of sound like him.
>> Yeah... no!  I'd be having that on a second (mirrored) drive... like most
>> of my production servers.
>>
>>> ZFS is risky
>>> because there isn't a good drive rescue program.
>> ZFS is good for some applications.  ZFS is good to prevent cosmic ray
>> issues.  ZFS is not good when things go wrong.  ZFS doesn't usually go
>> wrong.  I think that about sums it up.
>>
> When it does go wrong I restore from backups. Therefore my systems don't
> have problems. I'm sorry you had the perfect trifecta that caused you to lose
> multiple drives and all your backups at the same time.
>
>
>>> Sun's design was that the
>>> system should be redundant by default and checksum everything. If the
>>> drives fail, replace them. If they fail too much or too fast, restore from
>>> backup. Once the system has too much corruption, you can't recover/check
>>> for all the damage without a second off-disk copy. If you have that off
>>> disk, then you have a backup. They didn't build for the standard use case as
>>> found in PCs because the disk recovery programs rarely get everything back,
>>> therefore they can't be relied on to get your data back when your data is
>>> important. Many PC owners have brought PC mindset ideas to the "UNIX"
>>> world. Sun's history predates Windows and Mac and comes from a
>>> Mini/Mainframe mindset (where people tried not to guess about data
>>> integrity).
>> I came from the days of Sun.
>>
> Good, then you should understand Sun's point of view.
>
>
>>>
>>> Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
>>> of a disk recovery program stop you from using ZFS? No. If you think so, I
>>> suggest that you have your data integrity priorities in the wrong order
>>> (focusing on small, rare events rather than the common base case).
>> The common case in your assessment in the email would suggest backups are
>> not needed unless you have a rare event of a multi-drive failure.  Which
>> I know you're not advocating, but it is this same circular argument...
>> ZFS is so good it's never wrong, we don't need no stinking recovery
>> tools, oh but take backups if it does fail, but it won't because it's so
>> good and you have to be running consumer hardware or doing something
>> wrong or be very unlucky with failures... etc.  Round and round we go,
>> wherever she'll stop no-one knows.
>>
> I advocate 2-3 backups of any important system (at least one different
> from the other, offsite if one can afford it).
> I never said ZFS is so good we don't need backups (that would be a stupid
> comment). As far as a recovery tool goes, those sound risky. I'd prefer
> something without so much risk.
>
> Make your own judgement, it is your time and data. I think ZFS is a great
> filesystem that anyone using FreeBSD or illumos should be using.
>
>
> --
> The greatest dangers to liberty lurk in insidious encroachment by men of
> zeal, well-meaning but without understanding.   -- Justice Louis D. Brandeis


