Date:      Sat, 15 Apr 2023 11:07:20 -0700
From:      Cy Schubert <Cy.Schubert@cschubert.com>
To:        Mark Millard <marklmi@yahoo.com>
Cc:        FreeBSD User <freebsd@walstatt-de.de>, Cy Schubert <Cy.Schubert@cschubert.com>, Charlie Li <vishwin@freebsd.org>, Pawel Jakub Dawidek <pjd@FreeBSD.org>, Mateusz Guzik <mjguzik@gmail.com>, dev-commits-src-main@freebsd.org, Current FreeBSD <freebsd-current@freebsd.org>
Subject:   Re: git: 2a58b312b62f - main - zfs: merge openzfs/zfs@431083f75
Message-ID:  <20230415180720.AC396404@slippy.cwsent.com>
In-Reply-To: <5A47F62D-0E78-4C3E-84C0-45EEB03C7640@yahoo.com>
References:  <20230413071032.18BFF31F@slippy.cwsent.com>  <D0D9BD06-C321-454C-A038-C55C63E0DD6B@dawidek.net>  <20230413063321.60344b1f@cschubert.com> <CAGudoHG3rCx93gyJTmzTBnSe4fQ9=m4mBESWbKVWtAGRxen_4w@mail.gmail.com> <20230413135635.6B62F354@slippy.cwsent.com> <c41f9ed6-e557-9255-5a46-1a22d4b32d66@dawidek.net> <319a267e-3f76-3647-954a-02178c260cea@dawidek.net> <b60807e9-f393-6e6d-3336-042652ddd03c@freebsd.org> <441db213-2abb-b37e-e5b3-481ed3e00f96@dawidek.net> <5ce72375-90db-6d30-9f3b-a741c320b1bf@freebsd.org> <99382FF7-765C-455F-A082-C47DB4D5E2C1@yahoo.com> <32cad878-726c-4562-0971-20d5049c28ad@freebsd.org> <ABC9F3DB-289E-455E-AF43-B3C13525CB2C@yahoo.com> <20230415115452.08911bb7@thor.intern.walstatt.dynvpn.de> <20230415143625.99388387@slippy.cwsent.com> <5A47F62D-0E78-4C3E-84C0-45EEB03C7640@yahoo.com>

In message <5A47F62D-0E78-4C3E-84C0-45EEB03C7640@yahoo.com>, Mark Millard writes:
> On Apr 15, 2023, at 07:36, Cy Schubert <Cy.Schubert@cschubert.com> wrote:
>
> > In message <20230415115452.08911bb7@thor.intern.walstatt.dynvpn.de>,
> > FreeBSD User writes:
> >> Am Thu, 13 Apr 2023 22:18:04 -0700
> >> Mark Millard <marklmi@yahoo.com> schrieb:
> >>
> >>> On Apr 13, 2023, at 21:44, Charlie Li <vishwin@freebsd.org> wrote:
> >>>
> >>>> Mark Millard wrote:
> >>>>> FYI: in my original report for a context that has never had
> >>>>> block_cloning enabled, I reported BOTH missing files and
> >>>>> file content corruption in the poudriere-devel bulk build
> >>>>> testing. This predates:
> >>>>> https://people.freebsd.org/~pjd/patches/brt_revert.patch
> >>>>> but had the changes from:
> >>>>> https://github.com/openzfs/zfs/pull/14739/files
> >>>>> The files were missing from packages installed to be used
> >>>>> during a port's build. No other types of examples of missing
> >>>>> files happened. (But only 11 ports failed.)
> >>>> I also don't have block_cloning enabled. "Missing files" prior to
> >>>> brt_revert may actually be present, but as the corruption also
> >>>> messes with the file(1) signature, some tools like ldconfig report
> >>>> them as missing.
> >>>
> >>> For reference, the specific messages that were not explicit
> >>> null-byte complaints were (some shown with a little context):
> >>>
> >>>
> >>> ===>   py39-lxml-4.9.2 depends on shared library: libxml2.so - not found
> >>> ===>   Installing existing package /packages/All/libxml2-2.10.3_1.pkg
> >>> [CA72_ZFS] Installing libxml2-2.10.3_1...
> >>> [CA72_ZFS] Extracting libxml2-2.10.3_1: .......... done
> >>> ===>   py39-lxml-4.9.2 depends on shared library: libxml2.so - found
> >>> (/usr/local/lib/libxml2.so) . . .
> >>> [CA72_ZFS] Extracting libxslt-1.1.37: .......... done
> >>> ===>   py39-lxml-4.9.2 depends on shared library: libxslt.so - found
> >>> (/usr/local/lib/libxslt.so)
> >>> ===>   Returning to build of py39-lxml-4.9.2
> >>> . . .
> >>> ===>  Configuring for py39-lxml-4.9.2
> >>> Building lxml version 4.9.2.
> >>> Building with Cython 0.29.33.
> >>> Error: Please make sure the libxml2 and libxslt development packages
> >>> are installed.
> >>>
> >>>
> >>> [CA72_ZFS] Extracting libunistring-1.1: .......... done
> >>> ===>   libidn2-2.3.4 depends on shared library: libunistring.so - not found
> >>
> >>>
> >>>
> >>> [CA72_ZFS] Extracting gmp-6.2.1: .......... done
> >>> ===>   mpfr-4.2.0,1 depends on shared library: libgmp.so - not found
> >>>
> >>>
> >>> ===>   nettle-3.8.1 depends on shared library: libgmp.so - not found
> >>> ===>   Installing existing package /packages/All/gmp-6.2.1.pkg
> >>> [CA72_ZFS] Installing gmp-6.2.1...
> >>> the most recent version of gmp-6.2.1 is already installed
> >>> ===>   nettle-3.8.1 depends on shared library: libgmp.so - not found
> >>> *** Error code 1
> >>>
> >>>
> >>> autom4te: error: need GNU m4 1.4 or later: /usr/local/bin/gm4
> >>>
> >>>
> >>> checking for GNU M4 that supports accurate traces... configure: error:
> >>> no acceptable m4 could be found in $PATH. GNU M4 1.4.6 or later is
> >>> required; 1.4.16 or newer is recommended.
> >>> GNU M4 1.4.15 uses a buggy replacement strstr on some systems.
> >>> Glibc 2.9 - 2.12 and GNU M4 1.4.11 - 1.4.15 have another strstr bug.
> >>>
> >>>
> >>> ld: error: /usr/local/lib/libblkid.a: unknown file type
> >>>
> >>>
> >>> ===
> >>> Mark Millard
> >>> marklmi at yahoo.com
> >>>
> >>>
> >>
> >> Hello,
> >>
> >> What is the current status of fixing/mitigating this disastrous bug,
> >> especially for those with the new option enabled on ZFS pools? Any
> >> advice?
> >>
> >> In an act of precaution (or call it panic) I shut down several servers
> >> to prevent irreversible damage to databases and data storage. On one
> >> host with /usr/ports residing on ZFS we always see errors on the same
> >> files created while staging (using portmaster, which leaves the system
> >> with uninstalled software, i.e. www/apache24 in our case). Deleting the
> >> work folder doesn't seem to change anything, even when starting a scrub
> >> of the entire pool (a RAIDZ1 pool) - cause unknown; it always affects
> >> the same files. Same with devel/ruby-gems.
> >>
> >> Poudriere has been shut down for the time being to avoid further
> >> issues.
> >>
> >> Is there any advice on how to proceed, apart from conserving the boxes
> >> via shutdown?
> >>
> >> Thank you ;-)
> >> oh
> >>
> >>
> >>
> >> --
> >> O. Hartmann
> >
> > With an up-to-date tree + pjd@'s "Fix data corruption when cloning
> > embedded blocks. #14739" patch I didn't have any issues, except for
> > email messages with corruption in my sent directory, nowhere else. I'm
> > still investigating the email messages issue. IMO one is generally safe
> > to run poudriere on the latest ZFS with the additional patch.
>
> My poudriere testing failed when I tested such (14739 included),
> per what I reported, with block_cloning never having been enabled.
> Others have also reported poudriere bulk build failures without
> block_cloning being involved and with 14739 in place. My tests
> do predate:
>
> https://people.freebsd.org/~pjd/patches/brt_revert.patch

IIRC this patch doesn't build.

My tree includes this patch. Pardon the cut&paste. This will not apply.

diff --git a/sys/contrib/openzfs/module/zfs/dmu.c b/sys/contrib/openzfs/module/zfs/dmu.c
index 985d833f58..cda1472a77aa 100644
--- a/sys/contrib/openzfs/module/zfs/dmu.c
+++ b/sys/contrib/openzfs/module/zfs/dmu.c
@@ -2312,8 +2312,10 @@ dmu_brt_clone(objset_t *os, uint64_t object, uint64_t offset, uint64_t length,
 			dl->dr_overridden_by.blk_phys_birth = 0;
 		} else {
 			dl->dr_overridden_by.blk_birth = dr->dr_txg;
-			dl->dr_overridden_by.blk_phys_birth =
-			    BP_PHYSICAL_BIRTH(bp);
+			if (!BP_IS_EMBEDDED(bp)) {
+				dl->dr_overridden_by.blk_phys_birth =
+				    BP_PHYSICAL_BIRTH(bp);
+			}
 		}
 
 		mutex_exit(&db->db_mtx);
>
> and I'm not sure if Cy's activity had brt_revert.patch in
> place or not.

I don't know whether your poudriere tree has any residual file corruption. 
My poudriere works 100% OK while yours does not, which indicates there may 
be something amiss with your poudriere tree. Remember, I rolled back to the 
last nightly snapshot whereas you did not. I don't know the state of your 
poudriere tree; I know with 100% certainty that my tree is good.

>
> Other's notes include Mateusz Guzik's:
>
> https://lists.freebsd.org/archives/dev-commits-src-main/2023-April/014534.html

My tree included this patch + pjd@'s last patch on people.freebsd.org.
>
> which said:
>
> QUOTE
> There is corruption with the recent import, with the
> https://github.com/openzfs/zfs/pull/14739/files patch applied and
> block cloning disabled on the pool.

I had zero poudriere corruption with this patch. My only corruption was in 
my sent-items in my MH mail directory, which I think was due to email 
threads already containing nulls.
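Since the reported symptom is runs of null bytes overwriting file contents (which also breaks the magic bytes that file(1) and ldconfig look for), a quick scan can flag suspect files for comparison against a backup. This is a hypothetical helper, not something from the thread, and the 512-byte run length is an arbitrary threshold:

```python
import os

NUL_RUN = b"\x00" * 512  # arbitrary threshold for a "suspicious" null run


def find_suspect_files(root):
    """Yield paths under root whose contents contain a long run of nulls.

    Shared libraries whose ELF magic got zeroed out are reported as
    "missing" by tools like ldconfig even though the file exists.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # unreadable file; skip rather than abort the scan
            if NUL_RUN in data:
                yield path
```

Run over a poudriere packages directory or an MH mail folder, this would list candidates to diff against known-good copies.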

>
> There is no corruption with top of main with zfs merge reverted =
> altogether.
>
> Which commit results in said corruption remains to be seen, a variant
> of the tree with just block cloning support reverted just for testing
> purposes is about to be evaluated.
> END QUOTE
>
> Charlie Li's later related notes that helps interpret that were in:
>
> https://lists.freebsd.org/archives/dev-commits-src-main/2023-April/014545.html
>
> QUOTE
> Testing with mjg@ earlier today revealed that block_cloning was not the
> cause of poudriere bulk build (and similar cp(1)/install(1)-based)
> corruption, although it may have exacerbated it.
> END QUOTE
>
> Mateusz later indicated that he hoped to have the cause(s) sorted out
> sometime Friday:
>
> https://lists.freebsd.org/archives/dev-commits-src-main/2023-April/014551.html
>
> QUOTE
> I'm going to narrow down the non-blockcopy corruption after my testjig
> gets off the ground.
>
> Basically I expect to have it sorted out on Friday.
> END QUOTE
>
> But the lack of later related messages suggests that did not happen.
>
> > My tests of the additional patch
>
> (I'm guessing that is a reference to 14739, not to brt_revert.patch .)
>
> > concluded that it resolved my last problems, except for the sent email
> > problem I'm still investigating. I'm sure there's a simple explanation
> > for it, i.e. the email thread was corrupted by the EXDEV regression,
> > which cannot be fixed by anything, even reverting to the previous ZFS
> > -- the data in those files will remain damaged regardless.
>
> Again: my testing jumped from prior to the import to after the EXDEV
> changes, including having 14739. I still had poudriere bulk builds
> produce file corruption.
>
> > I cannot speak to the others who have had poudriere and other issues.
> > I never had any problems with poudriere on top of the new ZFS.
>
> Part of the mess is the variability. As I remember, I had 252
> ports build fine in my test before the 11th failure meant that
> the rest (213) had all been classified as skipped.
>
> It is not like most of the port builds failed: relatively uncommon.
>
> Also, one port built on a retry, indicating random/racy behavior
> is involved. (The original failure was not from a file from
> installing build dependencies but something that the builder
> generated during the build. The 2nd try did not fail there or
> anywhere.)
>
> > WRT reverting block_cloning pools to without, your only option is to
> > back up your pool and recreate it without block_cloning. Then restore
> > your data.
> >
>
> Given what has been reported by multiple people and
> Cy's own example of unexplained corruptions in email
> handling, I'd be cautious risking important data
> until reports from testing environment activity
> consistently report not having corruptions.

The "unexplained" email corruptions occurred in only the threads that 
already had corruption. I haven't been able to reproduce it anywhere else. 
I will continue testing on Monday. I expect my testing to confirm this 
hypothesis.

>
> Another thing: my activity does not include any testing
> of the suggestion in:
>
> https://lists.freebsd.org/archives/dev-commits-src-main/2023-April/014607.html
>
> to use "-o sync=disabled" in a clone, reporting:

This is a different issue. We need a core dump to resolve this. I'll test 
this on my sandbox on Monday.

We can now reproduce this panic by hand. If there is no panic, a diff -qr 
will confirm/deny this bug.
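A by-hand check along those lines could look like the sketch below; diff -qr prints one line per differing or missing file and exits non-zero if the trees differ. (Scratch directories are used here to keep the sketch self-contained; on a real system the source would be e.g. a poudriere or /usr/ports tree on the suspect pool.)

```shell
# Copy a tree, then compare it file-by-file against the source.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "payload" > "$src/file1"
mkdir "$src/subdir"
echo "more" > "$src/subdir/file2"

# cp(1)-based copy, the operation implicated in the corruption reports.
cp -R "$src/." "$dst/"

if diff -qr "$src" "$dst"; then
    echo "trees match: no corruption observed"
else
    echo "trees differ: possible corruption"
fi
```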

>
> QUOTE
> With this workaround I was able to build thousands of packages without
> panics or failures due to data corruption.
> END QUOTE
>
> If reliable, that consequence to the change might help
> folks that are trying to isolate the problem(s) figure
> out what is involved.
>
> ===
> Mark Millard
> marklmi at yahoo.com

IMO we've had a lack of systematic testing of the various bugs. The fact 
that this has caused some corrupt files has led to human panic over the 
issue.

Now that I've reverted my laptop to the old ZFS, the MH sent-email issue 
continues to exhibit itself. This is because the files I forward to myself 
already contain corrupt data. The old ZFS will not magically remove this. 
This testing is necessary to prove my hypothesis. I expect brand new email 
threads not to exhibit this problem with the new ZFS.


-- 
Cheers,
Cy Schubert <Cy.Schubert@cschubert.com>
FreeBSD UNIX:  <cy@FreeBSD.org>   Web:  https://FreeBSD.org
NTP:           <cy@nwtime.org>    Web:  https://nwtime.org

			e^(i*pi)+1=0





