Date:      Thu, 27 Jun 2024 15:54:41 -0700
From:      Mark Millard <marklmi@yahoo.com>
To:        Warner Losh <imp@bsdimp.com>
Cc:        bob prohaska <fbsd@www.zefox.net>, FreeBSD ARM List <freebsd-arm@freebsd.org>
Subject:   Re: Git clone failures on armv7, was Re: Git core dump checking out main on armv7
Message-ID:  <D631100F-6533-41CE-B046-1E0AF291422C@yahoo.com>
In-Reply-To: <AD0B461D-6AF4-4C33-B3AC-9C3F3B172A2A@yahoo.com>
References:  <ZnNOXjgfyHQh7IeH@www.zefox.net> <5D5B6739-1685-43F5-80CC-E55603181D09@yahoo.com> <ZndZ9pVET2mCCpe8@www.zefox.net> <8F4F4B49-5ED3-4ACA-B0D3-356D8459BE95@yahoo.com> <ZngxCS22kAZSrWH4@www.zefox.net> <F05531C2-F2F3-463A-9E89-3EB8A5D714B6@yahoo.com> <ZntgkwPflE5S-Vhn@www.zefox.net> <C0E5C804-5B68-4D0B-883F-75FBCC8484EC@yahoo.com> <Zny1g_Ktg01_kQVV@www.zefox.net> <5DC2D33F-A8DB-4542-B8B3-A131F660A395@yahoo.com> <Zn2STr_KNhbWiXBY@www.zefox.net> <CANCZdfrALiptS-GMJHawxNXDJUfDxBrBvbL4xS8LUW6k+HQdJw@mail.gmail.com> <AD0B461D-6AF4-4C33-B3AC-9C3F3B172A2A@yahoo.com>

On Jun 27, 2024, at 14:40, Mark Millard <marklmi@yahoo.com> wrote:

> On Jun 27, 2024, at 12:15, Warner Losh <imp@bsdimp.com> wrote:
>
> On Thu, Jun 27, 2024 at 10:24 AM bob prohaska <fbsd@www.zefox.net> wrote:
>> On Wed, Jun 26, 2024 at 06:24:59PM -0700, Mark Millard wrote:
>>>
>>> Does using the chroot setup with --depth=1 on the
>>> RPi2B consistently work when tried repeatedly? Or
>>> was this just an example of a rare success?
>>>
>> Apparently it was a rare success. Five back-to-back retries
>> all failed in an orderly way, though with different messages:
>> invalid fetch-pack and missing blob object.
>> The transcript is at
>> http://www.zefox.net/~fbsd/rpi2/git_problems/readme_shallow_armv7_chroot_gittests
>>
>> Is it worthwhile to repeat with different --depth= values? I'm not sure
>> what the sensible range might be, maybe a 1, 2, 5 sequence? It would
>> be convenient to avoid a panic, as that slows repetition.
>>
>> What happens if you start limiting the memory resources inside an
>> armv7 jail on an aarch64 machine?
>>
>> Sometimes it works, sometimes it doesn't; that triggers "memory shortage"
>> or "marginal amounts of memory available" bug-hunting memories for me.
>
> As I reported in an earlier submittal to the list, I've
> replicated the problem on an armv7 system running main [15]
> with RAM+SWAP being:
>
> 2048 MiBytes RAM + 3685 MiBytes SWAP == 5733 MiBytes OVERALL
>
> This was on an Orange Pi+ 2ed. A top variation monitoring and
> reporting various maximum observed figures did not show any
> large memory use compared to even 1024 MiBytes. Any limitation
> would appear to be local to some more specific kind of
> constraint rather than overall system RAM or RAM+SWAP.
>
>> Warner

FYI:

So far, doing the likes of "truss -o ~/truss.txt -f -a -H -p 2136"
towards the end of "Receiving objects" (where 2136 was the original
git process) has always resulted in a normal completion of the clone.
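
For anyone wanting to repeat the experiment, the sequence was roughly
the below (a sketch only: the PID 2136 is just what the original git
process happened to be in that session, and pgrep is only one way to
find it; adjust paths as appropriate):

# terminal 1: start the clone and watch the progress output
git clone --depth=1 -o freebsd ssh://anongit@192.158.248.9/src.git /tmp/DOES-NOT-EXIST

# terminal 2: late in "Receiving objects", find the original git process
pgrep -lf "git clone"
# then attach truss to it: -f follows forked children, -H includes
# thread IDs, -a shows execve argument strings, -o writes to a file
truss -o ~/truss.txt -f -a -H -p 2136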

Comparing/contrasting, use of

(gdb) run clone --depth=1 -o freebsd ssh://anongit@192.158.248.9/src.git /tmp/DOES-NOT-EXIST

still gets the errors:

(gdb) run clone --depth=1 -o freebsd ssh://anongit@192.158.248.9/src.git /tmp/DOES-NOT-EXIST
Starting program: /usr/local/bin/git clone --depth=1 -o freebsd ssh://anongit@192.158.248.9/src.git /tmp/DOES-NOT-EXIST
Cloning into '/tmp/DOES-NOT-EXIST'...
[Detaching after fork from child process 2172]
[New LWP 100254 of process 2171]
[Detaching after fork from child process 2173]
remote: Enumerating objects: 104642, done.
remote: Counting objects: 100% (104642/104642), done.
remote: Compressing objects: 100% (88919/88919), done.
remote: Total 104642 (delta 22161), reused 43523 (delta 11808), pack-reused 0 (from 0)
Receiving objects: 100% (104642/104642), 344.50 MiB | 1.11 MiB/s, done.
[LWP 100254 of process 2171 exited]
Resolving deltas: 100% (22161/22161), done.
[Detaching after fork from child process 2176]
fatal: missing blob object '64981a94f867c4c6f9c4aaa26c1117cc8d85de34'
fatal: remote did not send all necessary objects
[Inferior 1 (process 2171) exited with code 0200]
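
As a cross-check on the "missing blob object" report, the object named
in the error can be looked up in a known-good clone elsewhere (a sketch
only; /path/to/src is a hypothetical location of a full, non-shallow
clone of src.git on another machine, since a shallow clone may
legitimately lack objects outside its depth):

cd /path/to/src
git cat-file -t 64981a94f867c4c6f9c4aaa26c1117cc8d85de34
# prints "blob" if the object exists in that repository; an error
# here would instead point at the server side

If the object is present there, that would suggest the "remote did not
send all necessary objects" message reflects a local receive/index-pack
problem rather than a genuine server-side omission.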

>>  Thanks for reading,
>>
>> bob prohaska
>>
>>
>>>> A second try without chroot resulted in failure but no panic:
>>>
>>>> <jemalloc>: Should own extent_mutex_pool(17)
>>>
>>> That looks like it would be interesting to someone
>>> appropriately knowledgeable. If jemalloc can see bad
>>> mutex ownership, that could lead to all sorts of
>>> later problems: garbage-in/garbage-out.
>>>
>>> I do not know whether the message means that various
>>> corruptions may already be in place afterwards, such
>>> that various later problems would be unsurprising
>>> consequences.
>>>
>>>> 47.25 MiB | 1.35 MiB/s
>>>> error: index-pack died of signal 6
>>>>
>>>> A repeat session produced an oft-seen failure:
>>>>
>>>> root@www:/mnt # mkdir 3rdarmv7gittest
>>>> root@www:/mnt # cd 3rdarmv7gittest
>>>> root@www:/mnt/3rdarmv7gittest # git clone -o freebsd ssh://anongit@192.158.248.9/src.git .
>>>> Cloning into '.'...
>>>> remote: Enumerating objects: 4511481, done.
>>>> remote: Counting objects: 100% (383480/383480), done.
>>>> remote: Compressing objects: 100% (28955/28955), done.
>>>
>>>> <jemalloc>: Should own extent_mutex_pool(17)
>>>
>>> That is the same error notice as above that looked
>>> to be interesting.
>>>
>>> Note that it happens before the later message
>>> "error: index-pack died of signal 6". So that
>>> last may just be a later consequence of the
>>> earlier error(s).
>>>
>>>> 47.25 MiB | 1.35 MiB/s
>>>> error: index-pack died of signal 6
>>>> fatal: index-pack failed
>>>> root@www:/mnt/3rdarmv7gittest # ls
>>>> root@www:/mnt/3rdarmv7gittest # cd ..
>>>> root@www:/mnt # mkdir 4tharmv7gittest
>>>> root@www:/mnt # cd 4tharmv7gittest
>>>> root@www:/mnt/4tharmv7gittest # git clone -o freebsd ssh://anongit@192.158.248.9/src.git .
>>>> Cloning into '.'...
>>>> remote: Enumerating objects: 4511481, done.
>>>> remote: Counting objects: 100% (383480/383480), done.
>>>> remote: Compressing objects: 100% (28955/28955), done.
>>>> Receiving objects:  43% (1966916/4511481), 926.00 MiB | 626.00 KiB/s
>>>> remote: Total 4511481 (delta 377747), reused 354525 (delta 354525), pack-reused 4128001 (from 1)
>>>> Receiving objects: 100% (4511481/4511481), 1.64 GiB | 705.00 KiB/s, done.
>>>> fatal: pack is corrupted (SHA1 mismatch)
>>>> fatal: index-pack failed
>>>
>>> Note the lack of a local message:
>>>=20
>>> <jemalloc>: Should own extent_mutex_pool
>>>
>>> But the prior jemalloc message(s) may be sufficient
>>> context to not be surprised about this.
>>>
>>>> root@www:/mnt/4tharmv7gittest #
>>>>
>>>> No panic, however, and it seems reproducible:
>>>> root@www:/mnt # mkdir 5tharmv7gittest
>>>> root@www:/mnt # cd 5tharmv7gittest
>>>> root@www:/mnt/5tharmv7gittest # git clone -o freebsd ssh://anongit@192.158.248.9/src.git .
>>>> Cloning into '.'...
>>>> remote: Enumerating objects: 4511513, done.
>>>> remote: Counting objects: 100% (383480/383480), done.
>>>> remote: Compressing objects: 100% (28955/28955), done.
>>>> remote: Total 4511513 (delta 377756), reused 354525 (delta 354525), pack-reused 4128033 (from 1)
>>>> Receiving objects: 100% (4511513/4511513), 1.64 GiB | 1.28 MiB/s, done.
>>>> fatal: pack is corrupted (SHA1 mismatch)
>>>> fatal: index-pack failed
>>>
>>> Note the lack of a local message:
>>>=20
>>> <jemalloc>: Should own extent_mutex_pool
>>>=20
>>> But the prior jemalloc message(s) may be sufficient
>>> context to not be surprised about this (again).
>>>
>>>> root@www:/mnt/5tharmv7gittest #
>>>>
>>>> Not sure what to try next, thanks for reading this far!
>>>>
>>>> bob prohaska
>>>>
>>>>
>>>> Archived at
>>>> http://www.zefox.net/~fbsd/rpi2/git_problems/readme_armv7
>>>
>>>


===
Mark Millard
marklmi at yahoo.com



