Date:      Thu, 25 Jun 2020 17:52:12 -0700
From:      Mark Millard <marklmi@yahoo.com>
To:        Klaus Küchemann <maciphone2@googlemail.com>
Cc:        freebsd-arm@freebsd.org
Subject:   Re: USB [USB3 and USB2] problems when using UEFi v1.16 to boot RPi4: Still produces inaccurate file copies
Message-ID:  <F7BDD05D-C803-4ACB-9C48-6CBEC277F464@yahoo.com>
In-Reply-To: <ED69F8C1-C042-43C6-941A-E154229E4623@googlemail.com>
References:  <476DD0F0-2286-4B2C-8E44-4404AF17F5A8@yahoo.com> <B1FF8DD3-DFD1-4973-B0D2-6AC33BCAA59C@yahoo.com> <CF81584E-75CE-4BFC-8ACC-AB95E561B28D@yahoo.com> <F426CFE6-F619-4B3C-9260-07E72BC709AF@yahoo.com> <ED69F8C1-C042-43C6-941A-E154229E4623@googlemail.com>

On 2020-Jun-25, at 15:40, Klaus Küchemann <maciphone2 at googlemail.com> wrote:

> On 25.06.2020 at 21:29, Mark Millard via freebsd-arm <freebsd-arm@freebsd.org> wrote:
>> …
>> .
>> The test still failed to produce an accurate file copy
>> but the kernel did not report anything either. I'm
>> unsure how to get evidence of the context for the bad 4K
>> chunks.
>>
> No clue whether it has an effect, but maybe: dd if=xxx of=xxx bs=4k ?

Something interesting does result from dd testing,
even though file copies done that way still show
the problem. In fact, a couple of interesting points
show up.

Using dd to copy large files still yields corrupted copies.
(Large files are used only because the corruptions are
infrequent within a file, but a sufficiently large file
seems to always have some corruption.)
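
For concreteness, the dd-based copy test is a command
along the following lines (a sketch, using the file names
that appear later in this message; any large source file
should do):

```shell
# dd-based copy of a large file in 4 KiByte blocks, per the
# suggested bs=4k (file names as used later in this message)
dd if=clang-cortexA53-installworld-poud.tar of=mmjnk.other bs=4k
# compare the copy against the original (and again after a reboot):
cmp clang-cortexA53-installworld-poud.tar mmjnk.other
```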

Interestingly, dd if=/dev/zero based large file
generation has produced good files as far as I
can tell. (Generate separate files and diff them
after a reboot.)

The problem was originally discovered while copying
from another machine to an RPi4. But that Ethernet
path involved USB in providing the data (though not a
local USB drive), while /dev/zero does not involve
USB as a data source and copies data already in
memory via file-content buffering. So the contrasting
dd if=/dev/zero results may be indicating something.

Another interesting point is that the following
sequence repeatably produces the property noted for
step (E) below:

A) first do a couple of large dd if=/dev/zero file generations
B) then do a (non-zero) large file copy (dd based or cp based)
C) reboot
D) diff the 2 files generated in (A): no differences
E) diff the original large file and the temporary copy
   from (B): there are differences and the temporary copy
   has zero in every byte that is different.
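
Spelled out as a shell session (a sketch: the mmjnk.* file
names are illustrative, the reboot in (C) is done manually,
and bs=1m count=4096 is just one way to get a 4 GiByte file):

```shell
# (A) generate two large all-zero files
dd if=/dev/zero of=mmjnk.zero0 bs=1m count=4096
dd if=/dev/zero of=mmjnk.zero1 bs=1m count=4096
# (B) copy a large non-zero file (cp works here too)
dd if=clang-cortexA53-installworld-poud.tar of=mmjnk.other bs=4k
# (C) reboot, then:
# (D) the two zero-filled files match:
diff mmjnk.zero0 mmjnk.zero1
# (E) the copy differs from the original:
diff clang-cortexA53-installworld-poud.tar mmjnk.other
```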

(E) suggests that the bad file copies via cp or
via dd sometimes pick up data from the wrong memory
pages. (A) had just made large numbers of pages zero,
making it more likely that a zero page would be used
if a wrong page was referenced.

An example of checking for (E) was:

# diff clang-cortexA53-installworld-poud.tar mmjnk.other
Binary files clang-cortexA53-installworld-poud.tar and mmjnk.other differ

# cmp -l clang-cortexA53-installworld-poud.tar mmjnk.other | grep -v " 0$" | more
--More--(END)
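
To spell out what that cmp pipeline shows: cmp -l prints one
line per differing byte as "offset value-in-file1 value-in-file2"
(values in octal), so grep -v " 0$" drops exactly the lines
where the copy's byte is zero. Empty output (the immediate
--More--(END)) therefore means every corrupted byte in the
copy reads as zero. A tiny self-contained demonstration
(temporary file names made up for illustration):

```shell
# two 4-byte files differing only where the second has a zero byte
printf 'abcd' > /tmp/orig
printf 'ab\0d' > /tmp/copy
# cmp -l lists "offset val1 val2" in octal; drop lines whose
# byte in the copy is 0
cmp -l /tmp/orig /tmp/copy | grep -v " 0$"
# no output: every differing byte in the copy is zero
```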


Note about my example "large file" sizes:

-rw-r--r--   1 root  wheel  4011026432 Apr 25 21:04:42 2020 clang-cortexA53-installworld-poud.tar

and I've been mostly using 4 GiByte for the resultant size
of large files generated via dd.

I have not tried to find a minimum size for reliably
getting corrupted file copies.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)



