Date:      Thu, 24 Aug 2023 21:57:42 -0700
From:      bob prohaska <fbsd@www.zefox.net>
To:        Mark Millard <marklmi@yahoo.com>
Cc:        Current FreeBSD <freebsd-current@freebsd.org>, freebsd-arm@freebsd.org
Subject:   Re: www/chromium will not build on a host w/ 8 CPU and 16G mem [RPi4B 8 GiByte example]
Message-ID:  <ZOg0xg4m63BD35Gq@www.zefox.net>
In-Reply-To: <804E6287-71B7-4D2C-A72C-6FA681311139@yahoo.com>
References:  <804E6287-71B7-4D2C-A72C-6FA681311139.ref@yahoo.com> <804E6287-71B7-4D2C-A72C-6FA681311139@yahoo.com>

On Thu, Aug 24, 2023 at 03:20:50PM -0700, Mark Millard wrote:
> bob prohaska <fbsd_at_www.zefox.net> wrote on
> Date: Thu, 24 Aug 2023 19:44:17 UTC :
>
> > On Fri, Aug 18, 2023 at 08:05:41AM +0200, Matthias Apitz wrote:
> > >
> > > sysctl vfs.read_max=128
> > > sysctl vfs.aio.max_buf_aio=8192
> > > sysctl vfs.aio.max_aio_queue_per_proc=65536
> > > sysctl vfs.aio.max_aio_per_proc=8192
> > > sysctl vfs.aio.max_aio_queue=65536
> > > sysctl vm.pageout_oom_seq=120
> > > sysctl vm.pfault_oom_attempts=-1
> > >
> >
> > Just tried these settings on a Pi4, 8GB. Seemingly no help,
> > build of www/chromium failed again, saying only:
> >
> > ===> Compilation failed unexpectedly.
> > Try to set MAKE_JOBS_UNSAFE=yes and rebuild before reporting the failure to
> > the maintainer.
> > *** Error code 1
> >
> > No messages on the console at all, no indication of any swap use at all.
> > If somebody can tell me how to invoke MAKE_JOBS_UNSAFE=yes, either
> > locally or globally, I'll give it a try. But, if it's a system problem
> > I'd expect at least a peep on the console....
>
> Are you going to post the log file someplace?


http://nemesis.zefox.com/~bob/data/logs/bulk/main-default/2023-08-20_16h11m59s/logs/errors/chromium-115.0.5790.170_1.log

> You may have missed an earlier message.

Yes, I did. Some (very long) lines further up, there is:

[ 96% 53691/55361] "python3" "../../build/toolchain/gcc_link_wrapper.py" --output="./v8_context_snapshot_generator" -- c++ -fuse-ld=lld -Wl,--build-id=sha1 -fPIC -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,--icf=all -Wl,--color-diagnostics -Wl,--undefined-version -Wl,-mllvm,-enable-machine-outliner=never -no-canonical-prefixes -Wl,-O2 -Wl,--gc-sections -rdynamic -pie -Wl,--disable-new-dtags -Wl,--icf=none -L/usr/local/lib  -fstack-protector-strong -L/usr/local/lib  -o "./v8_context_snapshot_generator" -Wl,--start-group @"./v8_context_snapshot_generator.rsp"  -Wl,--end-group  -lpthread -lgmodule-2.0 -lglib-2.0 -lgobject-2.0 -lgthread-2.0 -lintl -licui18n -licuuc -licudata -lnss3 -lsmime3 -lnssutil3 -lplds4 -lplc4 -lnspr4 -ldl -lkvm -lexecinfo -lutil -levent -lgio-2.0 -ljpeg -lpng16 -lxml2 -lxslt -lexpat -lwebp -lwebpdemux -lwebpmux -lharfbuzz-subset -lharfbuzz -lfontconfig -lopus -lopenh264 -lm -lz -ldav1d -lX11 -lXcomposite -lXdamage -lXext -lXfixes -lXrender -lXrandr -lXtst -lepoll-shim -ldrm -lxcb -lxkbcommon -lgbm -lXi -lGL -lpci -lffi -ldbus-1 -lpangocairo-1.0 -lpango-1.0 -lcairo -latk-1.0 -latk-bridge-2.0 -lsndio -lFLAC -lsnappy -latspi
FAILED: v8_context_snapshot_generator

Then, a bit further down in the file, a series of
ld.lld: error: relocation R_AARCH64_ABS64 cannot be used against local symbol; recompile with -fPIC
complaints.

It's unclear whether the two kinds of complaints are related, or
whether they're the first errors in the log. (As I understand it, that
error usually means some object files were compiled without -fPIC and
then linked into position-independent output.)

> How long had it run before stopping?

95 hours, give or take. Nothing about a timeout was reported.

> How does that match up with the MAX_EXECUTION_TIME
> and NOHANG_TIME and the like that you have poudriere set
> up to use ( /usr/local/etc/poudriere.conf )?

NOHANG_TIME=44400
MAX_EXECUTION_TIME=1728000
MAX_EXECUTION_TIME_EXTRACT=144000
MAX_EXECUTION_TIME_INSTALL=144000
MAX_EXECUTION_TIME_PACKAGE=11728000

Admittedly some are plain silly; I just started tacking on zeros
after getting timeouts and being unable to match the error message
to a variable name.

I checked for duplicates this time, however.
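
(If I read poudriere.conf correctly, these limits are all in seconds,
so a less silly configuration might look something like

NOHANG_TIME=14400            # give up after 4 hours with no log output
MAX_EXECUTION_TIME=432000    # give up after 5 days total per build

instead of the 20-day MAX_EXECUTION_TIME above.)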

> Something relevant for the question is what you have for:
>
> # Grep build logs to determine a possible build failure reason.  This is
> # only shown on the web interface.
> # Default: yes
> DETERMINE_BUILD_FAILURE_REASON=no
>
> With DETERMINE_BUILD_FAILURE_REASON enabled, large builds keep
> running for a long time after a timeout starts the stopping
> process: the grep activity takes a long time, and the build
> activity is not stopped while the grep runs.
>
>
> vm.pageout_oom_seq=120 and vm.pfault_oom_attempts=-1 make
> sense to me for certain kinds of issues involved in large
> builds, presuming sufficient RAM+SWAP for how it is set
> up to operate. vm.pageout_oom_seq is associated with
> console/log messages. If one runs out of RAM+SWAP,
> vm.pfault_oom_attempts=-1 tends to lead to deadlock. But
> it allows slow I/O the time to complete and so can be
> useful.
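
(A note for anyone else trying these: settings like this can be made
persistent across reboots in /etc/sysctl.conf, e.g.

vm.pageout_oom_seq=120      # more pageout passes before OOM kills start
vm.pfault_oom_attempts=-1   # don't OOM-kill just because page-in I/O is slow

rather than re-entering them with sysctl(8) after each boot.)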
>
> I'm not sure that any vfs.aio.* is actually involved: special
> system calls are involved, splitting requests vs. retrieving
> the status of completed requests later. Use of aio has to be
> explicit in the running software from what I can tell. I've
> no information about which software builds might be using aio
> during the build activity.
>
> # sysctl -d vfs.aio
> vfs.aio: Async IO management
> vfs.aio.max_buf_aio: Maximum buf aio requests per process
> vfs.aio.max_aio_queue_per_proc: Maximum queued aio requests per process
> vfs.aio.max_aio_per_proc: Maximum active aio requests per process
> vfs.aio.aiod_lifetime: Maximum lifetime for idle aiod
> vfs.aio.num_unmapped_aio: Number of aio requests presently handled by unmapped I/O buffers
> vfs.aio.num_buf_aio: Number of aio requests presently handled by the buf subsystem
> vfs.aio.num_queue_count: Number of queued aio requests
> vfs.aio.max_aio_queue: Maximum number of aio requests to queue, globally
> vfs.aio.target_aio_procs: Preferred number of ready kernel processes for async IO
> vfs.aio.num_aio_procs: Number of presently active kernel processes for async IO
> vfs.aio.max_aio_procs: Maximum number of kernel processes to use for handling async IO
> vfs.aio.unsafe_warningcnt: Warnings that will be triggered upon failed IO requests on unsafe files
> vfs.aio.enable_unsafe: Permit asynchronous IO on all file types, not just known-safe types
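
To illustrate Mark's point that aio use is explicit: a program has to
issue the aio system calls itself. A minimal C sketch, purely as an
illustration (the file name is arbitrary):

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	static char buf[4096];
	struct aiocb cb;
	int fd = open("/etc/motd", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	memset(&cb, 0, sizeof(cb));
	cb.aio_fildes = fd;
	cb.aio_buf = buf;
	cb.aio_nbytes = sizeof(buf);
	cb.aio_offset = 0;
	if (aio_read(&cb) != 0) {		/* submit the request */
		perror("aio_read");
		return 1;
	}
	while (aio_error(&cb) == EINPROGRESS)	/* poll until it completes */
		usleep(1000);
	printf("read %zd bytes\n", aio_return(&cb));	/* collect the result */
	close(fd);
	return 0;
}

A compiler or linker that only ever calls plain read(2)/write(2) never
touches the vfs.aio.* limits at all.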
>
>
> vfs.read_max may well change the disk access sequences:
>
> # sysctl -d vfs.read_max
> vfs.read_max: Cluster read-ahead max block count
>
> That might well help some spinning rust or other types of
> I/O.

There don't seem to be any indications of disk speed being
a problem, despite using "spinning rust" 8-)

>
>
> MAKE_JOBS_UNSAFE=yes is, for example, put in the makefiles of
> ports that have problems with parallel build activity. It
> basically disables parallel activity in the build context
> involved. I've no clue if you use the likes of, say,
>
> /usr/local/etc/poudriere.d/make.conf
>
> with conditional logic inside, such as use of notation
> like:
>
> .if ${.CURDIR:M*/www/chromium}
> STUFF HERE
> .endif
>
> but you could.

That wasn't needed when the Pi4 last compiled www/chromium.
A Pi3 did benefit from tuning of that sort.

It sounds like the sysctl settings were unlikely to be a
source of the trouble seen, even if they weren't actively helpful.

For the moment the machine is updating world and kernel.
That should finish by tomorrow, at which point I'll try
to add something like

.if ${.CURDIR:M*/www/chromium}
MAKE_JOBS_UNSAFE=yes
.endif

to /usr/local/etc/poudriere.d/make.conf
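
(As a sanity check before committing to another 95-hour build, bmake
can print the variable, e.g.

make -C /usr/ports/www/chromium -V MAKE_JOBS_UNSAFE

though note that poudriere.d/make.conf only applies inside poudriere's
build jails, not to the host's make.)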


Thanks very much for writing.

bob prohaska



