Date:      Sun, 14 Aug 2022 18:15:28 +0100
From:      Nuno Teixeira <eduardo@freebsd.org>
To:        Mark Millard <marklmi@yahoo.com>
Cc:        FreeBSD Mailing List <freebsd-ports@freebsd.org>
Subject:   Re: Resolved: devel/llvm13 build: "ninja: build stopped: subcommand failed"
Message-ID:  <CAFDf7UJmBNvfVo3SAenPUk1WkFgvpkqoM6=Riv6pwaovuNnAWg@mail.gmail.com>
In-Reply-To: <7CDC63F3-8B68-420E-8012-B1692667E293@yahoo.com>
References:  <1D4C14BD-8955-4B86-9C99-3E58D7603122.ref@yahoo.com> <1D4C14BD-8955-4B86-9C99-3E58D7603122@yahoo.com> <CAFDf7UK-pAFXCrZZA9veASaa-wf9HKMdX52fxmcmDgRFiNOF7A@mail.gmail.com> <7CDC63F3-8B68-420E-8012-B1692667E293@yahoo.com>


I use ZFS.

I will follow your recommendations, use a swap of 64GB, and then test it
again.

In the meantime I will take a look at the FreeBSD docs to see how to
increase swap, by adding a new swap file or resizing the current one if
possible.
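(For reference, not from the thread: the two usual ways to add swap on
FreeBSD, as a rough sketch. File paths, sizes, and the pool name "zroot"
are illustrative assumptions; see the FreeBSD Handbook before using either.)

```shell
# Option A: file-backed swap via an md(4) device
dd if=/dev/zero of=/usr/swap0 bs=1m count=65536   # 64 GB swap file
chmod 0600 /usr/swap0
# /etc/fstab entry so it is enabled at boot:
#   md99  none  swap  sw,file=/usr/swap0,late  0  0
swapon -a

# Option B: on ZFS, a dedicated zvol marked as swap
zfs create -V 64G -o org.freebsd:swap=on -o checksum=off \
    -o compression=off -o sync=always zroot/swap
swapon /dev/zvol/zroot/swap
```

Note that swap on a ZFS zvol has known caveats under heavy memory
pressure, so the file-backed approach is often preferred.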

Mark Millard <marklmi@yahoo.com> wrote on Sunday, 14/08/2022 at 17:35:

> On 2022-Aug-14, at 07:50, Nuno Teixeira <eduardo@freebsd.org> wrote:
>
> Hello Mark,
>
> > I use poudriere with USE_TMPFS=no (ofc because of low mem)
> > The problem "ninja: build stopped: subcommand failed"
>
> That is never the original error, just ninja reporting after
> it observed an error that occurred, generally in another
> process that is involved. A wide variety of errors will
> end up with a "ninja: build stopped: subcommand failed"
> notice as well.
>
> The original error should be earlier in the log or on the
> console ( or in /var/log/messages ). The "was killed: failed
> to reclaim memory" is an example.
>
> With 16 GiBytes of RAM you could have up to something like
> 60 GiByte of swap without FreeBSD complaining about being
> potentially mistuned. (It would complain before 64 GiBytes
> of SWAP.) 16+60 would be 76 GiBytes for RAM+SWAP.
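(As a quick check of the current configuration, base-system commands
such as these can be used; the exact output values are machine-specific:)

```shell
# Show configured swap devices and current usage, human-readable
swapinfo -h
# Physical RAM and the kernel's swap accounting limit
sysctl hw.physmem vm.swap_maxpages
```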
>
> I forgot to ask about UFS vs. ZFS being in use: which is in
> use? (ZFS uses more RAM.)
>
> > have some time now. It is caused by a memory-use peak during the build that
> affects people with less than 32/64GB mem. To work around it, the port must be
> built using one builder with one core, which takes about 7 hours on my
> machine, or with 6c+6t on 12.3 i386, which takes about 45min (123i386 is the
> only jail where I can use all cores).
>
> Last I tried I built all the various devel/llvm* on an 8 GiByte
> RPi4B, 4 builders active and ALLOW_MAKE_JOBS=yes in use.
> 4 FreeBSD cpus. So the load average would have been around 16+
> much of the time during devel/llvm13's builder activity.
> USE_TMPFS=data in use.
>
> Similarly for a 16 GiByte machine, but it is also an aarch64
> context, also 4 FreeBSD cpus.
>
> But I use in /boot/loader.conf:
>
> #
> # Delay when persistent low free RAM leads to
> # Out Of Memory killing of processes:
> vm.pageout_oom_seq=120
>
> This has historically been important for avoiding the likes of
> "was killed: failed to reclaim memory" and related notices on
> various armv7 and aarch64 small board computers used to run
> buildworld, buildkernel, and port builds using all the cores.
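(For what it's worth, vm.pageout_oom_seq is an ordinary read-write sysctl
as well as a loader tunable, so it can also be raised at runtime before a
big build; a sketch, not from the thread:)

```shell
# Raise the OOM-kill delay for the running system
sysctl vm.pageout_oom_seq=120
# Persist across reboots via /etc/sysctl.conf (or /boot/loader.conf)
echo 'vm.pageout_oom_seq=120' >> /etc/sysctl.conf
```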
>
> The only amd64 system that I've access to has 32 FreeBSD cpus
> and 128 GiBytes of RAM. Not a good basis for a comparison test
> with your context. I've no i386 access at all.
>
> > llvm 12 build without problems
>
> Hmm. I'll try building devel/llvm13 on aarch64 with periodic
> sampling of the memory use to see maximum observed figures
> for SWAP and for various categories of RAM, as well as the
> largest observed load averages.
>
> ZFS is in use in this context. I could try UFS as well.
>
> Swap: 30720Mi Total on the 8GiByte RPi4B.
> So about 38 GiBytes RAM+SWAP available.
> We should see how much SWAP is used.
>
> Before starting poudriere, shortly after a reboot:
>
> 19296Ki MaxObs(Act+Lndry+SwapUsed)
> (No SWAP in use at the time.)
>
> # poudriere bulk -jmain-CA72-bulk_a -w devel/llvm13
>
> For the from-scratch build it reports:
>
> [00:00:34] Building 91 packages using up to 4 builders
>
> The ports tree is about a month back:
>
> # ~/fbsd-based-on-what-commit.sh -C /usr/ports/
> branch: main
> merge-base: 872199326a916efbb4bf13c97bc1af910ba1482e
> merge-base: CommitDate: 2022-07-14 01:26:04 +0000
> 872199326a91 (HEAD -> main, freebsd/main, freebsd/HEAD) devel/ruby-build:
> Update to 20220713
> n589512 (--first-parent --count for merge-base)
>
> But, if I gather right, the problem you see goes back
> before that.
>
> I cannot tell how 4 FreeBSD cpus compare to the
> count that the Lenovo Legion 5 gets.
>
> I'll report on its maximum observed figures once the
> build stops. It will be a while before the RPi4B
> gets that far.
>
> The ports built before devel/llvm13's builder starts
> will lead to load averages over 4, from up to 4
> builders, each potentially using up to around 4
> processes. I'll start a separate tracking pass
> once devel/llvm13's builder has started, if I happen
> to be watching at the right time.
>
> > Cheers
> >
> > Mark Millard <marklmi@yahoo.com> wrote on Sunday, 14/08/2022 at 03:54:
> > Nuno Teixeira <eduardo_at_freebsd.org> wrote on
> > Date: Sat, 13 Aug 2022 16:52:09 UTC :
> >
> > > . . .
> > > I've tested it but it still fails:
> > > ---
> > > pid 64502 (c++), jid 7, uid 65534, was killed: failed to reclaim memory
> > > swap_pager: out of swap space
> > > ---
> > > on a Lenovo Legion 5, 16GB RAM and 4GB swap.
> > > . . .
> >
> > This leaves various points unclear:
> >
> > poudriere style build? Some other style?
> >
> > (I'll state questions in a form generally for a poudriere style
> > context. Some could be converted to analogous points for other
> > build-styles.)
> >
> > How many poudriere builders allowed (-JN) ?
> >
> > /usr/local/etc/poudriere.conf :
> > ALLOW_MAKE_JOBS=yes in use?
> > ALLOW_MAKE_JOBS_PACKAGES=??? in use?
> > USE_TMPFS=??? With what value? Anything other than "data" or "no"?
> >
> > /usr/local/etc/poudriere.d/make.conf (or the like):
> > MAKE_JOBS_NUMBER=??? in use? With what value?
> >
> > Is tmpfs in use such that it will use RAM+SWAP when the
> > used tmpfs space is large?
> >
> > How much free space is available for /tmp ?
> >
> > Are you using something like ( in, say, /boot/loader/conf ):
>
> That should have been: /boot/loader.conf
>
> Sorry.
>
> > #
> > # Delay when persistent low free RAM leads to
> > # Out Of Memory killing of processes:
> > vm.pageout_oom_seq=120
> >
> >
> > How many FreeBSD cpus does a Lenovo Legion 5 present
> > in the configuration used?
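(A hypothetical low-memory answer to the questions above might look like
the fragments below. The values are illustrative only, not a
recommendation from the thread:)

```shell
# /usr/local/etc/poudriere.conf (fragment)
USE_TMPFS=no        # keep builds off RAM-backed tmpfs on low-memory hosts
# ALLOW_MAKE_JOBS left unset: each builder runs make with a single job

# /usr/local/etc/poudriere.d/make.conf (fragment)
MAKE_JOBS_NUMBER=2  # only takes effect where parallel make jobs are allowed
```

Combined with a single builder (poudriere bulk -J1 ...), this trades
build time for a much lower peak memory footprint.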
> >
>
>
> ===
> Mark Millard
> marklmi at yahoo.com
>
>

-- 
Nuno Teixeira
FreeBSD Committer (ports)
