Date:      Sun, 1 May 2022 14:53:32 -0700
From:      Mark Millard <marklmi@yahoo.com>
To:        pmh@hausen.com, freebsd-current <freebsd-current@freebsd.org>
Subject:   Re: Cross-compile worked, cross-install not so much ...
Message-ID:  <F5CF791B-650B-4486-BA60-37DCCDDE865E@yahoo.com>
References:  <F5CF791B-650B-4486-BA60-37DCCDDE865E.ref@yahoo.com>

Patrick M. Hausen <pmh_at_hausen.com> wrote on Sun, 1 May 2022 17:29:27 +0200:

> > Am 26.04.2022 um 17:47 schrieb bob prohaska <fbsd_at_www.zefox.net>:
> > If the result is unsatisfactory, self-hosting isn't impossible. I've been
> > doing it for a few years now, albeit with much help from the list. On a
> > Pi3 running aarch64 memory and swap are a constraint. I'd suggest 4 GB
> > of swap and -j2 or -j3, perhaps increasing to -j4 as you see how things
> > go. If you can split the swap across devices it helps some. Useful 
> > /boot/loader.conf tweaks include
> > 
> > vm.pageout_oom_seq="4096"
> > vm.pfault_oom_attempts="120"
> > vm.pfault_oom_wait="20" 
> > 
> > Mark Millard made me aware of these parameters over the list.
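
A side note: on recent enough FreeBSD these three are also available as
run-time sysctls, not just loader tunables, so the values in effect can
be checked without rebooting. A rough sketch (the fallback line is only
so this does not error out on non-FreeBSD systems):

```shell
# Show the values currently in effect for the three tunables above.
for t in vm.pageout_oom_seq vm.pfault_oom_attempts vm.pfault_oom_wait; do
    sysctl -n "$t" 2>/dev/null || echo "$t: not available on this system"
done
```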

And going back farther: Mark Johnston made us both
aware of vm.pageout_oom_seq and its use back when
you were having problems with the system killing
processes. He had provided some investigative patches
that we used as well. That is part of how Mark J.
determined that vm.pageout_oom_seq use was
appropriate. Konstantin Belousov also corrected my
mistaken mental model relative to FreeBSD
swapping/paging back in that time frame, and that
feeds into what vm.pageout_oom_seq controls.

In recent enough 13.1-???, stable/13, and main:

Console messages with:

was killed: failed to reclaim memory

are tied to what vm.pageout_oom_seq controls: the
number of tries to reclaim the targeted amount of
free RAM before initiating kills to do so.

Console messages with:

was killed: a thread waited too long to allocate a page

are tied to the combination of vm.pfault_oom_attempts
and vm.pfault_oom_wait, which together result in
an overall time frame (multiply the two) before such
kills start.
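
With the values suggested earlier in the thread, that multiplication
works out to 120 attempts of 20 seconds each, i.e. such kills would
begin only after roughly 40 minutes. A throwaway sketch of the
arithmetic (the figures are the suggested settings, not defaults):

```shell
# Values from the loader.conf suggestion quoted earlier in the thread:
attempts=120   # vm.pfault_oom_attempts
wait=20        # vm.pfault_oom_wait, in seconds
total=$((attempts * wait))
# Prints: kills start after about 2400 seconds (40 minutes)
echo "kills start after about ${total} seconds ($((total / 60)) minutes)"
```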

There can be messages with:

was killed: out of swap space

which is somewhat of a misnomer: the out-of-space is
actually in one or both of a couple of related kernel
data structures for managing the swap space, not the
swap partition content itself. As near as I can tell,
this type of failure is rare.

But I'll note that in FreeBSD versions before the
messages were added for "failed to reclaim memory"
and "waited too long to allocate a page", the
messaging always said words about "out of swap space"
for all 3 types of contexts: rather misleading.

> without any additional tuning but with an SSD connected via USB
> and 4GB swap on that I was able to compile with -j4 and a mostly CPU
> bound system.
> 
> --------------------------------------------------------------
> >>> World build completed on Thu Apr 28 10:30:53 CEST 2022
> >>> World built in 155832 seconds, ncpu: 4, make -j4
> --------------------------------------------------------------
> --------------------------------------------------------------
> >>> Kernel build for GENERIC completed on Thu Apr 28 13:11:37 CEST 2022
> >>> Kernel(s)  GENERIC built in 9643 seconds, ncpu: 4, make -j4
> --------------------------------------------------------------

So: a little under 46 hours, if I calculated right.
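
For anyone checking that figure, summing the quoted world and kernel
times and converting to hours:

```shell
# Build times taken from the quoted log lines above:
world=155832    # seconds for buildworld
kernel=9643     # seconds for the GENERIC kernel build
total=$((world + kernel))
# 165475 s / 3600 s per hour: a little under 46 hours.
awk -v s="$total" 'BEGIN { printf "%d seconds ~= %.2f hours\n", s, s/3600 }'
```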

I'll note that in some past experiments on some types
of RPi*'s, using -j3 actually took less overall time
than -j4 when deliberately repeating from-scratch
builds. The differences were not all that large as I
remember. But, if -j3 and -j4 end up with even similar
time frames, then -j3 has the additional advantage in
limited-RAM contexts of not being as likely to have
resource problems. So -j3 could be appropriate.

I do not know if -j3 would work out better for you or not.
I'm just noting that it may be worth experimenting with.

> Thanks everyone for your valuable hints. Guess I will subscribe to
> -arm, since there are some more rough edges compared to "just put a
> Debian or Ubuntu image on it".
> 
> And then I wonder what workload I can put on a seven-node FreeBSD
> cluster, since it won't be k8s, obviously. Let's start with Ceph, I guess.


===
Mark Millard
marklmi at yahoo.com



