Date:      Thu, 1 Nov 2012 07:41:52 +1100
From:      Peter Jeremy <peter@rulingia.com>
To:        Alfred Perlstein <bright@mu.org>
Cc:        hackers@freebsd.org
Subject:   Re: make -jN buildworld on < 512MB ram
Message-ID:  <20121031204152.GK3309@server.rulingia.com>
In-Reply-To: <509182DA.8070303@mu.org>
References:  <509182DA.8070303@mu.org>

On 2012-Oct-31 12:58:18 -0700, Alfred Perlstein <bright@mu.org> wrote:
>It seems like the new compiler likes to get up to ~200+MB resident when
>building some basic things in our tree.

The killer I found was the ctfmerge(1) pass on the kernel, which
exceeds ~400MB on i386.  Under low RAM, that fails _without_ reporting
any error back to make(1), resulting in a corrupt new kernel (it
booted, but had virtually no devices, so it couldn't find root).

>Doesn't our make(1) have some stuff to mitigate this?  I would expect it
>to be a bit smarter about detecting the number of swaps/pages/faults of
>its children and taking into account the machine's total ram before
>forking off new processes.

The difficulty I see is that the make process can't tell anything
about the memory requirements of the pipeline it is about to spawn.
As a rule of thumb, C++ needs more memory than C but that depends
on what is being compiled - I have a machine-generated C program that
makes gcc bloat to ~12GB.

>Any ideas?  I mean a really simple algorithm could be devised that would
>be better than what we appear to have (which is nothing).

If you can afford to waste CPU, one approach would be for make(1) to
run child processes under setrlimit(2) and, if a child dies, retry
that child by itself - but that will generate unnecessary retries.

Another, more involved, approach would be for the scheduler to manage
groups of processes: if a group is causing memory pressure as a whole,
the scheduler simply stops scheduling some of its members until the
pressure eases (effectively swapping them out).  (Yes, that's vague,
with lots of hand-waving, and might not be realisable.)

-- 
Peter Jeremy
