Date:      Wed, 31 Oct 2012 14:21:51 -0700
From:      Alfred Perlstein <bright@mu.org>
To:        Peter Jeremy <peter@rulingia.com>
Cc:        hackers@freebsd.org
Subject:   Re: make -jN buildworld on < 512MB ram
Message-ID:  <5091966F.6070706@mu.org>
In-Reply-To: <20121031204152.GK3309@server.rulingia.com>
References:  <509182DA.8070303@mu.org> <20121031204152.GK3309@server.rulingia.com>



On 10/31/12 1:41 PM, Peter Jeremy wrote:
> On 2012-Oct-31 12:58:18 -0700, Alfred Perlstein <bright@mu.org> wrote:
>> It seems like the new compiler likes to get up to ~200+MB resident when
>> building some basic things in our tree.
> The killer I found was the ctfmerge(1) on the kernel - which exceeds
> ~400MB on i386.  Under low RAM, that fails _without_ reporting any
> errors back to make(1), resulting in a corrupt new kernel (it booted
> but had virtually no devices so it couldn't find root).
Trolled by FreeBSD. :)
>
>> Doesn't our make(1) have some stuff to mitigate this?  I would expect it
>> to be a bit smarter about detecting the number of swaps/pages/faults of
>> its children and taking into account the machine's total ram before
>> forking off new processes.
> The difficulty I see is that the make process can't tell anything
> about the memory requirements of the pipeline it is about to spawn.
> As a rule of thumb, C++ needs more memory than C but that depends
> on what is being compiled - I have a machine-generated C program that
> makes gcc bloat to ~12GB.
Ah, but make(1) can delay spawning any new processes when it knows its 
children are paging.

This is sort of like "well you can't predict when an elevator will 
plunge to its doom."

...but you can stop loading hapless people onto it when it starts 
creaking... (paging/swapping).




>
>> Any ideas?  I mean a really simple algorithm could be devised that would
>> be better than what we appear to have (which is nothing).
> If you can afford to waste CPU, one approach would be for make(1) to
> setrlimit(2) child processes and if the child dies, it retries that
> child by itself - but that will generate unnecessary retries.
This doesn't really help.

>
> Another, more involved, approach would be for the scheduler to manage
> groups of processes - if a group of processes is causing memory
> pressure as a whole then the scheduler just stops scheduling some of
> them until the pressure reduces (effectively swap them out).  (Yes,
> that's vague and lots of hand-waving that might not be realisable).
>
I think that could be done, this is actually a very interesting idea.

Another idea is for make(1) to kill -STOP a child when it detects heavy 
child paging, and resume it once other independent children have finished.  
That is basically what I do manually when my build explodes, until it gets 
past some C++ bits.

*ugh*

-Alfred


