Date: Wed, 31 Oct 2012 13:24:19 -0700
From: Garrett Cooper <yanegomi@gmail.com>
To: Alfred Perlstein <bright@mu.org>
Cc: hackers@freebsd.org
Subject: Re: make -jN buildworld on < 512MB ram
Message-ID: <CAGH67wS=O8DvrKLGD0MwxDtjHOQOBCXaDegLkzS6GOHT1GDzow@mail.gmail.com>
In-Reply-To: <509182DA.8070303@mu.org>
References: <509182DA.8070303@mu.org>
On Wed, Oct 31, 2012 at 12:58 PM, Alfred Perlstein <bright@mu.org> wrote:
> It seems like the new compiler likes to get up to ~200+MB resident when
> building some basic things in our tree.
>
> Unfortunately this causes smaller machines (VMs) to take days because of
> swap thrashing.
>
> Doesn't our make(1) have some stuff to mitigate this? I would expect it to
> be a bit smarter about detecting the number of swaps/pages/faults of its
> children and taking into account the machine's total ram before forking off
> new processes. I know gmake has some algorithms, although last I checked
> they were very naive and didn't work well.
>
> Any ideas? I mean a really simple algorithm could be devised that would be
> better than what we appear to have (which is nothing).
>
> Even if an algorithm can't be come up with, why not something just to
> throttle the max number of c++/g++ processes thrown out. Maybe I'm missing
> a trick I can pull off with some make.conf knobs?
>
> Idk, summer of code idea? Anyone mentoring someone they want to have a look
> at this?

FreeBSD make/bmake doesn't, but gmake sure does with --load [1]! I would ask
sjg@ to be absolutely sure, but I don't see any getrlimit/setrlimit calls
around the code, apart from RLIMIT_NOFILE. For now you have to tune your
number of jobs appropriately.

HTH,
-Garrett

PS I feel your pain because I have a number of FreeBSD VMs with <=2GB RAM.

1. http://www.gnu.org/software/make/manual/make.html#index-g_t_0040code_007b_002d_002dload_002daverage_007d-746
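[A minimal sketch of "tune your number of jobs appropriately": pick a -j
value from total RAM divided by an assumed peak resident size per compiler
job. The 512 MB-per-job figure and the 2 GB fallback are assumptions for
illustration, not measurements; hw.physmem is the standard FreeBSD sysctl
for physical memory in bytes.]

```shell
#!/bin/sh
# Derive a conservative make -j value from physical RAM, assuming each
# compiler process can peak at roughly 512 MB resident (per this thread).

MEM_PER_JOB=$((512 * 1024 * 1024))   # assumed peak RSS per compiler job

# hw.physmem reports total physical memory in bytes on FreeBSD; fall back
# to an assumed 2 GB when the sysctl is unavailable.
physmem=$(sysctl -n hw.physmem 2>/dev/null || echo $((2048 * 1024 * 1024)))

jobs=$((physmem / MEM_PER_JOB))
[ "$jobs" -lt 1 ] && jobs=1          # always allow at least one job

echo "suggested: make -j$jobs buildworld"
```

[With GNU make you can additionally pass a load ceiling, e.g.
`gmake -j"$jobs" --load-average="$jobs"`, so no new job starts while the
load average is above the limit; bmake has no equivalent knob.]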