Date: Mon, 27 Jan 2020 10:20:35 -0800
From: Cy Schubert <Cy.Schubert@cschubert.com>
To: "Rodney W. Grimes" <freebsd-rwg@gndrsh.dnsmgr.net>
Cc: sgk@troutmask.apl.washington.edu, freebsd-current@freebsd.org, Mark Millard <marklmi@yahoo.com>, yasu@utahime.org
Subject: Re: After update to r357104 build of poudriere jail fails with 'out of swap space'
Message-ID: <A0E565B0-52A1-41CE-915F-35B8E0F9394F@cschubert.com>
In-Reply-To: <202001271309.00RD96nr005876@slippy.cwsent.com>
References: <202001261745.00QHjkuW044006@gndrsh.dnsmgr.net> <202001271309.00RD96nr005876@slippy.cwsent.com>
On January 27, 2020 5:09:06 AM PST, Cy Schubert <Cy.Schubert@cschubert.com> wrote:
>In message <202001261745.00QHjkuW044006@gndrsh.dnsmgr.net>, "Rodney W.
>Grimes" writes:
>> > In message <20200125233116.GA49916@troutmask.apl.washington.edu>,
>> > Steve Kargl writes:
>> > > On Sat, Jan 25, 2020 at 02:09:29PM -0800, Cy Schubert wrote:
>> > > > On January 25, 2020 1:52:03 PM PST, Steve Kargl
>> > > > <sgk@troutmask.apl.washington.edu> wrote:
>> > > > >On Sat, Jan 25, 2020 at 01:41:16PM -0800, Cy Schubert wrote:
>> > > > >>
>> > > > >> It's not just poudriere. Standard port builds of chromium,
>> > > > >> rust and thunderbird also fail on my machines with less than
>> > > > >> 8 GB.
>> > > > >>
>> > > > >
>> > > > >Interesting. I routinely build chromium, rust, firefox, llvm
>> > > > >and a few other resource-hungry ports on an i386-freebsd laptop
>> > > > >with 3.4 GB of available memory. This is done with chrome
>> > > > >running with a few tabs swallowing 1-1.5 GB of memory. No
>> > > > >issues.
>> > > >
>> > > > The number of threads makes a difference too. How many
>> > > > cores/threads does your laptop have?
>> > >
>> > > 2 cores.
>> >
>> > This is why.
>> >
>> > >
>> > > > Reducing the number of concurrent threads allowed my builds to
>> > > > complete on the 5 GB machine. My build machines have 4 cores,
>> > > > 1 thread per core. Reducing concurrent threads circumvented the
>> > > > issue.
>> > >
>> > > I use portmaster, and AFAICT, it uses 'make -j 2' for the build.
>> > > The laptop isn't doing much besides an update and browsing. It
>> > > does take a long time, especially if building llvm is required.
>> >
>> > I use portmaster as well (for quick incidental builds). It uses
>> > MAKE_JOBS_NUMBER=4 (which is equivalent to make -j 4). I suppose
>> > machines without enough memory to back their cores on certain builds
>> > are more likely to hit this problem.
>> >
>> > MAKE_JOBS_NUMBER_LIMIT to limit a 4 core machine with less than 2 GB
>> > per core might be an option. Looking at it this way, rather than the
>> > extra 3 GB, it is the 60% more memory in the other machine that makes
>> > the big difference. A rule of thumb would probably be: have ~2 GB of
>> > RAM for every core or thread when doing large parallel builds.
>>
>> Perhaps we need to redo some boot-time calculations. For one, the ZFS
>> ARC, IMHO, is just silly at a fixed percent of total memory. A high
>> percentage at that.
>>
>> One idea based on what you just said might be:
>>
>> percore_memory_reserve = 2G (your number; I personally would use 1G here)
>> arc_max = MAX(memory_size - (cores * percore_memory_reserve), 512 MB)
>>
>> I think that simple change would go a long way toward cutting down the
>> number of OOM reports we see. ALSO, IMHO, there should be a way for
>> subsystems to easily tell ZFS that they are memory pigs too and need
>> to share the space. I.e., bhyve is horrible if you do not tune the ZFS
>> ARC based on how much memory your VMs are using.
>>
>> Another formulation might be:
>>
>> percore_memory_reserve = alpha * memory_size / cores
>>
>> Alpha would most likely fall in the 0.25 to 0.5 range. I think this
>> one would scale better; I would need to run some numbers. It probably
>> needs to become non-linear above some core count.
>
>Setting a lower arc_max at boot is unlikely to help. Rust was building on
>the 8 GB and the 5 GB 4 core machines last night. It completed
>successfully on the 8 GB machine, while using 12 MB of swap. ARC was at
>1307 MB.
>
>On the 5 GB 4 core machine the rust build died of OOM. 328 KB of swap
>was used. ARC was reported at 941 MB. arc_min on this machine is
>489.2 MB.

MAKE_JOBS_NUMBER=3 worked for building rust on the 5 GB 4 core machine.
ARC is at 534 MB, with 12 MB of swap used.
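In case anyone else wants to try the same workaround, the knob goes in
/etc/make.conf; a minimal example (as I understand the ports framework,
MAKE_JOBS_NUMBER forces the job count outright, while
MAKE_JOBS_NUMBER_LIMIT only caps the per-CPU default):

    # /etc/make.conf: cap ports builds at 3 parallel jobs on this 4 core box
    MAKE_JOBS_NUMBER=3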
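And to put rough numbers on the heuristic Rodney sketched above, a
back-of-the-envelope version might look like the following (untested; the
shell variable names are made up, while hw.physmem, hw.ncpu and
vfs.zfs.arc_max are the real sysctls/tunables):

    #!/bin/sh
    # Sketch of: arc_max = MAX(memory_size - cores * percore_memory_reserve, 512 MB)
    percore_memory_reserve=$((2 * 1024 * 1024 * 1024))   # 2 GB per core/thread
    arc_floor=$((512 * 1024 * 1024))                      # never go below 512 MB
    memory_size=$(sysctl -n hw.physmem)
    cores=$(sysctl -n hw.ncpu)
    arc_max=$((memory_size - cores * percore_memory_reserve))
    [ "$arc_max" -lt "$arc_floor" ] && arc_max=$arc_floor
    printf 'vfs.zfs.arc_max="%s"\n' "$arc_max"            # candidate /boot/loader.conf line

With the 2 GB reserve both of my 4 core machines would land on the 512 MB
floor; with Rodney's preferred 1 GB reserve they would get 4 GB and 1 GB
respectively, and 1 GB is no lower than the 941 MB ARC was already at when
the build on the 5 GB machine died.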
-- 
Pardon the typos and autocorrect, small keyboard in use.

Cy Schubert <Cy.Schubert@cschubert.com>
FreeBSD UNIX: <cy@FreeBSD.org>   Web: https://www.FreeBSD.org

        The need of the many outweighs the greed of the few.

Sent from my Android device with K-9 Mail. Please excuse my brevity.

Want to link to this message? Use this URL:
<https://mail-archive.FreeBSD.org/cgi/mid.cgi?A0E565B0-52A1-41CE-915F-35B8E0F9394F>