Date: Sat, 4 Aug 2018 00:14:52 -0700
From: Mark Millard <marklmi@yahoo.com>
To: Jamie Landeg-Jones <jamie@catflap.org>, bob prohaska <fbsd@www.zefox.net>
Cc: markj@freebsd.org, freebsd-arm <freebsd-arm@freebsd.org>
Subject: Re: RPI3 swap experiments ["was killed: out of swap space" with: "v_free_count: 5439, v_inactive_count: 1"]
Message-ID: <8CC5DF53-F950-495C-9DC8-56FCA0087259@yahoo.com>
In-Reply-To: <201808040355.w743tPsF039729@donotpassgo.dyslexicfish.net>
References: <20180731153531.GA94742@www.zefox.net>
 <201807311602.w6VG2xcN072497@pdx.rh.CN85.dnsmgr.net>
 <20180731191016.GD94742@www.zefox.net>
 <23793AAA-A339-4DEC-981F-21C7CC4FE440@yahoo.com>
 <20180731231912.GF94742@www.zefox.net>
 <2222ABBD-E689-4C3B-A7D3-50AECCC5E7B2@yahoo.com>
 <20180801034511.GA96616@www.zefox.net>
 <201808010405.w7145RS6086730@donotpassgo.dyslexicfish.net>
 <6BFE7B77-A0E2-4FAF-9C68-81951D2F6627@yahoo.com>
 <20180802002841.GB99523@www.zefox.net>
 <20180802015135.GC99523@www.zefox.net>
 <EC74A5A6-0DF4-48EB-88DA-543FD70FEA07@yahoo.com>
 <201808030034.w730YURL034270@donotpassgo.dyslexicfish.net>
 <F788BDD8-80DC-441A-AA3E-2745F50C3B56@yahoo.com>
 <201808040355.w743tPsF039729@donotpassgo.dyslexicfish.net>
On 2018-Aug-3, at 8:55 PM, Jamie Landeg-Jones <jamie at catflap.org> wrote:

> Mark Millard <marklmi at yahoo.com> wrote:
>
>> If Inact+Laundry+Buf(?)+Free was not enough to provide sufficient
>> additional RAM, I would have guessed that some Active Real Memory
>> should then have been paged/swapped out and so RAM would be made
>> available. (This requires the system to have left itself sufficient
>> room in RAM for that guessed activity.)
>>
>> But I'm no expert at the intent or actual operation.
>>
>> Bob P.'s reports (for having sufficient swap space)
>> also indicate the likes of:
>>
>> v_free_count: 5439, v_inactive_count: 1
>>
>> So all the examples have: "v_inactive_count: 1".
>> (So: vmd->vmd_pagequeues[PQ_INACTIVE].pq_cnt==1 )
>
> Thanks for the feedback. I'll do a few more runs and other stress tests
> to see if that result is consistent. I'm open to any other idea too!

The book "The Design and Implementation of the FreeBSD Operating System"
(2nd edition, 2014) states (page labeled 296):

QUOTE:
The FreeBSD swap-out daemon will not select a runnable process to swap
out. So, if the set of runnable processes does not fit in memory, the
machine will effectively deadlock. Current machines have enough memory
that this condition usually does not arise. If it does, FreeBSD avoids
deadlock by killing the largest process. If the condition begins to arise
in normal operation, the 4.4BSD algorithm will need to be restored.
END QUOTE.

As near as I can tell, for the likes of rpi3's and rpi2's, the condition
is occurring during buildworld "normal operation" that tries to use the
available cores to advantage. (Your context does not have the I/O
problems that Bob P.'s context has had in at least some of your OOM
process kill examples, if I understand right.)

(4.4BSD used to swap out the runnable process that had been resident the
longest, after which the processes took turns being swapped out. I'll
not quote the exact text about such.)
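[As a side note, not from the original exchange: the "v_free_count: 5439,
v_inactive_count: 1" figures the kernel reports are page counts, and a
small sketch makes clear how little headroom they represent. This assumes
the usual 4 KiB page size on armv7/aarch64; verify with
`sysctl hw.pagesize`.]

```python
# Sketch: parse the counter line the kernel logs alongside an OOM kill
# and convert page counts to MiB. Assumes 4 KiB pages (hw.pagesize).
PAGE_SIZE = 4096  # bytes; an assumption, check `sysctl hw.pagesize`

def parse_oom_counts(line):
    """Return a dict mapping counter names to page counts."""
    counts = {}
    for field in line.split(","):
        name, _, value = field.partition(":")
        counts[name.strip()] = int(value)
    return counts

def pages_to_mib(pages, page_size=PAGE_SIZE):
    """Convert a page count to MiB."""
    return pages * page_size / (1024 * 1024)

line = "v_free_count: 5439, v_inactive_count: 1"
counts = parse_oom_counts(line)
# 5439 free pages is only about 21 MiB on a 1 GiB RPi3, and one
# inactive page means there is essentially nothing left to reclaim.
print(counts, round(pages_to_mib(counts["v_free_count"]), 1))
```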
So I guess the question becomes: is there a reasonable way to enable the
4.4BSD style of "swapping" for "small" memory machines, in order to avoid
having to figure out how to prevent OOM process kills while also not just
wasting cores by using -j1 for buildworld? In other words: enable
swapping out active RAM when it eats nearly all the non-wired RAM.

But it might be discovered that the performance is no better than using
fewer cores during buildworld. (Experiments needed, and the tradeoffs are
possibly environment specific.)

Avoiding having to figure out the maximum -j? that avoids OOM process
kills, without just sticking to -j1, seems an advantage for some rpi3 and
rpi2 folks.

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)
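[Archive editor's note, not part of the original mail: for anyone wanting
to experiment along the lines discussed above, FreeBSD does expose stock
sysctl knobs related to swapping out idle processes and to how hard the
page daemon tries before OOM-killing. The names below are the stock
sysctls of the FreeBSD 11/12 era; the values shown are illustrative
assumptions, and whether they help in this scenario is exactly the open
question in the thread. Check availability and defaults on your release
with `sysctl -d <name>`.]

```shell
# Hedged sketch: candidate knobs for "small" memory machines.
# Allow swap-out of whole idle processes (off by default):
sysctl vm.swap_idle_enabled=1
# Idle-time thresholds (seconds) governing when a process becomes a
# swap-out candidate; values here are illustrative, not recommendations:
sysctl vm.swap_idle_threshold1=2
sysctl vm.swap_idle_threshold2=10
# Make the page daemon retry longer before resorting to OOM kills
# (default is 12; 120 was suggested for RPi-class boards in related
# freebsd-arm threads):
sysctl vm.pageout_oom_seq=120
```

These only shift the tradeoff; they do not add memory, so a -j4
buildworld on a 1 GiB board may still need the -j value reduced.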