Date:        Sun, 12 Aug 2018 19:12:26 -0700
From:        bob prohaska <fbsd@www.zefox.net>
To:          Mark Millard <marklmi@yahoo.com>
Cc:          Mark Johnston <markj@FreeBSD.org>, John Kennedy <warlock@phouka.net>,
             freebsd-arm <freebsd-arm@freebsd.org>, bob prohaska <fbsd@www.zefox.net>
Subject:     Re: RPI3 swap experiments ["was killed: out of swap space" with: "v_free_count: 5439, v_inactive_count: 1"]
Message-ID:  <20180813021226.GA46750@www.zefox.net>
In-Reply-To: <B81E53A9-459E-4489-883B-24175B87D049@yahoo.com>
References:  <EC74A5A6-0DF4-48EB-88DA-543FD70FEA07@yahoo.com>
             <20180806155837.GA6277@raichu>
             <20180808153800.GF26133@www.zefox.net>
             <20180808204841.GA19379@raichu>
             <2DC1A479-92A0-48E6-9245-3FF5CFD89DEF@yahoo.com>
             <20180809033735.GJ30738@phouka1.phouka.net>
             <20180809175802.GA32974@www.zefox.net>
             <20180812173248.GA81324@phouka1.phouka.net>
             <20180812224021.GA46372@www.zefox.net>
             <B81E53A9-459E-4489-883B-24175B87D049@yahoo.com>
On Sun, Aug 12, 2018 at 04:23:31PM -0700, Mark Millard wrote:
> On 2018-Aug-12, at 3:40 PM, bob prohaska <fbsd at www.zefox.net> wrote:
> 
> > On Sun, Aug 12, 2018 at 10:32:48AM -0700, John Kennedy wrote:
> >> . . .
> > Setting vm.pageout_oom_seq to 120 made a decisive improvement, almost
> > allowing buildworld to finish. By the time I tried CAM_IOSCHED_DYNAMIC,
> > buildworld was getting only about half as far, so it seems the patches
> > were harmful to a degree. Changes were applied in the order
> 
> You could experiment with figures bigger than 120 for
> vm.pageout_oom_seq .

Could anybody hazard a guess as to how much? The leap from 12 to 120 rather
startled me; I thought a factor of two was a big adjustment. Maybe go to 240,
or is that insignificant?

> I'll note that the creation of this mechanism seems
> to be shown for -r290920 at:
> 
> https://lists.freebsd.org/pipermail/svn-src-head/2015-November/078968.html
> 
> In part it says:
> 
> . . . only raise OOM when pagedaemon is unable to produce a free
> page in several back-to-back passes. Track the failed passes per
> pagedaemon thread.
> 
> The number of passes to trigger OOM was selected empirically and
> tested both on small (32M-64M i386 VM) and large (32G amd64)
> configurations. If the specifics of the load require tuning, sysctl
> vm.pageout_oom_seq sets the number of back-to-back passes which must
> fail before OOM is raised. Each pass takes 1/2 of seconds. Less the
> value, more sensible the pagedaemon is to the page shortage.
> 
> The code shows:
> 
> int vmd_oom_seq
> 
> and it looks like fairly large values would be
> tolerated. You may be able to scale beyond
> the problem showing up in your context.

Would 1024 be enough to turn OOMA off completely? That's what I originally
wanted to try.

> > pageout
> > batchqueue
> > slow_swap
> > iosched
> 
> For my new Pine64+ 2GB experiments I've only applied
> the Mark J. reporting patches, not the #define one.
> Nor have I involved CAM_IOSCHED_DYNAMIC.
> 
> But with 2 GiBytes of RAM and the default 12 for
> vm.pageout_oom_seq I got:
> 
> v_free_count: 7773, v_inactive_count: 1
> Aug 12 09:30:13 pine64 kernel: pid 80573 (c++), uid 0, was killed: out of swap space
> 
> with no other reports from Mark Johnston's reporting
> patches.
> 
> It appears that long I/O latencies as seen by the
> subsystem are not necessary to ending up with OOM
> kills, even if they can contribute when they occur.

It has seemed to me in the past that OOMA kills aren't closely tied to busy
swap. They do seem closely related to busy storage (swap and disk).

> (7773 * 4 KiBytes = 31,838,208 Bytes, by the way.)

The RPI3 seems to start adding to swap use when free memory drops below
about 20 MB. Does that seem consistent with your observations?

> > My RPI3 is now updating to 337688 with no patches/config changes. I'll
> > start the sequence over and would be grateful if anybody could suggest
> > a better sequence.

It seems rather clear that turning up vm.pageout_oom_seq is the first thing
to try. The question is how much: 240 (double Mark J.'s number), or 1024
(still small for an int on a 64-bit machine)?

If in fact the reporting patches do increase the load on the machine, is the
slow-swap patch the next thing to try, or the iosched option? Maybe
something else altogether?

There's no immediate expectation of fixing things, just a hope of shedding a
little light.

Thanks for reading!

bob prohaska
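
P.S. In case it helps anyone following along: per the commit message quoted
above, the knob is an ordinary sysctl, so on a running system something like
sysctl vm.pageout_oom_seq=240 as root should change it, and an entry in
/etc/sysctl.conf should make it persistent. Below is a quick, untested sketch
of reading and setting the same value from a small C program through
sysctlbyname(3). It is only my own illustration, not anything taken from the
kernel sources, and the file/program name oomseq is invented for the example.

/*
 * oomseq.c -- read vm.pageout_oom_seq and optionally set a new value.
 * Build with: cc -o oomseq oomseq.c   (setting the value needs root)
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
	int oldval, newval;
	size_t oldlen = sizeof(oldval);

	/* Read the current number of failed back-to-back passes required. */
	if (sysctlbyname("vm.pageout_oom_seq", &oldval, &oldlen, NULL, 0) != 0)
		err(1, "sysctlbyname(vm.pageout_oom_seq) read");
	printf("vm.pageout_oom_seq = %d\n", oldval);

	/* Optionally set a new value, e.g. ./oomseq 240 */
	if (argc > 1) {
		newval = atoi(argv[1]);
		if (sysctlbyname("vm.pageout_oom_seq", NULL, NULL,
		    &newval, sizeof(newval)) != 0)
			err(1, "sysctlbyname(vm.pageout_oom_seq) write");
		printf("vm.pageout_oom_seq set to %d\n", newval);
	}
	return (0);
}

If I read the quoted commit text right, a larger value only makes the
pagedaemon sit through more failed back-to-back passes (about half a second
each) before raising OOM, so 240 should roughly double the grace period that
120 gives.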