From nobody Sat Jan 29 19:23:58 2022
Subject: Re: devel/llvm13 failed to reclaim memory on 8 GB Pi4 running -current [UFS success context for 4 cores, notes added]
From: Mark Millard <marklmi@yahoo.com>
Date: Sat, 29 Jan 2022 11:23:58 -0800
To: bob prohaska
Cc: Free BSD
List-Id: Porting FreeBSD to ARM processors
List-Archive: https://lists.freebsd.org/archives/freebsd-arm
References: <20220127164512.GA51200@www.zefox.net>
 <2C7E741F-4703-4E41-93FE-72E1F16B60E2@yahoo.com>
 <20220127214801.GA51710@www.zefox.net>
 <5E861D46-128A-4E09-A3CF-736195163B17@yahoo.com>
 <20220127233048.GA51951@www.zefox.net>
 <6528ED25-A3C6-4277-B951-1F58ADA2D803@yahoo.com>
 <10B4E2F0-6219-4674-875F-A7B01CA6671C@yahoo.com>
 <54CD0806-3902-4B9C-AA30-5ED003DE4D41@yahoo.com>
 <9771EB33-037E-403E-8A77-7E8E98DCF375@yahoo.com>
 <6D67BFDF-D786-4BB7-BF2D-CE4D5532D452@yahoo.com>

On 2022-Jan-29, at 03:59, Mark Millard wrote:

> On 2022-Jan-28, at 19:20, Mark Millard wrote:
> 
>> On 2022-Jan-28, at 15:05, Mark Millard wrote:
>> 
>>> On 2022-Jan-28, at 00:31, Mark Millard wrote:
>>> 
>>>>> . . .
>>>> 
>>>> UFS context:
>>>> 
>>>> . . .; load averages: . . . MaxObs: 5.47, 4.99, 4.82
>>>> . . . threads: . . ., 14 MaxObsRunning
>>>> . . .
>>>> Mem: . . ., 6457Mi MaxObsActive, 1263Mi MaxObsWired, 7830Mi MaxObs(Act+Wir+Lndry)
>>>> Swap: 8192Mi Total, 8192Mi Used, K Free, 100% Inuse, 8192Mi MaxObsUsed, 14758Mi MaxObs(Act+Lndry+SwapUsed), 16017Mi MaxObs(Act+Wir+Lndry+SwapUsed)
>>>> 
>>>> Console:
>>>> 
>>>> swap_pager: out of swap space
>>>> swp_pager_getswapspace(4): failed
>>>> swp_pager_getswapspace(1): failed
>>>> swp_pager_getswapspace(1): failed
>>>> swp_pager_getswapspace(2): failed
>>>> swp_pager_getswapspace(2): failed
>>>> swp_pager_getswapspace(4): failed
>>>> swp_pager_getswapspace(1): failed
>>>> swp_pager_getswapspace(9): failed
>>>> swp_pager_getswapspace(4): failed
>>>> swp_pager_getswapspace(7): failed
>>>> swp_pager_getswapspace(29): failed
>>>> swp_pager_getswapspace(9): failed
>>>> swp_pager_getswapspace(1): failed
>>>> swp_pager_getswapspace(2): failed
>>>> swp_pager_getswapspace(1): failed
>>>> swp_pager_getswapspace(4): failed
>>>> swp_pager_getswapspace(1): failed
>>>> swp_pager_getswapspace(10): failed
>>>> 
>>>> . . . Then some time with no messages . . .
>>>> 
>>>> vm_pageout_mightbe_oom: kill context: v_free_count: 7740, v_inactive_count: 1
>>>> Jan 27 23:01:07 CA72_UFS kernel: pid 57238 (c++), jid 3, uid 0, was killed: failed to reclaim memory
>>>> swp_pager_getswapspace(2): failed
>>>> 
>>>> Note: The "vm_pageout_mightbe_oom: kill context:" notice is one of
>>>> the few parts of an old reporting patch Mark J. had supplied (long
>>>> ago) that still fits in the modern code (or that I was able to
>>>> keep updated enough to fit, anyway). It is another of the personal
>>>> updates that I keep in my source trees, such as in /usr/main-src/ .
>>>> 
>>>> diff --git a/sys/vm/vm_pageout.c b/sys/vm/vm_pageout.c
>>>> index 36d5f3275800..f345e2d4a2d4 100644
>>>> --- a/sys/vm/vm_pageout.c
>>>> +++ b/sys/vm/vm_pageout.c
>>>> @@ -1828,6 +1828,8 @@ vm_pageout_mightbe_oom(struct vm_domain *vmd, int page_shortage,
>>>>  	 * start OOM. Initiate the selection and signaling of the
>>>>  	 * victim.
>>>>  	 */
>>>> +	printf("vm_pageout_mightbe_oom: kill context: v_free_count: %u, v_inactive_count: %u\n",
>>>> +	    vmd->vmd_free_count, vmd->vmd_pagequeues[PQ_INACTIVE].pq_cnt);
>>>>  	vm_pageout_oom(VM_OOM_MEM);
>>>> 
>>>>  	/*
>>>> 
>>>> Again, I'd used vm.pfault_oom_attempts inappropriately for running
>>>> out of swap (although with UFS it did do a kill fairly soon):
>>>> 
>>>> # Delay when persistent low free RAM leads to
>>>> # Out Of Memory killing of processes:
>>>> vm.pageout_oom_seq=120
>>>> #
>>>> # For plenty of swap/paging space (will not
>>>> # run out), avoid pageout delays leading to
>>>> # Out Of Memory killing of processes:
>>>> vm.pfault_oom_attempts=-1
>>>> #
>>>> # For possibly insufficient swap/paging space
>>>> # (might run out), increase the pageout delay
>>>> # that leads to Out Of Memory killing of
>>>> # processes (showing defaults at the time):
>>>> #vm.pfault_oom_attempts= 3
>>>> #vm.pfault_oom_wait= 10
>>>> # (The multiplication is the total but there
>>>> # are other potential tradeoffs in the factors
>>>> # multiplied, even for nearly the same total.)
>>>> 
>>>> I'll change:
>>>> 
>>>> vm.pfault_oom_attempts
>>>> vm.pfault_oom_wait
>>>> 
>>>> and reboot --and start the bulk somewhat before going to bed.
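
For anyone wanting to try the same tradeoff: these are ordinary
sysctl(8) knobs, settable at runtime as well as via /etc/sysctl.conf.
A minimal sketch (the values are just the ones discussed above, not
recommendations):

    # Show the current OOM-related settings:
    sysctl vm.pageout_oom_seq vm.pfault_oom_attempts vm.pfault_oom_wait

    # Tolerate a longer sustained free-RAM shortage before OOM kills:
    sysctl vm.pageout_oom_seq=120

    # Either never OOM-kill based on page-fault stalls (only sensible
    # when swap cannot run out):
    sysctl vm.pfault_oom_attempts=-1

    # Or keep that kill path, with a total delay of roughly
    # vm.pfault_oom_attempts * vm.pfault_oom_wait seconds (3*10 = 30):
    sysctl vm.pfault_oom_attempts=3
    sysctl vm.pfault_oom_wait=10
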
>>>> For reference:
>>>> 
>>>> [00:02:13] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
>>>> [07:37:05] [01] [07:34:52] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build
>>>> 
>>>> [ 65% 4728/7265] . . . flang/lib/Evaluate/fold-designator.cpp
>>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-integer.cpp
>>>> FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-integer.cpp.o
>>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-logical.cpp
>>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-complex.cpp
>>>> [ 65% 4729/7265] . . . flang/lib/Evaluate/fold-real.cpp
>>>> 
>>>> So flang/lib/Evaluate/fold-integer.cpp was the compile that got
>>>> killed.
>>>> 
>>>> Notably, the specific sources being compiled are different than in
>>>> the ZFS context report. But this might be because of my killing
>>>> ninja explicitly in the ZFS context, before killing the running
>>>> compilers.
>>>> 
>>>> Again, using the options to avoid building the Fortran compiler
>>>> probably avoids such memory use --if you do not need the Fortran
>>>> compiler.
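
As a concrete sketch of the option change being referred to (assuming
the port's Fortran-compiler option is named FLANG; check the actual
OPTIONS of devel/llvm13 first):

    # Interactively deselect the Fortran (flang) option:
    make -C /usr/ports/devel/llvm13 config

    # Or non-interactively, e.g. in /etc/make.conf or in a poudriere
    # make.conf, via the options framework's per-origin variable:
    devel_llvm13_UNSET+=	FLANG
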
>>> UFS based on instead using (not vm.pfault_oom_attempts=-1):
>>> 
>>> vm.pfault_oom_attempts= 3
>>> vm.pfault_oom_wait= 10
>>> 
>>> It reached swap-space-full:
>>> 
>>> . . .; load averages: . . . MaxObs: 5.42, 4.98, 4.80
>>> . . . threads: . . ., 11 MaxObsRunning
>>> . . .
>>> Mem: . . ., 6482Mi MaxObsActive, 1275Mi MaxObsWired, 7832Mi MaxObs(Act+Wir+Lndry)
>>> Swap: 8192Mi Total, 8192Mi Used, K Free, 100% Inuse, 4096B In, 81920B Out, 8192Mi MaxObsUsed, 14733Mi MaxObs(Act+Lndry+SwapUsed), 16007Mi MaxObs(Act+Wir+Lndry+SwapUsed)
>>> 
>>> swap_pager: out of swap space
>>> swp_pager_getswapspace(5): failed
>>> swp_pager_getswapspace(25): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(31): failed
>>> swp_pager_getswapspace(6): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(25): failed
>>> swp_pager_getswapspace(10): failed
>>> swp_pager_getswapspace(17): failed
>>> swp_pager_getswapspace(27): failed
>>> swp_pager_getswapspace(5): failed
>>> swp_pager_getswapspace(11): failed
>>> swp_pager_getswapspace(9): failed
>>> swp_pager_getswapspace(29): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(9): failed
>>> swp_pager_getswapspace(20): failed
>>> swp_pager_getswapspace(4): failed
>>> swp_pager_getswapspace(21): failed
>>> swp_pager_getswapspace(11): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(21): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(3): failed
>>> swp_pager_getswapspace(3): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(20): failed
>>> swp_pager_getswapspace(2): failed
>>> swp_pager_getswapspace(1): failed
>>> swp_pager_getswapspace(16): failed
>>> swp_pager_getswapspace(6): failed
>>> swap_pager: out of swap space
>>> swp_pager_getswapspace(4): failed
>>> swp_pager_getswapspace(9): failed
>>> swp_pager_getswapspace(17): failed
>>> swp_pager_getswapspace(30): failed
>>> swp_pager_getswapspace(1): failed
>>> 
>>> . . . Then some time with no messages . . .
>>> 
>>> vm_pageout_mightbe_oom: kill context: v_free_count: 7875, v_inactive_count: 1
>>> Jan 28 14:36:44 CA72_UFS kernel: pid 55178 (c++), jid 3, uid 0, was killed: failed to reclaim memory
>>> swp_pager_getswapspace(11): failed
>>> 
>>> So, not all that much different from how the
>>> vm.pfault_oom_attempts=-1 example looked.
>>> 
>>> [00:01:00] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
>>> [07:41:39] [01] [07:40:39] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build
>>> 
>>> Again it killed:
>>> 
>>> FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-integer.cpp.o
>>> 
>>> So, basically the same stopping area as for the
>>> vm.pfault_oom_attempts=-1 example.
>>> 
>>> I'll set things up for swap totaling 30 GiBytes, reboot,
>>> and start it again. This will hopefully let me see and
>>> report MaxObs??? figures for a successful build when there
>>> is RAM+SWAP: 38 GiBytes. So: more than 9 GiBytes per compiler
>>> instance (mean).
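
For reference, one generic way to reach such a swap total is an
md(4)-backed swap file, per the usual FreeBSD handbook approach; the
file path, size split, and unit number below are arbitrary, and the
author may well have used swap partitions instead:

    # Add about 22 GiBytes of file-backed swap on top of the existing
    # 8 GiBytes, for roughly 30 GiBytes total:
    truncate -s 22g /usr/swapfile
    chmod 0600 /usr/swapfile
    mdconfig -a -t vnode -f /usr/swapfile -u 99
    swapon /dev/md99

    # Confirm the new total (note: file-backed swap has more overhead
    # and caveats than a dedicated swap partition):
    swapinfo -h
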
>> The analogous ZFS test with:
>> 
>> vm.pfault_oom_attempts= 3
>> vm.pfault_oom_wait= 10
>> 
>> got:
>> 
>> . . .; load averages: . . . MaxObs: 5.90, 5.07, 4.80
>> . . . threads: . . ., 11 MaxObsRunning
>> . . .
>> Mem: . . ., 6006Mi MaxObsActive
>> . . .
>> Swap: 8192Mi Total, 8192Mi Used, 32768B Free, 99% Inuse, 28984Ki In, 4792Ki Out, 8192Mi MaxObsUsed, 14282Mi MaxObs(Act+Lndry+SwapUsed), 16009Mi MaxObs(Act+Wir+Lndry+SwapUsed)
>> 
>> (I got that slightly early, before the 100% showed up.)
>> 
>> swap_pager: out of swap space
>> swp_pager_getswapspace(10): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(4): failed
>> swp_pager_getswapspace(16): failed
>> swp_pager_getswapspace(5): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(8): failed
>> swp_pager_getswapspace(12): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(32): failed
>> swp_pager_getswapspace(4): failed
>> swp_pager_getswapspace(9): failed
>> swp_pager_getswapspace(4): failed
>> swp_pager_getswapspace(17): failed
>> swp_pager_getswapspace(21): failed
>> swp_pager_getswapspace(10): failed
>> swp_pager_getswapspace(18): failed
>> swp_pager_getswapspace(6): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(14): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(5): failed
>> swp_pager_getswapspace(25): failed
>> swp_pager_getswapspace(12): failed
>> swp_pager_getswapspace(5): failed
>> swp_pager_getswapspace(7): failed
>> swp_pager_getswapspace(10): failed
>> swp_pager_getswapspace(3): failed
>> swp_pager_getswapspace(24): failed
>> swap_pager: out of swap space
>> swp_pager_getswapspace(11): failed
>> swap_pager: out of swap space
>> swp_pager_getswapspace(17): failed
>> swp_pager_getswapspace(5): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(32): failed
>> swp_pager_getswapspace(15): failed
>> swp_pager_getswapspace(19): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(25): failed
>> swp_pager_getswapspace(11): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(15): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(8): failed
>> swp_pager_getswapspace(31): failed
>> swp_pager_getswapspace(26): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(20): failed
>> swp_pager_getswapspace(4): failed
>> swp_pager_getswapspace(3): failed
>> swp_pager_getswapspace(3): failed
>> swp_pager_getswapspace(9): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(15): failed
>> swp_pager_getswapspace(3): failed
>> swp_pager_getswapspace(7): failed
>> swp_pager_getswapspace(8): failed
>> swp_pager_getswapspace(17): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(10): failed
>> swp_pager_getswapspace(6): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(11): failed
>> swp_pager_getswapspace(21): failed
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(9): failed
>> swp_pager_getswapspace(32): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(32): failed
>> swp_pager_getswapspace(25): failed
>> swp_pager_getswapspace(21): failed
>> swp_pager_getswapspace(22): failed
>> swp_pager_getswapspace(14): failed
>> swp_pager_getswapspace(10): failed
>> swap_pager: out of swap space
>> swp_pager_getswapspace(1): failed
>> swp_pager_getswapspace(28): failed
>> swp_pager_getswapspace(2): failed
>> swp_pager_getswapspace(13): failed
>> swp_pager_getswapspace(3): failed
>> swp_pager_getswapspace(31): failed
>> swp_pager_getswapspace(20): failed
>> swp_pager_getswapspace(2): failed
>> vm_pageout_mightbe_oom: kill context: v_free_count: 8186, v_inactive_count: 1
>> Jan 28 18:42:42 CA72_4c8G_ZFS kernel: pid 98734 (c++), jid 3, uid 0, was killed: failed to reclaim memory
>> 
>> [00:00:49] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
>> [08:06:09] [01] [08:05:20] Finished devel/llvm13 | llvm13-13.0.0_3: Failed: build
>> 
>> FAILED: tools/flang/lib/Evaluate/CMakeFiles/obj.FortranEvaluate.dir/fold-complex.cpp.o
>> 
>> and flang/lib/Evaluate/fold-integer.cpp was one of the compiles going on.
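
The MaxObs* (maximum observed) figures quoted in these runs apparently
come from a locally patched top; without such a patch, a rough way to
watch a build approach swap exhaustion is to follow the kernel messages
and poll swapinfo(8):

    # Follow swap_pager/OOM kernel messages as they are logged:
    tail -F /var/log/messages | grep -E 'swap_pager|killed' &

    # Poll swap usage every 30 seconds:
    while :; do swapinfo -h; sleep 30; done
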
The below is about the success case for the 8 GiByte RPi4B:

> Finally, what a successful build of devel/llvm13 on
> UFS was like on the 8 GiByte RPi4B (overclocked,
> USB3 NVMe based SSD):
> 
> [00:00:57] [01] [00:00:00] Building devel/llvm13 | llvm13-13.0.0_3
> [12:25:40] [01] [12:24:43] Finished devel/llvm13 | llvm13-13.0.0_3: Success
> 
> where its Maximum Observed figures were:
> 
> . . .; load averages: . . . MaxObs: 6.15, 5.71, 5.31
> . . . threads: . . ., 11 MaxObsRunning
> . . .
> Mem: . . ., 6465Mi MaxObsActive, 1355Mi MaxObsWired, 7832Mi MaxObs(Act+Wir+Lndry)
> Swap: . . ., 10429Mi MaxObsUsed, 16799Mi MaxObs(Act+Lndry+SwapUsed), 18072Mi MaxObs(Act+Wir+Lndry+SwapUsed)
> 
> But 18072Mi MaxObs(Act+Wir+Lndry+SwapUsed) == 17.6484375 GiByte,
> so RAM+SWAP needs to total more than 17.6484375 GiByte, depending
> on how much room for inactive pages and margin one chooses.
> Probably 20+ GiBytes, so 12+ GiBytes of swap for 8 GiBytes of RAM.
> 
> (Reminder: maximum of sum <= sum of maximums.)

For folks that might read the above without a lot of prior
context . . .

I forgot to mention above that the RPi4B has 4 cores and the
poudriere ALLOW_PARALLEL_JOB= setting meant that there were 4 jobs
(processes) much of the time. (Nightly cron-related activity made
the MaxObs load averages bigger than the 4.? or 5.? that would
otherwise have shown up.)

Having notably more (or fewer) processes active for the build need
not use RAM+SWAP proportionally overall. The 20+ GiBytes figure for
4 active hardware threads in use is somewhat context specific. So
having 5+ GiBytes of RAM+SWAP per hardware thread that is to be in
use may be significant overkill when there are notably more hardware
threads involved.

===
Mark Millard
marklmi at yahoo.com