Date:      Sun, 13 Feb 2022 23:41:25 -0800
From:      Kevin Oberman <rkoberman@gmail.com>
To:        tech-lists <tech-lists@zyxst.net>
Cc:        "freebsd-questions@freebsd.org" <questions@freebsd.org>
Subject:   Re: swap/page problem
Message-ID:  <CAN6yY1voapyFnkfmB=DdRLX3jiSxsv+MV32hYVfV4vNnkPDtsA@mail.gmail.com>
In-Reply-To: <YgW4WXY93XFlIp23@cloud9.zyxst.net>
References:  <CAN6yY1s+C3ap7rvajujiTqGwQVgkZrHRh6eHnTGHgTsVrfjPcg@mail.gmail.com> <YgW4WXY93XFlIp23@cloud9.zyxst.net>


On Thu, Feb 10, 2022 at 5:14 PM tech-lists <tech-lists@zyxst.net> wrote:

> Hi,
>
> On Thu, Feb 10, 2022 at 02:28:50PM -0800, Kevin Oberman wrote:
> >During a large build (llvm13), my system ground to a near halt with almost
> >everything suspended. After several minutes, the system slowly recovered.
> >When I looked at the messages log, I found 57 kernel messages spread over
> >3.75 hours, in the form of:
> >swap_pager: indefinite wait buffer: bufobj: 0, blkno: 862845, size: 20480
> >
> >The block numbers and sizes varied. bufobj was always '0'. I had
> >significant available swap space, as far as I could tell. I have 20GB
> >of RAM and 24GB of swap. I am running stable 48937-3c6b6246f2f from
> >January 13.
> >
> >I know that the LLVM build is huge, but I've not seen this before. What,
> >exactly, is this message telling me? Am I out of RAM and swap? I could
> >add another 24GB of swap, though it would be on spinning rust, not SSD.
>
> I've seen this problem before. After reading threads on the lists and
> asking questions, I concluded that swap wasn't the primary issue; maybe
> there's some contention between processes which affects llvm in
> particular and makes it eat swap. Also, as you noted, swap doesn't
> really run out.
>
> I use poudriere to build and now have parallel jobs set to 1, with make
> jobs enabled, and the problem doesn't happen. If you're just using the
> ports tree in the traditional way, try make -j1.
> If parallel jobs is unset, poudriere uses hw.ncpu, which here is 8, and
> that reproduces the problem you describe when compiling llvm13.
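
A sketch of how build parallelism could be limited as described above
(option names are from poudriere.conf and the ports framework's
make.conf; the values are illustrative, not from the original mail):

```
# /usr/local/etc/poudriere.conf -- build one port at a time
PARALLEL_JOBS=1        # number of ports built concurrently
ALLOW_MAKE_JOBS=yes    # let that single port's make still run parallel jobs

# /etc/make.conf -- equivalent of "make -j1" for traditional ports builds
MAKE_JOBS_NUMBER=1
```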
>
> These sysctls are now set:
>
> vfs.read_max=128                       # default 64 - speeds up disk i/o
> vfs.aio.max_buf_aio=8192
> vfs.aio.max_aio_queue_per_proc=65536
> vfs.aio.max_aio_per_proc=8192
> vfs.aio.max_aio_queue=65536
> vm.pageout_oom_seq=120
> vm.pfault_oom_attempts=-1
>
> those last two may be especially helpful for your situation.
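
To make settings like these persist across reboots, they could go in
/etc/sysctl.conf, which sysctl(8) applies at boot (a sketch using the
values quoted above; the comments are my reading of the knobs, not from
the original mail):

```
# /etc/sysctl.conf
vfs.read_max=128            # default 64 - speeds up sequential disk i/o
vm.pageout_oom_seq=120      # more pagedaemon passes before OOM killing starts
vm.pfault_oom_attempts=-1   # never OOM-kill on page-fault swap-in timeouts
vfs.aio.max_buf_aio=8192
vfs.aio.max_aio_queue_per_proc=65536
vfs.aio.max_aio_per_proc=8192
vfs.aio.max_aio_queue=65536
```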
>
> My context here is an amd64 i7-4770K (so 8 CPUs with HT) clocked to
> 4.3GHz; the disk used for building ports is an SSD with a 16GB swap
> partition. RAM is 32GB.
>
> --
> J.
>
Thanks for the suggestions. The AIO ones look like those I recommended for
vbox, though they are no longer needed for that as vbox has been modified
to no longer use AIO. In any case, I'll see what happens.

One oddity is that the problem seems to occur when my system reports
"critical temperature detected". Shortly after, I see the swap_pager
messages start. I have concluded that there is absolutely no issue with
either RAM or swap space. Whatever is happening, it is tied to the problems
that I have been seeing since I got my Lenovo L-13: P-States disabled,
weird CPU frequency behavior, strange thermal control issues. I'm really
regretting getting this laptop.

Thanks again for the suggestions!
-- 
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkoberman@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683



