Date:      Wed, 19 Jul 2023 15:11:30 -0700
From:      Scott Gasch <scott.gasch@gmail.com>
To:        Pete Wright <pete@nomadlogic.org>
Cc:        freebsd-questions <freebsd-questions@freebsd.org>, freebsd-hackers@freebsd.org
Subject:   Re: Swap filling up, usermode process swap usage doesn't explain
Message-ID:  <CABYAQkRHbdfbccQa_UqoZ7_YBTKEjwQAR7hv+R0Lih60a=vSAw@mail.gmail.com>
In-Reply-To: <b24efee3-939b-3e20-d07f-8dad92d8e081@nomadlogic.org>
References:  <CABYAQkQftAfRXpdSJnqH2Hi=uD-dOiGWdFU8u1XqfeZNBUA35w@mail.gmail.com> <b24efee3-939b-3e20-d07f-8dad92d8e081@nomadlogic.org>

Yes, I'm using ZFS.  Here's what top says:

last pid: 88926;  load averages:  1.20,  0.96,  0.87  up 5+17:48:34  15:09:58
274 processes: 1 running, 272 sleeping, 1 zombie
CPU:  1.8% user,  0.0% nice,  0.5% system,  0.0% interrupt, 97.8% idle
Mem: 1844M Active, 7777M Inact, 77G Laundry, 35G Wired, 750M Buf, 3367M Free
ARC: 24G Total, 2878M MFU, 18G MRU, 21M Anon, 119M Header, 2622M Other
     18G Compressed, 25G Uncompressed, 1.33:1 Ratio
Swap: 144G Total, 11G Used, 133G Free, 7% Inuse
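(Since this grows slowly over days, it can help to log that Swap line periodically. A rough sketch, parsing the line shown above; in practice the input would come from batch-mode top, e.g. `top -b`, and the field positions are assumed from this output:)

```shell
# Sketch only: pull the total/used figures out of top's "Swap:" line.
# Live input would be something like:  top -b | grep '^Swap:'
swap_line='Swap: 144G Total, 11G Used, 133G Free, 7% Inuse'
echo "$swap_line" | awk -F'[ ,]+' '{print "total=" $2, "used=" $4}'
# prints: total=144G used=11G
```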

If I leave this alone, it will grow to consume all available swap space.
I'll try your fix with the sysctl knob and see what happens...  I hope this
is it; I've been fighting this for a while now.
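(For anyone else who lands on this thread, here is a sketch of applying Pete's cap. The 45G figure is just his example; the `/etc/sysctl.conf` persistence step is my assumption about how you'd keep it across reboots, so adjust to taste:)

```shell
# Sketch, following Pete's suggestion: cap the ZFS ARC (here 45 GiB).
arc_max=$((45 * 1024 * 1024 * 1024))
echo "vfs.zfs.arc.max=${arc_max}"
# prints: vfs.zfs.arc.max=48318382080
#
# To apply on a running FreeBSD box (as root; recent OpenZFS allows
# adjusting this knob at runtime):
#   sysctl vfs.zfs.arc.max=${arc_max}
# To persist across reboots, append the echoed line to /etc/sysctl.conf.
```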

Thx,
Scott


On Wed, Jul 19, 2023 at 11:34 AM Pete Wright <pete@nomadlogic.org> wrote:

>
>
> On 7/19/23 07:49, Scott Gasch wrote:
> > I am running a 13.2-RELEASE GENERIC kernel and seeing a pattern where,
> > after about 10 days of uptime, my swap begins to fill up.
> >
> <snip>
> >
> > At least they agree about it being 11G.  Is this kernel memory being
> > paged out to swap?  The machine has 128G of physical memory and isn't
> > under very heavy load at the moment.
> >
>
> Are you running ZFS by any chance?  If so, it's possible it is trying to
> use as much memory as possible for the ARC.  I've seen this on a few
> systems with lots of memory.  One way to tell is to run "top" and look
> at the ARC stats:
>
> last pid: 71322;  load averages:  1.02,  0.94,  0.87  up 8+18:38:34  11:31:26
> 376 processes: 1 running, 146 sleeping, 229 zombie
> CPU:  0.6% user,  0.0% nice,  6.5% system,  0.0% interrupt, 93.0% idle
> Mem: 3599M Active, 18G Inact, 4132M Laundry, 4272M Wired, 892M Free
> ARC: 1749M Total, 651M MFU, 239M MRU, 1864K Anon, 13M Header, 844M Other
>       216M Compressed, 758M Uncompressed, 3.52:1 Ratio
>
>
>
> On a few of my larger memory systems I cap the ARC by setting this
> sysctl knob (this is like 45G on my system):
> vfs.zfs.arc.max=45000000000
>
>
> -pete
>
>
> --
> Pete Wright
> pete@nomadlogic.org
> @nomadlogicLA
>



