Date:      Mon, 18 Nov 2019 09:29:04 -0500
From:      Paul Mather <paul@gromit.dlib.vt.edu>
To:        Eugene Grosbein <eugen@grosbein.net>
Cc:        Xin LI <delphij@gmail.com>, Alexander Motin <mav@freebsd.org>, FreeBSD stable <freebsd-stable@freebsd.org>
Subject:   Re: bhyve memory leak in stable/11
Message-ID:  <F2DA5BC2-BB37-4B2D-9AC7-FA0589131EB2@gromit.dlib.vt.edu>
In-Reply-To: <0cb84655-bdd2-1881-cfa2-09875c0aa7ff@grosbein.net>
References:  <7fddcea5-2188-afe1-3ea9-a53dffdbec32@grosbein.net> <CAGMYy3txqYg34UBQeLToSN-Thsfp0ZuBOuWTaPHS8VMrhe-Szg@mail.gmail.com> <edbb7248-d70e-a45c-0666-762606bb9bfd@grosbein.net> <0cb84655-bdd2-1881-cfa2-09875c0aa7ff@grosbein.net>

On Nov 18, 2019, at 8:06 AM, Eugene Grosbein <eugen@grosbein.net> wrote:

> 18.11.2019 19:03, Eugene Grosbein wrote:
>
>> Please point me to right direction for debugging this.
>
> Is it normal that over 1/3rd of the 360G total physical RAM is in the
> "Laundry" category, in addition to 173G Wired?
>
> last pid: 20372;  load averages:  8.04,  7.73,  7.84        up 2+05:55:29  16:04:02
> 130 processes: 3 running, 126 sleeping, 1 zombie
> CPU:  1.1% user,  0.0% nice, 13.2% system,  0.1% interrupt, 85.7% idle
> Mem: 42G Active, 8325M Inact, 112G Laundry, 173G Wired, 7809M Free
> ARC: 131G Total, 28G MFU, 90G MRU, 11M Anon, 2442M Header, 10G Other
>      107G Compressed, 363G Uncompressed, 3.41:1 Ratio
> Swap: 64G Total, 16G Used, 48G Free, 24% Inuse
>
>   PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
> 78042 root         34  20    0 54328M 52867M kqread  7  81.4H 210.63% bhyve: sappdev (bhyve)
> 59085 root         20  20    0 31512M 25256M kqread  6 490:16  16.25% bhyve: sdc01 (bhyve)
> 59568 root         28  20    0 28549M 24270M kqread  6 143:32   1.22% bhyve: sfile01 (bhyve)
> 60011 root         20  20    0 30262M 23697M kqread 27 121:22   1.08% bhyve: skms01 (bhyve)
> 63676 root         34  20    0 16418M 12799M kqread  3 113:06  19.92% bhyve: solap (bhyve)
> 26819 root         26  20    0 12321M 10472M kqread 28 151:43  10.12% bhyve: srdapp01 (bhyve)
> 63662 root         34  20    0  8226M  6969M kqread  4 114:52  20.36% bhyve: ssql01 (bhyve)


I wondered the same back in late March this year:  
https://www.mail-archive.com/freebsd-stable@freebsd.org/msg137556.html

I have a 12-STABLE system with 16 GB RAM that regularly shows hundreds of
megabytes of "Laundry."  To be fair, it's also showing a good chunk of free
memory, so maybe the philosophy is "why bother to do ANYTHING unless you
absolutely have to?"  (There's also a smaller amount of "Inactive" memory,
but it still amounts to a couple of hundred megabytes.)
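
For reference, the per-queue counters behind top's "Mem:" line can be read
directly via sysctl(8).  A minimal example (counts are in pages; multiply by
hw.pagesize to get bytes):

    # Page-queue sizes, in pages
    sysctl vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count \
           vm.stats.vm.v_laundry_count vm.stats.vm.v_wire_count \
           vm.stats.vm.v_free_count
    # Page size, for converting the counts to bytes
    sysctl hw.pagesize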

My concern is that when I do need to grab a lot of free memory in a hurry
(like when I do a Poudriere bulk run, or when I use the GitLab instance
that runs in a jail on the machine), there is a mad scramble to obtain
memory.  Increasingly, it seems that "idle" processes get pushed out to swap
at these times.  Often, when doing a Poudriere run, this means the GitLab
processes get swapped out, so the next time I access GitLab there's a long
latency whilst it gets paged back into memory.
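
One way I've been watching that scramble as it happens, using nothing
fancier than the stock tools:

    # Sample paging activity once a second during a bulk run; the
    # "pi"/"po" (page-in/page-out) columns spike during the scramble
    vmstat -w 1

    # Cumulative swap pager page-ins/page-outs since boot
    sysctl vm.stats.vm.v_swappgsin vm.stats.vm.v_swappgsout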

Given that there's normally a lot of idle CPU time on this system, why
doesn't the laundry ever seem to get done?  Is it just that the laundering
is being done, but more laundry is being created at an equally fast rate by
something else running on the system?  (Is there a way of finding out what
is generating laundry?)  Or does laundry processing (and other memory
reclamation) stop when the system believes there is "enough" free memory to
warrant not doing any more reclamation work?  (If so, how much is "enough",
and is it possible to alter what the system considers to be "enough"?)
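
As far as I can tell from skimming vm_pageout.c (so treat this as my
reading of the code, not gospel), the free-page targets and the background
laundering throttle are exposed as sysctls:

    # Free-page targets the pagedaemon works towards, in pages
    sysctl vm.stats.vm.v_free_target vm.stats.vm.v_free_min
    # Background laundering is rate-limited: rate in KB/s, cap in KB
    sysctl vm.background_launder_rate vm.background_launder_max

If that reading is right, raising vm.background_launder_rate (it appears to
be a read/write tunable) might get more laundry done during idle periods,
though I haven't verified that myself.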

Cheers,

Paul.



