Date:      Mon, 11 Jun 2012 15:07:20 -0700
From:      Garrett Cooper <yanegomi@gmail.com>
To:        Konstantin Belousov <kostikbel@gmail.com>
Cc:        alc@freebsd.org, freebsd-current <freebsd-current@freebsd.org>
Subject:   Re: 10-CURRENT and swap usage
Message-ID:  <CAGH67wRB6+vrgSYC-yEWfCyyKMFGEN8b-0w+8hOyjYJvhO2DUg@mail.gmail.com>
In-Reply-To: <20120611204157.GG2337@deviant.kiev.zoral.com.ua>
References:  <6809F782-1D1F-4773-BAC5-BC3037C58B87@gmail.com> <CAJUyCcP0ry_Mt-KKUGiaDmuUm8o1emc2RXgjuibBwOpTWuaQ5g@mail.gmail.com> <20120611204157.GG2337@deviant.kiev.zoral.com.ua>

On Mon, Jun 11, 2012 at 1:41 PM, Konstantin Belousov
<kostikbel@gmail.com> wrote:
> On Mon, Jun 11, 2012 at 01:23:03PM -0500, Alan Cox wrote:
>> On Sat, Jun 9, 2012 at 9:26 PM, Garrett Cooper <yanegomi@gmail.com> wrote:
>>
>> >         I build out of my UFS-only VM in VMware Fusion from time to time,
>> > and it looks like there's a large chunk of processes that are swapped out
>> > when doing two parallel builds:
>> >
>> > last pid: 27644;  load averages:  2.43,  0.94,  0.98
>> >
>> >                               up 1+15:06:06  19:20:48
>> > 79 processes:  4 running, 75 sleeping
>> > CPU: 77.3% user,  0.0% nice, 22.7% system,  0.0% interrupt,  0.0% idle
>> > Mem: 407M Active, 186M Inact, 208M Wired, 24M Cache, 110M Buf, 145M Free
>> > Swap: 1024M Total, 267M Used, 757M Free, 26% Inuse
>> >
>> >         I know that some minor changes have gone in in the past couple
>> > months to change when swapping and page ins/outs would occur, but I was
>> > wondering if this behavior was intended; I'm finding it a bit bizarre that
>> > there's ~150MB free, ~180MB inactive, and 267MB swapped out as previous
>> > experience has dictated that swap is basically untouched except in extreme
>> > circumstances.
>> >
>>
>> I can't think of any change in the past couple months that would have this
>> effect.  Specifically, I don't recall there having been any change that
>> would make the page daemon more (or less) aggressive in laundering dirty
>> pages.
>>
>> Keep in mind that gcc at higher optimization levels can and will use a lot
>> of memory, i.e., hundreds of megabytes.
> The new jemalloc in debugging mode uses much more anonymous memory now.
> And since a typical compiler process is relatively short-lived, the picture
> posted is probably related to some memory hog that recently finished a run.

    Good point -- that was another thing that crossed my mind (even
though the numbers stayed that way for quite a while). I'll try the
build with MALLOC_PRODUCTION to see whether the behavior differs much.
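For reference, what I have in mind is roughly the following sketch -- I'm
assuming the knob is still plain MALLOC_PRODUCTION in make.conf on
10-CURRENT, and that rebuilding world is enough for libc's jemalloc to
pick it up:

    # /etc/make.conf
    # Build libc's jemalloc without the debug assertions/junk filling.
    MALLOC_PRODUCTION=yes

    # Then rebuild and reinstall world so the new libc gets used:
    cd /usr/src && make buildworld && make installworld

If the swap numbers look about the same after that, the jemalloc debug
overhead probably isn't the culprit here.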
Thanks!
-Garrett


