Date:      Mon, 11 Jun 2012 13:23:03 -0500
From:      Alan Cox <alan.l.cox@gmail.com>
To:        Garrett Cooper <yanegomi@gmail.com>
Cc:        freebsd-current <freebsd-current@freebsd.org>
Subject:   Re: 10-CURRENT and swap usage
Message-ID:  <CAJUyCcP0ry_Mt-KKUGiaDmuUm8o1emc2RXgjuibBwOpTWuaQ5g@mail.gmail.com>
In-Reply-To: <6809F782-1D1F-4773-BAC5-BC3037C58B87@gmail.com>
References:  <6809F782-1D1F-4773-BAC5-BC3037C58B87@gmail.com>

On Sat, Jun 9, 2012 at 9:26 PM, Garrett Cooper <yanegomi@gmail.com> wrote:

>        I build out of my UFS-only VM in VMware Fusion from time to time,
> and it looks like there's a large chunk of processes that are swapped out
> when doing two parallel builds:
>
> last pid: 27644;  load averages:  2.43,  0.94,  0.98
>
>                              up 1+15:06:06  19:20:48
> 79 processes:  4 running, 75 sleeping
> CPU: 77.3% user,  0.0% nice, 22.7% system,  0.0% interrupt,  0.0% idle
> Mem: 407M Active, 186M Inact, 208M Wired, 24M Cache, 110M Buf, 145M Free
> Swap: 1024M Total, 267M Used, 757M Free, 26% Inuse
>
>        I know that some minor changes have gone in over the past couple of
> months affecting when swapping and page-ins/outs occur, but I was wondering
> whether this behavior is intended; I find it a bit bizarre that there's
> ~150MB free, ~180MB inactive, and 267MB swapped out, as previous experience
> has been that swap is basically untouched except in extreme circumstances.
>

I can't think of any change in the past couple of months that would have this
effect.  Specifically, I don't recall any change that would make the page
daemon more (or less) aggressive in laundering dirty pages.
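
If you want to see whether the page daemon is actually pushing pages out to
swap while the builds run, the easiest thing is to watch the vm.stats sysctl
counters (vmstat -s reports the same numbers).  A minimal sketch, assuming a
stock kernel with the usual counter names; the deltas between samples are
what matter:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>
#include <unistd.h>

/* Poll a few vm.stats counters to watch pagedaemon/swap activity. */
static u_int
counter(const char *name)
{
        u_int val;
        size_t len = sizeof(val);

        if (sysctlbyname(name, &val, &len, NULL, 0) == -1)
                err(1, "%s", name);
        return (val);
}

int
main(void)
{
        for (;;) {
                printf("free %u  inact %u  swap pgin %u  swap pgout %u  pd scanned %u\n",
                    counter("vm.stats.vm.v_free_count"),
                    counter("vm.stats.vm.v_inactive_count"),
                    counter("vm.stats.vm.v_swappgsin"),
                    counter("vm.stats.vm.v_swappgsout"),
                    counter("vm.stats.vm.v_pdpages"));
                sleep(5);
        }
}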

Keep in mind that gcc at higher optimization levels can and will use a lot
of memory, i.e., hundreds of megabytes.
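
As a sanity check on that, wait4(2) hands back the child's rusage, so a tiny
wrapper will show how big a single compiler process peaks.  A minimal sketch
(the "maxrss" name and the example invocation below are just for
illustration; ru_maxrss is reported in kilobytes on FreeBSD, and
/usr/bin/time -l gives the same information without the wrapper):

#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <err.h>
#include <stdio.h>
#include <unistd.h>

/* Run a command and report its maximum resident set size. */
int
main(int argc, char *argv[])
{
        struct rusage ru;
        pid_t pid;
        int status;

        if (argc < 2)
                errx(1, "usage: maxrss command [args ...]");
        if ((pid = fork()) == -1)
                err(1, "fork");
        if (pid == 0) {
                execvp(argv[1], argv + 1);
                err(127, "exec %s", argv[1]);
        }
        if (wait4(pid, &status, 0, &ru) == -1)
                err(1, "wait4");
        printf("%s: max RSS %ld KB\n", argv[1], ru.ru_maxrss);
        return (0);
}

Running something like "./maxrss cc -O2 -c <one of the larger .c files>"
during the build should show whether individual compiler processes really are
in the hundreds-of-megabytes range.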

Alan


