From owner-freebsd-questions@FreeBSD.ORG Wed Mar  7 01:23:58 2012
Date: Wed, 7 Mar 2012 01:23:53 +0000
From: RW <rwmaillists@googlemail.com>
To: freebsd-questions@freebsd.org
Subject: Re: FreeBSD 8.2 - active plus inactive memory leak!?
Message-ID: <20120307012353.7fbf3bd6@gumby.homeunix.com>
In-Reply-To: <4F569DFF.8040807@mac.com>
References: <1331061203.2218.38.camel@pow> <4F569DFF.8040807@mac.com>

On Tue, 06 Mar 2012 18:30:07 -0500
Chuck Swiger wrote:

> On 3/6/2012 2:13 PM, Luke Marsden wrote:
> >      * Resident corresponds to a subset of the pages above: those
> > pages which actually occupy physical/core memory.  Notably, pages
> > may appear in size but not in resident, e.g. read-only text pages
> > from libraries which have not been used yet, or pages which have
> > been malloc()'d but not yet written to.
>
> Yes.
>
> > My understanding of the values for the system as a whole (at the
> > top in 'top') is as follows:
> >
> >      * Active / inactive memory is the same thing: resident
> > memory from processes in use.  Being in the inactive as opposed to
> > the active list simply indicates that the pages in question are
> > less recently used and therefore more likely to get swapped out if
> > the machine comes under memory pressure.
>
> Well, they aren't exactly the same thing.  The kernel implements a VM
> working-set algorithm which periodically looks at all of the pages
> that are in memory and notes whether a process has accessed each page
> recently.  If it has, the page is active; if the page has not been
> used for "some time", it becomes inactive.

I think the previous poster has it about right; it's mostly about
lifecycle.
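Incidentally, the size-vs-resident point is easy to see for yourself
with something like this rough, untested sketch: allocate a big
region, sleep so you can watch the process in top(1), then touch the
pages (the 256MB figure and the sleep lengths are arbitrary):

/* Rough sketch: SIZE grows as soon as the region is allocated, but RES
 * only grows once the pages are actually written to. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define REGION  (256UL * 1024 * 1024)   /* 256 MB */

int
main(void)
{
        char *p;

        if ((p = malloc(REGION)) == NULL)
                return 1;
        printf("allocated %lu MB (pid %d); check SIZE and RES in top\n",
            REGION >> 20, (int)getpid());
        sleep(30);

        memset(p, 1, REGION);           /* touch every page */
        printf("touched it; RES should now have grown by ~%lu MB\n",
            REGION >> 20);
        sleep(30);

        free(p);
        return 0;
}

While the first sleep runs, SIZE already includes the extra 256MB but
RES barely moves; after the memset the pages are resident and RES
catches up.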
The inactive queue contains a mixture of memory: some pages still count
towards processes' resident sizes, and some (notably unmapped disc
cache) do not.  It's commonly dominated by disc cache pages, and
consequently is easily blown away by recursive greps etc.

> >      * Cache is freed memory which the kernel has decided to keep in
> > case it corresponds to a useful page in future; it can be cheaply
> > evicted into the free list.
>
> Sort of, although this description fits the "inactive" memory
> category also.
>
> The major distinction is that the system is actively trying to flush
> any dirty pages in the cache category, so that they are available for
> reuse by something else immediately.

Only clean pages are added to cache.  A dirty page will go twice around
the inactive queue as dirty, get flushed, and then do a third pass as a
clean page.  The point of cache is that it's a small stock of memory
that's available for immediate reuse; the pages have nothing else in
common.

On Wed, 07 Mar 2012 00:36:21 +0000
Luke Marsden wrote:

> But that's what I'm saying...
>
>   sum(process resident sizes) >= active + inactive

Inactive memory contains disc cache.
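If you want to see where top's header numbers come from, the page-queue
counters are exported through sysctl.  A rough, untested sketch (I'm
going from memory on the vm.stats.vm names, so double-check them with
"sysctl vm.stats.vm" on your box; the counts are in pages):

/* Rough sketch: print the page-queue counters that top's memory line
 * is derived from, converted from pages to MB. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

static unsigned int
counter(const char *name)
{
        unsigned int val = 0;
        size_t len = sizeof(val);

        sysctlbyname(name, &val, &len, NULL, 0);
        return val;
}

int
main(void)
{
        static const char *names[] = {
                "vm.stats.vm.v_active_count",
                "vm.stats.vm.v_inactive_count",
                "vm.stats.vm.v_cache_count",
                "vm.stats.vm.v_free_count",
        };
        unsigned int pgsz = counter("vm.stats.vm.v_page_size");
        unsigned int i;

        if (pgsz == 0)
                pgsz = 4096;    /* fall back to the usual page size */

        for (i = 0; i < sizeof(names) / sizeof(names[0]); i++)
                printf("%-30s %u MB\n", names[i],
                    counter(names[i]) * (pgsz / 1024) / 1024);
        return 0;
}

None of those queues maps one-to-one onto per-process resident sizes;
in particular, inactive holds cache pages that belong to no process, so
comparing sum(RES) against active + inactive isn't meaningful.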