Date: Thu, 30 Jan 2003 18:49:52 -0800
From: David Schultz <dschultz@uclink.Berkeley.EDU>
To: Matthew Dillon
Cc: James Gritton, freebsd-hackers@FreeBSD.ORG
Subject: Re: What's the memory footprint of a set of processes?

Thus spake Matthew Dillon:
> :Thanks for the explanations!  I still don't understand why this
> :doesn't work, assuming you don't care about nonresident pages:
> :
> :for each process p in the set
> :    for each map entry e in p->vmspace->vm_map
> :        for each page m in e->object.vm_object->memq
> :            if I haven't seen this m.phys_addr yet in the scan
> :                resident_pages++
>
>     That would get close, as long as the machine is not paging heavily.
>     Think of it this way: if you have a lot of RAM, the above calculation
>     will give you an upper bound on memory use, but some of the pages
>     in the VM objects may be very old and not actually under active
>     access by the process (for example, the pages might represent part
>     of the program that was used during initialization and then never
>     used again).  If you do not have so much memory, older pages will get
>     flushed out or flushed to swap, and the above calculation will
>     represent more of a lower bound on the memory used by the group of
>     processes.

Yes, I understand this; that's why I said ``assuming you don't care
about nonresident pages'' (a pretty big assumption, mind you).  I was
just thinking about essentially calculating the physical memory usage
for a set of processes, taking sharing into account, and I take it you
were talking about calculating the total amount mapped.  I imagine both
metrics would be useful.  For instance, a database might map a huge
file but have a very small resident set.  I don't know what the OP
intended...
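To make the counting idea in the quoted pseudocode concrete, here is a minimal, self-contained C sketch of just the deduplication step: count a physical page once no matter how many processes in the set map it.  The `struct proc_pages` records, the `seen_insert` hash-set helper, and the sample page addresses are all hypothetical illustration; a real implementation would obtain each page's physical address by walking `p->vmspace->vm_map` and each entry's backing `vm_object`, which requires kernel or libkvm access and is not shown here.

    /*
     * Hypothetical userland sketch of "count each physical page once"
     * across a set of processes.  Mock data only; the kernel-side walk
     * from the pseudocode above is not reproduced here.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define HASH_SIZE 1024          /* power of two; plenty for this demo */

    static uint64_t seen[HASH_SIZE];
    static int      used[HASH_SIZE];

    /* Insert phys_addr into the set; return 1 if it was not seen before. */
    static int
    seen_insert(uint64_t phys_addr)
    {
            size_t i = (phys_addr >> 12) & (HASH_SIZE - 1); /* hash by frame */

            while (used[i]) {
                    if (seen[i] == phys_addr)
                            return (0);     /* already counted: shared page */
                    i = (i + 1) & (HASH_SIZE - 1);  /* linear probe */
            }
            used[i] = 1;
            seen[i] = phys_addr;
            return (1);
    }

    struct proc_pages {              /* hypothetical: one record per process */
            const char *name;
            const uint64_t *pages;   /* physical addresses of resident pages */
            size_t npages;
    };

    int
    main(void)
    {
            /* Mock data: two processes sharing two pages (e.g. libc text). */
            static const uint64_t a[] = { 0x100000, 0x101000, 0x200000 };
            static const uint64_t b[] = { 0x100000, 0x101000, 0x300000,
                                          0x301000 };
            static const struct proc_pages set[] = {
                    { "proc_a", a, sizeof(a) / sizeof(a[0]) },
                    { "proc_b", b, sizeof(b) / sizeof(b[0]) },
            };
            size_t resident_pages = 0;

            for (size_t p = 0; p < sizeof(set) / sizeof(set[0]); p++)
                    for (size_t m = 0; m < set[p].npages; m++)
                            if (seen_insert(set[p].pages[m]))
                                    resident_pages++;

            /* Prints 5, not 7: the two shared pages are counted once. */
            printf("resident pages across the set: %zu\n", resident_pages);
            return (0);
    }

As discussed above, this only measures the resident footprint at one instant; counting the total amount mapped would instead sum the sizes of the map entries, without looking at which pages happen to be in memory.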