Date: Wed, 29 Jan 2003 23:19:24 -0800 (PST)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: David Schultz <dschultz@uclink.Berkeley.EDU>
Cc: James Gritton <gritton@iserver.com>, freebsd-hackers@FreeBSD.ORG
Subject: Re: What's the memory footprint of a set of processes?
Message-ID: <200301300719.h0U7JOfI086054@apollo.backplane.com>
References: <Pine.BSF.4.21.0301291145030.25856-100000@InterJet.elischer.org> <x7k7gnog4m.fsf@guppy.dmz.orem.verio.net> <20030130064448.GA7258@HAL9000.homeunix.com>
:Thus spake James Gritton <gritton@iserver.com>:
:> The object's ref_count hasn't changed, which is what I meant about seeing
:> reference counts in the kernel that were apparently not counting what I'm
:> looking for.  I did see a ref_count increase on the first object
:> (presumably the text image), but nothing on the allocated memory.
:>
:> It seems the object level isn't fine enough, but the deeper I go into the
:> VM code, the more confused I become.  In this forked process example, what
:> happens when I alter a few COW pages in the currently-shared object?
:> Apparently a shadow object is created, but it claims to be the same size as
:> the original object.  True, but I know it's not actually using that many
:> pages, since most of them are still validly shared.  System usage numbers
:> tell me this is true, but I can't find what in the process or object data
:> structures reflect this fact.
:
:No, you don't have enough information.  Even if you knew which
:objects shadowed which, I still don't think you would have enough
:information.  You want to account for physical pages, so you
:should be looking at vm_page structures.  AFAIK, there isn't an
:interface to do that, but one shouldn't be too hard to implement.

    Well, first he should read my DaemonNews article:

	http://www.daemonnews.org/200001/freebsd_vm.html

    Now, in regards to COW faults and shadow pages, it basically comes
    down to the process's VM map (struct vm_map), which contains a list
    of vm_map_entry structures telling the VM system which VM address
    ranges correspond to which VM objects.

    For all intents and purposes, the size of a VM object (struct
    vm_object) which represents anonymous memory is irrelevant.  What is
    relevant is the size of the 'window' into the VM object defined by
    the vm_map_entry.  You can get a list of the vm_map_entry's
    associated with a process using the /proc filesystem, e.g.:
	dd if=/proc/PID/map bs=256k

    e.g.

	dd if=/proc/86006/map bs=256k

    Shadow objects are basically just a way for the VM system to mix
    pages that have been COW faulted with pages that have not yet been
    COW faulted, without having to create an independent little VM object
    for each faulted page.  The shadowing is basically just the layering
    of two or more VM objects on top of each other.  If the top VM object
    doesn't have the page, the system recurses into deeper VM objects to
    find it.  If the access is a read, the deeper page is simply used
    straight out.  If the access is a write, the deeper page is COW
    copied into the top level VM object.

    Generally speaking, I don't think you have to go into this level of
    detail to figure out approximate real memory use.  Just look at the
    output of /proc/PID/map and pull out the 'default' or 'swap' lines.
    Ignore the 'vnode' lines.

    apollo:/home/dillon# dd if=/proc/85965/map bs=256k
    0x8048000 0x805b000 14 15 0xd2482600 r-x 2 1 0x0 COW NC vnode
    0x805b000 0x805c000 1 0 0xd2830960 rw- 1 0 0x2180 COW NNC vnode
    0x805c000 0x8061000 5 0 0xd29ea360 rw- 2 0 0x2180 NCOW NNC default    <<<<<
    0x8061000 0x8156000 237 0 0xd29ea360 rwx 2 0 0x2180 NCOW NNC default  <<<<<
    0x2805b000 0x2806d000 17 0 0xc031df60 r-x 77 34 0x4 COW NC vnode
    0x2806d000 0x2806e000 1 0 0xd2c12d80 rw- 1 0 0x2180 COW NNC vnode
    0x2806e000 0x28070000 2 0 0xd2c84c60 rw- 2 0 0x2180 NCOW NNC default  <<<<<
    0x28070000 0x28078000 6 0 0xd2c84c60 rwx 2 0 0x2180 NCOW NNC default  <<<<<
    0x28078000 0x280f8000 90 0 0xc031e2c0 r-x 100 58 0x4 COW NC vnode
    0x280f8000 0x280f9000 1 0 0xd2d5ad20 r-x 1 0 0x2180 COW NNC vnode
    0x280f9000 0x280fe000 5 0 0xd34e2f00 rwx 1 0 0x2180 COW NNC vnode
    0x280fe000 0x28112000 7 0 0xd34100c0 rwx 1 0 0x2180 NCOW NNC default  <<<<<
    0x2a07b000 0x2bfe4000 8041 0 0xd342e4e0 r-x 1 0 0x0 NCOW NNC vnode
    0xbfbe0000 0xbfc00000 2 0 0xd2730300 rwx 1 0 0x2180 NCOW NNC default  <<<<<

    Now you have two metrics.  You can calculate the size of each mapping
    from the address range given by the first two fields
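    A minimal sketch of that accounting, as a hypothetical helper script.
    It assumes the whitespace-separated field layout shown in the dump
    above (start, end, resident pages, ..., backing type as the last
    field) and a canned sample rather than a live /proc; on a real system
    you would feed it /proc/PID/map instead:

```shell
#!/bin/sh
# Sketch (hypothetical helper): total up the anonymous-memory 'windows'
# from /proc/PID/map style output.  Assumed field layout, per the dump
# above:  start end resident-pages ... backing-type (last field).
# On a live system:  sum_anon < /proc/86006/map

sum_anon() {
    total_bytes=0 total_resident=0
    while read -r start end resident rest; do
        case "${rest##* }" in            # backing type is the last word
        default|swap)
            # window size in bytes, plus resident page count
            total_bytes=$((total_bytes + end - start))
            total_resident=$((total_resident + resident))
            ;;
        esac
    done
    echo "$total_bytes bytes mapped, $total_resident pages resident"
}

# canned sample: three entries from the dump above, marker arrows removed
sum_anon <<'EOF'
0x8048000 0x805b000 14 15 0xd2482600 r-x 2 1 0x0 COW NC vnode
0x805c000 0x8061000 5 0 0xd29ea360 rw- 2 0 0x2180 NCOW NNC default
0xbfbe0000 0xbfc00000 2 0 0xd2730300 rwx 1 0 0x2180 NCOW NNC default
EOF
# -> 151552 bytes mapped, 7 pages resident
```

    Note that this only counts the 'default'-backed windows, per the
    advice above; the vnode entry contributes nothing to the totals.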
    (addressB - addressA), and you can get the total number of resident
    pages from the third field.  What you can't easily figure out is the
    total allocation, because that is actually resident pages + swapped
    pages, and the swapped page count is not included in the proc output.

    The fifth field is the VM object.  You could garner more information
    from the VM object by accessing it via /dev/kvm.  You can get an
    approximate number of swapped out pages using:

	(vm_object->un_pager.swp.swp_bcount * 16)

    (only from VM objects of type OBJT_SWAP).  You can get an exact count
    by delving into the swap array and locating all the swblock meta
    structures associated with the object, which are used to store the
    object's swap information.  That gets rather involved; I wouldn't
    bother.

    You can also theoretically push into shadow VM objects to locate
    pages from the parent process that have not yet been COW'd into the
    child (in the case of a fork()), noting also that these shadow
    objects might be shared with other children of the parent and by the
    parent process itself, but under most conditions this information
    will not be significant and can be ignored.

    Any vnode object is always shared with other processes mapping the
    same vnode.  Since this information is backed by a file, figuring out
    how much 'memory' it represents, by any reasonable definition, is
    guesswork.  The resident page count will represent how much of the
    vnode is cached, but not how much of the vnode is actually being
    accessed by the process.

						-Matt
						Matthew Dillon
						<dillon@backplane.com>
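    The swp_bcount estimate is just arithmetic once you have the value
    out of the kernel (how you read it, e.g. with libkvm, is up to you).
    A sketch using a made-up swp_bcount and assuming the i386 4K page
    size:

```shell
#!/bin/sh
# Sketch: turn a vm_object's swp_bcount into an approximate swapped-out
# byte count.  Each swblock covers up to 16 pages, hence the * 16, so
# this is an upper bound, not an exact count.
# swp_bcount here is a hypothetical example value; on a real system you
# would read it from an OBJT_SWAP object via kvm.

swp_bcount=3          # hypothetical value read out of the kernel
page_size=4096        # i386 page size assumed

swapped_pages=$((swp_bcount * 16))
swapped_bytes=$((swapped_pages * page_size))

echo "~$swapped_pages pages (~$swapped_bytes bytes) swapped out"
# -> ~48 pages (~196608 bytes) swapped out
```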