Date: Wed, 20 Mar 2002 22:43:19 -0800 (PST)
From: Kip Macy
To: Ian Dowse
Cc: Miguel Mendez, hackers@FreeBSD.ORG
Subject: Re: mmap and efence
In-Reply-To: <200203200110.aa31284@salmon.maths.tcd.ie>

> I've also found it useful to increase the value of MEMORY_CREATION_SIZE
> in the ElectricFence source. Setting this to larger than the amount
> of address space ever used by the program seems to avoid the
> vm.max_proc_mmap limit; maybe when ElectricFence calls mprotect()
> to divide up its allocated address space, each part of the split
> region is counted as a separate mmap.

Basically, yes: there is initially one vm_map entry per mmap(). Each
vm_map entry represents a virtually contiguous region of memory in
which every page is treated the same way. Hence, if I have a vm_map
entry that references pages A, B, and C, and I mprotect() B, the VM
system splits that entry into three vm_map entries. Another, more
likely, possibility is that GTK 2.0 is simply making more malloc() and
free() calls than GTK 1.2.

The check happens right at the end of mmap():

	/*
	 * Do not allow more than a certain number of vm_map_entry
	 * structures per process.  Scale with the number of rforks
	 * sharing the map to make the limit reasonable for threads.
	 */
	if (max_proc_mmap &&
	    vms->vm_map.nentries >= max_proc_mmap * vms->vm_refcnt) {
		error = ENOMEM;
		goto done;
	}

	error = vm_mmap(&vms->vm_map, &addr, size, prot, maxprot,
	    flags, handle, pos);

> I came across this before while debugging perl-Tk, and one other
> issue was that the program ran fantastically slowly; a trivial
> script that normally starts in a fraction of a second was taking
> close to an hour to get there on quite fast hardware. You expect
> ElectricFence to make things slow, but not quite that slow :-)

If you have a heavily fragmented address space you could, in a
pathological case, end up with almost one vm_map entry per page.
Considering that the common case is 3 or 4 vm_map entries per process,
yeah, it is going to be mind-numbingly slow :-(. It would be
interesting if you could dump the statistics on process vmspaces.

				-Kip
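
P.S. Here is a minimal, untested sketch of the splitting behavior:
one mmap() of three pages gives you one vm_map entry, and
mprotect()ing only the middle page turns it into three. How you
inspect the map is up to you; with procfs mounted,
"cat /proc/<pid>/map" during each sleep() works.

	#include <sys/types.h>
	#include <sys/mman.h>

	#include <err.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		long pagesz = sysconf(_SC_PAGESIZE);
		char *base;

		/* One mmap() call -> initially one vm_map entry. */
		base = mmap(NULL, 3 * pagesz, PROT_READ | PROT_WRITE,
		    MAP_ANON | MAP_PRIVATE, -1, 0);
		if (base == MAP_FAILED)
			err(1, "mmap");

		printf("pid %d: look at the map now\n", (int)getpid());
		sleep(30);

		/*
		 * Changing the protection of only the middle page forces
		 * the VM system to split the entry in three: rw / r / rw.
		 */
		if (mprotect(base + pagesz, pagesz, PROT_READ) == -1)
			err(1, "mprotect");

		printf("and again now\n");
		sleep(30);
		return (0);
	}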
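
And a companion sketch for the ENOMEM side, assuming the stock
vm.max_proc_mmap setting: alternating the protection should keep the
kernel from coalescing adjacent anonymous mappings into one entry, so
each mmap() costs a vm_map entry until the check quoted above kicks
in (or you exhaust the address space first).

	#include <sys/types.h>
	#include <sys/mman.h>

	#include <err.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		long pagesz = sysconf(_SC_PAGESIZE);
		int i;

		/*
		 * Alternate protections so adjacent anonymous mappings
		 * cannot be merged into a single vm_map entry; every
		 * mmap() then adds one entry until the loop runs into
		 * the max_proc_mmap check (ENOMEM).
		 */
		for (i = 0; ; i++) {
			int prot = (i & 1) ?
			    PROT_READ : PROT_READ | PROT_WRITE;

			if (mmap(NULL, pagesz, prot,
			    MAP_ANON | MAP_PRIVATE, -1, 0) == MAP_FAILED) {
				warn("mmap number %d", i);
				break;
			}
		}
		printf("got %d single-page mappings\n", i);
		return (0);
	}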