Date:      Wed, 20 Mar 2002 22:43:19 -0800 (PST)
From:      Kip Macy <kmacy@netapp.com>
To:        Ian Dowse <iedowse@maths.tcd.ie>
Cc:        Miguel Mendez <flynn@energyhq.homeip.net>, hackers@FreeBSD.ORG
Subject:   Re: mmap and efence 
Message-ID:  <Pine.GSO.4.10.10203202226210.17341-100000@orbit>
In-Reply-To: <200203200110.aa31284@salmon.maths.tcd.ie>


> I've also found it useful to increase the value of MEMORY_CREATION_SIZE
> in the ElectricFence source. Setting this to larger than the amount
> of address space ever used by the program seems to avoid the
> vm.max_proc_mmap limit; maybe when ElectricFence calls mprotect()
> to divide up its allocated address space, each part of the split
> region is counted as a separate mmap.

Basically, yes: initially there is one vm_map entry per mmap. Each vm_map
entry represents a virtually contiguous piece of memory in which every page
is treated the same way. Hence, if I have a vm_map entry that references
pages A, B, and C, and I mprotect B, the VM system splits that entry into
three vm_map entries. So another, more likely, alternative is that GTK2.0 is
doing more malloc() and free() calls than GTK1.2.
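
As a concrete illustration (my own sketch, not code from ElectricFence),
mapping three pages in one mmap() and then mprotect()ing only the middle
page is enough to turn that single entry into three:

/*
 * Sketch: one anonymous mapping covering pages A, B and C starts out
 * as a single vm_map entry; changing the protection of the middle
 * page forces the VM system to split it into [A][B][C].
 */
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
        size_t pagesz = (size_t)getpagesize();
        char *base;

        /* One mapping, three pages: one vm_map entry. */
        base = mmap(NULL, 3 * pagesz, PROT_READ | PROT_WRITE,
            MAP_ANON | MAP_PRIVATE, -1, 0);
        if (base == MAP_FAILED) {
                perror("mmap");
                exit(1);
        }
        /* Protect the middle page: the entry is split into three. */
        if (mprotect(base + pagesz, pagesz, PROT_NONE) == -1) {
                perror("mprotect");
                exit(1);
        }
        printf("mapping at %p split around page %p\n",
            (void *)base, (void *)(base + pagesz));
        return (0);
}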


The check happens right at the end of mmap:
        /*
         * Do not allow more than a certain number of vm_map_entry structures
         * per process.  Scale with the number of rforks sharing the map
         * to make the limit reasonable for threads.
         */
        if (max_proc_mmap && 
            vms->vm_map.nentries >= max_proc_mmap * vms->vm_refcnt) {
                error = ENOMEM;
                goto done;
        }

        error = vm_mmap(&vms->vm_map, &addr, size, prot, maxprot,
            flags, handle, pos);
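
The limit itself is the vm.max_proc_mmap sysctl, so it can be inspected (or
raised, e.g. `sysctl vm.max_proc_mmap=N' as root) from userland. A minimal
sketch of reading it with sysctlbyname(3), assuming it is an int-valued node
as in the kernel source above:

/* Sketch: print the current vm.max_proc_mmap limit. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
        int limit;
        size_t len = sizeof(limit);

        if (sysctlbyname("vm.max_proc_mmap", &limit, &len, NULL, 0) == -1) {
                perror("sysctlbyname");
                return (1);
        }
        printf("vm.max_proc_mmap = %d\n", limit);
        return (0);
}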


> 
> I came across this before while debugging perl-Tk, and one other
> issue was that the program ran fantastically slowly; a trivial
> script that normally starts in a fraction of a second was taking
> close to an hour to get there on quite fast hardware. You expect
> ElectricFence to make things slow, but not quite that slow :-)

If you have a heavily fragmented address space you could, in a pathological
case, end up with almost one vm_map entry per page. Considering that the
common case is 3 or 4 vm_map entries per process, yeah, it is going to be
mind-numbingly slow :-(. It would be interesting if you could dump the
statistics on process vmspaces.
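
A quick way to get at that number (my own rough sketch, assuming procfs is
mounted on /proc) is to count the lines in the process's procfs map file,
since each line describes one vm_map entry:

/* Sketch: count vm_map entries for the current process via procfs. */
#include <stdio.h>

int
main(void)
{
        FILE *fp = fopen("/proc/curproc/map", "r");
        int c, nentries = 0;

        if (fp == NULL) {
                perror("fopen");
                return (1);
        }
        while ((c = getc(fp)) != EOF)
                if (c == '\n')
                        nentries++;
        fclose(fp);
        printf("%d vm_map entries\n", nentries);
        return (0);
}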


				-Kip

