Date:      Tue, 18 Feb 2003 20:57:59 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Bosko Milekic <bmilekic@unixdaemons.com>
Cc:        freebsd-arch@FreeBSD.ORG
Subject:   Re: mb_alloc cache balancer / garbage collector
Message-ID:  <200302190457.h1J4vxes000970@apollo.backplane.com>
References:  <200302171742.h1HHgSOq097182@apollo.backplane.com> <20030217154127.A66206@unixdaemons.com> <200302180000.h1I00bvl000432@apollo.backplane.com> <20030217192418.A67144@unixdaemons.com> <20030217192952.A67225@unixdaemons.com> <200302180101.h1I11AWr001132@apollo.backplane.com> <20030217203306.A67720@unixdaemons.com> <200302180458.h1I4wQiA048763@apollo.backplane.com> <20030218093946.A69621@unixdaemons.com> <200302181757.h1IHvjaC051829@apollo.backplane.com> <20030218134836.A70583@unixdaemons.com>

:> 
:> 	void **uma_lock = NULL;
:> 
:> 	/*
:> 	 * Use of *uma_lock is entirely under the control of UMA.  It
:> 	 * can release it, block, and reobtain it; release it and obtain
:> 	 * another lock; or not use it at all (leave it NULL).  The only
:> 	 * requirements are that you call uma_cache_unlock(&uma_lock)
:> 	 * after you are through and that you do not block in between
:> 	 * UMA operations.
:> 	 */
:> 	uma_cache_free(&uma_lock, ...) ... etc
:> 	uma_cache_alloc(&uma_lock, ...) ... etc
:> 
:> 	uma_cache_unlock(&uma_lock);
:> 
:  It's not quite that simple.  You would also have to teach it how to
:  drop the lock if one of the allocations fails (or if it has to go to
:  another cache) and how to tell the caller that it has done that.
:...
:Bosko Milekic * bmilekic@unixdaemons.com * bmilekic@FreeBSD.org

    I think you missed the double pointer.  It's void **uma_lock,
    not void *uma_lock.  i.e. UMA can use *uma_lock for whatever
    it wants, including dropping and reobtaining, or just dropping,
    or whatever.

    Then you could call the UMA allocator a whole bunch of times
    with virtually no per-call locking overhead.
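
    A minimal caller-side sketch of what such a batched sequence might
    look like.  The uma_cache_*() prototypes, the zone argument, and the
    M_NOWAIT flag are assumptions modeled on the pseudocode quoted above,
    not an existing interface, and the handle is taken here to be a plain
    void * slot whose address is passed to each call (one reading of the
    pseudocode); only the opaque-lock-handle idea itself is from the
    proposal:

	#include <sys/param.h>
	#include <sys/malloc.h>
	#include <vm/uma.h>

	/* Assumed prototypes -- illustration only, not an existing API. */
	void *uma_cache_alloc(void **uma_lockp, uma_zone_t zone, int flags);
	void  uma_cache_free(void **uma_lockp, uma_zone_t zone, void *item);
	void  uma_cache_unlock(void **uma_lockp);

	static void
	refill_batch(uma_zone_t zone, void **items, int n)
	{
		void *uma_lock = NULL;	/* interpreted only by UMA */
		int i;

		for (i = 0; i < n; i++) {
			/*
			 * UMA may keep a cache lock cached in *uma_lockp
			 * across these calls, drop and reacquire it, or
			 * never take one at all; the caller only promises
			 * not to block between the calls.
			 */
			if (items[i] != NULL)
				uma_cache_free(&uma_lock, zone, items[i]);
			items[i] = uma_cache_alloc(&uma_lock, zone, M_NOWAIT);
		}
		uma_cache_unlock(&uma_lock);	/* required even if no lock was taken */
	}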

    Another alternative is to simply add a mutex pointer to the
    current thread and allow *any* major kernel API to use it to
    cache an obtained mutex in order to streamline multiple calls.
    It would be a very powerful efficiency mechanism but would
    also require a mindset change on the part of kernel developers.
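
    Purely as an illustration of that idea, here is a sketch of how a
    per-thread cached mutex might be used.  The td_cached_mtx field and
    both helpers are hypothetical names for something that does not
    exist today; only mtx_lock()/mtx_unlock() and struct thread are real:

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/proc.h>

	/* Assumed new field in struct thread:  struct mtx *td_cached_mtx; */

	static __inline void
	td_mtx_cache_acquire(struct thread *td, struct mtx *m)
	{
		if (td->td_cached_mtx == m)
			return;		/* still held from a previous call */
		if (td->td_cached_mtx != NULL)
			mtx_unlock(td->td_cached_mtx);	/* avoid holding two */
		mtx_lock(m);
		td->td_cached_mtx = m;
	}

	static __inline void
	td_mtx_cache_release(struct thread *td)
	{
		if (td->td_cached_mtx != NULL) {
			mtx_unlock(td->td_cached_mtx);
			td->td_cached_mtx = NULL;
		}
	}

    A subsystem entry point would call td_mtx_cache_acquire(curthread, ...)
    instead of mtx_lock(), and only the outermost caller, or any point
    where the thread is about to block, would call td_mtx_cache_release().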

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>
