Date: Wed, 27 Feb 2002 14:46:42 -0500
From: Bosko Milekic
To: Matthew Dillon
Cc: Jeff Roberson, arch@FreeBSD.ORG
Subject: Re: Slab allocator
Message-ID: <20020227144642.A40638@unixdaemons.com>
In-Reply-To: <200202271926.g1RJQCm29905@apollo.backplane.com>

On Wed, Feb 27, 2002 at 11:26:12AM -0800, Matthew Dillon wrote:
>
> :...
> :
> :There are also per cpu queues of items, with a per cpu lock. This
> :allows for very efficient allocation, and it also provides near
> :linear performance as the number of cpus increases. I do still
> :depend on Giant to talk to the back end page supplier (kmem_alloc,
> :etc.). Once the VM is locked, the allocator will not require Giant
> :at all.
> :...
> :
> :Since you've read this far, I'll let you know where the patch is. ;-)
> :
> :http://www.chesapeake.net/~jroberson/uma.tar
> :...
> :Any feedback is appreciated. I'd like to know what people expect from
> :this before it is committable.
> :
> :Jeff
> :
> :PS Sorry for the long winded email. :-)
>
>     Well, one thing I've noticed right off the bat is that the code
>     is trying to take advantage of per-cpu queues but is still
>     having to obtain a per-cpu mutex to lock the per-cpu queue.

  Yes, that's normal. One can get pre-empted here, so the per-cpu
  queue still has to be protected by its own lock.

>     Another thing I noticed is that the code appears to assume
>     that PCPU_GET(cpuid) is stable in certain places, and I don't
>     think that condition necessarily holds unless you explicitly
>     enter a critical section (critical_enter() and critical_exit()).
>     There are some cases where you obtain the per-cpu cache and lock
>     it, which would be safe even if the cpu changed out from under
>     you, and other cases, such as in uma_zalloc_internal(), where
>     you assume that the cpuid is stable when it isn't.

  No, what he does is take PCPU_GET(cpuid) and save it in a variable.
  If he gets pre-empted (unlikely) and is shifted to another CPU, he
  still uses the old CPU's cache. That's fine, as long as it's done
  correctly.
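  Just to make the pattern concrete, here's a rough sketch of the
  "saved cpuid" idiom (this is NOT code from Jeff's patch; the struct
  and function names are made up for illustration, and mtx_init()
  setup is omitted):

	#include <sys/param.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/pcpu.h>

	/* One cache per cpu, each protected by its own mutex. */
	struct pcpu_cache {
		struct mtx	 pc_mtx;
		void		*pc_freelist;	/* singly-linked free items */
	};

	static struct pcpu_cache pcpu_caches[MAXCPU];

	static void *
	cache_alloc_sketch(void)
	{
		struct pcpu_cache *cache;
		void *item;
		int cpu;

		cpu = PCPU_GET(cpuid);	/* snapshot; may go stale */
		cache = &pcpu_caches[cpu];
		mtx_lock(&cache->pc_mtx);
		/*
		 * Even if we were pre-empted and migrated after the
		 * snapshot, we now hold the old cpu's cache mutex, so
		 * working on that cpu's cache is still correct.
		 */
		item = cache->pc_freelist;
		if (item != NULL)
			cache->pc_freelist = *(void **)item;
		mtx_unlock(&cache->pc_mtx);
		return (item);
	}

  The worst case is that an allocation occasionally comes out of a
  "remote" cpu's cache; correctness never depends on the cpuid
  staying stable once the mutex is held.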
>     I also noticed that cache_drain() appears to be the only
>     place where you iterate through the per-cpu mutexes. All
>     the other places appear to use the current cpu's mutex.

  That's normal, he drains all of the PCPU caches.

[...]

>     * That you consider an alternative method for draining
>       the per-cpu caches. For example, by having the
>       per-cpu code use a global, shared SX lock along
>       with the critical section to access their per-cpu
>       caches, and then have the cache_drain code obtain
>       an exclusive SX lock in order to have full access
>       to all of the per-cpu caches.
>
>     * Documentation. I.e. comment the code more, especially
>       areas where you have to special-case things, as for
>       example when you unlock a cpu cache in order to
>       call uma_zfree_internal().
>
>						-Matt
>						Matthew Dillon

--
Bosko Milekic
bmilekic@unixdaemons.com
bmilekic@FreeBSD.org
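P.S. For the archives, here is a rough sketch of the SX-based
draining scheme Matt describes above (again, made-up names, not code
from the patch, and it reuses the pcpu_caches array from the sketch
earlier in this message). The fast path takes the zone-global SX
lock shared and enters a critical section, so the cpuid stays stable
and no per-cpu mutex is needed; cache_drain() takes the SX lock
exclusively, which keeps every fast path out while it walks all of
the per-cpu caches:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/lock.h>
	#include <sys/sx.h>
	#include <sys/pcpu.h>

	/* Shared by fast paths, exclusive to drain; sx_init() omitted. */
	static struct sx cache_sx;

	static void *
	cache_alloc_sx_sketch(void)
	{
		struct pcpu_cache *cache;
		void *item;

		sx_slock(&cache_sx);	/* many cpus may hold this shared */
		critical_enter();	/* no migration: cpuid is stable */
		cache = &pcpu_caches[PCPU_GET(cpuid)];
		item = cache->pc_freelist;
		if (item != NULL)
			cache->pc_freelist = *(void **)item;
		critical_exit();
		sx_sunlock(&cache_sx);
		return (item);
	}

	static void
	cache_drain_sx_sketch(void)
	{
		int cpu;

		sx_xlock(&cache_sx);	/* excludes all fast paths */
		for (cpu = 0; cpu < MAXCPU; cpu++) {
			/*
			 * Safe to empty every cpu's cache here; real
			 * code would free the items back to the zone
			 * instead of just dropping the list head.
			 */
			pcpu_caches[cpu].pc_freelist = NULL;
		}
		sx_xunlock(&cache_sx);
	}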