From owner-freebsd-arch Wed Feb 27 15:07:12 2002
Date: Wed, 27 Feb 2002 15:07:05 -0800 (PST)
From: Matthew Dillon
Message-Id: <200202272307.g1RN75T31581@apollo.backplane.com>
To: Jeff Roberson
Cc: arch@FreeBSD.ORG
Subject: Re: Slab allocator
References: <20020227172755.W59764-100000@mail.chesapeake.net>

:See comments below:
:
:On Wed, 27 Feb 2002, Matthew Dillon wrote:
:
:> Well, one thing I've noticed right off the bat is that the code
:> is trying to take advantage of per-cpu queues but is still
:> having to obtain a per-cpu mutex to lock the per-cpu queue.
:>
:> Another thing I noticed is that the code appears to assume
:> that PCPU_GET(cpuid) is stable in certain places, and I don't
:> think that condition necessarily holds unless you explicitly
:> enter a critical section (critical_enter() and critical_exit()).
:> There are some cases where you obtain the per-cpu cache and lock
:> it, which would be safe even if the cpu changed out from under
:> you, and other cases such as in uma_zalloc_internal() where you
:> assume that the cpuid is stable when it isn't.
:
:Ok, I did make a PCPU_GET mistake.  If uma_zalloc_internal is called
:from the fast path it needs to hand down a cache.  The point of the
:locks is so that you don't have to have a critical section around the
:entire allocator.  They really should be fast because they should only
:be cached in one cpu's cache.  This also makes it easier to drain.  I
:think that the preemption and migration case is going to be somewhat
:rare, so it's ok to block another cpu for a little while if it happens.
:As long as I pass around a cpu # it shouldn't matter if I get preempted.

Well, of course it is always nice when using a critical section to
minimize the cycles, which is why you would only use it for the
common-case code.  When used properly it can save a lot of cycles.  For
i386, critical_enter() will soon be optimized down to an inlined,
non-bus-locked ++td->td_critnest, and critical_exit() will wind up being
essentially --td->td_critnest.  That is a huge savings over a mutex,
which at a minimum is going to do a locked bus cycle to memory.  You
want to be careful not to penalize the critical path (i.e. the
common-case code) for the benefit of procedures which are executed
comparatively rarely.

That said, critical sections do not necessarily have to block
interrupts.  Bruce has demonstrated that certain FAST interrupts can in
fact be allowed to operate even while in a critical section.  The
critical_*() code I will be committing as soon as possible gets us
halfway there and already allows certain interrupts (such as VM-related
IPIs) to execute while inside a critical section.  For SMPng I am
confident that at the very least we will be able to schedule ithreads
even while in a critical section, as long as sched_lock is not being
held or is being held in a safe zone, and we will probably be able to
execute certain FAST interrupts as well.  Two sketches below make the
cost argument and the access patterns concrete.
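First, the cost side.  Here is roughly the shape of the inlined fast
path described above.  This is a sketch only: the function names are
made up, and the real critical_exit() carries extra logic beyond the
decrement.  curthread and td_critnest are the real kernel names.

    /* Sketch of the inlined fast path, not the committed code. */
    static __inline void
    crit_enter_sketch(void)
    {
            curthread->td_critnest++;  /* plain increment, no locked bus cycle */
    }

    static __inline void
    crit_exit_sketch(void)
    {
            /*
             * The real exit path must also run any interrupts that
             * were deferred while td_critnest was nonzero.
             */
            curthread->td_critnest--;
    }

A mutex acquire, by contrast, has to execute a locked read-modify-write
(e.g. lock; cmpxchg on i386) that ties up the memory bus, while the
increments above touch only the thread structure, which is already hot
in the current cpu's cache.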
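Second, the per-cpu access patterns under discussion, as I understand
them.  The structure and field names here (pcpu_cache, pc_mtx,
pc_freelist, zc_cpu) are invented for illustration and do not match the
actual uma_zone layout; PCPU_GET(cpuid), critical_enter(),
critical_exit(), MAXCPU, and the mtx_*() calls are the real interfaces.

    /* Hypothetical per-cpu cache; the real UMA structures differ. */
    struct pcpu_cache {
            struct mtx       pc_mtx;       /* protects pc_freelist */
            void            *pc_freelist;  /* items linked through first word */
    };

    struct my_zone {
            struct pcpu_cache zc_cpu[MAXCPU];
    };

    /*
     * Pattern 1: per-cpu mutex.  The cpuid read may go stale if we
     * are preempted and migrated afterward, but that is harmless:
     * whichever cache we end up indexing is protected by its own
     * lock, and we merely lose a little locality.
     */
    static void *
    cache_alloc_locked(struct my_zone *zone)
    {
            struct pcpu_cache *cache;
            void *item;

            cache = &zone->zc_cpu[PCPU_GET(cpuid)];  /* may be stale */
            mtx_lock(&cache->pc_mtx);
            item = cache->pc_freelist;
            if (item != NULL)
                    cache->pc_freelist = *(void **)item;
            mtx_unlock(&cache->pc_mtx);
            return (item);
    }

    /*
     * Pattern 2: critical section, no lock.  Between critical_enter()
     * and critical_exit() we cannot be preempted or migrated, so the
     * cpuid is stable and the free list can be manipulated without
     * any bus-locked instruction.  Draining this cache from another
     * cpu would need additional machinery.
     */
    static void *
    cache_alloc_critical(struct my_zone *zone)
    {
            struct pcpu_cache *cache;
            void *item;

            critical_enter();
            cache = &zone->zc_cpu[PCPU_GET(cpuid)];
            item = cache->pc_freelist;
            if (item != NULL)
                    cache->pc_freelist = *(void **)item;
            critical_exit();
            return (item);
    }

The unsafe variant is the one the quoted discussion is about: reading
PCPU_GET(cpuid) outside both the lock and the critical section and then
assuming it stays valid.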
So I would not worry too much about critical sections blocking
interrupts.  They should be thought of as a mechanism to prevent
unexpected preemption or cpu migration and, insofar as FAST interrupts
do not usually call into other subsystems, to prevent unexpected
alterations of the per-cpu data.  They should not be thought of as a
mechanism that blocks interrupts.

					-Matt