From owner-freebsd-arch Thu Jan  3 17:57:30 2002
Date: Thu, 03 Jan 2002 17:57:15 -0800
From: Terry Lambert
To: John Baldwin
Cc: Peter Jeremy, Michal Mertl, Bruce Evans, Mike Smith, Bernd Walter,
    arch@FreeBSD.ORG, Alfred Perlstein, Matthew Dillon, Bernd Walter
Subject: Re: When to use atomic_ functions? (was: 64 bit counters)

John Baldwin wrote:
> > 7) The "scheduler insertion queue" for the selected target
> >    CPU (a simple linked list) has the task added to it (the
> >    task is migrated).  This requires a lock on the queue;
> >    note that only one other CPU is tied up in this process,
> >    in an N CPU system, and the statistical likelihood of a
> >    collision is very low.
> >
> > 8) Each CPU going into its scheduler looks first at its per
> >    CPU scheduler insertion queue; this look does not require
> >    a lock, in most instances, since all it is doing is a
> >    check to see if a pointer is NULL.  If the pointer is not
> >    NULL, then (and only then) is a lock on the queue
> >    asserted by the CPU that "owns" it, and the element is
> >    dequeued, the lock released, and run.
> Actually, if another CPU can write to this queue, which 7) seems to
> indicate, then you do need a lock.  Well, if you can guarantee that
> the entire word is always consistent then you may be able to get away
> with this, but you can read a stale value (you get NULL when another
> CPU has written to it) and you will miss a task.  I suppose that is a
> losable race, as you will run the local task on the next switch.

Yes, it's a losable race.  You care about presence.

> When you do grab a task off your local queue you _will_ need a lock,
> however.  Otherwise another CPU could be giving you a task at the
> same time, and depending on how this worked you could end up with a
> task that gets "lost" and never gets to run.  Or worse, you could
> corrupt the list.  Thus, you do always need at least one lock when
> getting the next task to run.  However, that lock may be a per-CPU
> lock, which will be contended for less than a lock on a system-wide
> queue.

Yes.  See #8... I call that lock out explicitly.  And it's only
contended when you are "gifted" a new task.

> > The point of this example is that the issue isn't about how much
> > or how little locks cost, but about designs that don't require
> > locks in the common case.
>
> Yes, optimizing is good, but getting the design right first is much
> more important.  Optimizations can wait until you have the design
> closer to finalized.  Premature optimization can lock you into bad
> design decisions that you end up regretting later on.

I think trying to make the locks very fast is just such a premature
optimization.  8-).

-- Terry

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message