Date:      Thu, 03 Jan 2002 17:57:15 -0800
From:      Terry Lambert <tlambert2@mindspring.com>
To:        John Baldwin <jhb@FreeBSD.org>
Cc:        Peter Jeremy <peter.jeremy@alcatel.com.au>, Michal Mertl <mime@traveller.cz>, Bruce Evans <bde@zeta.org.au>, Mike Smith <msmith@FreeBSD.ORG>, Bernd Walter <ticso@cicely8.cicely.de>, arch@FreeBSD.ORG, Alfred Perlstein <bright@mu.org>, Matthew Dillon <dillon@apollo.backplane.com>, Bernd Walter <ticso@cicely9.cicely.de>
Subject:   Re: When to use atomic_ functions? (was: 64 bit counters)
Message-ID:  <3C350BFB.63EC59C0@mindspring.com>
References:  <XFMail.020103145704.jhb@FreeBSD.org>

John Baldwin wrote:
> > 7)    The "scheduler insertion queue" for the selected target
> >       CPU (a simple linked list) has the task added to it (the
> >       task is migrated).  This requires a lock on the queue;
> >       note that only one other CPU is tied up in this process,
> >       in an N CPU system, and the statistical likelihood of a
> >       collision is very low.
> > 8)    Each CPU going into its scheduler looks first at its per
> >       CPU scheduler insertion queue; this look does not require
> >       a lock, in most instances, since all it is doing is a
> >       check to see if a pointer is NULL.  If the pointer is not
> >       NULL, then (and only then) is a lock on the queue
> >       asserted by the CPU that "owns" it, and the element is
> >       dequeued, the lock released, and run.
> 
> Actually, if another CPU can write to this queue, which 7) seems to indicate,
> then you do need a lock.  Well, if you can guarantee that the entire word is
> always consistent then you may be able to get away with this, but you can read
> a stale value (you get NULL when another CPU has written to it) and you will
> miss a task.  I suppose that is a losable race as you will run the local task
> on the next switch.

Yes, it's a losable race.  You care about presence.

> When you do grab a task off your local queue you _will_ need a lock
> however.  Otherwise another CPU could be giving you a task at the
> same time and depending on how this worked you could end up with a task that
> gets "lost" and never gets to run.  Or worse, you could corrupt the list.
> Thus, you do always need at least one lock when getting the next task to run.
> However, that lock may be a per-CPU lock which will be contended for less than
> a lock on a system-wide queue.

Yes.  See #8... I call that lock out explicitly.  And it's only
contended when you are "gifted" a new task.
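To make the scheme in #7/#8 concrete, here is a minimal userland sketch of
the pattern under discussion: an unlocked NULL check on the per-CPU
insertion queue in the common case, with the per-CPU lock taken only to
insert (the "gift" from another CPU) or to dequeue a task that is actually
present.  The names (cpu_runq, runq_gift, runq_poll) are illustrative, not
FreeBSD's, and pthread mutexes stand in for kernel locks.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct task {
	struct task	*next;
	int		 id;
};

struct cpu_runq {
	struct task * volatile head;	/* read without the lock in step 8 */
	pthread_mutex_t	lock;		/* per-CPU lock, only contended on a gift */
};

/*
 * Step 7: another CPU migrates a task onto this CPU's insertion
 * queue.  The writer must hold the target queue's lock.
 */
static void
runq_gift(struct cpu_runq *q, struct task *t)
{
	pthread_mutex_lock(&q->lock);
	t->next = q->head;
	q->head = t;
	pthread_mutex_unlock(&q->lock);
}

/*
 * Step 8: the owning CPU peeks at the head without the lock.  A stale
 * NULL is the losable race discussed above: the gifted task is simply
 * picked up on the next pass through the scheduler.  Only a non-NULL
 * head causes the lock to be asserted for the dequeue.
 */
static struct task *
runq_poll(struct cpu_runq *q)
{
	struct task *t;

	if (q->head == NULL)		/* common case: no lock taken */
		return (NULL);
	pthread_mutex_lock(&q->lock);
	t = q->head;			/* recheck under the lock */
	if (t != NULL)
		q->head = t->next;
	pthread_mutex_unlock(&q->lock);
	return (t);
}
```

Note the recheck of the head under the lock: the unlocked read is only a
hint about presence, so the dequeue must revalidate it once the lock is
held.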


> > The point of this example is that the issue isn't about how much
> > or how little locks cost, but about designs that don't require
> > locks in the common case.
> 
> Yes, optimizing is good, but getting the design right first is much more
> important.  Optimizations can wait until you have the design closer to
> finalized.  Pre-mature optimization can lock you into bad design decisions that
> you end up regretting later on.

I think trying to make the locks very fast is just such a
premature optimization.  8-).

-- Terry
