Date:      Tue, 12 Sep 2000 00:12:52 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        grog@lemis.com (Greg Lehey)
Cc:        jhb@pike.osd.bsdi.com (John Baldwin), markm@FreeBSD.ORG (Mark Murray), FreeBSD-arch@FreeBSD.ORG
Subject:   Re: cvs commit: src/sys/conf files src/sys/sys random.h src/sys/dev/randomdev hash.c hash.h harvest.c randomdev.c yarrow.c yarro
Message-ID:  <200009120012.RAA25062@usr09.primenet.com>
In-Reply-To: <20000912091705.O19431@wantadilla.lemis.com> from "Greg Lehey" at Sep 12, 2000 09:17:05 AM

> [following up to -arch]
[ ... ]
> I've been wondering whether we shouldn't associate mutexes with data
> structures rather than code.  It's possible that it would make it
> easier to avoid deadlocks.  Thoughts?

I think that deadlock detection and back-off beats deadlock avoidance,
particularly if you are trying to debug something hairy.  The ability
to take an EWOULDBLOCK response and decide to print out the cycle
that would result in the deadlock, not only for the caller trying to
assert a new lock/mutex, but for the other player(s) in the deadlock
situation, is invaluable.
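As a rough illustration of the detect-and-back-off idea (names like
LockManager and the string return codes are mine, not any real FreeBSD
API), a sketch in Python:

```python
# Hypothetical sketch: wait-for-graph deadlock detection with back-off.
# On a would-block acquisition, walk the chain of waiters; if it leads
# back to the requester, print the whole cycle and refuse the lock.

class LockManager:
    def __init__(self):
        self.owner = {}      # lock -> thread currently holding it
        self.waits_for = {}  # thread -> thread it is blocked on

    def acquire(self, thread, lock):
        holder = self.owner.get(lock)
        if holder is None or holder == thread:
            self.owner[lock] = thread
            return "OK"
        # Would block: check whether waiting here would close a cycle.
        cycle = [thread]
        t = holder
        while t is not None:
            cycle.append(t)
            if t == thread:
                # Report every participant in the deadlock, then back off.
                print("deadlock cycle:", " -> ".join(cycle))
                return "EWOULDBLOCK"
            t = self.waits_for.get(t)
        self.waits_for[thread] = holder
        return "BLOCKED"
```

Given threads A and B each holding one lock and requesting the other's,
the second request returns EWOULDBLOCK and the printed cycle names both
players, which is exactly the debugging information argued for above.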

To achieve this, you really want to associate all locks into an
overall hierarchical relationship.  This lets you keep metadata
on the locks; if you are familiar with Sedgewick, what you will
be doing is keeping a running tally of the transitive closure of
arcs (one per locking thread) over a directed acyclic graph.

When you want to assert a new lock, you imply an edge from the lock
location (as a leaf node) to the root.  Then you can do a linear
traversal to the root; this will immediately point out any cycles
which would be caused by the new lock, should the assertion go
through.  This is normally a small number of compares, equal to the
depth of your tree at the location where the new lock would be,
plus one.  Effectively, this is a near-constant-time method of finding
cycles in your proposed lock graph.
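One way to sketch that leaf-to-root traversal (this assumes a strict
hierarchy in which locks are taken root-to-leaf; the class and function
names are illustrative only):

```python
# A lock is safe to assert only if every lock the thread already holds
# lies on the path from the new lock up to the root; anything off that
# path is an ordering violation, i.e. a potential cycle.

class LockNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def safe_to_assert(new_lock, held):
    # One walk to the root: depth-plus-one compares, as in the text.
    ancestors = set()
    node = new_lock
    while node is not None:
        ancestors.add(node)
        node = node.parent
    return all(lock in ancestors for lock in held)
```

For example, holding a table lock and a chunk lock while asserting a
slot lock beneath them is safe, while holding a slot lock and asserting
the table lock above it is flagged as a potential deadlock.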

Creation and deletion of lock objects in the graph is a rare event,
and so can be permitted to be relatively expensive (though it's on
the order of 20,000 transactions a second on a P60, per work done at
Novell in 1994).

SIX (shared with intent exclusive) mode locking can further increase
concurrency.

It's easy to see that you could have, for example, a process table
that you wanted multiple threads to be able to access simultaneously.
You could have a top level "entire process table" lock, and then
divide the table into 6 chunks with inferior locks hung off the top
level lock.  This would let readers traverse 5/6ths of the table
without contention while one writer holds a lock for creation of a
new table entry.  You could take this to a per-slot locking
granularity, if you felt it were necessary for the task at hand.
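A minimal sketch of that two-level scheme, using plain Python threading
locks rather than any kernel primitive (the class, slot count, and
method names are all illustrative):

```python
import threading

# A "process table" split into six chunks, each guarded by an inferior
# lock hung off a top-level table lock, as in the example above.

NCHUNKS = 6

class ProcTable:
    def __init__(self, nslots=60):
        self.slots = [None] * nslots
        self.table_lock = threading.Lock()            # top-level lock
        self.chunk_locks = [threading.Lock() for _ in range(NCHUNKS)]
        self.chunk_size = nslots // NCHUNKS

    def _chunk(self, slot):
        return slot // self.chunk_size

    def read_slot(self, slot):
        # Readers contend only on their own chunk, not the whole table.
        with self.chunk_locks[self._chunk(slot)]:
            return self.slots[slot]

    def create_entry(self, slot, value):
        # A writer holds the top-level lock only long enough to take
        # the inferior chunk lock, then releases it before writing.
        with self.table_lock:
            chunk_lock = self.chunk_locks[self._chunk(slot)]
            chunk_lock.acquire()
        try:
            self.slots[slot] = value
        finally:
            chunk_lock.release()
```

Readers in the other five chunks proceed without ever touching the
top-level lock for long, which is the concurrency win claimed above.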

The main point is that the closer you get to the root, the less time
you need to hold a lock in order to acquire an inferior lock, since
the inferior (leaf) lock is what's protecting your data -- non-leaf
locks only protect access to their inferior lists.

Dynix used a similar approach for some of its highly contended kernel
structures and resource pools, though it only went one deep in most
places.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.

