Date: Fri, 18 Sep 1998 03:04:35 +0000 (GMT)
From: Terry Lambert <tlambert@primenet.com>
To: eivind@yes.no (Eivind Eklund)
Cc: tlambert@primenet.com, freebsd-hackers@FreeBSD.ORG
Subject: Re: Unused functions
Message-ID: <199809180304.UAA00324@usr04.primenet.com>
In-Reply-To: <19980917142841.01218@follo.net> from "Eivind Eklund" at Sep 17, 98 02:28:41 pm
> [I'm keeping a lot of context to make this easy to read; I also note
> that I didn't move this to -chat when I thought I did, but we're
> getting into some locking-related discussions that are again related
> to -hackers, so I keep it here now]

I'll take the compiler discussion to private email...

> > Using this vector as an arc through the existing Warshall-complete
> > graph, you look for a Hamiltonian cycle.  If the cycle exists, then
> > you have a deadlock condition.
>
> Don't you have a deadlock condition if _any_ cycle exists?
>
> Ie, you grab a bunch of locks, blocking on an attempted lock of X.
> Whatever is holding X ends up blocking on one of your locks.  You've
> got a cycle, and thus you've got a deadlock.

Yes.  The question is "how do I know when I have a cycle?".

The best way is to take the list of locks that an entity has, and
consider it as a connectivity vector through a graph.  For the
standard graph, you have precalculated transitive closure via
Warshall's before you try this.

Basically, this means I have:

1)	A DAG of hierarchically connected nodes that models the
	granularity of the locking infrastructure for the system;
	an individual node is called a "LOCKABLE".

2)	A DAG of hierarchically related contexts that can hold locks.
	In typical use, this will be a set of related scheduling
	entities that can not be permitted to lock against themselves
	(e.g., kernel threads, or more generally, kernel schedulable
	entities, or KSE's).  The most common DAG will reduce to a
	simply connected DAG of KSE's, or a "locking context vector".
	An individual node is called a "LOCKING ENTITY".

3)	A simply connected DAG (vector) that describes the locks held
	by an individual locking entity *or* lockable.  This is a
	"LOCK VECTOR".  An individual vector element is called a
	"LOCK".  ;-).

So what we have is the intersection of two graphs, complexly connected
via one or more vectors that describe the connection points.
When I ask for a lock, I'm asking to extend a vector by one element
off (a) the lockable and (b) the locking entity.

In effect, I have a 5 dimensional graph that I need to search for a
cycle.

> Why do you need Hamiltonian cycles?  They're a pain to find, and I
> can't see that looking for them gets you anything; I can't see that
> whether they are present makes a difference for this case.

Because most of the work to find them has already been done, and then
cached, in the structure of the graph.

I have the transitive closure over the lockable graph, and I have the
closure over the locking entity graph, and I inherit the vector back
to the "parent" (topmost) locking entity.

This means I can find a Hamiltonian cycle by examining the path from
the point I want to lock in the lockable graph to the root of the
lockable graph, looking for instances of my locking entity.  If I
find one, then I have a cycle.

So it's mostly precalculated, and the time it takes is the time
needed for exactly one traversal to the root.  In general, this is a
number of pointer dereferences + compares equal to the depth in the
graph of the lockable being locked.  For locking, this is damn fast,
and the cost of maintaining the closure calculation when adding a
leaf node (i.e., registering a new lockable into the tree) is
trivial, as is the cost of inheriting the locks up.

> That way to look for deadlocks looks right for FreeBSD.  However, I
> still don't get why it needs Hamiltonian (as opposed to plain)
> cycles :-(

Because they are nearly free, given the already cached data.
Effectively, each vector is reduced to a single element, and each
locking entity DAG is reduced to a single element.

One of the biggest wins here is that you don't have to take IPI's for
L1/L2 cache coherency for most locks.  The use of intention mode
locking to implement the locks on top of this framework just adds
gravy to already high concurrency.
					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-hackers" in the body of the message