Date: Fri, 6 Dec 1996 08:53:00 +0900 (JST)
From: Michael Hancock <michaelh@cet.co.jp>
To: Terry Lambert <terry@lambert.org>
Cc: Bakul Shah <bakul@plexuscom.com>, julian@whistle.com, cracauer@wavehh.hanse.de, nawaz921@cs.uidaho.EDU, freebsd-hackers@FreeBSD.ORG
Subject: Re: clone()/rfork()/threads (Re: Inferno for FreeBSD)
Message-ID: <Pine.SV4.3.95.961206084725.29547A-100000@parkplace.cet.co.jp>
In-Reply-To: <199612050216.TAA18540@phaeton.artisoft.com>
I wonder how DEC handles priority inversion. Do they use priority
lending?
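For context: "priority lending" is more often called priority inheritance.
A thread that blocks on a lock temporarily donates its priority to the
lock holder, so the holder cannot be preempted by medium-priority work
while a high-priority waiter sits blocked.  A minimal C sketch of the
idea; the types and helpers here are hypothetical, not any real DEC or
FreeBSD interface:

	struct thread {
		int	prio;		/* current (possibly lent) priority */
		int	base_prio;	/* priority before any lending */
	};

	struct pi_mutex {
		struct thread	*owner;	/* NULL if unlocked */
	};

	/* Called when `waiter' blocks on a mutex held by m->owner;
	 * assumes larger numbers mean higher priority. */
	static void
	lend_priority(struct pi_mutex *m, struct thread *waiter)
	{
		if (m->owner != NULL && waiter->prio > m->owner->prio)
			m->owner->prio = waiter->prio;	/* boost the holder */
	}

	/* Called when the owner releases the mutex.  A real
	 * implementation must also undo nested and chained lending,
	 * which is where most of the complexity lives. */
	static void
	restore_priority(struct thread *owner)
	{
		owner->prio = owner->base_prio;
	}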
Computing the transitive closure takes too much time, doesn't it? How
many nodes are there in a typical system? Is there an algorithm that
scales well?
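For scale: the textbook way to compute the transitive closure of a
reachability matrix is Warshall's algorithm, which is O(n^3) in the
number of nodes.  For the few dozen nodes of a lock hierarchy that is
cheap, provided the closure is computed once when the graph is built
rather than on every lock operation.  A sketch, with an illustrative
graph size:

	#include <stdbool.h>

	#define NNODES	16	/* illustrative lock-graph size */

	/*
	 * reach[i][j] starts as the edge relation of the lock DAG
	 * and ends as its transitive closure.
	 */
	static void
	transitive_closure(bool reach[NNODES][NNODES])
	{
		int i, j, k;

		for (k = 0; k < NNODES; k++)
			for (i = 0; i < NNODES; i++)
				for (j = 0; j < NNODES; j++)
					if (reach[i][k] && reach[k][j])
						reach[i][j] = true;
	}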
Regards,
Mike Hancock
On Wed, 4 Dec 1996, Terry Lambert wrote:
> > The above idea can be extended to multiprocessors fairly easily.
> > Though multiprocessor schedulers that can also do realtime
> > scheduling (and appropriately deal with priority inversion) are not
> > easy.
>
> Heh.  "Locking nodes in a directed acyclic graph representing a lock
> hierarchy" will address the priority inversion handily -- assuming
> you compute transitive closure over the entire graph, instead of the
> subelements for a single processor or kernel subsystem.  This
> requires that you be clever with per-processor memory regions for
> global objects which are scoped in per-processor pools.  For instance,
> say I have N processors.
>
>	                     global lock
>	                         /
>	                        /
>	                    VM lock
>	                  /    |    \
>	                 /     |     \
>	              XXX  global page pool  ...
>	                     /    |    \
>	                    /     |     \
>	               CPU 1   CPU 2 ... CPU N  page pool locks
>
>
> 	init_page_locks( 2)
> 	{
> 		lock global lock IX		(intention exclusive)
> 		lock VM lock IX
> 		lock global page pool IX
> 		lock CPU 2 page pool lock IX
> 		/* promise no one but CPU 2, single threaded, will touch
> 		 * CPU 2 page pool...
> 		 */
> 		lock CPU 2 page pool lock X	(exclusive)
> 	}
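For readers unfamiliar with the IX/X notation: these are the standard
multiple-granularity lock modes from the database literature.  An
intention lock (IS/IX) on an ancestor node advertises that some
descendant will be locked S/X for real, so conflicts are caught at the
ancestor without scanning the whole subtree.  The usual compatibility
rules as a small C table (illustrative, not a FreeBSD interface); a
request for mode m on a node is granted only if compat[m][h] is true
for every mode h already held there:

	#include <stdbool.h>

	enum lockmode { LK_IS, LK_IX, LK_S, LK_X };

	static const bool compat[4][4] = {
		/*		  IS	 IX	S	X     */
		/* IS */	{ true,	 true,	true,	false },
		/* IX */	{ true,	 true,	false,	false },
		/* S  */	{ true,	 false,	true,	false },
		/* X  */	{ false, false,	false,	false },
	};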
>
> 	alloc_page( 2)		/* someone on CPU 2 wants a page... */
> 	{
> 		is page pool at low water mark? {
> 			/* Prevent other CPUs from doing the same... */
> 			lock X global page pool
> 			get pages from global page pool into CPU 2 page pool
> 			/* OK for other CPUs to do the same... */
> 			unlock X global page pool
> 		}
> 		return get page from CPU 2 page pool
> 	}
>
> 	free_page( 2)		/* someone on CPU 2 is throwing a page away... */
> 	{
> 		put page in CPU 2 page pool
> 		is page pool at high water mark? {
> 			/* Prevent other CPUs from doing the same... */
> 			lock X global page pool
> 			put pages from CPU 2 page pool into global page pool
> 			/* OK for other CPUs to do the same... */
> 			unlock X global page pool
> 		}
> 	}
>
> No need to hold a global lock or hit the bus for inter-CPU state unless
> we hit the high or low water mark...
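Rendered as a minimal C sketch -- hypothetical types and lock
primitives, and leaning on the init_page_locks() promise that each
per-CPU pool is touched only by its own CPU, so the per-CPU side needs
no lock at all:

	#include <stddef.h>

	#define NCPU		4	/* illustrative */
	#define POOL_MAX	128	/* sketch assumes pools never exceed this */
	#define LOW_WATER	8
	#define HIGH_WATER	64

	struct page;			/* opaque */
	struct lock;			/* hypothetical lock primitive */

	extern void lock_x(struct lock *);	/* lock X (exclusive) */
	extern void unlock_x(struct lock *);

	struct page_pool {
		struct page	*pages[POOL_MAX];
		int		count;
	};

	static struct lock	*global_pool_lock;
	static struct page_pool	global_pool;
	static struct page_pool	cpu_pool[NCPU];

	/* Bulk transfer between pools; moving half a pool at a time
	 * amortizes the cost of taking the global lock. */
	static void
	transfer(struct page_pool *from, struct page_pool *to, int n)
	{
		while (n-- > 0 && from->count > 0 && to->count < POOL_MAX)
			to->pages[to->count++] = from->pages[--from->count];
	}

	struct page *
	alloc_page(int cpu)
	{
		struct page_pool *pp = &cpu_pool[cpu];

		if (pp->count <= LOW_WATER) {
			/* Only here do we touch shared state (and the bus). */
			lock_x(global_pool_lock);
			transfer(&global_pool, pp, HIGH_WATER / 2 - pp->count);
			unlock_x(global_pool_lock);
		}
		if (pp->count == 0)
			return (NULL);	/* global pool was empty too */
		return (pp->pages[--pp->count]);
	}

	void
	free_page(int cpu, struct page *pg)
	{
		struct page_pool *pp = &cpu_pool[cpu];

		pp->pages[pp->count++] = pg;
		if (pp->count >= HIGH_WATER) {
			/* Drain the excess back to the global pool. */
			lock_x(global_pool_lock);
			transfer(pp, &global_pool, pp->count - HIGH_WATER / 2);
			unlock_x(global_pool_lock);
		}
	}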
