From owner-freebsd-hackers  Thu Dec  5 15:54:02 1996
Return-Path:
Received: (from root@localhost) by freefall.freebsd.org (8.8.4/8.8.4) id PAA13187 for hackers-outgoing; Thu, 5 Dec 1996 15:54:02 -0800 (PST)
Received: from parkplace.cet.co.jp (parkplace.cet.co.jp [202.32.64.1]) by freefall.freebsd.org (8.8.4/8.8.4) with ESMTP id PAA13178 for ; Thu, 5 Dec 1996 15:53:59 -0800 (PST)
Received: from localhost (michaelh@localhost) by parkplace.cet.co.jp (8.8.3/CET-v2.1) with SMTP id XAA29588; Thu, 5 Dec 1996 23:53:00 GMT
Date: Fri, 6 Dec 1996 08:53:00 +0900 (JST)
From: Michael Hancock
To: Terry Lambert
cc: Bakul Shah, julian@whistle.com, cracauer@wavehh.hanse.de, nawaz921@cs.uidaho.EDU, freebsd-hackers@FreeBSD.ORG
Subject: Re: clone()/rfork()/threads (Re: Inferno for FreeBSD)
In-Reply-To: <199612050216.TAA18540@phaeton.artisoft.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-hackers@FreeBSD.ORG
X-Loop: FreeBSD.org
Precedence: bulk

I wonder how DEC handles priority inversion.  Do they use priority
lending?

Computing transitive closure takes too much time, doesn't it?  How many
nodes are there in a typical system?  Is there an algorithm that scales
well?

Regards,

Mike Hancock

On Wed, 4 Dec 1996, Terry Lambert wrote:

> > The above idea can be extended to multiprocessors fairly easily.
> > Though multiprocessor schedulers that can also do realtime
> > scheduling (and appropriately deal with priority inversion) are not
> > easy.
>
> Heh.  "Locking nodes in a directed acyclic graph representing a lock
> hierarchy" will address the priority inversion handily -- assuming
> you compute transitive closure over the entire graph, instead of the
> subelements for a single processor or kernel subsystem.  This
> requires that you be clever with per-processor memory regions for
> global objects which are scoped in per-processor pools.  For instance,
> say I have N processors.
>
>                      global lock
>                         /
>                        /
>                    VM lock
>                    /   |   \
>                   /    |    \
>                XXX   global page pool   ...
>                       /   |   \
>                      /    |    \
>                  CPU 1  CPU 2  ...  CPU N page pool locks
>
>
> init_page_locks( 2)
> {
>         lock global lock IX (intention exclusive)
>         lock VM lock IX
>         lock global page pool IX
>         lock CPU 2 page pool lock IX
>         /* promise no one but CPU 2, single threaded, will touch
>          * CPU 2 page pool...
>          */
>         lock CPU 2 page pool lock X (exclusive)
> }
>
> alloc_page( 2)        /* someone on CPU 2 wants a page...*/
> {
>         is page pool at low water mark? {
>                 /* Prevent other CPUs from doing same...*/
>                 lock X global page pool
>                 get pages from global page pool into CPU 2 page pool
>                 /* OK for other CPUs to do same...*/
>                 unlock X global page pool
>         }
>         return = get page from CPU 2 page pool
> }
>
> free_page( 2)         /* someone on CPU 2 is throwing a page away*/
> {
>         put page in CPU 2 page pool
>         is page pool at high water mark? {
>                 /* Prevent other CPUs from doing same...*/
>                 lock X global page pool
>                 put pages from CPU 2 page pool into global page pool
>                 /* OK for other CPUs to do same...*/
>                 unlock X global page pool
>         }
> }
>
> No need to hold a global lock or hit the bus for inter-CPU state unless
> we hit the high or low water mark...
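
[Editor's sketch]  The water-mark scheme quoted above can be written out as
ordinary C.  The sketch below is only an illustration, not the kernel code
under discussion: it assumes a user-level model with a pthread mutex standing
in for the exclusive lock on the global page pool, and the names (struct
cpu_pool, NCPU, LOW_WATER, HIGH_WATER, BATCH) are invented for the example.
The "only CPU 2 touches CPU 2's pool" promise made in init_page_locks() is
what lets alloc_page()/free_page() work on the per-CPU list without taking
any lock at all in the common case.

    #include <pthread.h>
    #include <stddef.h>

    #define NCPU        4       /* number of per-CPU pools (assumed) */
    #define LOW_WATER   8       /* refill the private pool below this */
    #define HIGH_WATER  64      /* drain the private pool above this */
    #define BATCH       16      /* pages moved per global-pool visit */

    struct page { struct page *next; };

    struct cpu_pool {
            struct page *head;  /* private free list, owner CPU only */
            int          count;
    };

    static struct cpu_pool  cpu_pool[NCPU];
    static struct page     *global_head;    /* shared free list */
    static pthread_mutex_t  global_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Move up to n pages from the global pool into cpu's private pool. */
    static void
    refill(struct cpu_pool *cp, int n)
    {
            pthread_mutex_lock(&global_lock);   /* "lock X global page pool" */
            while (n-- > 0 && global_head != NULL) {
                    struct page *pg = global_head;

                    global_head = pg->next;
                    pg->next = cp->head;
                    cp->head = pg;
                    cp->count++;
            }
            pthread_mutex_unlock(&global_lock);
    }

    struct page *
    alloc_page(int cpu)
    {
            struct cpu_pool *cp = &cpu_pool[cpu];
            struct page *pg;

            if (cp->count <= LOW_WATER)         /* at low water mark? */
                    refill(cp, BATCH);
            pg = cp->head;                      /* lock-free: owner CPU only */
            if (pg != NULL) {
                    cp->head = pg->next;
                    cp->count--;
            }
            return (pg);                        /* NULL if both pools empty */
    }

    void
    free_page(int cpu, struct page *pg)
    {
            struct cpu_pool *cp = &cpu_pool[cpu];

            pg->next = cp->head;                /* lock-free: owner CPU only */
            cp->head = pg;
            cp->count++;
            if (cp->count >= HIGH_WATER) {      /* at high water mark? */
                    pthread_mutex_lock(&global_lock);
                    while (cp->count > HIGH_WATER - BATCH) {
                            struct page *p = cp->head;

                            cp->head = p->next;
                            cp->count--;
                            p->next = global_head;
                            global_head = p;
                    }
                    pthread_mutex_unlock(&global_lock);
            }
    }

As in the quoted pseudocode, the shared lock (and the bus traffic that goes
with it) is only touched when a pool crosses its low or high water mark.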
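
[Editor's sketch]  On the priority-lending question at the top of the
message: the mechanism usually meant by that term is priority inheritance,
where the holder of a contended lock is temporarily boosted to the priority
of its highest-priority waiter, so no graph-wide transitive closure is
needed for the common single-lock case.  Whether DEC's implementation
actually worked this way is not established by this thread.  A minimal
sketch using the POSIX realtime-threads interface (which standardized the
idea) follows; the helper name init_pi_mutex() is invented for illustration.

    #include <pthread.h>

    pthread_mutex_t pool_mutex;

    /* Initialize a mutex whose holder is boosted to the priority of its
     * highest-priority waiter (priority inheritance / "priority lending"). */
    int
    init_pi_mutex(void)
    {
            pthread_mutexattr_t attr;
            int error;

            error = pthread_mutexattr_init(&attr);
            if (error != 0)
                    return (error);
            error = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
            if (error == 0)
                    error = pthread_mutex_init(&pool_mutex, &attr);
            pthread_mutexattr_destroy(&attr);
            return (error);
    }

The other standard protocol, PTHREAD_PRIO_PROTECT (priority ceiling), avoids
the boosting machinery by raising the holder to a precomputed ceiling as soon
as the lock is taken, at the cost of having to know that ceiling in advance.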