From owner-freebsd-arch Thu Nov 25 01:10:51 1999
Date: Thu, 25 Nov 1999 01:09:13 -0800 (PST)
From: Julian Elischer <julian@whistle.com>
To: "Richard Seaman, Jr." <dick@tar.com>
Cc: freebsd-arch@freebsd.org
Subject: Re: Threads
In-Reply-To: <19991124173947.K1408@tar.com>

On Wed, 24 Nov 1999, Richard Seaman, Jr. wrote:

> On Wed, Nov 24, 1999 at 02:40:52PM -0800, Julian Elischer wrote:
>
> > Here are some common points that I think everyone agrees on:
> >
> > The proc structure gets broken down to separate out those parts
> > needed to schedule (i.e. context etc).
>
> This is probably the best. But if you wanted to start out in a less
> invasive manner (i.e. fewer kernel changes), I don't know why the
> KSE couldn't be an rforked proc (with appropriate flags), with only
> minor changes to the proc structure (e.g. you might need to hold a
> pointer to the UTS, though if your "upcall" was just a signal,
> maybe the pointer to the UTS could be a signal handler).
>
> In this case your KSE would be "heavier", in terms of kernel
> context, than you would really need, but hopefully you wouldn't
> need all that many of them (one for each processor, plus one for
> each thread blocked in a syscall, plus maybe a few more if you need
> more scheduler classes for your thread pool).

I was thinking about this, but I think it may be better to actually
start the split. At the moment our mental model involves three kinds
of entities:

1/ A process. Basically looks like a unix process. It can use a
   variant of rfork() to produce several subprocesses.

2/ Subprocesses. To run on multiple CPUs you need multiple
   subprocesses. Also, if you want to run some code in a different
   scheduling environment (e.g. a different priority), you need a
   subprocess for that. For example, on a 4-processor machine you
   could have 4 low-priority subprocesses, each bound to one
   processor, and a high-priority subprocess that might or might not
   be bound to a processor. Subprocesses compete with non-MT
   processes as equals for CPU slices, but share address space etc.
   They are basically the LinuxThreads model.

3/ KSEs. Each subprocess has at least ONE KSE. In LinuxThreads,
   that's the end of the story. However, in our ideal world, once a
   mode was flipped, any KSE that blocked in the kernel would
   immediately be replaced by another one made available to the
   subprocess, to allow it to finish out its quantum. Thus the real
   invariant would be that a subprocess has at least one KSE, and at
   most one running KSE.
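To make the split concrete, here is a rough sketch of how the
structures might divide (every name below is invented for
illustration; this is hand-waving, not proposed code):

    /* Sketch only: one possible division of today's struct proc. */

    struct proc {                     /* shared, per-process state */
        struct vmspace  *p_vm;        /* address space, shared by all */
        struct filedesc *p_fd;        /* file descriptors, also shared */
        struct ksegrp   *p_subprocs;  /* list of subprocesses */
        /* credentials, signal state, ... */
    };

    struct ksegrp {                   /* a "subprocess": what the scheduler sees */
        struct proc   *sp_proc;       /* back pointer to the process */
        int            sp_pri;        /* its own scheduling priority/class */
        int            sp_cpu;        /* processor binding, if any */
        void          *sp_uts;        /* where to deliver upcalls to the UTS */
        struct kse    *sp_kses;       /* >= 1 KSE, at most one of them running */
    };

    struct kse {                      /* context for one thread of control
                                       * inside the kernel */
        struct ksegrp *ke_group;      /* owning subprocess */
        void          *ke_stack;      /* its kernel stack */
        /* sleep queue linkage + space for saved registers
         * (maybe on the stack :-) */
    };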
A KSE is basically a holder for the context of a thread of control
within the kernel, i.e. it has a kernel stack, a few fields to allow
it to sleep on the sleep queues, and space for storage of all the
registers (maybe on the stack :-). When a KSE is woken up, it's taken
off the sleep queue and hung on the subprocess (which is put on the
run queue). When the subprocess is next run, one of two things
happens, depending on whether you use Matt's theory or mine/Dan's.
In Matt's world, one of the KSEs hanging off the subprocess is fired
off to return to user space. In my world, a UTS is started,
completion information on all waiting KSEs is passed back to the UTS,
and the KSEs are all freed.

> The aio kernel code manages a pool of "kernel threads" that might
> provide a template for how to manage a pool of KSE's in this
> manner.

I was looking at this just yesterday :-)

> Doing this might get you up and running sooner, with a longer term
> goal of revising the proc structure more extensively.
>
> Just a thought.
>
> --
> Richard Seaman, Jr.      email: dick@tar.com
> 5182 N. Maple Lane       phone: 262-367-5450
> Chenequa WI 53058        fax:   262-367-5852
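PS: to make the two return paths above concrete, here's a rough
sketch (again, every name is invented; the helpers are assumptions,
not real kernel calls):

    /* Sketch only: what might happen when a subprocess that has
     * completed KSEs hanging off it is next chosen to run. */

    struct ksegrp;                       /* the subprocess */
    struct kse;                          /* a kernel schedulable entity */

    struct kse_completion {              /* what the UTS gets per finished KSE */
        int   kc_tid;                    /* which user thread the KSE ran */
        void *kc_uctx;                   /* its saved user context */
        long  kc_retval;                 /* e.g. the syscall's return value */
    };

    /* Assumed helpers, for illustration only: */
    extern int  matts_model;
    extern struct kse *kse_pick_completed(struct ksegrp *);
    extern void kse_resume(struct kse *);    /* finish its return to user space */
    extern struct kse_completion *kse_collect(struct ksegrp *);
                                             /* harvest and free ALL waiting KSEs */
    extern void uts_upcall(struct ksegrp *, struct kse_completion *);

    void
    subproc_next_run(struct ksegrp *sp)
    {
        if (matts_model) {
            /* Matt's world: one completed KSE simply continues its
             * interrupted return to user space. */
            kse_resume(kse_pick_completed(sp));
        } else {
            /* My/Dan's world: report completions for ALL waiting
             * KSEs to the UTS in one upcall (the KSEs themselves
             * are freed), and let the UTS pick the next user
             * thread to run. */
            uts_upcall(sp, kse_collect(sp));
        }
    }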