Date:      Sat, 20 Nov 1999 22:59:19 -0800 (PST)
From:      Julian Elischer <julian@whistle.com>
To:        "Daniel M. Eischen" <eischen@vigrid.com>
Cc:        freebsd-arch@freebsd.org
Subject:   Re: Threads
Message-ID:  <Pine.BSF.4.10.9911202126170.6767-100000@current1.whistle.com>
In-Reply-To: <38378171.82BE24C3@vigrid.com>



On Sun, 21 Nov 1999, Daniel M. Eischen wrote:

> Julian Elischer wrote:
> > 
> > On Sat, 20 Nov 1999, Daniel M. Eischen wrote:
> > > How about an approach similar to Solaris?  Deliver the signal to either a
> > > special KSE/thread that acts as a proxy for the other threads?  Or even
> > > use an activation to notify the UTS of signals.  From the UTS point of
> > > view, notification through an activation might be simpler since we already
> > > have to handle them (activations).
> > 
> > We need to define EXACTLY what an activation means in our context.
> > it's an upcall, but are all upcalls the result of a prior downcall, (or
> > even a downcall that 'forked a new KSE')? how does an upcall know what to
> > do when it goes up?
> 
> I'd say that all upcalls are _not_ necessarily the result of a prior downcall.
> A subprocess that is preempted should force an upcall on the next available
> subprocess.
> 
> I assume the UTS will install a set of upcall entry points to handle
> async notification events.  We just need to define what events are passed
> between the UTS and the kernel.

What better way to install them than to do a syscall down?
All the kernel needs to do is set some state, pre-empt the
present KSE, and run another.
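
Something like this, maybe (every name here is invented; it's just to show
the shape of the downcall, not a proposal for the real interface):

struct uts_upcall_table {
	void	(*uc_preempted)(void *ctx);	/* a KSE was preempted */
	void	(*uc_blocked)(void *ctx);	/* a KSE blocked in the kernel */
	void	(*uc_unblocked)(void *ctx);	/* a blocked KSE is runnable again */
	void	(*uc_signal)(int sig, void *ctx); /* async signal for the UTS */
};

/*
 * Hypothetical downcall: hand the table to the kernel and declare that
 * this process is now prepared to take upcalls.  From then on the kernel
 * can pre-empt the present KSE and run another whenever it has an event
 * to report through one of these entry points.
 */
int uts_register_upcalls(struct uts_upcall_table *tab);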

> 
> > My suggested idea is that the UTS does a blocking syscall
> > that notifies the kernel that it is now capable of receiving upcalls,
> > and the kernel duplicates that KSE and stack and returns on a new
> > KSE. The UTS knows, when it returns, to immediately allocate a new stack
> > and thread (probably already allocated and waiting before the downcall,
> > actually) and switch to the new stack (or maybe the downcall was
> > already done on the new one, in which case it restarts the old thread).
> > In any case, the thread that did the downcall is the nominated handler
> > of async events.
> 
> That's OK, but why not use a ucontext passed in with a new system
> call?  Make the UTS supply ucontexts (made with makecontext) for event
> notifications also.

The UTS requires a KSE to run in; otherwise it has no CPU.
So there has to be one reserved for the task, I think (maybe not).
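
As I read Dan's suggestion, it would look roughly like this from the UTS
side (uts_set_event_context() is invented; the ucontext calls are the real
ones from <ucontext.h>):

#include <ucontext.h>
#include <stdlib.h>

#define EVENT_STACK_SIZE	(64 * 1024)

static ucontext_t event_ctx;

/* Hypothetical new syscall that hands the kernel a notification context. */
extern int uts_set_event_context(ucontext_t *ucp);

static void
uts_event_handler(void)
{
	/* The UTS wakes up here, looks at what the kernel reported,
	 * and reschedules its threads accordingly. */
}

static void
install_event_context(void)
{
	getcontext(&event_ctx);
	event_ctx.uc_stack.ss_sp = malloc(EVENT_STACK_SIZE);
	event_ctx.uc_stack.ss_size = EVENT_STACK_SIZE;
	event_ctx.uc_link = NULL;
	makecontext(&event_ctx, uts_event_handler, 0);

	uts_set_event_context(&event_ctx);
}

The open question to me is still which KSE that context runs on when the
kernel decides to use it.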

That reminds me that all the returning of ucontexts around the place is
one of the reasons why a new callgate would be a good idea.

I'm not that fussed about it, but it just seems to me that the information
being passed back and forth is DIFFERENT from what is passed back and
forth in the non-threaded world.

I don't want to get into the trap of deciding that we are doing
everything the "scheduler activations way", because we may be doing some
things quite different.


My philosophy is that things should be precomputed early, so that when they
are required they are just 'used'.

Here is a first, half-thought-out run-through of how an IO syscall blocks
and the UTS gets notified.  (Toy sketches of the setjmp()/longjmp() trick
and of the user-side wrapper follow below.)
---------------------

in a syscall,

the thread loads some values on the user stack

the thread calls the ULT syscall handler

The ULT allocates an IO status block

the ULT syscall handler calls the new callgate

The callgate causes the user stack and registers to be saved, etc.; the
kernel stack is activated.

a flag (a) is set.

a setjmp() is done and saved at (b) and the stack is marked at (c)

syscall is processed......

at some point a tsleep() is hit.

it notices that flag (a) is set, and

grabs a new KSE from the KSE cache.

copies the kernel stack up to (c) onto that of the new KSE

copies the Async KSE's stack to (c) onto the old KSE

hacks the contents of (b) to fit the new KSE

does a longjmp().. We are now effectively the thread returning

Set 'incomplete' into the IO status block.

* some stack munging thing done here I am not sure about *

returns to the UTS indicating thread blocked

* some user stack munging thing done here I am not sure about *

UTS suspends thread

UTS schedules new thread

***Time passes****

down in the bowels of the system the original KSE unblocks..

control passes back up to the top of the kernel.

It sets the status of the IO as it would have, then returns,

and finds itself in the ASYNC entrypoint of the UTS. (or maybe on the
original return path.. depends on state of stack)

the UTS notices that the IO has completed, and schedules the original
thread to continue. The KSE then runs that thread until the UTS says
otherwise.
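
The setjmp()/longjmp() part of the above is the bit I'm most sure of; in
userland terms it is just this (a toy, compilable on its own; the KSE
stack copying is left out entirely and all the names are made up):

#include <setjmp.h>
#include <stdio.h>

#define SYSCALL_DONE	1
#define SYSCALL_BLOCKED	2

static jmp_buf	entry_buf;	/* this is (b) */
static int	async_flag = 1;	/* this is (a) */

static void
tsleep_equivalent(void)
{
	/* Deep in the call chain we discover we would have to sleep.
	 * If the async flag is set, don't: unwind straight back to the
	 * syscall entry point and let it report "blocked" instead. */
	if (async_flag)
		longjmp(entry_buf, SYSCALL_BLOCKED);
	/* ...otherwise an ordinary tsleep() would happen here... */
}

static int
syscall_entry(void)
{
	switch (setjmp(entry_buf)) {	/* save (b) on the way in */
	case 0:
		tsleep_equivalent();	/* process the syscall... */
		return (SYSCALL_DONE);	/* ...it ran to completion */
	case SYSCALL_BLOCKED:
		return (SYSCALL_BLOCKED); /* came back via longjmp() */
	default:
		return (SYSCALL_DONE);
	}
}

int
main(void)
{
	printf("syscall returned %s\n",
	    syscall_entry() == SYSCALL_BLOCKED ? "blocked" : "done");
	return (0);
}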

------

notes:
maybe only one KSE runnable on each subproc at a time? (maybe need N
subprocs to run N processors)
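
For what it's worth, the user side of the same flow might look something
like this (everything here is made up: ult_callgate() stands in for the
new callgate, uts_suspend_current() and uts_schedule() for whatever the
UTS really does):

#include <sys/types.h>
#include <sys/syscall.h>
#include <errno.h>
#include <stddef.h>

#define ULT_BLOCKED	1	/* invented return value */

struct io_status {
	volatile int	ios_done;	/* 0 = incomplete, 1 = complete */
	volatile int	ios_error;
	volatile ssize_t ios_result;
};

/* All three of these are invented for the sketch. */
extern int	ult_callgate(int code, struct io_status *ios, ...);
extern void	uts_suspend_current(struct io_status *ios);
extern void	uts_schedule(void);

ssize_t
ult_read(int fd, void *buf, size_t nbytes)
{
	struct io_status ios = { 0, 0, 0 };

	/* Enter the kernel through the new callgate.  If the syscall
	 * would block, this returns straight away with ios.ios_done
	 * still 0 ('incomplete'). */
	if (ult_callgate(SYS_read, &ios, fd, buf, nbytes) == ULT_BLOCKED) {
		uts_suspend_current(&ios);	/* park this thread... */
		uts_schedule();			/* ...and run another */
		/* We resume here once the UTS sees the IO complete. */
	}

	if (ios.ios_error != 0) {
		errno = ios.ios_error;
		return (-1);
	}
	return (ios.ios_result);
}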






 > 
> Dan Eischen
> eischen@vigrid.com
> 






To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message



