Date:      Sun, 21 Nov 1999 08:41:13 -0500
From:      "Daniel M. Eischen" <eischen@vigrid.com>
To:        Julian Elischer <julian@whistle.com>
Cc:        freebsd-arch@freebsd.org
Subject:   Re: Threads
Message-ID:  <3837F679.BCEC1312@vigrid.com>
References:  <Pine.BSF.4.10.9911202126170.6767-100000@current1.whistle.com>

Julian Elischer wrote:
> 
> On Sun, 21 Nov 1999, Daniel M. Eischen wrote:
> > I'd say that all upcalls are _not_ necessarily the result of a prior downcall.
> > A subprocess that is preempted should force an upcall on the next available
> > subprocess.
> >
> > I assume the UTS will install a set of upcall entry points to handle
> > async notification events.  We just need to define what events are passed
> > between the UTS and the kernel.
> 
> What better way to install them than to do a syscall down?
> All the kernel needs to know is that it sets some stuff, pre-empts the
> present KSE and runs another.

I assumed that the UTS would have to perform a system call to
install the UTS upcall entry point(s).
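
Something along these lines, say; the names below are purely
hypothetical, just to pin the idea down:

    /*
     * Hypothetical registration interface -- a sketch, not a proposed
     * ABI.  The UTS hands the kernel its upcall entry points once, at
     * startup.
     */
    struct uts_upcall_table {
            void    (*ut_blocked)(void *info);   /* a KSE blocked in the kernel */
            void    (*ut_unblocked)(void *info); /* one or more KSEs unblocked */
            void    (*ut_preempted)(void *info); /* a subprocess was preempted */
    };

    int     uts_install_upcalls(struct uts_upcall_table *tab);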

> 
> >
> > > My suggested idea is that the UTS does a blocking syscall
> > > that notifies the kernel that it is now capable of receiving upcalls,
> > > and the kernel duplicates that KSE and stack and returns on a new
> > > KSE. The UTS knows, when it returns, to immediately allocate a new
> > > stack and thread (probably already allocated and waiting before the
> > > downcall, actually) and switch to the new stack (or maybe the downcall
> > > was already done on the new one, in which case it restarts the old
> > > thread). In any case, the thread that did the downcall is the
> > > nominated handler of async events.
> >
> > That's OK, but why not use a ucontext passed in with a new system
> > call?  Make the UTS supply ucontexts (made with makecontext) for event
> > notifications also.
> 
> The UTS requires a KSE to run in; otherwise it has no CPU.
> So there has to be one reserved for the task, I think (maybe not).

That's why you hand the kernel (at UTS initialization time) premade
contexts for handling async notification.

> That reminds me that all the returning of ucontexts around the place is
> one of the reasons why a new callgate would be a good idea.
> 
> I'm not that fussed about it but it just seems to me that the information
> being passed back and forth is DIFFERENT than that being passed back and
> forth in the non-threaded world.

True, but it is passed back and forth in NEW ways.  It doesn't affect
non-MT system calls.

> I don't want to get into the trap of deciding that we are doing
> everything the "scheduler activations way", because we may be doing some
> things quite different.

Granted, but let's also not let NIH have any impact on what we decide ;-)

> My philosophy is that things should be precomputed early and when they are
> required they are just 'used'.

Agreed.

> Here is a first, half-thought-out run-through of how an IO syscall blocks
> and the UTS gets notified.
> ---------------------
> 
> in a syscall,
> 
> the thread loads some values on the user stack
> 
> the thread calls the ULT syscall handler
> 
> The ULT allocates an IO status block
> 
> the ULT syscall handler calls the new callgate
> 
> The callgate causes the user stack and registers to be saved, etc.;
> the kernel stack is activated.
> 
> a flag (a) is set.
> 
> a setjmp() is done and saved at (b) and the stack is marked at (c)
> 
> syscall is processed......

This seems overly complicated.  What if the system call doesn't
block?  You've just done all the above for nothing.

> 
> at some point a tsleep() is hit.
> 
> it notices that flag (a) is set, and
> 
> grabs a new KSE from the KSE cache.
> 
> copies the kernel stack up to (c) to that on the new KSE
> 
> copies the Async KSE's stack to (c) onto the old KSE

Why not just context switch to the cached context, beginning
at the UTS-defined async notification entry points?

> 
> hacks the contents of (b) to fit the new KSE
> 
> does a longjmp().. We are now effectively the thread returning
> 
> Set 'incomplete' into the IO status block.
> 
> * some stack munging thing done here I am not sure about *
> 
> returns to the UTS indicating thread blocked
> 
> * some user stack munging thing done here I am not sure about *
> 
> UTS suspends thread
> 
> UTS schedules new thread
> 
> ***Time passes****
> 
> down in the bowels of the system the original KSE unblocks..
> 
> control passes back up to the top of the kernel.
> 
> It sets the status of the IO as it would have, then returns,
> 
> and finds itself in the ASYNC entrypoint of the UTS. (or maybe on the
> original return path.. depends on state of stack)
> 
> the UTS notices that the IO has completed, and schedules the original
> thread to continue. The KSE then runs that thread until the UTS says
> otherwise.

I think the above approach is too complicated and can be simplified
to eliminate the overhead in setting up the system call.

Here are my thoughts.

At thread library initialization time, the UTS makes contexts for
async notification event handlers.  Contexts are in the form of a
return from a trap to the kernel.  My idea is to use setcontext(2)
and makecontext(2) to create the contexts.  We may need to give
the kernel additional user stacks, but that isn't clear yet.  A flag
(async_notify) is set in the calling process.
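
As a minimal sketch, assuming the standard getcontext/makecontext
calls and a hypothetical context-flavored registration syscall
(the stack size and handler name are placeholders too):

    #include <stdlib.h>
    #include <ucontext.h>

    #define NOTIFY_STACK_SIZE       (32 * 1024)     /* arbitrary */

    /* The kernel, not makecontext(), supplies the handler's argument
     * when it finally does the upcall. */
    void    uts_event_handler(void);
    int     uts_register_upcalls(ucontext_t *);     /* hypothetical */

    static ucontext_t notify_ctx;

    void
    uts_init_notification(void)
    {
            /* Build a context that enters the UTS event handler on its
             * own stack. */
            getcontext(&notify_ctx);
            notify_ctx.uc_stack.ss_sp = malloc(NOTIFY_STACK_SIZE);
            notify_ctx.uc_stack.ss_size = NOTIFY_STACK_SIZE;
            makecontext(&notify_ctx, uts_event_handler, 0);

            /* Hand the premade context to the kernel; this is also
             * what would set the (async_notify) flag in the process. */
            uts_register_upcalls(&notify_ctx);
    }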

A thread makes a system call and it blocks; tsleep is hit.
(Notice that nothing needs to be done differently in order
to make a system call.)

The kernel notices that the (async_notify) flag is set.  It grabs
a new KSE (kernel register save area and kernel stack) from the
KSE cache, and places the current KSE in the sleep queue.
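
In tsleep() terms the check might boil down to something like
this; every name in it is invented, purely to show the flow:

    /*
     * Sketch of the new code in the tsleep() path.  P_ASYNC_NOTIFY,
     * struct kse, kse_cache_alloc() and friends are all invented
     * names, not real kernel interfaces.
     */
    void
    tsleep_async_hook(struct proc *p, struct kse *curkse)
    {
            struct kse *nkse;

            if ((p->p_flag & P_ASYNC_NOTIFY) == 0)
                    return;                    /* plain tsleep() behaviour */

            nkse = kse_cache_alloc();          /* fresh registers + kstack */
            curkse->kse_id = kse_id_alloc();   /* unique ID for the UTS */
            sleepq_insert(p, curkse);          /* old KSE sleeps with the I/O */
            kse_switch(nkse);                  /* carry on using the new KSE */
    }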

A unique ID is created to identify the blocked KSE (integer
ID?).  This unique ID, along with an ID that identifies the
event and any other necessary information, is copied to the
top of the user stack of the relevant async notification
event handler.  The parameters to the UTS event handler are
set up to point to the top of the stack.
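
The record the kernel pushes might look roughly like this
(a hypothetical layout):

    /*
     * Hypothetical layout of the record copied to the top of the
     * notification stack before the kernel enters the UTS handler.
     */
    struct uts_blocked_event {
            int     ue_event;   /* event type: KSE blocked, unblocked, ... */
            int     ue_kse_id;  /* unique ID of the KSE now asleep */
            /* any other per-event information would follow */
    };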

The kernel returns directly to the UTS entry point on the
predefined context/user stack, just as if a setcontext(2)
was performed.

The UTS knows what thread was running, marks it as blocked
in the kernel, and saves the unique KSE ID.

The UTS schedules a new thread.
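
On the UTS side that is only a few lines.  In this sketch the
event record is the hypothetical one above, and the thread table,
state names and uts_schedule() stand in for whatever machinery
the threads library already has:

    /*
     * UTS handler entered by the kernel upcall; the kernel has set
     * up 'ev' to point at the event record on this stack.
     */
    void
    uts_event_handler(struct uts_blocked_event *ev)
    {
            struct pthread *td = uts_current_thread();

            td->td_state = TD_BLOCKED_IN_KERNEL; /* why it stopped */
            td->td_kse_id = ev->ue_kse_id;       /* to resume it later */

            uts_schedule();                      /* pick and run another thread */
    }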

***Time passes***

The original KSE unblocks.  It is added to the current process's
unblocked KSE list.

When the process runs again (or the next subprocess, since you
can have more than one cooperating process), a new KSE is
allocated and its context is set to the relevant UTS event handler.
Event information is copied out to the top of the predefined
user stack, parameters are set accordingly, and the kernel returns
to the UTS event handler.  If a thread was preempted for the
notification, then its context (just as getcontext(2) would save
it) is also copied out to the UTS event handler user stack.
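
A plausible shape for this second notification; again, only a
sketch:

    #include <ucontext.h>

    #define UTS_MAX_UNBLOCKED       16      /* arbitrary for the sketch */

    /*
     * Hypothetical layout for the unblock notification.  If a thread
     * was preempted to deliver it, its full context rides along.
     */
    struct uts_unblocked_event {
            int             ue_nkse;        /* how many KSEs unblocked */
            int             ue_kse_ids[UTS_MAX_UNBLOCKED]; /* their IDs */
            int             ue_preempted;   /* nonzero: a thread was preempted */
            ucontext_t      ue_preempted_ctx; /* its saved context */
    };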

The UTS event handler receives notification that the thread that
_was_ running on this process was preempted for this notification,
and that other KSEs have unblocked in the kernel.  The UTS marks
the unblocked threads as active.  The context of the thread that
was running can be resumed with setcontext(2) or hacked into a
jmp_buf and resumed with longjmp.

The UTS resumes the now unblocked thread in one of two ways.  A new
system call to resume the KSE identified by its unique ID is
one way.  The other way is to have the kernel copy the context
of the unblocked KSE out to the UTS event handler.  It can then be
resumed with setcontext(2).
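
The first variant would only need something like the following;
kse_resume() is a hypothetical syscall, and the thread fields
match the earlier sketches:

    int     kse_resume(int kse_id);         /* hypothetical new syscall */

    /*
     * Called by the UTS once the handler learns that a thread's KSE
     * has unblocked in the kernel.
     */
    void
    uts_resume_thread(struct pthread *td)
    {
            td->td_state = TD_RUNNABLE;     /* runnable again */
            kse_resume(td->td_kse_id);      /* the KSE finishes its syscall
                                               and returns on its old path */
    }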

Dan Eischen
eischen@vigrid.com



