Date: Fri, 11 Jan 2002 13:41:07 -0800
From: Bakul Shah <bakul@bitblocks.com>
To: Dan Eischen <eischen@vigrid.com>
Cc: arch@freebsd.org
Subject: Re: Request for review: getcontext, setcontext, etc
Message-ID: <200201112141.QAA25529@devonshire.cnchost.com>
In-Reply-To: Your message of "Sun, 06 Jan 2002 00:49:13 EST." <3C37E559.B011DF29@vigrid.com>
I quick-scanned this discussion thread and got a bit
confused, so please allow me to approach it in my own way:-)
I have a simulation thread library which allows a thread
context switch in 12 or so instructions on an x86. The
actual switching is done in a function with inline assembly
code. If a signal is received during this switch, a signal
handler will get control. The kernel must (and does) save
and restore the necessary state to continue the interrupted
simulation thread context switch upon return from the
signal handler.
This is but one example to point out the obvious: if you make
a context switch from a signal handler to another regular
thread with setcontext(), the ultimate return to the original
thread (the one interrupted by a signal) must be done
properly _even_ if it is in the middle of munging any user
modifiable register or stack.
In other words, a signal handler *must* not make _any_
assumptions about the state of a regular thread. With
setcontext() and getcontext() things get worse because now a
thread interrupted by a signal may be restored from any other
thread.
To deal with this, you can either take a pessimistic approach
and save/restore everything on get/setcontext(), or allow for
a varying amount of context and a way to tell the variants apart. As
an example, a getcontext() done from regular C code can be
"lean" and only save a few registers. A getcontext()
equivalent done as part of a signal delivery can be "fat" and
save more things. If the signal arrives in the middle of a
user mode getcontext(), you throw away what getcontext() did
and save a fat context. But to do that, you need a way to
indicate that you are in the middle of a getcontext(). If a
signal is delivered in the middle of a setcontext(), it is
not even clear *which* thread should receive the signal --
perhaps it should be the one doing the setcontext(). The
kernel must *back out* a partial setcontext() before
delivering the signal. So again you need a way to indicate
that you are in the middle of a setcontext(), or do the
whole thing atomically.
I am not prepared to speculate on the use of FP & SSE
registers at this point except for one thing: an FP exception
*must* be delivered to whichever thread caused it. Any bugs
in SIGFPE delivery are a separate discussion!
Note: I used the term "regular thread" to distinguish them
from my "simulation threads". A regular thread is the one on
which you can do {get,set}context().
BTW, it is perfectly legitimate and actually very useful to
implement simulation threads within a regular thread. To
give you an idea, on an Athlon XP 1700+ I get a simulation
thread context switch time of about 17ns best case (both
contexts in primary cache) to 480ns worst case (both contexts
in memory + other effects when you have over a million
threads). I can never get this sort of speed with a generic
regular thread system, and this speed is what makes it
practical to simulate an ASIC at the register transfer level,
or even a whole system, in C/C++. On the other hand,
simulation threads live in simulation time, and interfacing
them to the real world is painful.
-- bakul
