Date:      Sat, 20 Sep 1997 20:58:43 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        michaelv@MindBender.serv.net (Michael L. VanLoon -- HeadCandy.com)
Cc:        jre@ipsilon.com, nate@mt.sri.com, tlambert@primenet.com, toor@dyson.iquest.net, dyson@freebsd.org, karpen@ocean.campus.luth.se, current@freebsd.org
Subject:   Re: FYI: regarding our rfork(2)
Message-ID:  <199709202058.NAA23483@usr07.primenet.com>
In-Reply-To: <199709200826.BAA22114@MindBender.serv.net> from "Michael L. VanLoon -- HeadCandy.com" at Sep 20, 97 01:26:06 am

> I agree to the point where we agree on the term for "virtual address
> space".  When doing so, I am referring specifically to the heap and
> any memory-mapped regions.  I don't consider the stack to be part of
> that designation, at least for this specific case.

How do you manage the separation of a stack virtual address space at
the time you do a thread context switch, without adding overhead
simply to obtain this separation?


> This means that all threads in the process should share the same
> "virtual address space" for the lifetime of the process.  Furthermore,
> their stacks should be functionally identical at the time the thread
> is created, and should be able to change separately, from that point
> forward.  Similar to how a child process's state is set up right
> after a fork.

"Copy on write stacks"?

This increases thread context switch overhead.  If you are going to
do this, why use threads?  If you are on a loaded system, and you
use a purely kernel threaded paradigm (this separation requires a
kernel component as a descriptor for each thread's stack), there
is no guarantee that when you give up your quantum, another thread in
the same process will get the next quantum.  This means that your
thread context switch overhead has just grown to be nearly statistically
identical to process context switch overhead.


> This also means that any auto variables created on the stack by a
> function call in one stack are local in scope and paradigm to that
> thread.  They might actually be physically accessible from other
> threads in the process because of the physical implementation of the
> threading library, but that should be considered not-guaranteed, and a
> design no-no for this paradigm.

If you do not impose separate stack descriptors, then you can use either
a pure user space call conversion mechanism, or you can use a cooperative
scheduler mechanism, either of which will permit more efficient use of
a quantum and significantly reduced process context switch overhead.
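
To make the cost difference concrete, here is a rough sketch (my own
illustration, not code from this thread) of the cooperative user-space
mechanism alluded to above, using swapcontext(3): each context gets its
own stack, and a "yield" is just a register save/restore in user space,
with no kernel trap and no involvement of the kernel scheduler.

```c
/* Cooperative user-space context switching with ucontext(3).
 * Illustrative sketch only: function and variable names are mine. */
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];        /* the coroutine's private stack */
static int steps;

static void coroutine(void)
{
	for (int i = 0; i < 3; i++) {
		steps++;                        /* one unit of "work" */
		swapcontext(&co_ctx, &main_ctx); /* yield, purely in user space */
	}
}

int run_cooperative(void)
{
	getcontext(&co_ctx);
	co_ctx.uc_stack.ss_sp = co_stack;   /* each context has its own stack */
	co_ctx.uc_stack.ss_size = sizeof co_stack;
	co_ctx.uc_link = &main_ctx;         /* where to go if it ever returns */
	makecontext(&co_ctx, coroutine, 0);

	for (int i = 0; i < 3; i++)
		swapcontext(&main_ctx, &co_ctx); /* "schedule" the coroutine */
	return steps;                        /* 3 units of work completed */
}
```

Nothing here ever enters the kernel to switch contexts, which is why
a blocked or misbehaving coroutine cannot be preempted -- the flip
side of the efficiency being argued for.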

But the rationale for stack separation is to keep an incorrectly
behaved thread from causing all threads to fail.

Forgetting for a moment that, if a single thread breaks, your entire
task is broken (by definition), a "wild thread" that you are attempting
to protect other threads from by separating their stacks doesn't care
where in the address space it stomps.  Your other threads' stacks are
not sacrosanct, and you have failed to achieve the protection you set
out to achieve when you separated the stacks.
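
A minimal sketch of that point (mine, using POSIX threads rather than
rfork(2) so it is self-contained): because every thread shares one
address space, a write through a leaked or wild pointer lands squarely
in another thread's stack, no matter how the stacks were allocated.

```c
/* One thread's auto variable is stomped by another thread.
 * Names (victim, stomp_demo, etc.) are illustrative, not from the thread. */
#include <pthread.h>

static int *leaked;                     /* address of the victim's stack slot */
static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int published, stomped;

static void *victim(void *arg)
{
	int local = 1;                      /* auto variable on this thread's stack */

	pthread_mutex_lock(&lk);
	leaked = &local;                    /* the stack address escapes */
	published = 1;
	pthread_cond_signal(&cv);
	while (!stomped)                    /* wait for the other thread's write */
		pthread_cond_wait(&cv, &lk);
	pthread_mutex_unlock(&lk);
	return (void *)(long)local;         /* report what the variable holds now */
}

int stomp_demo(void)
{
	pthread_t t;
	void *seen;

	pthread_create(&t, NULL, victim, NULL);

	pthread_mutex_lock(&lk);
	while (!published)
		pthread_cond_wait(&cv, &lk);
	*leaked = 42;                       /* write straight into the other stack */
	stomped = 1;
	pthread_cond_signal(&cv);
	pthread_mutex_unlock(&lk);

	pthread_join(t, &seen);
	return (int)(long)seen;             /* 42: the cross-stack write landed */
}
```

No MMU boundary stood in the way; separate stack descriptors change
where the stacks live, not what a wild pointer can reach.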

You have not even achieved the scoping protection you claim to want
to achieve (see previous posting, with the "concurrent DNS lookup" example).


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


