Date:      Fri, 19 Sep 1997 16:28:30 -0600 (MDT)
From:      Nate Williams <nate@mt.sri.com>
To:        Terry Lambert <tlambert@primenet.com>
Cc:        nate@mt.sri.com (Nate Williams), current@freebsd.org
Subject:   Re: FYI: regarding our rfork(2)
Message-ID:  <199709192228.QAA21185@rocky.mt.sri.com>
In-Reply-To: <199709192210.PAA08418@usr06.primenet.com>
References:  <199709191956.NAA20377@rocky.mt.sri.com> <199709192210.PAA08418@usr06.primenet.com>

> The benefits [ of shared mapping of stacks ] over a separate mapping
> are that thread context switches between threads in a given
> process(/kernel schedulable context/kernel thread) are lighter weight,
> and that auto variables may be passed between threads.

Thread context switches aren't *that* much different.  You're already
doing the register saves/restores, so what's one more register?  (The
stack is pointed to by a register, right?)

You still have the ability to share the heap and any static/global data
in your program, which is IMHO a big deal with threads, since it saves
on the context switch.
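
To make that concrete, here's roughly what the model looks like (an
untested sketch, and I'm using pthreads purely for illustration rather
than the rfork(2) machinery we're talking about): the heap and
static/global data are shared, while each thread's auto variables live
on its own stack.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static int global_hits;             /* static/global data: shared */

static void *
worker(void *arg)
{
    int *heap_val = arg;            /* heap data: shared */
    int local = 42;                 /* auto variable: private to this thread's stack */

    global_hits++;
    *heap_val += local;
    return (NULL);
}

int
main(void)
{
    pthread_t tid;
    int *heap_val = malloc(sizeof(*heap_val));

    *heap_val = 0;
    if (pthread_create(&tid, NULL, worker, heap_val) != 0) {
        perror("pthread_create");
        return (1);
    }
    pthread_join(tid, NULL);
    /*
     * The global and heap updates are visible here; the worker's
     * auto variables never were.
     */
    printf("global=%d heap=%d\n", global_hits, *heap_val);
    free(heap_val);
    return (0);
}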

> > understand the reasons, I can also see where doing so makes it *much*
> > more difficult to write 'correct' threaded programs, where I define
> > correct as the ability to run w/out stepping on yourself in *all*
> > cases.  Note, I said difficult, not impossible.
> 
> The cases where you might step on yourself are error cases, so far as
> I can see.  I think that any case where there is an error, it doesn't
> matter how the error exhibits, your results are suspect.

True, but no matter how smart you are, you will produce buggy code.
And, anything that helps you avoid writing buggy code is your friend.

I consider myself smarter than the average bear, but writing
'significant' threaded programs is still a hard problem, especially
when user interfaces are involved and you have one thread getting data
from the user and another thread operating on that data.  The chance
of the threads walking all over each other is significant, and hard to
avoid.  Couple that with the fact that we're also doing *lots* of
communications, and we have lots of threads doing lots of different
'work', all of which have to be protected from one another.  In theory
this sounds real easy, but in reality it's much harder than it looks,
since doing things the 'easy' way means putting too many locks around
the data.  So, you end up with a solution that's complex, but fast.
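
The minimal shape of that "one thread gets data from the user, another
operates on it" handoff looks something like the following (untested
sketch, pthreads again for illustration).  The pain is that real code
ends up with dozens of these, and picking the lock granularity is
exactly where the complex-vs.-fast trade-off gets made.

#include <pthread.h>
#include <stdio.h>

/* One-slot mailbox: the input thread deposits a value, the worker takes it. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  change = PTHREAD_COND_INITIALIZER;
static int slot;
static int slot_full;

static void
deposit(int value)                  /* called by the "user input" thread */
{
    pthread_mutex_lock(&lock);
    while (slot_full)               /* wait for the worker to drain the slot */
        pthread_cond_wait(&change, &lock);
    slot = value;
    slot_full = 1;
    pthread_cond_signal(&change);
    pthread_mutex_unlock(&lock);
}

static int
take(void)                          /* called by the worker thread */
{
    int value;

    pthread_mutex_lock(&lock);
    while (!slot_full)
        pthread_cond_wait(&change, &lock);
    value = slot;
    slot_full = 0;
    pthread_cond_signal(&change);
    pthread_mutex_unlock(&lock);
    return (value);
}

static void *
worker(void *arg)
{
    int v;

    (void)arg;
    while ((v = take()) != -1)      /* -1 is the shutdown value */
        printf("worker got %d\n", v);
    return (NULL);
}

int
main(void)
{
    pthread_t tid;
    int i;

    pthread_create(&tid, NULL, worker, NULL);
    for (i = 0; i < 5; i++)
        deposit(i);                 /* stands in for data from the user */
    deposit(-1);                    /* tell the worker to quit */
    pthread_join(tid, NULL);
    return (0);
}

Even this toy case needs a mutex, a condition variable, and a shutdown
convention; multiply that by every piece of shared state and you can
see where the complexity comes from.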

When you throw into the mix the possibility of not knowing whether
your data is on the heap or on a stack, things start to get *real*
interesting with regard to memory allocation.  What/when/how do you
deal with allocated data that many 'threads' can share?  Who cleans
up?  As any C programmer knows, dynamic memory allocation bugs are
among the *hardest* to find and the most common mistakes made, even by
folks who really know what they are doing.  Coupled with threads, they
give you a wonderful ability to hang yourself even faster. :)
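
The discipline that saves you most of the time is making ownership
explicit: put the message on the heap, hand it off, and make the
receiving thread the only one that ever frees it.  Roughly (another
untested sketch):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct request {
    char text[64];
};

static void *
worker(void *arg)
{
    struct request *req = arg;      /* the worker now owns the request... */

    printf("working on: %s\n", req->text);
    free(req);                      /* ...so the worker is the one who frees it */
    return (NULL);
}

int
main(void)
{
    pthread_t tid;
    struct request *req;

    /*
     * Heap-allocate the message.  Handing the worker the address of an
     * auto variable instead would point it into a stack frame that may
     * be long gone by the time it gets around to looking at it.
     */
    req = malloc(sizeof(*req));
    snprintf(req->text, sizeof(req->text), "data from the user");
    pthread_create(&tid, NULL, worker, req);
    /* main never touches req again; ownership went with the pointer. */
    pthread_join(tid, NULL);
    return (0);
}

Handing a worker the address of an auto variable works only as long as
that stack frame is live, which is precisely the kind of lifetime
question that gets hard to answer once stacks are shared between
threads.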

Is it an acceptable risk for performance?  Sure, but the fact of the
matter is that very few people are able to keep all of those balls in
the air w/out some of them falling down around them.  And too many
people think they can do it, but really can't. :(



Nate


