From: Terry Lambert
Subject: Re: FYI: regarding our rfork(2)
To: nate@mt.sri.com (Nate Williams)
Cc: tlambert@primenet.com, nate@mt.sri.com, current@freebsd.org
Date: Sat, 20 Sep 1997 00:07:16 +0000 (GMT)
Message-Id: <199709200007.RAA21261@usr04.primenet.com>
In-Reply-To: <199709192228.QAA21185@rocky.mt.sri.com> from "Nate Williams" at Sep 19, 97 04:28:30 pm

> Thread contexts aren't *that* much different.  You're already doing the
> register saves/restores, what's one more register?  (The stack is
> pointed to by a register, right?)

And a page table entry, if the stack address spaces are separate.

If there isn't a different page table entry per thread for the stack,
then I don't understand what you mean by "separate".  8-(.

> > The cases where you might step on yourself are error cases, so far as
> > I can see.  I think that any case where there is an error, it doesn't
> > matter how the error exhibits, your results are suspect.
>
> True, but no matter how smart you are, you will produce buggy code.
> And, anything that helps you avoid writing buggy code is your friend.

Agreed, to the extent that it doesn't make the tools useless for their
intended purpose; for threads, that purpose is lightweight context
switches (primary) and resource sharing (secondary).

A mode where you could run on separate stack address spaces, but which
didn't require you to do so, would be a good idea for development and
testing (but not for deployment).

[ ... ]

> Is it an acceptable risk for performance?  Sure, but the fact of the
> matter is that very few people are able to keep all of the balls in the
> air correctly without them falling down around them.  And too many
> people think they can do it, but really can't.  :(

I still don't see how you can context switch between threads without
changing the process address space map, if one thread's stack is not
supposed to be visible in the address space of another thread.  And
that's where the higher overhead (and the "data marshalling" issues)
comes from.

Actually, for a loaded system, we haven't discussed the kernel threading
case where a blocking call is made on one kernel thread.  There's
nothing to guarantee that the resulting context switch will favor
another kernel thread in the same process, even if the call that caused
the switch occurred very early in the quantum.

And even if you implement "quantum affinity" for the kernel threads in a
given process, to *actually* reduce the context switch overhead (instead
of just *theoretically* doing it), you've only introduced additional
fairness issues, where a multithreaded process favors itself to the
point of starving other processes.

In other words, eventually a preemptive multitasking system has to
context switch.  8-).


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.
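
As a concrete illustration of the sharing model being argued about
above, here is a minimal sketch of creating an rfork(2)-based "thread"
that shares the parent's address space but runs on its own stack.  It
assumes the rfork_thread(3) wrapper that later FreeBSD libc versions
provide (calling rfork(RFPROC|RFMEM) directly from C is unsafe, since
the child would come back on the parent's stack); the stack size and
setup are illustrative only, not taken from this thread.

    /* Sketch only: rfork(2)-based "thread" sharing the parent's VM. */
    #include <sys/types.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define THREAD_STACK_SIZE	(64 * 1024)	/* arbitrary for this sketch */

    static int
    child_main(void *arg)
    {
    	(void)arg;
    	/* Runs in the shared address space, but on its own stack. */
    	write(STDOUT_FILENO, "child running\n",
    	    sizeof("child running\n") - 1);
    	_exit(0);
    }

    int
    main(void)
    {
    	char *stack;
    	pid_t pid;

    	/* Give the child its own stack; everything else stays shared. */
    	stack = mmap(NULL, THREAD_STACK_SIZE, PROT_READ | PROT_WRITE,
    	    MAP_ANON | MAP_PRIVATE, -1, 0);
    	if (stack == MAP_FAILED)
    		return (1);

    	/* The stack grows down, so hand rfork_thread() the top of it. */
    	pid = rfork_thread(RFPROC | RFMEM, stack + THREAD_STACK_SIZE,
    	    child_main, NULL);
    	if (pid == -1)
    		return (1);

    	/* Parent and child now share the VM map, file table, etc. */
    	return (0);
    }

Note that with RFMEM the child shares *every* mapping with the parent,
including the other stacks, which is exactly the "one thread can
scribble on another thread's stack" exposure discussed above.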
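
And a sketch of a cheap approximation of the "development and testing"
mode suggested above, using only standard mmap(2)/mprotect(2): a
PROT_NONE guard page below each thread stack.  This is not a truly
separate per-thread stack address space (that would need the per-thread
page table entries discussed above); it only catches a thread that runs
off the end of its own stack into its neighbor's.  The function name and
sizes here are hypothetical.

    /* Sketch only: per-thread stack with a guard page below it. */
    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdlib.h>

    static void *
    alloc_thread_stack(size_t stacksize)
    {
    	size_t pagesz = (size_t)getpagesize();
    	size_t total = stacksize + pagesz;	/* room for the guard page */
    	char *base;

    	base = mmap(NULL, total, PROT_READ | PROT_WRITE,
    	    MAP_ANON | MAP_PRIVATE, -1, 0);
    	if (base == MAP_FAILED)
    		return (NULL);

    	/* The lowest page becomes the guard: any access to it faults. */
    	if (mprotect(base, pagesz, PROT_NONE) == -1) {
    		munmap(base, total);
    		return (NULL);
    	}

    	/* Usable stack starts above the guard page. */
    	return (base + pagesz);
    }

A thread package (or the rfork_thread() sketch above) would pass the top
of the returned region as the new thread's initial stack pointer; a
thread that overruns its stack then takes a fault at the guard page
instead of silently corrupting the stack mapped below it.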