From owner-freebsd-arch  Fri Nov 26 20:09:58 1999
Date: Fri, 26 Nov 1999 23:09:38 -0500
From: "Daniel M. Eischen" <eischen@vigrid.com>
To: Julian Elischer
Cc: arch@freebsd.org
Subject: Re: Threads stuff

Sorry about the last post; I accidentally hit send.

Julian Elischer wrote:
> You HAVE to sing this to get the full effect....

Thanks!

> Each thread has a permanently assigned IO status block, and its errno
> lives there.  The contents of the IO status block are updated when the
> KSE that was blocked returns.  Since it was blocked, it then hangs
> itself (or a proxy) off the sub-proc "unblocked" queue.  Had it not
> blocked, it would simply update the status block and return to
> userland, which would check the status and return to the caller.
>
> code path in library for write():
>
>   set IO status to 'incomplete'
>   setjmp(mycontext)      /* save all the regs in thread context */
>   set saved mycontext->eip to point to 'A'
>   call kernel
>   longjmp(mycontext)
>   /* NOTREACHED */
>
> B:
>   while status == incomplete
>       wait on mutex in IO status block
> A:
>   check status
>   return to calling code

This seems overly complicated just to perform a system call, but OK.
You were talking about putting the trapframe (or most of it) on the
user stack, so why not just put the address of the trapframe (stack
pointer) in the IO control block?  No need for setjmp/longjmp unless
the system call blocks.

> when the thread is reported blocked:
>   the return mycontext->eip is changed from "A" to "B"
>   code is run to put the thread into whatever structures the UTS
>   maintains for mutexes.  That completes all processing.
>   It then goes on to schedule another thread.
> From the thread's point of view, it has just woken up from waiting
> on a mutex.  The IO is mysteriously complete.

Why mutexes?  It just needs to be marked blocked/suspended/whatever.
The UTS will only run threads that are in the run state.  There is
some overhead with mutexes.

> > What I don't quite see yet is how you resume a thread at the kernel
> > level.  You can't just "run the thread" after the IO control block
> > was updated indicating that the thread unblocked.  You need to
> > resume the thread within the kernel.  As long as you need a kernel
> > call to resume/cancel a blocked thread, why try to hide it behind
> > smoke and mirrors ;-)
>
> No, you don't need to go back to the kernel.
> All kernel processing has completed.
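For concreteness, here is roughly what that library-side write()
wrapper could look like in C.  This is only a sketch of the scheme
quoted above, not anyone's actual code: the struct layout and the
_thr_curiostat(), _kse_write(), and _uts_wait_iostat() names are
invented for illustration, and the eip redirection from "A" to "B" is
modelled here as the UTS resuming the saved context with a different
setjmp() return value.

/* Sketch only -- hypothetical names, not a real interface. */
#include <errno.h>
#include <setjmp.h>
#include <sys/types.h>

struct thr_iostat {
	volatile int	tis_done;	/* 0 = incomplete, 1 = complete */
	ssize_t		tis_ret;	/* return value, filled in by the kernel */
	int		tis_errno;	/* the thread's errno lives here */
	jmp_buf		tis_ctx;	/* saved registers for this call */
};

extern struct thr_iostat *_thr_curiostat(void);	/* this thread's status block */
extern void	_kse_write(struct thr_iostat *, int, const void *, size_t);
extern void	_uts_wait_iostat(struct thr_iostat *);	/* sleep until signalled */

ssize_t
write(int fd, const void *buf, size_t nbytes)
{
	struct thr_iostat *ios = _thr_curiostat();

	ios->tis_done = 0;			/* set IO status to 'incomplete' */
	switch (setjmp(ios->tis_ctx)) {
	case 0:
		/*
		 * Trap into the kernel.  If the call does not block, the
		 * kernel fills in the status block and returns here, and
		 * the longjmp() takes us to "A" (case 1).  If it blocks,
		 * the UTS later resumes the saved context at "B" (case 2).
		 */
		_kse_write(ios, fd, buf, nbytes);
		longjmp(ios->tis_ctx, 1);
		/* NOTREACHED */
	case 2:					/* "B": the call blocked */
		while (!ios->tis_done)
			_uts_wait_iostat(ios);	/* wait on the status block */
		/* FALLTHROUGH */
	case 1:					/* "A": check status, return */
	default:
		break;
	}
	if (ios->tis_ret < 0)
		errno = ios->tis_errno;		/* per-thread errno in a real UTS */
	return (ios->tis_ret);
}

The shape of this is that the non-blocking case pays only for the
setjmp()/longjmp() pair; only when the kernel reports that the KSE
blocked does the UTS touch the saved context and steer the thread
through the wait at "B".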
This ("all kernel processing has completed") is what I don't see.
When KSEs are woken up in the kernel, how are they resumed?  If you
have 10 blocked KSEs hanging off a process, and they all become
unblocked before the process runs again, what happens the next time
the process runs?  KSEs can also hit tsleep() more than once before
leaving the kernel.  If the kernel automatically completes the KSEs,
then the kernel is arbitrarily deciding the priority of the threads.
There could be runnable threads with higher priority than any of the
threads blocked in the kernel.

> All your upcall needs is the address of the IO completion block that
> contains the mutex to release.  All copyin()s and copyout()s have
> been completed, and the IO status block has been updated, just as if
> the call had been synchronous.  (In fact, the code that did it didn't
> know that it was not going to go back to the user.)
> Just before it was going to go back to the user, it checked a bit and
> discovered that instead, it should hang itself (or a small struct
> holding the address of the IO completion block) off the subproc.  (If
> the latter, the KSE can actually be freed now.)  At some stage in the
> future (maybe immediately) an upcall reports to the UTS the addresses
> of all completed IO status blocks, and the UTS releases all the
> mutexes.

OK, I think this answers some of my questions.  KSEs are automatically
completed by the kernel to the point where they would return control
to the application.  I'm not sure I like that, because the UTS can't
decide which blocked KSEs are resumed - they are all resumed, possibly
stealing time from other higher-priority threads.

Dan Eischen
eischen@vigrid.com
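A similarly rough sketch of the other half, the upcall path Julian
describes (the addresses of completed IO status blocks handed up, and
the UTS releasing the waiters), might look like the following.  Again,
every name here (_uts_upcall_iodone, iostat_owner, thr_setrunnable,
thr_sched_next) is invented for illustration, and the struct is
trimmed to the one field the loop touches; whether the wait underneath
is a real mutex or just a blocked-state flag, as Dan suggests, does
not change the shape of it.

/* Sketch only -- hypothetical UTS-side handler, not a real interface. */

struct pthread;				/* a userland thread, owned by the UTS */

struct thr_iostat {
	volatile int	tis_done;	/* 0 = incomplete, 1 = complete */
	/* remaining fields as in the previous sketch */
};

extern struct pthread *iostat_owner(struct thr_iostat *);  /* waiting thread */
extern void	thr_setrunnable(struct pthread *);  /* put on the UTS run queue */
extern void	thr_sched_next(void);	/* run highest-priority runnable thread */

/*
 * Invoked via the kernel's upcall: 'iosv' holds the addresses of the
 * 'n' IO status blocks whose KSEs have completed since the last upcall.
 */
void
_uts_upcall_iodone(struct thr_iostat **iosv, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		iosv[i]->tis_done = 1;		/* "release the mutex" */
		thr_setrunnable(iostat_owner(iosv[i]));
	}
	/*
	 * The blocked threads only become runnable here; which one runs
	 * next is still the UTS's choice, made by thread priority.
	 */
	thr_sched_next();
}

This is also where Dan's objection applies: by the time this handler
runs, the kernel has already spent the time completing every one of
those KSEs, regardless of which threads the UTS would have preferred
to run first.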