From owner-freebsd-arch Fri Nov 26 14:49:32 1999
Date: Fri, 26 Nov 1999 14:49:21 -0800 (PST)
From: Julian Elischer
To: arch@freebsd.org
Subject: Re: Threads diagrams. (fwd)
Sender: owner-freebsd-arch@FreeBSD.ORG

---------- Forwarded message ----------
Date: Fri, 26 Nov 1999 15:55:42 -0500
From: Daniel M. Eischen
To: Julian Elischer
Subject: Re: Threads stuff

Julian Elischer wrote:
> Ok, I fetched it and have been looking at it.
>
> Comments:
>
> Circles P1 and P2 are?

Just the UTS's view of its allocated subprocesses.  The UTS asks the
kernel for additional subprocesses (or rforks them).  Either P1 or P2
is the main process, and the other one is the rfork'd process.

> I assume the two boxes are the subprocesses that the program has
> forked to give itself parallelism.

Yes.

> There are two async notify contexts because there is one per
> subprocess, right?

Yes.

> kse_current..
> ok
>
> kse_notify..
> this is effectively the saved context that will be used to upcall?

Yes.  There is only one for now; there may be a need for more than one,
though.

> kse_unblockedq
> ok
>
> kse_blockedq..
> hmmm, maybe.. the KSEs that are blocked are on the sleep queues, but I
> guess you need a way of being able to find them, so, ok..

Right.  The UTS might also want to cancel them.

> I presume that t4, t6, and t9 are blocked in userspace..

No, those are blocked in kernel space.  Their respective KSEs are shown
hung off of procs P1 and P2.  Oops, the kse_blockedq and kse_unblockedq
tags on proc P1 are reversed: P1->kse_blockedq should read
kse_unblockedq, and P1->kse_unblockedq should read kse_blockedq.

> Can we presume that threads blocked in user space and threads blocked
> in the kernel are identical?  (At this moment I don't think we can,
> though it is a design goal.)

Threads blocked in user space (let's say waiting on a mutex) don't have
a KSE.  There are only KSEs for threads blocked in the kernel, the UTS
event notifications, and the currently running threads.  When a thread
blocked in user space unblocks, the UTS can simply do a _longjmp to
resume it (using the same KSE as the thread being swapped out).
Perhaps we should also show examples of threads blocked in user space
in the diagram.  I don't think we want to require a kernel call to
resume a thread that is blocked in user space, right?  And I really
don't think we want KSEs for every thread in an application, right?

So other than the above, threads blocked in user space and threads
blocked in kernel space are exactly the same.  There will just be a
flag, and perhaps a KSE ID, in the user thread struct to indicate
whether it can be resumed with a _longjmp or a thread_resume(2).

> kse_freeq..
> I don't think this is needed.  We can have a systemwide cache of free
> KSEs without much problem, and even a per-processor cache, maybe..

Good point.  I'll remove them.
We might want the proc to know the maximum number of KSEs it's supposed
to have, though.  Consider thread groups and subprocesses at
other-than-default priority.  Once a process's blocked-KSE limit is
reached, the process can be put to sleep until at least one of the KSEs
wakes up.

> > We need some more diagrams, but I wanted to make sure we're in
> > general agreement before I make any more.
>
> Basically I agree.
> I had an interesting thought yesterday..
>
> If every thread stores all its context on the end of its stack, then
> we only have to store the stack, right?

OK, makes sense.

> So at the moment, using the current syscall gate, we store all the
> context on the kernel stack,

Hmm, I was under the impression it was on the USER stack.  Isn't that
why we have to copyin/copyout to access the trapframe from the kernel?

> but if we were to save it onto the USER stack before it did the
> KERNCALL, and didn't do it onto the kernel stack, then the information
> we need to move a blocked syscall thread onto the blocked thread list
> (as you show) would already all be in the right place.  Returned
> values would be placed into a separate async completion block that is
> allocated at the beginning of every thread's stack, rather than
> straight into the context.  Basically this all happens anyhow, but at
> present it's all done on the kernel side of the line.  I suggest
> instead moving it to the user side, and making each syscall load its
> return values from the status block after it returns from the kernel.
> ERRNO lives in the block permanently and need not be moved out.

I think I follow you, but diagrams would help :-)  Would there be some
sort of unique ID returned from a blocked syscall, so that the UTS
could later resume/cancel the thread?

> This is the kind of thing that I mean when I say that using a
> different protocol might make sense.
I'm open to the idea, but it just seems like there might be a really
easy and fast way to switch to a predefined context/trapframe that
would take us directly to the UTS.  The only things the UTS needs to
know are which process blocked and a unique ID for
resuming/suspending the blocked system call.  Based on which process
blocked, the UTS will know which thread blocked, and can tag it with
the unique ID and place it in the blocked queue.  For threads that are
preempted, you can additionally send the program counter so the UTS
can decide if it was in a critical region.

> What are you using to make the drawings?
> If it's tgif or xfig, then maybe we can get at the sources so we can
> submit suggestions for changes in picture format :-)

I'm using xfig.  I wish there was something better, though.  Let me
make your changes, and I'll send you the .fig file.

Dan Eischen
eischen@vigrid.com

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message