Date: Sun, 28 Nov 1999 08:49:01 -0500
From: "Daniel M. Eischen" <eischen@vigrid.com>
To: Julian Elischer
Cc: arch@freebsd.org
Subject: Re: Threads stuff

Julian Elischer wrote:
> On Sat, 27 Nov 1999, Daniel M. Eischen wrote:
> > I think it's basically right.  If you saw the diagram at different
> > stages, it would be easier to see.
>
> Look at:
> http://www.freebsd.org/~julian/threads

Great, thanks!

> The .obj file is the tgif source for the 7 stages.
> Stages 6 and 7 are not quite worked out yet..
> The question is "Who makes the decision to preempt a running thread,
> and continue the unblocked thread in the kernel?" (step 6)

Yes, I guess this is still open for discussion.  I'd really like to be
able to do it the SA (scheduler activations) way, having the UTS decide
when to resume threads blocked in the kernel.  But recalling Nate's
earlier objection to this, FreeBSD excels at being a good server
platform, where I/O throughput matters.  What is the typical pattern of
processes blocked on I/O, especially in a loaded system?  Are there
many tsleep/wakeups per I/O request, or are there usually just one or
two tsleep/wakeup pairs?

I can see that it would be advantageous to have the kernel
automatically try to complete unblocked KSEs.  But it needs to track
the time spent in the system for each KSE, so that its respective
thread doesn't starve other threads.  Do we also want to place a limit
on how much of the _process_ quantum is used to complete unblocked
KSEs?

What if we have the UTS dole out time to be used for completing
unblocked KSEs?  If there are no runnable higher-priority threads, the
UTS can say "here's some time, try to complete as many of the unblocked
KSEs as you can".  The kernel can use that time all at once, piecemeal,
or until the UTS says "your time is revoked, I have higher-priority
threads".  (A rough sketch of what such an interface might look like
follows Julian's comments below.)

> I have shown it in step 6 as though the kernel took it upon itself to
> do so (for example at a quantum boundary), but if the user decided to
> do it then the situation would skip straight to step 7, because the
> thread state would already be in the right place.
>
> I'm putting these out just to get comments..
>
> The shaded areas are where there has been a change.
>
> Tell me what you think of this format.
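To make the dole-out idea above concrete, here's a minimal sketch of
what the userland side might look like.  Everything in it is
hypothetical -- kse_grant_time(), kse_revoke_time(), and the uts_*
hooks don't exist anywhere; they just assume one call that hands the
kernel a slice of the process quantum to spend completing unblocked
KSEs, and another that takes back whatever is left:

#include <sys/time.h>

/* Hypothetical syscalls -- neither of these exists yet. */
int  kse_grant_time(const struct timeval *slice, struct timeval *used);
int  kse_revoke_time(void);

/* Hypothetical UTS bookkeeping hook. */
void uts_account(const struct timeval *used);

/*
 * UTS idle path: no runnable higher-priority threads, so hand the
 * kernel some of our quantum to complete unblocked KSEs.
 */
void
uts_idle(const struct timeval *spare)
{
	struct timeval used;

	if (kse_grant_time(spare, &used) == 0)
		uts_account(&used);	/* charge the time the kernel used */
}

/*
 * A higher-priority thread just became runnable: revoke whatever is
 * left of the grant.
 */
void
uts_preempt(void)
{
	(void)kse_revoke_time();
}

The point of the split is that the UTS keeps all the priority
decisions, and the kernel only spends time it was explicitly handed.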
> > o A thread blocks in kernel, the KSE is saved, a new KSE is
> >   allocated, and an upcall is made to the scheduler with a unique
> >   KSE ID provided to identify the now blocked thread.
> >
> > o Scheduler receives notification of a thread blocking, tags the
> >   currently running thread with the KSE ID, chooses a new thread
> >   to run, switches to the new thread, and makes a system call to
> >   schedule a signal/upcall when the new thread's quantum expires.
> >
> > o A KSE is woken up in the kernel.
> >
> > o Scheduler receives notification of a thread unblocking
> >   (finishing?) in the kernel.
>
> Here's where I get into difficulty..  Should we notify the
> UTS on unblocking, or on completion?  Or both?

Yeah, that's a tough question to answer.  Perhaps we should take a
simple approach for now, and try to expand on it and optimize it later.
I think the simple solution is to notify the UTS on unblocking and let
it decide when to resume the thread.  Once that's working, we can look
at optimizing it so that the kernel can somehow try to automatically
complete unblocked KSEs.  Since the UTS knows which KSE is being
run/resumed, tracking of time spent completing unblocked KSEs can also
be added later.

My $.02, FWIW.

> > o At the request of the scheduler, the kernel schedules a timeout
> >   for the new quantum and resumes the now unblocked thread.
>
> Define "the kernel schedules a timeout for the new quantum and
> resumes the now unblocked thread".

When the UTS is informed that a thread is now unblocked in the kernel
(to the point that it can return to userland) and decides to resume it,
the UTS computes the time at which a scheduling signal/upcall should be
performed.  It then makes a system call that both resumes the thread
and schedules the signal.  Under your different syscall gate, this
would be a longjmp followed by a call to schedule a signal.  But if
we're going to make a system call anyway, why not switch to the resumed
thread and schedule the signal all at once?  If the point of a
different syscall gate is to eliminate a system call to resume an
unblocked thread, then my contention is that we still have to make a
system call for a scheduling signal/upcall.  Combine the resumption of
the thread and the scheduling of the signal (thr_resume_with_quantum),
and you don't need a different syscall gate ;-)  (See the P.S. for a
rough sketch.)

Dan Eischen
eischen@vigrid.com
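P.S.  To make the combined call concrete, here's a minimal sketch of
the interface I have in mind.  thr_resume_with_quantum() and the uts_*
helpers are hypothetical -- nothing like them exists yet; the point is
only that a single kernel entry both resumes the unblocked KSE and arms
the next scheduling signal/upcall:

#include <sys/time.h>

typedef long kse_id_t;	/* opaque ID handed to the UTS in the upcall */

/*
 * Hypothetical combined syscall: resume the thread blocked in the
 * kernel identified by 'kse_id', and schedule a scheduling
 * signal/upcall to the UTS once 'quantum' has elapsed.  On success,
 * control continues in the resumed thread, not in the caller.
 */
int thr_resume_with_quantum(kse_id_t kse_id,
    const struct timeval *quantum);

void uts_pick_quantum(struct timeval *tv);	/* hypothetical UTS hook */
void uts_schedule(void);			/* hypothetical UTS hook */

/*
 * UTS switch path: the highest-priority runnable thread is one that
 * was blocked in the kernel and has since unblocked.
 */
void
uts_resume_kernel_thread(kse_id_t kse_id)
{
	struct timeval quantum;

	uts_pick_quantum(&quantum);
	if (thr_resume_with_quantum(kse_id, &quantum) == -1) {
		/* e.g. the KSE already completed; pick another thread */
		uts_schedule();
	}
}

One syscall instead of two is the whole argument: the resume and the
arming of the next quantum ride the same kernel crossing.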