From owner-freebsd-arch Thu Nov 25 3: 9:24 1999
Delivered-To: freebsd-arch@freebsd.org
Received: from ns1.yes.no (ns1.yes.no [195.204.136.10]) by hub.freebsd.org (Postfix) with ESMTP id B1F8A14D85 for ; Thu, 25 Nov 1999 03:09:22 -0800 (PST) (envelope-from eivind@bitbox.follo.net)
Received: from bitbox.follo.net (bitbox.follo.net [195.204.143.218]) by ns1.yes.no (8.9.3/8.9.3) with ESMTP id MAA21626 for ; Thu, 25 Nov 1999 12:09:20 +0100 (CET)
Received: (from eivind@localhost) by bitbox.follo.net (8.8.8/8.8.6) id MAA38676 for freebsd-arch@freebsd.org; Thu, 25 Nov 1999 12:09:19 +0100 (MET)
Received: from pcnet1.pcnet.com (pcnet1.pcnet.com [204.213.232.3]) by hub.freebsd.org (Postfix) with ESMTP id D3ABF14D85 for ; Thu, 25 Nov 1999 03:09:12 -0800 (PST) (envelope-from eischen@vigrid.com)
Received: from vigrid.com (pm3-pt40.pcnet.net [206.105.29.114]) by pcnet1.pcnet.com (8.8.7/PCNet) with ESMTP id GAA16131; Thu, 25 Nov 1999 06:09:11 -0500 (EST)
Message-ID: <383D18A9.884D6155@vigrid.com>
Date: Thu, 25 Nov 1999 06:08:25 -0500
From: "Daniel M. Eischen" 
X-Mailer: Mozilla 4.5 [en] (X11; I; FreeBSD 4.0-CURRENT i386)
X-Accept-Language: en
MIME-Version: 1.0
To: Jason Evans 
Cc: Julian Elischer , freebsd-arch@freebsd.org
Subject: Re: Threads
References: <383BF031.B52BC41F@vigrid.com> <19991124220406.X301@sturm.canonware.com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Sender: owner-freebsd-arch@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

Jason Evans wrote:
>
> One of the main advantages I see to adding an asynchronous call gate (ACG)
> rather than changing the semantics of the current syscalls is that mixing

I'm advocating _not_ changing the semantics of current syscalls.  I don't
think it's necessary.

> > Right.  And just because it woke up from a tsleep doesn't mean that it
> > will eventually be able to finish and return to userland.  It may
> > encounter more tsleeps before leaving the kernel.
> > The UTS needs
> > to enter the kernel in order to resume the thread.  And it needs a
> > way of telling the kernel which blocked KSE to resume.
> >
> > The UTS is notified that a KSE has unblocked, but it doesn't have to
> > immediately resume it - other threads may have higher priority.  I think
> > we are in agreement here.  I'm just advocating using the stack of the
> > UTS event handler context (in the form of parameters to the event handlers)
> > to tell the UTS that threads have blocked/unblocked in the kernel.  There
> > doesn't have to be any magic/wizardry in the system calling convention
> > to do this.  The kernel can return directly to the predefined UTS event
> > handlers (also on a predefined stack) and totally bypass the original
> > system call through which it entered the kernel.  At some point later,
> > the UTS resumes the (now unblocked) KSE and returns the same way it
> > entered.
> >
> > You also want the ability to inform the UTS of _more_ than just one
> > event at a time.  Several KSEs may unblock before a subprocess is run.
> > You should be able to notify the UTS of them all at once.  How does
> > that work in your method?
>
> This sounds similar to Solaris LWPs in that there are potentially KSEs
> blocked in the kernel, whereas with scheduler activations (SA), that
> doesn't happen under normal circumstances.

Sure it does.  And if an application has a lot of I/O-bound threads, you
want to inform the UTS of all the unblocked threads at once - some threads
may have higher priority than others, so let the UTS decide which one to
resume.  If you only notify the UTS of one unblocked thread at a time,
then the kernel arbitrarily decides the priority.

> It sounds to me like the
> disagreement between you two (Daniel and Julian) is much more significant
> than what decisions are made by the UTS.

No, it's really only the issue of whether we need a different syscall
gate.

> Daniel, you say "The UTS is
> notified that a KSE has unblocked ...".
> However, if I understand the SA
> way of doing things, there is no KSE associated with a blocked syscall.

Well, up until now we've been using KSE to mean the "saved kernel context,
flags, queue management, and perhaps some portion of saved user context".
A KSE is not a kernel thread or subprocess.  I think we might be abusing
this definition, because a KSE seems more like a kernel thread.  Perhaps
we should be calling it a kernel context or something.

> The syscall context has some kernel context, but there is no bona fide
> context, such as with Solaris's LWP model.  When the syscall completes, a
> new activation is created for the upcall to the UTS.
>
> That said, I disagree with the idea of the UTS having explicit control over
> scheduling of KSEs.  I think that there should be exactly one KSE per
> processor (with the exception of PTHREAD_SCOPE_SYSTEM (bound) threads), and
> that threads should be multiplexed onto the KSEs.  This lets the kernel
> schedule KSEs as it sees fit, and lets the UTS divide the runtime of the
> KSEs as it sees fit.

Yes, Julian and I agree on this; it's just that you're mixing up the terms
that he and I were using.  The application will have control over the
priority of the subprocesses (or KSEs, as you're calling them) through
setpriority/rtprio, if it has the proper privileges.  I don't see the UTS
having control of the subprocesses' priority.

Dan Eischen
eischen@vigrid.com

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message