From owner-freebsd-arch Mon Nov 29 00:06:49 1999
Delivered-To: freebsd-arch@freebsd.org
Received: from ns1.yes.no (ns1.yes.no [195.204.136.10])
	by hub.freebsd.org (Postfix) with ESMTP id EEF4414F41
	for ; Mon, 29 Nov 1999 00:06:46 -0800 (PST)
	(envelope-from eivind@bitbox.follo.net)
Received: from bitbox.follo.net (bitbox.follo.net [195.204.143.218])
	by ns1.yes.no (8.9.3/8.9.3) with ESMTP id JAA11562
	for ; Mon, 29 Nov 1999 09:06:45 +0100 (CET)
Received: (from eivind@localhost)
	by bitbox.follo.net (8.8.8/8.8.6) id JAA62209
	for freebsd-arch@freebsd.org; Mon, 29 Nov 1999 09:06:45 +0100 (MET)
Received: from alpo.whistle.com (alpo.whistle.com [207.76.204.38])
	by hub.freebsd.org (Postfix) with ESMTP id 4A15514F41
	for ; Mon, 29 Nov 1999 00:06:36 -0800 (PST)
	(envelope-from julian@whistle.com)
Received: from current1.whiste.com (current1.whistle.com [207.76.205.22])
	by alpo.whistle.com (8.9.1a/8.9.1) with ESMTP id AAA34147;
	Mon, 29 Nov 1999 00:05:36 -0800 (PST)
Date: Mon, 29 Nov 1999 00:05:36 -0800 (PST)
From: Julian Elischer
To: Matthew Dillon
Cc: "Daniel M. Eischen" , arch@freebsd.org
Subject: Re: Threads stuff
In-Reply-To: <199911290116.RAA47293@apollo.backplane.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: owner-freebsd-arch@FreeBSD.ORG
Precedence: bulk
X-Loop: FreeBSD.ORG

On Sun, 28 Nov 1999, Matthew Dillon wrote:

> :The UTS can treat a page fault in the same way as a blockage on I/O
> :(unless the page fault occurs in the scheduler itself). A new thread
> :can be chosen and run, and the UTS can be notified when the page fault
> :is cleared.
> :
> :> * The userland scheduler must deal with scheduling the N cpu case
> :>   itself - this is something more suitable to the kernel because the
> :>   userland scheduler has no knowledge of other unrelated
> :>   processes/threads running in the system.
> :>   This means that if the userland scheduler is trying to switch or
> :>   schedule threads without making a system call, the whole mess
> :>   becomes much more complex when the kernel winds up having to
> :>   manage the same threads itself.
> :
> :I think the UTS need only concern itself with its own allocated
> :subprocesses. It multiplexes threads onto processes, and it's the
> :kernel's job to multiplex processes onto CPUs. I think I do agree with
> :you on having to make a system call to switch threads, but I'm not
> :completely off the fence yet ;-)
>
>     I think this is a big mistake.  Scheduling is already a big issue
>     with KSE's; there is absolutely no need to make it even more
>     complex by having two scheduling entities -- processes and KSE's --
>     when you only really need to have one -- the KSE's.

Here is where you and I part company to some extent. A process must be
limited to the same amount of CPU resources whether it is unthreaded or
threaded. If your 20 threads were each scheduled at the same weighting
as other processes, then a process with 20 threads would have 20 times
the scheduling clout of a process with one.

Subprocesses are 'containers' for threads. A subprocess is scheduled in
exactly the same way as any other process. Processes have priorities and
can be 'nice'd, etc. If two processes share a machine, each gets half.
If one of them is threaded, then how it divides up its time is its own
business, but the moment its tick is finished, it is descheduled, and so
are any threads it had running. This is crucial to getting the behaviour
expected by the threading standards.

If you want a bigger slice of the cake, you need to abide by the same
rules as a normal process, i.e. fork (rfork) and spread your work over
two sets of quanta. If you want to utilise two processors, you should
allocate two virtual processors (the same thing). If you don't, you are
competing unfairly with the other processes.

It's not a 'big mistake'. I think it's almost a requirement.
We need to group threads into a limiting larger entity, and we need that
larger entity to define scheduling behaviour with regard to the rest of
the system. Guess what: we already have such an entity. It's called a
process.

>     We already have to associate kernel state with KSE's, which means
>     we already have to schedule KSE's.  We want maximum parallel
>     execution within the kernel (once the MP lock becomes fine
>     grained), and we should not have to rfork() to get it.

You are forgetting that you must ASK for parallelism; otherwise you are
limited to one process-worth of quanta. A process is limited to one
CPU-second per second. To allow you to automatically get more is unfair
to the other processes. You need to go through the same limits as they
do, though we make it almost infinitely easier for you to do so.

>     There is absolutely no reason why KSE's associated with the same
>     process cannot be run in parallel, which means that you might as
>     well use KSE's as your scheduling entity rather than processes.

Except that that's not a goal. You are limited to the virtual processors
(your words) that you have.

>     By attempting to use a process as your scheduling entity you
>     create a very complex scheduling situation whereby the kernel
>     threads require completely different code to support than the user
>     threads.  I'd rather have the same code control both - it's easier
>     in concept, design, and implementation.

No, I disagree about the complexity. If you wish to impose the required
constraints directly upon KSEs, and then make them behave correctly with
respect to other processes (i.e. limit themselves to their quanta), you
will add back almost all of that complexity, except now you have written
it as separate code.

>     There are many, many advantages to using a KSE as your scheduling
>     entity.  You can assign a floating KSE to each cpu and associate
>     it with the currently running thread.  When a context switch
>     occurs, if the KSE's stack is not in use (i.e.
>     the thread was not blocked in a system call), you can use the
>     *same* KSE for the next running thread on that cpu.  The result is
>     massive L1/L2 cache sharing for the kernel stack, to the point
>     where you don't even need a kernel stack pre-assigned for your
>     long-idle processes in some cases (the restartable system call
>     case).

You don't assign a floating KSE to each processor; you assign one to
each virtual processor, and guess what that is? Everything you say above
is still true for that case, especially if there is processor affinity
for (sub)processes.

>					-Matt

To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-arch" in the body of the message