Date: Sun, 26 Nov 2000 06:21:54 -0800
From: Julian Elischer <julian@elischer.org>
To: arch@FreeBSD.ORG, jasone@freebsd.org
Subject: Re: Threads (KSE etc) comments

Where's jasone? He's been peculiarly silent during this....

There has been some discussion as to what the function of the KSEG is. I was shaving today (doesn't the best thinking happen at such times?) and thinking about why we need KSEGs. The basic answer is: "We need some method of grouping the scheduled entities, so as to ensure that the scheduler has full information about, and control over, what is going on."

Whether we actually need a KSEG, and what it does, depends upon what semantics we want our threading support to have. If we want to provide the process with a virtual machine that looks as if it has an unlimited number of virtual processors, then we allow the KSEG to spawn an unlimited number of KSEs. In that case, do we allow the "scheduling clout" to build up linearly with the number of KSEs, or do we limit it in some way?

Theoretically you would want a KSEG with two KSEs to have the same clout as a process running unthreaded, so that CPU time would be divided 50-50. However, this would mean assigning the threaded process a "partial quantum" on each processor; by this I mean that after 5 ticks the KSEG's KSE on each processor would be interrupted and the other process allowed to run. This is unworkable. Another way of sharing the processors between the two processes would be to schedule both KSEs on one processor and allow the other process to run uninterrupted on the other. This is also quite unworkable: what if there are three competing processes and only 2 processors? Maybe this "exact fairness" is too hard to achieve. In my world, we allow the KSEG to become SLIGHTLY unfair, by allowing it to compete independently on each processor.

If we allow the KSEG to have an unlimited number of KSEs, then we need some other item that competes on behalf of the KSEG on each processor. That is, we invent some other structure (a KSEG-agent) that sits in the scheduling queue(s) on behalf of the KSEG. When the agent gets a quantum, it allows the KSEG to decide which of its KSEs will be run next. (The KSEG could round-robin them, for example.)
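To make the agent idea concrete, here's a minimal sketch in C. Every name in it (struct kseg_agent, kseg_choose() and so on) is invented for this mail; it's the shape of the idea, not proposed code:

    #include <sys/queue.h>

    struct kse {
            TAILQ_ENTRY(kse)  ke_link;      /* linkage on the KSEG's list */
            /* ... saved context, currently-running thread, etc. ... */
    };

    struct kseg {
            TAILQ_HEAD(, kse) kg_kses;      /* all KSEs in the group */
            struct kse       *kg_last;      /* last KSE we chose to run */
    };

    /*
     * The thing that actually sits in a per-CPU scheduling queue and
     * competes for CPU on behalf of the group: one per processor, no
     * matter how many KSEs the group has spawned.
     */
    struct kseg_agent {
            struct kseg      *ka_group;
            int               ka_cpu;       /* which run queue it sits in */
    };

    /*
     * Called when an agent reaches the head of its run queue and wins a
     * quantum: the group, not the system scheduler, decides which KSE
     * runs.  Round-robin is just one possible policy.
     */
    static struct kse *
    kseg_choose(struct kseg_agent *ka)
    {
            struct kseg *kg = ka->ka_group;
            struct kse *ke = NULL;

            if (kg->kg_last != NULL)
                    ke = TAILQ_NEXT(kg->kg_last, ke_link);
            if (ke == NULL)                 /* end of list: wrap around */
                    ke = TAILQ_FIRST(&kg->kg_kses);
            kg->kg_last = ke;
            return (ke);                    /* NULL if the group is empty */
    }

The point is that the agent, not the individual KSEs, is what the system scheduler sees: the group gets one slot per processor in the queues, regardless of how many KSEs it has.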
When a KSE is pre-empted, the kernel saves state for that thread in the thread-control-block, and the next KSE to upcall to the UTS will include that thread-control-block in its list of reportable entities. I'm not clear on whether it's the next upcall on ANY KSE, or just the next upcall on that KSE. If the latter, then having multiple KSEs on the same processor allows the KSEG round-robin scheduler to make the UTS believe that it has N virtual processors (N KSEs). However, it also means that the KSEG round-robin scheduler is usurping the decision from the UTS as to which thread is to be run next, as the UTS doesn't know that the thread on the other KSE was pre-empted in favour of this one. (It's on a different virtual CPU.) If the former (all KSEs report all events), then there is no real advantage to having more than N KSEs (N processors), because the UTS will probably keep swapping the threads it thinks are most important onto the KSEs, which means that the thread that was pre-empted on KSE-A will probably be re-scheduled on KSE-B when KSE-B preempts KSE-A. So why have KSE-B at all? All it does is massively confuse things, and it creates a whole new class of scheduling problems.

So, in summary, assuming we allow only SLIGHT unfairness: if you allow the process to have more than N KSEs in a KSEG, you get one of the following:

1/ A lot of unfairness, if you allow each KSE to be in the queues by itself.

2/ The KSEG scheduler usurping the role of the UTS, if it really does hide the true number of processors.

3/ An increased level of UTS complexity, and un-needed work, as the UTS struggles to switch the important threads onto the ever-changing set of running KSEs (it must be ever-changing because there are more of them than CPUs).

If you only allow N KSEs to the KSEG, then all these problems go away. The UTS can be aware that it has a limit. But it can also be aware that a KSE will not be pre-empted by another of its own KSEs (this simplifies things). It gets the same amount of CPU time, but has less work to do. It has full control of which threads are running, and it competes fairly with other processes and KSEGs.

The reason for having KSEGs is simply as an entity that competes for CPU, to assure fairness. It may not even exist as a separate structure in the case where there are separate per-CPU scheduling queues (though I think it would, for efficiency's sake). It would PROBABLY have an analogous partner in the UTS that represents the virtual machine that runs all the threads that are competing at the same scope.

On a single scheduling queue system, I think I would have the KSEG in the queue rather than the independent KSEs. When it gets to the head, you schedule KSEs on all the CPUs. This allows the threads to communicate quickly using shared memory, should they want to. The UTS gets the entire quantum across as many CPUs as it has. (There's a sketch of this variant in the PS below.)

I hope that this answers some of the questions as to why I think there are reasons for having the KSEG entity. I hope there will be a good argument about this. We want as many people thinking about it as possible. I'll try to draw up some more pictures (like last time) to illustrate my thoughts as to how this all works.

Julian

--
      __--_|\   Julian Elischer
     /       \  julian@elischer.org
    (   OZ    ) World tour 2000
---> X_.---._/  presently in: Budapest
            v
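PS: for the single-queue variant, the same hypothetical structures from the sketch above could be reused, with one addition so the KSEG itself can sit in the run queue. Again, all names are invented and this is only a sketch of the idea:

    struct kseg {
            /* ... as in the earlier sketch, plus: ... */
            TAILQ_ENTRY(kseg) kg_link;      /* the group itself is queued */
    };

    static TAILQ_HEAD(, kseg) runq = TAILQ_HEAD_INITIALIZER(runq);
    extern int ncpus;                          /* number of processors */
    void cpu_run_kse(int cpu, struct kse *ke); /* hypothetical dispatch */

    /*
     * When the KSEG reaches the head of the single queue, hand its KSEs
     * out to all the CPUs at once: the whole quantum, across as many
     * CPUs as there are, belongs to this one group, and its threads can
     * communicate through shared memory while they all run.
     */
    static void
    sched_pick_next(void)
    {
            struct kseg *kg;
            struct kse *ke;
            int cpu = 0;

            kg = TAILQ_FIRST(&runq);
            if (kg == NULL)
                    return;
            TAILQ_REMOVE(&runq, kg, kg_link);

            TAILQ_FOREACH(ke, &kg->kg_kses, ke_link) {
                    if (cpu >= ncpus)       /* at most one KSE per CPU */
                            break;
                    cpu_run_kse(cpu++, ke);
            }
    }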