From owner-freebsd-threads@FreeBSD.ORG Thu Apr 17 09:16:42 2003
Date: Thu, 17 Apr 2003 09:15:13 -0700
From: Terry Lambert
To: Jeff Roberson
cc: Julian Elischer
cc: freebsd-threads@freebsd.org
Subject: Re: Patches for threads/scheduler abstraction.
Message-ID: <3E9ED311.1BC9610D@mindspring.com>
References: <20030417042350.X76635-100000@mail.chesapeake.net>
List-Id: Threading on FreeBSD

Jeff Roberson wrote:
> On Wed, 16 Apr 2003, Julian Elischer wrote:
> > For a start the simple one would be queueing threads on the run queues.
> > A system compiled with that scheduler would have no KSEs anywhere
> > in the entire kernel.
> > The KSE one would be queueing KSEs.  I don't see how you can do this
> > with a shared file.
>
> You're missing the point.  The scheduler shouldn't be tied to the
> threading implementation.

I think you will lose CPU affinity and negaffinity if you do this.

I agree that the scheduler shouldn't know about threads, but it has to
know about scheduling entities, given that it's, well, a scheduler,
after all.

There are already too many locks taken in the scheduler path as it is,
and I don't see how concurrency will be improved by doing what you
suggest.

There is also no clustering support -- for migration of a process from
one node to another -- something that can't be done with a scheduler
that snoops shared memory, since the memory in question isn't shared.

> This way you will not duplicate code and you will keep the two tasks
> independent.  Essentially the sched_*.c files decide system scope
> contention, while the threading implementation determines the process
> scope contention, which may include some concurrency limits imposed by
> KSE or some other structure.
>
> Do you see?  This way we could get KSEs out of the entire kernel other
> than kern_kse.c and still support them with the sched_4bsd and sched_ule
> schedulers.  Otherwise we're going to have a copy of each scheduler for
> each threading implementation, and we won't be able to support two
> threading implementations simultaneously.

I think this is premature optimization; you're complaining about a
multiplicity problem, but the most glaringly obvious multiplicity
problem in the scheduling context is the fact that the 4BSD and ULE
schedulers can't coexist in the same kernel.
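To be concrete about what I mean by "scheduling entities" -- and note
that this is a throwaway userland mock, not anything in the tree or in
your patch, and every struct and function name in it is made up --
coexistence pretty much forces something like a per-scheduler ops
vector over an opaque entity type:

#include <stdio.h>

/*
 * In the real thing this would be opaque to the scheduler; it is a
 * concrete struct here only so the mock actually runs.
 */
struct sched_entity {
	int id;
};

/* One of these per scheduler implementation (4BSD, ULE, KSE, ...). */
struct sched_ops {
	const char *name;
	void (*enqueue)(struct sched_entity *se);  /* put on a run queue */
	void (*dequeue)(struct sched_entity *se);  /* pull off a run queue */
	struct sched_entity *(*choose)(void);      /* pick the next to run */
};

/* A trivial one-slot "scheduler" so the mock does something. */
static struct sched_entity *slot;

static void mock_enqueue(struct sched_entity *se) { slot = se; }
static void mock_dequeue(struct sched_entity *se) { if (slot == se) slot = NULL; }
static struct sched_entity *mock_choose(void) { return slot; }

static const struct sched_ops mock_sched = {
	.name    = "mock",
	.enqueue = mock_enqueue,
	.dequeue = mock_dequeue,
	.choose  = mock_choose,
};

int
main(void)
{
	struct sched_entity se = { .id = 42 };
	const struct sched_ops *sched = &mock_sched;  /* whichever is active */

	sched->enqueue(&se);
	printf("%s scheduler chose entity %d\n", sched->name,
	    sched->choose()->id);
	return (0);
}

With that shape, sched_4bsd could queue threads and a KSE-based
scheduler could queue KSEs behind the same interface, and both could
be compiled into the same kernel.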
I know ULE is your baby, but what is needed is a cleaner abstraction
than you are currently suggesting.  If you go forward with a
half-abstraction, one that ends up setting in concrete the
non-coexistence of schedulers, I think that would be just as big a
mistake as not doing the part that you are talking about.

If you want to rename the terminology, you should go ahead and rename
it.  Mach calls the container abstraction a process presents to a
scheduler a "task".  If you want to call it that instead of "KSE" (or
"KSEGRP", which I personally don't like), then go ahead -- BUT there
needs to be some type of container, and it needs to be common to all
scheduler implementations, or a given implementation won't be able to
provide CPU affinity and negaffinity for all the objects in the
container class.  (There is a rough sketch of what I mean below, after
my sig.)

> > Anyhow, the following hack (totally unoptimised.... notice the

I think this describes the whole project so far.  8-) 8-).

-- Terry
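P.S.  Here is roughly the shape of container I mean -- again a made-up
userland sketch, not kernel code, with invented names throughout:

#include <limits.h>
#include <stdio.h>

#define NCPU 4

/* Mach would call this a "task": one per process, shared by all
 * schedulers that ever run this process's entities. */
struct sched_container {
	int running_here[NCPU];  /* # of this container's entities on each CPU */
};

struct sched_entity {
	struct sched_container *container;
	int last_cpu;            /* -1 if it has never run */
};

/*
 * Pick a CPU for an entity: keep it where it last ran unless a sibling
 * from the same container is already there (affinity), otherwise pick
 * the CPU with the fewest siblings (negaffinity).
 */
static int
pick_cpu(const struct sched_entity *se)
{
	const struct sched_container *c = se->container;
	int cpu, best = 0, best_load = INT_MAX;

	if (se->last_cpu >= 0 && c->running_here[se->last_cpu] == 0)
		return (se->last_cpu);

	for (cpu = 0; cpu < NCPU; cpu++) {
		if (c->running_here[cpu] < best_load) {
			best_load = c->running_here[cpu];
			best = cpu;
		}
	}
	return (best);
}

int
main(void)
{
	struct sched_container task = { .running_here = { 1, 0, 0, 0 } };
	struct sched_entity se = { .container = &task, .last_cpu = 0 };

	/* A sibling is already on CPU 0, so negaffinity pushes us elsewhere. */
	printf("entity goes to CPU %d\n", pick_cpu(&se));
	return (0);
}

Because the container is shared by whatever scheduler happens to be
running its entities, any scheduler implementation can make the same
affinity and negaffinity decision from it.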