Date:      Mon, 29 Nov 1999 07:34:24 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Julian Elischer <julian@whistle.com>
Cc:        "Daniel M. Eischen" <eischen@vigrid.com>, arch@freebsd.org
Subject:   Re: Threads stuff
Message-ID:  <199911291534.HAA05346@apollo.backplane.com>
References:   <Pine.BSF.4.10.9911282347250.544-100000@current1.whistle.com>

:Here is where you and I part company to some extent.
:
:A process must be limited in CPU resources to the same amount whether it
:is unthreaded or threaded. If you want your 20 threads to be scheduled
:at the same weighting as the other processes, then a process with 20
:threads will have 20 times the scheduling clout of one with 1.

    I believe I covered that point, but I will restate it in clearer 
    terms.  You can do this simply by tracking the quantum in the proc 
    structure.  It does not prevent you from using the KSE as the scheduling
    entity.  It does not change the complexity at all, in fact.

:This is crucial to being able to get the behaviour expected by the
:threading standards.  If you want a bigger slice of the cake you need to 
:abide by the same rules as a normal process,
:i.e. fork (rfork) and spread your work over two sets of quanta.

    Which is not desirable.  You create a situation whereby the cpu resource
    is now being used non-deterministically based on the scheduling class
    within the multi-threaded application AND also based on other factors
    including the number of physical cpus in the system.  Also, in order to
    simulate N virtual cpus you wind up needing N rfork()'d processes.  In
    my scheme you can simulate N virtual cpus without taking up *any* 
    significant kernel resources - not even extra KSE's (see below).

:It's not a 'big mistake'. I think it's almost a requirement.
 
    Again, there is a big difference between the scheduling quantum and using
    the KSE as the scheduling entity.  The two do not have to go together.
    It does not add complexity to keep them separate.  Not at all.  I can
    outline the code involved if you are not convinced.  I've implemented
    two schedulers recently that use this concept.

:>     the kernel (once the MP lock becomes fine grained), and we should not have
:>     to rfork() to get it.  
:
:You are forgetting that you must ASK for parallelism, otherwise you are
:limited to one process's worth of quanta. A process is limited to one
:CPU-second per second. To allow you to automatically get more is unfair
:to the other processes. You need to go through the same limits as they
:do, though we make it almost infinitely easier for you to do so.

    In your scheme you must ask for parallelism.  In mine you don't.  Or,
    in other terms:  In your scheme you must specify the exact amount of
    parallelism you want whereas in mine you can simply specify that you
    want parallelism and do not necessarily have to specify how much, nor
    is the level of system resource use dependent on the amount of 
    (virtual) parallelism you want.

:
:>     There is absolutely no reason why KSE's associated with the same process
:>     cannot be run in parallel, which means that you might as well use KSE's
:>     as your scheduling entity rather than processes.
:
:Except that that's not a goal. You are limited to the virtual processors
:(your words) that you have.

    That's not a goal for you.  It is a goal for me for many reasons, the
    most important one being that by using KSE's in this manner you can 
    collapse four different scheduling problems into a single algorithm.
    (kernel threads, kernel interrupt threads, standard processes, user 
    threads).
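
    To make the collapse concrete, here is a rough sketch (the names and
    struct layout are made up purely for illustration, not actual code):

	/*
	 * Illustrative only: with the KSE as the one scheduling entity,
	 * kernel threads, interrupt threads, standard processes, and
	 * user threads are just KSEs of different classes on a single
	 * run queue, picked by one algorithm.
	 */
	enum kse_class { KC_ITHREAD, KC_KTHREAD, KC_PROC, KC_UTHREAD };

	struct kse {
		enum kse_class	ke_class;
		int		ke_pri;		/* unified priority scale */
		struct kse	*ke_next;	/* run queue linkage */
	};

	/* One pick-next routine instead of four separate schedulers. */
	struct kse *
	choose_next(struct kse *runq)
	{
		struct kse *ke, *best = NULL;

		for (ke = runq; ke != NULL; ke = ke->ke_next)
			if (best == NULL || ke->ke_pri < best->ke_pri)
				best = ke;
		return (best);
	}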

:>     By attempting to use a process as your scheduling entity you create a
:>     very complex scheduling situation whereby the kernel threads require
:>     completely different code to support than the user threads.  I'd rather
:>     have the same code control both - it's easier in concept, design, and
:>     implementation.
:
:No, I disagree about the complexity..
:If you wish to add the required constraints directly upon KSEs and
:then make them behave correctly with respect to other processes
:(i.e. limit themselves to their quanta) you will add back almost all of
:that complexity, except now you have written it as separate code.

    This is absolutely NOT true.  Not in the least.  Having the quanta
    in the process structure adds no significant complexity over having it
    in the KSE.  The only thing that happens is that kse->quantum_counter
    would become kse->parent_proc->quantum_counter.  Not a big deal.  You
    do not even need to worry about locking the field because the small 
    simultaneous write window is not statistically relevant.
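
    To make this concrete, here is a rough sketch (the field and function
    names are hypothetical, purely for illustration):

	/*
	 * The scheduler picks KSEs, but the quantum is charged to the
	 * owning process, so a process gets the same slice whether it
	 * has 1 thread or 20.
	 */
	struct proc {
		int	p_quantum;	/* ticks left in this process's slice */
	};

	struct kse {
		struct proc *ke_proc;	/* back-pointer to owning process */
	};

	extern void need_resched(void);	/* stand-in for the resched request */

	/* Called from the clock interrupt for the currently running KSE. */
	void
	charge_quantum(struct kse *ke)
	{
		/*
		 * One extra pointer dereference versus a per-KSE counter.
		 * No lock: the rare simultaneous decrement from two cpus
		 * is statistically irrelevant, as argued above.
		 */
		if (--ke->ke_proc->p_quantum <= 0)
			need_resched();
	}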

:>     you can use the *same* KSE for the next running thread on that cpu.  The
:>     result is massive L1/L2 cache sharing for the kernel stack to the point
:>     where you don't even need a kernel stack pre-assigned for your long-idle 
:>     processes in some cases (restartable system call case).
:
:You don't assign a floating KSE to each processor, you assign one to each
:virtual processor, and guess what that is?
:Everything you say above is still true for that case. Especially if there
:is processor affinity for (sub)processes.

    No, you do NOT have to assign a floating KSE to each virtual processor,
    at least not in my scheme.  There are only two situations where a KSE
    must be assigned:

	* When a KSE is currently *running* on a physical cpu (not a virtual
	  cpu).
	* When a KSE is blocked in the kernel

    When a thread is runnable but not blocked in the kernel and *NOT*
    currently assigned to a physical cpu by the kernel, it does NOT need to 
    have a KSE assigned to it.
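
    As a sketch, with hypothetical names (illustrative, not an actual
    implementation):

	/*
	 * Lazy KSE assignment: a merely-runnable thread carries no KSE.
	 * One is attached from a per-cpu cache at the moment the kernel
	 * dispatches the thread onto a physical cpu, and it is kept
	 * only while the thread is blocked in the kernel.
	 */
	struct kse;				/* KSE with kernel stack */

	struct thread {
		struct kse *td_kse;		/* NULL while just runnable */
		int	    td_kblocked;	/* nonzero: blocked in kernel */
	};

	extern struct kse *kse_grab(int cpu);	/* per-cpu KSE cache */
	extern void	   kse_release(struct kse *ke, int cpu);
	extern void	   run_thread(struct thread *td);

	void
	cpu_dispatch(struct thread *td, int cpu)
	{
		/* Attach a KSE only now that we are actually running. */
		td->td_kse = kse_grab(cpu);
		run_thread(td);
	}

	void
	cpu_switchout(struct thread *td, int cpu)
	{
		if (!td->td_kblocked) {
			/*
			 * Not blocked in the kernel: return the KSE so
			 * the next thread run on this cpu reuses its
			 * cache-hot kernel stack.
			 */
			kse_release(td->td_kse, cpu);
			td->td_kse = NULL;
		}
	}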

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>



