Date:      Sat, 27 Nov 1999 19:38:52 -0800 (PST)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        Jason Evans <jasone@canonware.com>
Cc:        freebsd-arch@freebsd.org
Subject:   Re: Threads
Message-ID:  <199911280338.TAA40637@apollo.backplane.com>
References:  <Pine.SUN.3.91.991124134533.26314A-100000@pcnet1.pcnet.com> <199911241905.LAA20045@apollo.backplane.com> <14396.15070.190669.25400@avalon.east> <199911241941.LAA20231@apollo.backplane.com> <19991124212521.W301@sturm.canonware.com>

:Until reading about DEC's threading efforts
:(http://www.digital.com/info/DTJF03/DTJF03SC.TXT) a few days ago, I would
:have agreed with you.  However, that paper makes some very valid arguments
:for needing multiple scheduling queues.  The paper is very worthwhile
:reading.
:
:Jason

    hmm.  A very interesting article.  They could well be correct - certainly
    a single scheduling queue would become a problem if you have many 
    processors.  I don't think there would be much contention with 4-8 cpu's,
    at least not if we limit the lock to *just* the operation surrounding
    the addition and removal of the KSE (using Julian's terminology) from
    the queue.  Also, this contention would *not* occur during a context
    switch, only when a task is scheduled and descheduled.

    For a task switch, we simply trylock the next task in the queue relative
    to the current one - no locking of the 'queue' itself is really required.
    Many cpu's could be operating on the same queue and scale just fine for
    the task switch.  Oh fubar, I went back to my own terminology.  
    task == KSE.
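
    To make the trylock idea concrete, here is a rough userland sketch in C.
    The names (struct kse, kse_switch_next) and the use of pthread mutexes
    are illustrative only, not the real kernel structures or locks:

```c
/* Hypothetical sketch: per-task trylock during a context switch.
 * A cpu holds the lock of the KSE it is currently running; to switch,
 * it walks the circular run queue and claims the first KSE it can
 * trylock.  No queue-wide lock is taken for the walk itself. */
#include <pthread.h>
#include <stddef.h>

struct kse {
    struct kse *next;          /* circular run queue link */
    pthread_mutex_t lock;      /* held while a cpu owns this KSE */
};

/* Walk forward from 'cur'; skip KSEs locked by other cpus. */
struct kse *kse_switch_next(struct kse *cur)
{
    struct kse *k = cur->next;
    while (k != cur) {
        if (pthread_mutex_trylock(&k->lock) == 0)
            return k;          /* claimed: this cpu now runs k */
        k = k->next;
    }
    return NULL;               /* nothing else runnable */
}
```

    Many cpus can walk the same ring this way; contention is confined to
    the individual KSE locks rather than a single queue lock.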

    We have to balance the time lost through potential lock contention against
    the extra overhead of managing multiple queues, so perhaps an examination
    of the overhead required to manage multiple queues is in order. 

    Here's my take: Let's say that each cpu has its own queue.  Let us further 
    say that we implement a simple aggregate-priority mechanism to balance 
    the load on each cpu, and further say that we have a rollover mechanism 
    for when a cpu's queue is empty - the scheduler for that cpu rolls over
    to another cpu's queue.

    Ok, so we have three situations:

	* task is woken up.  Must select cpu to place task on.  There is
	  some extra overhead here.

	* task goes to sleep, is removed from current queue.  Lock current
	  queue.  No additional overhead versus the one-queue model.

	* task switch.  No additional overhead under load (which may be
	  all that matters), some additional overhead if queue is empty
	  (we have to pull a task off another queue or ask the system to
	  rebalance the queues).
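
    The wakeup and rollover cases above could look something like this
    sketch (all names and the aggregate-priority bookkeeping are made up
    for illustration; locking is omitted):

```c
/* Per-cpu run queue sketch.  Wakeup picks the least-loaded cpu by
 * aggregate priority; rollover lets an idle cpu scan its neighbors. */
#include <stddef.h>

#define NCPU 4

struct runq {
    int aggregate_prio;        /* sum of priorities of queued tasks */
    int nqueued;               /* number of tasks in this queue */
};

/* Wakeup path: place the task on the least-loaded cpu's queue. */
int wakeup_pick_cpu(struct runq q[NCPU])
{
    int best = 0;
    for (int i = 1; i < NCPU; i++)
        if (q[i].aggregate_prio < q[best].aggregate_prio)
            best = i;
    return best;
}

/* Rollover: a cpu whose queue is empty scans the other queues for
 * work, starting with its nearest neighbor. */
int rollover_pick_cpu(struct runq q[NCPU], int self)
{
    for (int i = 1; i < NCPU; i++) {
        int victim = (self + i) % NCPU;
        if (q[victim].nqueued > 0)
            return victim;
    }
    return -1;                 /* everyone is idle */
}
```

    The extra overhead shows up exactly where the list above says: in the
    wakeup-time scan and in the empty-queue rollover, not in the common
    task-switch path.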

    Overheads in a single queue system:

	* Skip-over.  N cpu's, thus N tasks running.  With N+1 tasks
	  runnable, the cpu doing the switching must skip over the N
	  running tasks to find the one that is not already running.

	* Lock contention
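
    The skip-over cost is easy to see in a sketch (illustrative names,
    singly-linked queue for brevity):

```c
/* Skip-over sketch for a single shared run queue: entries for tasks
 * already running on some cpu must be stepped over by the switcher. */
#include <stdbool.h>
#include <stddef.h>

struct task {
    struct task *next;
    bool running;              /* true while some cpu executes it */
};

/* Return the first not-running task, counting how many running
 * entries had to be skipped - the overhead in question. */
struct task *find_runnable(struct task *head, int *skipped)
{
    *skipped = 0;
    for (struct task *t = head; t != NULL; t = t->next) {
        if (!t->running)
            return t;
        (*skipped)++;
    }
    return NULL;
}
```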

    The added complexity does not seem too bad, but we would have to deal
    with queue rebalancing issues which could get pretty sticky.  Having a
    single queue is a lot easier to manage in regards to implementing 
    various sorts of schedulers.  For example, you can create a recursive
    scheduler hierarchy very easily.  

    On the other hand, I totally forgot about the skip-over problem which 
    you have with a single queue.  The only real way to deal with skip-over
    is to actually dequeue the task currently running (on any given cpu).
    The advantage is that then you don't have to do anything to it if it
    goes to sleep, the disadvantage is greater scheduler overhead because
    a separate pointer into the circular runnable list must be maintained.
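
    A minimal sketch of that alternative (names made up, locking omitted):
    the running task is simply not on the queue, so switchers never see it,
    and a task that goes to sleep requires no queue work at all.

```c
/* Dequeue-while-running sketch: a cpu pops the task it is about to
 * run off the shared run queue, so the queue holds only tasks that
 * are runnable but not running. */
#include <stddef.h>

struct task {
    struct task *next;
};

struct runq {
    struct task *head;         /* runnable, not-running tasks only */
};

/* Switch-in: pop the head; it is guaranteed not to be running. */
struct task *runq_take(struct runq *q)
{
    struct task *t = q->head;
    if (t != NULL)
        q->head = t->next;
    return t;
}

/* Preemption or yield: the task re-enters the queue.  A task that
 * goes to sleep simply never comes back - no queue work needed. */
void runq_put(struct runq *q, struct task *t)
{
    t->next = q->head;
    q->head = t;
}
```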

    Another possible solution to the single-queue case is to not lock the
    queue for sleep/wakeup operations but to instead shared-lock the
    surrounding task(s).  This would allow multiple cpu's to add and remove 
    tasks from different parts of a single queue simultaneously.  I've
    never done this myself -- it's never seemed worthwhile.
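
    For what it's worth, the idea looks roughly like this in C (plain
    mutexes standing in for the shared locks, hypothetical names; the
    always-lock-the-left-neighbor-first ordering is what keeps two cpus
    working on different parts of the queue from deadlocking):

```c
/* Neighbor-locking sketch: to unlink a task, lock only the adjacent
 * entry and the task itself rather than the whole queue.  Assumes t
 * is not the head (t->prev is non-NULL). */
#include <pthread.h>
#include <stddef.h>

struct task {
    struct task *prev, *next;
    pthread_mutex_t lock;
};

void task_remove(struct task *t)
{
    pthread_mutex_lock(&t->prev->lock);  /* left neighbor first */
    pthread_mutex_lock(&t->lock);
    t->prev->next = t->next;
    if (t->next != NULL)
        t->next->prev = t->prev;
    pthread_mutex_unlock(&t->lock);
    pthread_mutex_unlock(&t->prev->lock);
}
```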

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>



