Date:      Fri, 31 Oct 2003 03:16:05 -0800 (PST)
From:      Jeff Roberson <jeff@FreeBSD.org>
To:        src-committers@FreeBSD.org, cvs-src@FreeBSD.org, cvs-all@FreeBSD.org
Subject:   cvs commit: src/sys/kern sched_ule.c
Message-ID:  <200310311116.h9VBG5QS055802@repoman.freebsd.org>

jeff        2003/10/31 03:16:05 PST

  FreeBSD src repository

  Modified files:
    sys/kern             sched_ule.c 
  Log:
   - Add static to local functions and data where it was missing.
   - Add an IPI based mechanism for migrating kses.  This mechanism is
     broken down into several components.  This is intended to reduce cache
     thrashing by eliminating most cases where one cpu touches another's
     run queues.
   - kseq_notify() appends a kse to a lockless singly linked list and
     conditionally sends an IPI to the target processor.  Right now this is
     protected by sched_lock but at some point I'd like to get rid of the
     global lock.  This is why I used something more complicated than a
     standard queue (a sketch of this push/drain pattern follows the log).
   - kseq_assign() processes our list of kses that have been assigned to us
     by other processors.  This simply calls sched_add() for each item on the
     list after clearing the new KEF_ASSIGNED flag.  This flag is used to
     indicate that we have been appended to the assigned queue but not
     added to the run queue yet.
   - In sched_add(), instead of adding a KSE to another processor's queue we
     use kseq_notify() so that we don't touch their queue.  Also in sched_add(),
     if KEF_ASSIGNED is already set, return immediately.  This can happen if
     a thread is removed and re-added so that its priority is recorded properly.
   - In sched_rem(), return immediately if KEF_ASSIGNED is set.  All callers
     immediately re-add the thread simply to adjust priorities, etc.
   - In sched_choose(), if we're running an IDLE task or the per cpu idle thread
     set our cpumask bit in 'kseq_idle' so that other processors may know that
     we are idle.  Before this, make a single pass through the run queues of
     other processors so that we may find work more immediately if it is
     available (see the idle-cpumask sketch after the log).
   - In sched_runnable(), don't scan each processor's run queue; other
     processors will IPI us if they have work for us to do.
   - In sched_add(), if we're adding a thread that can be migrated and we have
     plenty of work to do, try to migrate the thread to an idle kseq.
   - Simplify the logic in sched_prio() and take the KEF_ASSIGNED flag into
     consideration.
   - No longer use kseq_choose() to steal threads; as a result it can lose its
     last argument.
   - Create a new function runq_steal() which operates like runq_choose() but
     skips threads based on some criteria.  Currently it will not steal
     PRI_ITHD threads.  In the future this will be used for CPU binding.
   - Create a kseq_steal() that checks each run queue with runq_steal(); use
     kseq_steal() in the places where we previously used kseq_choose() to steal
     (a sketch of both functions follows the log).
  
  Revision  Changes    Path
  1.70      +222 -78   src/sys/kern/sched_ule.c



Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?200310311116.h9VBG5QS055802>