Date:      Wed, 29 Dec 2010 18:17:36 +0000 (UTC)
From:      Attilio Rao <attilio@FreeBSD.org>
To:        cvs-src-old@freebsd.org
Subject:   cvs commit: src/sys/kern kern_timeout.c
Message-ID:  <201012291817.oBTIHv79045906@repoman.freebsd.org>

attilio     2010-12-29 18:17:36 UTC

  FreeBSD src repository

  Modified files:
    sys/kern             kern_timeout.c 
  Log:
  SVN rev 216805 on 2010-12-29 18:17:36Z by attilio
  
  Fix several callout migration races:
   - Problem1:
     Hypothesis: thread1 is executing callout_reset_on() from within
     its own callout handler, intending to implicitly or explicitly
     migrate the callout.  thread2 is draining the callout.
  
     Thesis:
     * thread1 calls callout_lock() and locks the old callout cpu
     * thread1 performs the checks in the first path of the
       callout_reset_on()
     * thread1 hits this codepiece:
         /*
          * If the lock must migrate we have to check the state again as
          * we can't hold both the new and old locks simultaneously.
          */
         if (c->c_cpu != cpu) {
                 c->c_cpu = cpu;
                 CC_UNLOCK(cc);
                 goto retry;
         }
  
       which means it will drop the lock and 'retry'
     * thread2 calls callout_lock() and locks the new callout cpu.
       thread1 spins on the new lock and makes no progress for the
       moment.
     * thread2 checks that the callout is not pending (as the callout
       is currently running) and that it is not on cc->cc_curr (because
       cc now refers to the new callout cpu while the callout is
       running on the old callout cpu), thus it thinks it is done and
       returns.
     * thread1 now acquires the lock and adds the callout to the new
       callout cpu's queue
  
     This is an obvious race, as callout_stop() falsely reports the
     callout as stopped or, worse, callout_drain() returns while the
     callout is still in use.
   - Solution1:
     Fixing this problem requires, in general, locking both callout
     cpus at once while switching the c_cpu field and avoiding cyclic
     deadlocks between the callout cpu locks.
     The concept of CPUBLOCK is therefore introduced (working more or
     less like blocked_lock does for the thread_lock() function),
     meaning: "in callout_lock(), spin until c->c_cpu is no longer
     CPUBLOCK".  That way the "original" callout cpu, referred to in
     the above mentioned code snippet, will remain blocked until the
     lock handover is complete and the critical path stays covered.
  
   - Problem2:
     Having a callout currently executing on one callout cpu while
     simultaneously pending on another callout cpu (as can happen with
     the current code) breaks, at least, the assumption that
     callout_drain() returns only once the callout can no longer be
     referenced.
   - Solution2:
     Callout migration is deferred if the callout is currently being
     executed.
     The best place to do that is in softclock(), and new members are
     added to the callout cpu structure in order to record that a
     migration is pending.  That is necessary because the callout
     cannot be trusted (i.e. assumed not freed) 100% of the time after
     the execution of the callout handler.
     In the "deferred migration" case, CPUBLOCK prevents the callout
     from being freed, blocking any possible callout_stop() and
     callout_drain() activity until the migration is actually
     performed.
  
   - Problem3:
     There is a further race in callout_drain().
     In order to avoid a race between sleepqueue lock and callout cpu
     spinlock, in _callout_stop_safe(), the callout cpu lock is dropped,
     the sleepqueue lock is acquired and a new callout cpu lookup is
     performed.  Note that the channel used for locking the sleepqueue is
     obtained from the "current" callout cpu (&cc->cc_waiting).
     If the callout migrated in the meanwhile, callout_drain() will end
     up using the wrong wchan for the sleepqueue (the locked one will
     be the old one, while the new one will not actually be locked),
     leading to a lock leak and racy access to the sleepqueue.
   - Solution3:
     It is enough to check whether a migration happened between
     acquiring the sleepqueue lock and the new callout cpu lock and,
     if so, unwind both and try again.
  
  These problems can lead to deadly races even in moderate (4-way) SMP
  environments, causing easy panics or deadlocks.
  The reporter's 24-way machine could easily panic, under a completely
  normal workload, almost daily.
  gianni@ kindly wrote the following proof-of-concept, which can
  panic a FreeBSD machine in less than one hour on smaller SMP systems:
  http://www.freebsd.org/~attilio/callout/test.c
  
  Reported by:    Nicholas Esborn <nick at desert dot net>, DesertNet
  In collaboration with: gianni, pho, Nicholas Esborn
  Reviewed by:    jhb
  MFC after:      1 week (*)
  
  * Usually I would aim for a longer MFC timeout, but I really want
    this in before 8.2-RELEASE, thus re@ accepted a shorter timeout as
    a special case for this patch
  
  Revision  Changes    Path
  1.128     +119 -23   src/sys/kern/kern_timeout.c


