Date:      Sun, 21 Aug 2011 10:52:50 +0000 (UTC)
From:      Attilio Rao <attilio@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r225057 - head/sys/kern
Message-ID: <201108211052.p7LAqoJJ000911@svn.freebsd.org>
Author: attilio
Date: Sun Aug 21 10:52:50 2011
New Revision: 225057

URL: http://svn.freebsd.org/changeset/base/225057

Log:
  callout_cpu_switch() allows preemption while dropping the outgoing
  callout cpu lock (and after it has been dropped).  If the newly
  scheduled thread wants to acquire the old queue lock, it will just
  spin forever.

  Fix this by disabling preemption and interrupts entirely while
  switching locks (fast interrupt handlers may run into the same
  problem as well).

  Reported by:  hrs, Mike Tancsa <mike AT sentex DOT net>,
                Chip Camden <sterling AT camdensoftware DOT com>
  Tested by:    hrs, Mike Tancsa <mike AT sentex DOT net>,
                Chip Camden <sterling AT camdensoftware DOT com>,
                Nicholas Esborn <nick AT desert DOT net>
  Approved by:  re (kib)
  MFC after:    10 days

Modified:
  head/sys/kern/kern_timeout.c

Modified: head/sys/kern/kern_timeout.c
==============================================================================
--- head/sys/kern/kern_timeout.c        Sun Aug 21 10:05:39 2011        (r225056)
+++ head/sys/kern/kern_timeout.c        Sun Aug 21 10:52:50 2011        (r225057)
@@ -269,10 +269,17 @@ callout_cpu_switch(struct callout *c, st
 	MPASS(c != NULL && cc != NULL);
 	CC_LOCK_ASSERT(cc);
 
+	/*
+	 * Avoid interrupts and preemption firing after the callout cpu
+	 * is blocked in order to avoid deadlocks as the new thread
+	 * may be willing to acquire the callout cpu lock.
+	 */
 	c->c_cpu = CPUBLOCK;
+	spinlock_enter();
 	CC_UNLOCK(cc);
 	new_cc = CC_CPU(new_cpu);
 	CC_LOCK(new_cc);
+	spinlock_exit();
 	c->c_cpu = new_cpu;
 	return (new_cc);
 }
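The change is small but the pattern is worth spelling out: the callout is
tagged with the CPUBLOCK sentinel, the outgoing per-CPU lock is dropped, the
destination lock is taken, and only then is c_cpu set to the real CPU.  The
spinlock_enter()/spinlock_exit() pair added here keeps preemption and
interrupts off across the unlock/lock window, so a thread spinning on
CPUBLOCK cannot be scheduled on top of the migrating thread and starve it.
Below is a minimal userspace sketch of the same lock hand-off written against
pthreads; object_lock(), object_switch(), OWNER_BLOCKED and the other names
are made up for illustration, and since userspace has no equivalent of
spinlock_enter(), the sketch only shows the sentinel/retry half of the
scheme, not the preemption-disabling half that this commit adds.

/*
 * Illustrative userspace analogue (not FreeBSD kernel code) of the lock
 * hand-off used by callout_cpu_switch().  An object records which per-"cpu"
 * mutex currently protects it; migrating it means marking it in transit
 * (OWNER_BLOCKED), dropping the old mutex and taking the new one.  Lookups
 * that observe OWNER_BLOCKED must retry.
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPU		4
#define OWNER_BLOCKED	(-1)	/* analogue of CPUBLOCK: object is migrating */

static pthread_mutex_t cpu_lock[NCPU];

struct object {
	_Atomic int	owner;	/* index of the mutex protecting this object */
	int		value;
};

/* Lock the mutex that currently owns 'o', retrying across migrations. */
static int
object_lock(struct object *o)
{
	int owner;

	for (;;) {
		owner = atomic_load(&o->owner);
		if (owner == OWNER_BLOCKED) {
			sched_yield();	/* migration in progress; retry */
			continue;
		}
		pthread_mutex_lock(&cpu_lock[owner]);
		/* Re-check: the object may have migrated while we blocked. */
		if (atomic_load(&o->owner) == owner)
			return (owner);
		pthread_mutex_unlock(&cpu_lock[owner]);
	}
}

/* Move 'o' from its current mutex (already held) to cpu_lock[new_owner]. */
static void
object_switch(struct object *o, int old_owner, int new_owner)
{
	/* Mark the object in transit before dropping the old lock. */
	atomic_store(&o->owner, OWNER_BLOCKED);
	pthread_mutex_unlock(&cpu_lock[old_owner]);
	pthread_mutex_lock(&cpu_lock[new_owner]);
	atomic_store(&o->owner, new_owner);
	/* cpu_lock[new_owner] is left held, mirroring callout_cpu_switch(). */
}

int
main(void)
{
	struct object o = { .owner = 0, .value = 42 };
	int owner;

	for (int i = 0; i < NCPU; i++)
		pthread_mutex_init(&cpu_lock[i], NULL);

	owner = object_lock(&o);
	object_switch(&o, owner, 2);
	printf("object now protected by lock %d\n", atomic_load(&o.owner));
	pthread_mutex_unlock(&cpu_lock[2]);
	return (0);
}

In this sketch sched_yield() in the retry loop stands in for what disabled
preemption guarantees in the kernel: the waiter does not monopolize a CPU
while the migration is in flight.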