Date: Fri, 10 Jul 2015 08:54:13 +0000 (UTC)
From: Konstantin Belousov <kib@FreeBSD.org>
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r285353 - head/sys/kern
Message-ID: <201507100854.t6A8sDOa038572@repo.freebsd.org>
Author: kib
Date: Fri Jul 10 08:54:12 2015
New Revision: 285353

URL: https://svnweb.freebsd.org/changeset/base/285353

Log:
  Change the mb() use in the sched_ule tdq_notify() and sched_idletd()
  to the more C11-ish atomic_thread_fence_seq_cst().  Note that on
  PowerPC, which currently uses lwsync for mb(), the change actually
  fixes the missed store/load barrier intended by r271604 [*].

  Reviewed by:	alc
  Noted by:	alc [*]
  Sponsored by:	The FreeBSD Foundation
  MFC after:	3 weeks

Modified:
  head/sys/kern/sched_ule.c

Modified: head/sys/kern/sched_ule.c
==============================================================================
--- head/sys/kern/sched_ule.c	Fri Jul 10 08:36:22 2015	(r285352)
+++ head/sys/kern/sched_ule.c	Fri Jul 10 08:54:12 2015	(r285353)
@@ -1057,7 +1057,7 @@ tdq_notify(struct tdq *tdq, struct threa
 	 * globally visible before we read tdq_cpu_idle.  Idle thread
 	 * accesses both of them without locks, and the order is important.
 	 */
-	mb();
+	atomic_thread_fence_seq_cst();
 	if (TD_IS_IDLETHREAD(ctd)) {
 		/*
@@ -2667,7 +2667,7 @@ sched_idletd(void *dummy)
 	 * before cpu_idle() read tdq_load.  The order is important
 	 * to avoid race with tdq_notify.
 	 */
-	mb();
+	atomic_thread_fence_seq_cst();
 	cpu_idle(switchcnt * 4 > sched_idlespinthresh);
 	tdq->tdq_cpu_idle = 0;
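The pattern both hunks protect is the classic store/load (Dekker-style) publication race between tdq_notify() and the idle thread: each side stores its own flag, then loads the other side's, and a missed store/load barrier can lose a wakeup. Below is a minimal userland sketch in C11, not the kernel code itself; the names idle_side, notify_side, cpu_idle_flag, and run_queue_load are hypothetical stand-ins for sched_idletd(), tdq_notify(), tdq_cpu_idle, and tdq_load.

```c
#include <stdatomic.h>
#include <assert.h>

/* Hypothetical stand-ins for tdq_cpu_idle and tdq_load. */
static atomic_int cpu_idle_flag;
static atomic_int run_queue_load;

/*
 * Idle side, mirroring sched_idletd(): publish the idle flag, issue a
 * full fence, then re-check the run queue load.  The seq_cst fence
 * orders the store before the later load; on PowerPC, lwsync (the old
 * mb()) does not provide store->load ordering, which is the gap the
 * commit closes.
 */
static int
idle_side(void)
{
	atomic_store_explicit(&cpu_idle_flag, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	return (atomic_load_explicit(&run_queue_load, memory_order_relaxed));
}

/*
 * Notify side, mirroring tdq_notify(): publish the new load, fence,
 * then check whether the target CPU advertised itself idle.
 */
static int
notify_side(void)
{
	atomic_store_explicit(&run_queue_load, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);
	return (atomic_load_explicit(&cpu_idle_flag, memory_order_relaxed));
}
```

With seq_cst fences on both sides, the outcome "idle thread reads load == 0 while the notifier reads idle == 0" is forbidden: at least one side must observe the other's store, so either the idle thread spins on the new work or the notifier sends the IPI, and the wakeup is never lost.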