Date:      Wed, 24 Oct 2012 18:36:41 +0000 (UTC)
From:      Jim Harris <jimharris@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject:   svn commit: r242014 - head/sys/kern
Message-ID:  <201210241836.q9OIafqo073002@svn.freebsd.org>

Author: jimharris
Date: Wed Oct 24 18:36:41 2012
New Revision: 242014
URL: http://svn.freebsd.org/changeset/base/242014

Log:
  Pad tdq_lock to avoid false sharing with tdq_load and tdq_cpu_idle.
  
  This enables CPU searches (which read tdq_load) to operate independently
  of any contention on the spinlock.  Some scheduler-intensive workloads
  running on an 8-core, single-socket Sandy Bridge (SNB) Xeon show
  considerable improvement with this change: 2-3% higher performance and a
  5-6% decrease in CPU utilization.  A generic sketch of the padding idiom
  follows the log message.
  
  Sponsored by:	Intel
  Reviewed by:	jeff
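
For readers outside the scheduler code, here is a minimal standalone sketch
of the same false-sharing idiom.  This is illustrative only: the names are
invented, a plain lock word stands in for struct mtx, and a 64-byte cache
line is assumed, matching the constant hardcoded in the diff below.

    #include <stddef.h>

    #define CACHE_LINE_SIZE 64      /* assumed line size; matches the diff */

    /*
     * A lock word padded out to a full cache line so that CPUs spinning
     * on the lock do not invalidate the line holding the load counter.
     */
    struct padded_queue {
            volatile long q_lock;   /* stand-in for struct mtx */
            char          q_pad[CACHE_LINE_SIZE - sizeof(long)];
            volatile int  q_load;   /* read lock-free by CPU searches */
    };

    /* q_load must land beyond the first line (compile with -std=c11). */
    _Static_assert(offsetof(struct padded_queue, q_load) >= CACHE_LINE_SIZE,
        "q_load shares a cache line with q_lock");

Without the pad, q_lock and q_load typically share one line, so every lock
acquisition on one CPU forces a coherence miss on every other CPU polling
q_load; with the pad, readers of q_load miss only when the load itself
changes.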

Modified:
  head/sys/kern/sched_ule.c

Modified: head/sys/kern/sched_ule.c
==============================================================================
--- head/sys/kern/sched_ule.c	Wed Oct 24 18:33:44 2012	(r242013)
+++ head/sys/kern/sched_ule.c	Wed Oct 24 18:36:41 2012	(r242014)
@@ -223,8 +223,13 @@ static int sched_idlespinthresh = -1;
  * locking in sched_pickcpu();
  */
 struct tdq {
-	/* Ordered to improve efficiency of cpu_search() and switch(). */
+	/* 
+	 * Ordered to improve efficiency of cpu_search() and switch().
+	 * tdq_lock is padded to avoid false sharing with tdq_load and
+	 * tdq_cpu_idle.
+	 */
 	struct mtx	tdq_lock;		/* run queue lock. */
+	char		pad[64 - sizeof(struct mtx)];
 	struct cpu_group *tdq_cg;		/* Pointer to cpu topology. */
 	volatile int	tdq_load;		/* Aggregate load. */
 	volatile int	tdq_cpu_idle;		/* cpu_idle() is active. */
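
A caveat worth noting (not part of the commit): the pad only separates the
members within struct tdq, so tdq_lock occupies a line by itself only if
each tdq also starts on a cache-line boundary.  The hardcoded 64 likewise
assumes sizeof(struct mtx) <= 64 on every platform; if that ever fails, the
pad's array size goes negative and the build breaks.  A hedged sketch of
documenting those assumptions at build time with FreeBSD's CTASSERT()
compile-time assert (illustrative only, not in the commit):

	/* tdq_lock plus its pad must fill exactly one 64-byte line. */
	CTASSERT(sizeof(struct mtx) <= 64);
	CTASSERT(offsetof(struct tdq, tdq_cg) == 64);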


