From: John Baldwin <jhb@freebsd.org>
To: Jim Harris
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org
Subject: Re: svn commit: r242014 - head/sys/kern
Date: Wed, 24 Oct 2012 14:43:25 -0400
In-Reply-To: <201210241836.q9OIafqo073002@svn.freebsd.org>
Message-Id: <201210241443.25988.jhb@freebsd.org>

On Wednesday, October 24, 2012 2:36:41 pm Jim Harris wrote:
> Author: jimharris
> Date: Wed Oct 24 18:36:41 2012
> New Revision: 242014
> URL: http://svn.freebsd.org/changeset/base/242014
>
> Log:
>   Pad tdq_lock to avoid false sharing with tdq_load and tdq_cpu_idle.
>
>   This enables CPU searches (which read tdq_load) to operate independently
>   of any contention on the spinlock. Some scheduler-intensive workloads
>   running on an 8C single-socket SNB Xeon show considerable improvement with
>   this change (2-3% perf improvement, 5-6% decrease in CPU util).
>
>   Sponsored by: Intel
>   Reviewed by:  jeff
>
> Modified:
>   head/sys/kern/sched_ule.c
>
> Modified: head/sys/kern/sched_ule.c
> ==============================================================================
> --- head/sys/kern/sched_ule.c  Wed Oct 24 18:33:44 2012  (r242013)
> +++ head/sys/kern/sched_ule.c  Wed Oct 24 18:36:41 2012  (r242014)
> @@ -223,8 +223,13 @@ static int sched_idlespinthresh = -1;
>   * locking in sched_pickcpu();
>   */
>  struct tdq {
> -	/* Ordered to improve efficiency of cpu_search() and switch(). */
> +	/*
> +	 * Ordered to improve efficiency of cpu_search() and switch().
> +	 * tdq_lock is padded to avoid false sharing with tdq_load and
> +	 * tdq_cpu_idle.
> +	 */
>  	struct mtx	tdq_lock;	/* run queue lock. */
> +	char		pad[64 - sizeof(struct mtx)];

Can this use 'tdq_lock __aligned(CACHE_LINE_SIZE)' instead?

-- 
John Baldwin
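
[Editor's illustration] A minimal userland sketch of the two layouts under
discussion, assuming a 64-byte cache line; the struct and field types below
are reduced stand-ins for the kernel's tdq and struct mtx, and
__attribute__((aligned(...))) is the compiler attribute that FreeBSD's
__aligned() macro from <sys/cdefs.h> expands to:

    #include <stddef.h>

    #define CACHE_LINE_SIZE 64      /* assumed; the kernel gets this from machine/param.h */

    struct mtx { volatile int mtx_lock; };  /* reduced stand-in for the kernel's struct mtx */

    /*
     * Layout committed in r242014: a hand-sized pad array after the lock
     * pushes the fields read locklessly by CPU searches onto their own
     * cache line, so writes to tdq_lock no longer invalidate them.
     */
    struct tdq_pad {
            struct mtx      tdq_lock;
            char            pad[CACHE_LINE_SIZE - sizeof(struct mtx)];
            volatile int    tdq_load;       /* read without the lock by cpu_search() */
            volatile int    tdq_cpu_idle;
    };

    _Static_assert(offsetof(struct tdq_pad, tdq_load) >= CACHE_LINE_SIZE,
        "tdq_load must not share tdq_lock's cache line");

    /*
     * Attribute form.  One subtlety: aligning tdq_lock itself only fixes
     * where the lock starts; it does not pad out the remainder of the
     * line, so this sketch instead aligns the first lockless-read field,
     * which yields the same separation without a hand-computed pad size.
     */
    struct tdq_aligned {
            struct mtx      tdq_lock;
            volatile int    tdq_load __attribute__((aligned(CACHE_LINE_SIZE)));
            volatile int    tdq_cpu_idle;
    };

    _Static_assert(offsetof(struct tdq_aligned, tdq_load) % CACHE_LINE_SIZE == 0,
        "tdq_load starts on a cache-line boundary");

Both forms compile with any C11 compiler. The trade-off jhb's question points
at: the attribute form lets the compiler compute the padding and track the
lock's size automatically, while the explicit array must be kept in sync by
hand and fails to compile if struct mtx ever grows past the line size.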