From owner-svn-src-all@FreeBSD.ORG Sat Oct  6 13:01:08 2012
Message-Id: <201210061301.q96D18DE067458@svn.freebsd.org>
From: Alexander Motin <mav@FreeBSD.org>
Date: Sat, 6 Oct 2012 13:01:08 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org, svn-src-stable-9@freebsd.org
Subject: svn commit: r241250 - stable/9/sys/kern

Author: mav
Date: Sat Oct  6 13:01:08 2012
New Revision: 241250
URL: http://svn.freebsd.org/changeset/base/241250

Log:
  MFC r239194:
  Allow idle threads to steal second threads from other cores on systems
  with 8 or more cores to improve utilization.  None of my tests on a
  2xXeon (2x6x2) system showed any slowdown from the mentioned "excess
  thrashing".  At the same time, in a pbzip2 test with more threads than
  CPUs I see up to a 10% speedup with SMT disabled and up to 5% with SMT
  enabled.  While thinking about thrashing I tried limiting the stealing
  to within the same last-level cache, but got only worse results.  The
  present code in any case prefers to steal threads from topologically
  closer cores.

Modified:
  stable/9/sys/kern/sched_ule.c

Directory Properties:
  stable/9/sys/   (props changed)

Modified: stable/9/sys/kern/sched_ule.c
==============================================================================
--- stable/9/sys/kern/sched_ule.c	Sat Oct  6 12:58:56 2012	(r241249)
+++ stable/9/sys/kern/sched_ule.c	Sat Oct  6 13:01:08 2012	(r241250)
@@ -1404,12 +1404,6 @@ sched_initticks(void *dummy)
 	 * what realstathz is.
 	 */
 	balance_interval = realstathz;
-	/*
-	 * Set steal thresh to roughly log2(mp_ncpu) but no greater than 4.
-	 * This prevents excess thrashing on large machines and excess idle
-	 * on smaller machines.
-	 */
-	steal_thresh = min(fls(mp_ncpus) - 1, 3);
 	affinity = SCHED_AFFINITY_DEFAULT;
 #endif
 	if (sched_idlespinthresh < 0)
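
For illustration only: the deleted lines scaled steal_thresh as roughly
log2(mp_ncpus), clamped to 3, so any machine with 8 or more CPUs hit the
clamp and an idle CPU would not steal from a core running only two threads.
Below is a minimal stand-alone user-space sketch of that arithmetic, not
kernel code; my_fls() stands in for the kernel's fls(9), and the "new"
column assumes steal_thresh simply keeps a compile-time default of 2 once
this initialization is gone, which is not shown in this diff.

/*
 * Sketch of the removed heuristic: old steal_thresh vs. an assumed
 * post-change default of 2, for power-of-two CPU counts.
 */
#include <stdio.h>

/* Find last set bit (1-based index of highest set bit), like fls(9). */
static int
my_fls(int mask)
{
	int bit;

	for (bit = 0; mask != 0; bit++)
		mask >>= 1;
	return (bit);
}

static int
imin(int a, int b)
{

	return (a < b ? a : b);
}

int
main(void)
{
	int ncpus, old_thresh;

	printf("%6s  %4s  %s\n", "ncpus", "old", "new (assumed)");
	for (ncpus = 1; ncpus <= 64; ncpus *= 2) {
		/* Old code: roughly log2(ncpus), but never above 3. */
		old_thresh = imin(my_fls(ncpus) - 1, 3);
		printf("%6d  %4d  %d\n", ncpus, old_thresh, 2);
	}
	return (0);
}

Running it shows the old threshold climbing 0, 1, 2, 3 and then sticking at
3 from 8 CPUs upward, while the assumed post-change value stays at 2, which
is what lets an idle CPU on a larger machine pull a second thread off a
loaded core as described in the log above.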