From: Andriy Gapon <avg@FreeBSD.org>
Date: Mon, 01 Aug 2011 15:53:31 +0300
To: freebsd-current@FreeBSD.org, Jeff Roberson
Cc: Steve Kargl
Subject: Re: Heavy I/O blocks FreeBSD box for several seconds

I've looked at this issue some more and it appears to me that sched_pickcpu() is never called for CPU-bound threads that use up their slices completely and never do any voluntary switches. As a result, such threads stay on their initial CPU until the rebalance mechanism kicks in. There also seem to be some peculiarities that adversely affect the rebalance algorithm and potentially other CPU selection/affinity code.

1. For single-socket SMP systems our current sched topology code builds a topology with a redundant layer. Here is a real example for a single-socket dual-core Core2 Duo system:

  level 1, cache level 0 ("shared nothing"), CPUs 0, 1
    level 2, cache level 2 ("L2"), CPUs 0, 1

As we can see, both layers contain exactly the same set of CPUs. I will try to demonstrate below that having such a topology is not only redundant, but also harmful.

2. The algorithm that searches for the highest/lowest loaded CPU within a CPU group is biased towards CPUs with lower IDs. E.g. if a CPU group contains CPUs with IDs 0..3 and all of them have equal load, then CPU #0 is always selected. This is an obvious consequence of iterating from lower CPU IDs to higher ones and selecting the lowest loaded CPU with a strict "this CPU load" < "lowest load so far" comparison; see the sketch below.
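A minimal userland sketch of that selection loop, purely for illustration (cpu_load() and NCPU are made-up stand-ins here, not the actual sched_ule.c interfaces):

#include <limits.h>

#define NCPU    4

extern int cpu_load(int);       /* stand-in: run-queue load of a CPU */

static int
pick_lowest_loaded(void)
{
        int cpu, best, bestload, load;

        best = -1;
        bestload = INT_MAX;
        for (cpu = 0; cpu < NCPU; cpu++) {
                load = cpu_load(cpu);
                /*
                 * The strict '<' means that a later CPU whose load
                 * equals the current minimum never wins, so ties
                 * always resolve to the lowest CPU ID.
                 */
                if (load < bestload) {
                        bestload = load;
                        best = cpu;
                }
        }
        return (best);
}

With all loads equal, this always returns CPU 0.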
It's also important to note that the algorithm takes into account the structure of the group being searched. E.g. when searching for the lowest loaded CPU in a group that has two sub-groups, we actually search for the lowest loaded CPU in the lowest loaded sub-group, which may not yield the CPU with the lowest load overall. Say we have 4 CPUs in a group, sub-divided into two sub-groups with two CPUs each, and let the CPU loads be:

  5 1 | 2 3

where each number is the load of a CPU, its position corresponds to the CPU ID and the pipe symbol marks the boundary between the sub-groups. The algorithm selects CPU 2 with its load of 2 as the lowest loaded, because its sub-group has a combined load of 2+3=5, which is less than the combined load of the other sub-group (5+1=6), even though the other sub-group contains the CPU with the lowest overall load of 1.

Now let's assume the following configuration: 1 package x 2 cores x 2 HTT/SMT. Let the CPU IDs be 0..3 and let there be 5 CPU-bound threads. Our x86 topology code will create the following scheduling topology:

  level 1, cache level 0 ("shared nothing"), CPUs 0..3
    level 2, cache level 2 ("L2"), CPUs 0..3
      level 3, cache level 1 ("L1"), CPUs 0, 1
      level 3, cache level 1 ("L1"), CPUs 2, 3

Let's assume an initial load distribution of:

  1 1 | 1 2

where each number is the load of a CPU, its position corresponds to the CPU ID from 0 to 3 and the pipe symbol marks the boundary between the L1 sharing groups.

The rebalancing algorithm first rebalances the top-level group by moving some threads from the highest loaded CPU to the lowest loaded CPU. The procedure is then repeated for each of the child groups, then their child groups, etc, until the leaf groups are rebalanced. So, when the rebalancing is done for the top shared-nothing group we get:

  2 1 | 1 1

because CPU 0 is found to have the lowest load (as described above) and CPU 3 obviously has the highest load. Then the rebalancing is done in the L2 group, which contains all the same CPUs, and we get:

  1 1 | 2 1

as, described above, the "extra" thread is moved from the higher loaded sub-group to the lower loaded sub-group. Then the rebalancing is done in each of the L1 groups. In the first group the load is already equal between the CPUs, and in the second group the load is moved from one CPU to the other:

  1 1 | 1 2

So at the end of the rebalancing we get exactly the same load distribution as at the start. The extra load is always on the same CPU.

I believe that this might describe the problem that Steve observes, but I am not quite sure of that without additional objective data. The above reasoning demonstrates the load distribution only from the point of view of the per-CPU run-queues; it doesn't demonstrate how much CPU time each individual _thread_ gets as a result.

What would we have without the extra shared-nothing level? Let's start with the same initial distribution:

  1 1 | 1 2

After L2 rebalancing:

  2 1 | 1 1

After L1 rebalancing:

  1 2 | 1 1

What happens at the next rebalancing run?

After L2 rebalancing:

  1 1 | 2 1

After L1 rebalancing:

  1 1 | 1 2

So in this case the extra load oscillates between CPUs 1 and 3. Better, but still not perfect.

Perhaps things could be improved if we "tossed a coin" when comparing equal loads. Or maybe some load history could be taken into account. But we definitely should not have CPU groups that contain exactly the same sets of CPUs.

-- 
Andriy Gapon
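P.S. A minimal userland sketch of the "coin toss" idea, reusing the made-up cpu_load()/NCPU stand-ins from above (an actual change would of course go into the sched_ule.c search code and use a kernel randomness source rather than libc random()):

#include <limits.h>
#include <stdlib.h>

#define NCPU    4

extern int cpu_load(int);       /* stand-in: run-queue load of a CPU */

static int
pick_lowest_loaded_fair(void)
{
        int cpu, best, bestload, load, nties;

        best = -1;
        bestload = INT_MAX;
        nties = 0;
        for (cpu = 0; cpu < NCPU; cpu++) {
                load = cpu_load(cpu);
                if (load < bestload) {
                        bestload = load;
                        best = cpu;
                        nties = 1;
                } else if (load == bestload) {
                        /*
                         * Reservoir-style tie-break: each of the n
                         * equally loaded CPUs survives with
                         * probability 1/n, removing the low-ID bias.
                         */
                        if (random() % ++nties == 0)
                                best = cpu;
                }
        }
        return (best);
}

With this, equally loaded CPUs are picked uniformly at random, so ties no longer pin the extra load to the lowest CPU ID.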