Date: Sat, 3 Mar 2018 14:48:06 +1100 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Mateusz Guzik
cc: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r330294 - head/sys/kern
In-Reply-To: <201803022126.w22LQRKH034839@repo.freebsd.org>
Message-ID: <20180303135130.L1442@besplex.bde.org>
References: <201803022126.w22LQRKH034839@repo.freebsd.org>

On Fri, 2 Mar 2018, Mateusz Guzik wrote:

> Log:
>   sx: fix adaptive spinning broken in r327397
>
>   The condition was flipped.
>
>   In particular heavy multithreaded kernel builds on zfs started suffering
>   due to nested sx locks.
>
>   For instance make -s -j 128 buildkernel:
>
>   before: 3326.67s user 1269.62s system 6981% cpu 1:05.84 total
>   after:  3365.55s user 911.27s system 6871% cpu 1:02.24 total

What are you doing to make builds so slow?  Kernel builds here take about
5 seconds, and a full world takes about 120 seconds.  This is with old
(~5.2) worlds.  (I don't like newer worlds since they are so much slower.)

Yesterday I did more than 200 makeworlds looking for a lost wakeup for
idle threads.  When makeworld has a sys time of 50-100 seconds, it is hard
to see the effect of optimizations in the noise.  I recently rewrote the
statistics reporting to show real, user, sys, interrupt, total (= u+s+i)
and idle times for the whole system, based on kern.cp_time.  This is
inaccurate, but it is much better than rusage times, which can't show
times for other threads, or even interrupt and idle times for the threads
in the build's tree.  Results look like:

128.57r 566.24u 77.10s 0.83i 644.17t 384.36I 408X i386-12-cold-102-40
168.53r 559.01u 76.95s 1.07i 637.03t 711.19I 735X i386-12-cold-102-40

Here I is the idle time according to kern.cp_time and X is the idle time
according to td_runtime.  The latter is much more accurate and shows an
error of about 24 seconds = 24 * (stathz = 127) = 3048 ticks between the
u+s+i ticks and the I ticks (u+s+i is too high by 24 seconds, or 3 seconds
per CPU, and I is too low by 24 seconds).  The second run shows an
enormous variance in the idle time -- 327 seconds, or 41 seconds per CPU.
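A minimal sketch of this kind of kern.cp_time-based measurement, assuming
only FreeBSD's kern.cp_time and kern.clockrate sysctls; the sleep() is a
stand-in for the workload, and the real time, t total and td_runtime-based
X column above are not reproduced here:

/*
 * Sketch: sample kern.cp_time around a workload and print the user, sys,
 * interrupt and idle deltas in seconds.  cp_time counts statclock ticks,
 * so the deltas are scaled by stathz from kern.clockrate.
 */
#include <sys/types.h>
#include <sys/resource.h>	/* CPUSTATES, CP_USER, ... */
#include <sys/sysctl.h>
#include <sys/time.h>		/* struct clockinfo */

#include <err.h>
#include <stdio.h>
#include <unistd.h>

static void
sample(long ticks[CPUSTATES])
{
	size_t len = sizeof(long) * CPUSTATES;

	if (sysctlbyname("kern.cp_time", ticks, &len, NULL, 0) != 0)
		err(1, "kern.cp_time");
}

int
main(void)
{
	struct clockinfo ci;
	size_t len = sizeof(ci);
	long t0[CPUSTATES], t1[CPUSTATES];
	double hz;

	if (sysctlbyname("kern.clockrate", &ci, &len, NULL, 0) != 0)
		err(1, "kern.clockrate");
	hz = ci.stathz;

	sample(t0);
	sleep(10);		/* stand-in: run the workload of interest here */
	sample(t1);

	/* "user" here includes nice; the totals cover all CPUs combined */
	printf("%.2fu %.2fs %.2fi %.2fI\n",
	    (t1[CP_USER] + t1[CP_NICE] - t0[CP_USER] - t0[CP_NICE]) / hz,
	    (t1[CP_SYS] - t0[CP_SYS]) / hz,
	    (t1[CP_INTR] - t0[CP_INTR]) / hz,
	    (t1[CP_IDLE] - t0[CP_IDLE]) / hz);
	return (0);
}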
This pessimization was mostly the result of enabling PREEMPTION and
IPI_PREEMPTION, which was supposed to decrease the variance.  Normally I
turn these off to reduce context switches, and this somehow reduces the
variance to more like 5 seconds per CPU.  Context switches turned out to
be mostly for waking up idle threads, and that should not be affected by
PREEMPTION, so my optimization of turning off PREEMPTION makes little
difference.  That it helps on average and for the best case was
determined by running many more than a few hundred makeworlds.

Increasing kern.sched.quantum makes similarly little difference.  I don't
normally use that since it was a pessimization on average.  Now I think I
understand why: it saves a little u+s+i time by reducing context switches,
but increases I time by reducing the compensation for the missing wakeups.

These benchmarks are with 4BSD.  ULE is slightly slower and has lower
variance, but its variance is still high enough to indicate missed
wakeups.

I can reduce the variance using spinloops, so that idle threads never
sleep.  This only works with HTT turned off (the above is with 4 cores
times 2 threads).  With HTT on, spinloops in idle threads slow down
running threads by about 50%.  ULE is aware of this and avoids internal
spinloops for HTT cases.  4BSD and lower levels are not aware of this,
but it takes the misconfigurations machdep.idle_mwait=0 and
machdep.idle=spin to reach the spinloop case (a small sketch for reading
these knobs back follows at the end).

Some times for ULE:

127.06r 565.81u 79.40s 0.65i 645.86t 370.58I 394X i386-12-cold-102-40
125.83r 566.72u 79.10s 0.61i 646.43t 360.24I 382X i386-12-warm-102-40
130.33r 565.56u 79.41s 0.65i 645.62t 397.02I 417X i386-12-cold-102-40
118.70r 566.55u 80.46s 0.54i 647.56t 302.05I 325X i386-12-warm-102-40
127.09r 569.36u 77.07s 0.98i 647.41t 369.28I 390X i386-12-cold-102-40
115.80r 570.67u 79.17s 0.63i 650.46t 275.90I 297X i386-12-warm-102-40

115.80r is the best time seen recently.  The u+s+i = t times average a
couple of seconds higher for ULE.  This is not quite in the noise -- 2
seconds across 8 CPUs is 0.25 seconds of real time.  The noise in the I
time dominates the signal in the u+s+i = t times, so it isn't clear
whether ULE is doing anything better or worse.  If the u+s+i times can be
trusted, then ULE's affinity optimizations are apparently negative.

I have some optimizations in 4BSD too, and these might be negative too.
One seemed to work, but only on 1 system, and recent benchmarks indicated
that this is just because it accidentally reduced lost wakeups.  Its
improvements for u+s+i are in the noise.

Bruce
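A minimal sketch for reading back the idle-policy and quantum knobs
mentioned above (machdep.idle, machdep.idle_mwait, kern.sched.quantum),
for example to record them next to each benchmark result.  These can also
be set at run time with sysctl(8); PREEMPTION and IPI_PREEMPTION, by
contrast, are kernel config options and need a rebuilt kernel.

/*
 * Sketch: print the current idle-policy and scheduler quantum knobs.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>

int
main(void)
{
	char idle[32];
	int idle_mwait, quantum;
	size_t len;

	len = sizeof(idle);
	if (sysctlbyname("machdep.idle", idle, &len, NULL, 0) != 0)
		err(1, "machdep.idle");
	len = sizeof(idle_mwait);
	if (sysctlbyname("machdep.idle_mwait", &idle_mwait, &len, NULL, 0) != 0)
		err(1, "machdep.idle_mwait");
	len = sizeof(quantum);
	if (sysctlbyname("kern.sched.quantum", &quantum, &len, NULL, 0) != 0)
		err(1, "kern.sched.quantum");
	printf("machdep.idle=%s machdep.idle_mwait=%d kern.sched.quantum=%d\n",
	    idle, idle_mwait, quantum);
	return (0);
}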