Date: Wed, 27 Aug 2014 02:21:32 +0400
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Adrian Chadd
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org
Subject: Re: svn commit: r265792 - head/sys/kern
Message-ID: <20140826222131.GG2075@zxy.spb.ru>

On Tue, Aug 26, 2014 at 03:05:41PM -0700, Adrian Chadd wrote:

> This to me reads like "we need to fix pmc's locking so it's not so
> terrible on multi-multi-core machines." :)

I may be wrong, but I don't see it in the pmc interrupt path. Maybe that
needs to be fixed too?

> On 26 August 2014 11:54, Slawa Olhovchenkov wrote:
> > On Tue, May 20, 2014 at 09:04:25AM -0700, Adrian Chadd wrote:
> >
> >> On 20 May 2014 08:41, Slawa Olhovchenkov wrote:
> >>
> >> >> (But if you try it on 10.0 and it changes things, by all means let me know.)
> >> >
> >> > I tried it on 10.0, but I am not sure there is a significant improvement
> >> > (maybe 10%).
> >> >
> >> > On the current CPU (E5-2650 v2 @ 2.60GHz) hwpmc does not work (1. after
> >> > collecting some data, `pmcstat -R sample.out -G out.txt` does not decode
> >> > anything; 2. kldunload of hwpmc crashes the kernel), so I cannot collect
> >> > detailed profile information.
> >>
> >> Yup. I'm starting to get really ticked off at how pmc logging on
> >> multi-core devices just "stops" after a while. I'll talk with other
> >> pmc people and see if we can figure out what the heck is going on. :(
> >
> > Now I can test your work on a CPU with working pmc.
> > Current traffic is 16.8 Gbit/s.
> > CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz (2000.04-MHz K8-class CPU)
> >
> > last pid: 22677;  load averages:  9.12,  9.07,  8.90    up 0+04:06:06  22:52:57
> > 47 processes:  3 running, 44 sleeping
> > CPU 0:  11.8% user,  0.0% nice, 43.1% system,  1.6% interrupt, 43.5% idle
> > CPU 1:  11.4% user,  0.0% nice, 51.8% system,  0.0% interrupt, 36.9% idle
> > CPU 2:  10.6% user,  0.0% nice, 46.3% system,  0.8% interrupt, 42.4% idle
> > CPU 3:  11.8% user,  0.0% nice, 45.1% system,  0.4% interrupt, 42.7% idle
> > CPU 4:  13.7% user,  0.0% nice, 43.1% system,  0.0% interrupt, 43.1% idle
> > CPU 5:  14.5% user,  0.0% nice, 45.9% system,  0.4% interrupt, 39.2% idle
> > CPU 6:   0.0% user,  0.0% nice,  5.5% system, 68.6% interrupt, 25.9% idle
> > CPU 7:   0.0% user,  0.0% nice,  4.3% system, 70.2% interrupt, 25.5% idle
> > CPU 8:   0.0% user,  0.0% nice,  2.7% system, 69.4% interrupt, 27.8% idle
> > CPU 9:   0.0% user,  0.0% nice,  4.7% system, 67.1% interrupt, 28.2% idle
> > CPU 10:  0.0% user,  0.0% nice,  3.1% system, 76.9% interrupt, 20.0% idle
> > CPU 11:  0.0% user,  0.0% nice,  5.1% system, 58.8% interrupt, 36.1% idle
> > Mem: 322M Active, 15G Inact, 96G Wired, 956K Cache, 13G Free
> > ARC: 90G Total, 84G MFU, 5690M MRU, 45M Anon, 394M Header, 98M Other
> > Swap:
> >
> >
> > @ CPU_CLK_UNHALTED_CORE [241440 samples]
> >
> > 10.59%  [25561]  _mtx_lock_spin_cookie @ /boot/kernel/kernel
> >  94.25%  [24092]  pmclog_reserve @ /boot/kernel/hwpmc.ko
> >   100.0%  [24092]  pmclog_process_callchain
> >    100.0%  [24092]  pmc_process_samples
> >     100.0%  [24092]  pmc_hook_handler
> >      100.0%  [24092]  hardclock_cnt @ /boot/kernel/kernel
> >       100.0%  [24092]  handleevents
> >        99.62%  [24001]  timercb
> >         100.0%  [24001]  lapic_handle_timer
> >        00.38%  [91]  cpu_activeclock
> >         100.0%  [91]  cpu_idle
> >          100.0%  [91]  sched_idletd
> >           100.0%  [91]  fork_exit
> >  03.04%  [777]  callout_lock
> >   91.63%  [712]  callout_reset_sbt_on
> >    98.60%  [702]  tcp_timer_activate
> >     94.87%  [666]  tcp_do_segment
> >      100.0%  [666]  tcp_input
> >       100.0%  [666]  ip_input
> >        100.0%  [666]  netisr_dispatch_src
> >         100.0%  [666]  ether_demux
> >          100.0%  [666]  ether_nh_input
> >           100.0%  [666]  netisr_dispatch_src
> >            98.05%  [653]  ixgbe_rxeof @ /boot/kernel/if_ixgbe.ko
> >             87.44%  [571]  ixgbe_msix_que
> >              100.0%  [571]  intr_event_execute_handlers @ /boot/kernel/kernel
> >               100.0%  [571]  ithread_loop
> >                100.0%  [571]  fork_exit
> >             12.56%  [82]  ixgbe_handle_que @ /boot/kernel/if_ixgbe.ko
> >              100.0%  [82]  taskqueue_run_locked @ /boot/kernel/kernel
> >               100.0%  [82]  taskqueue_thread_loop
> >                100.0%  [82]  fork_exit
> >            01.95%  [13]  tcp_lro_flush
> >             92.31%  [12]  ixgbe_rxeof @ /boot/kernel/if_ixgbe.ko
> >              100.0%  [12]  ixgbe_msix_que
> >               100.0%  [12]  intr_event_execute_handlers @ /boot/kernel/kernel
> >                100.0%  [12]  ithread_loop
> >             07.69%  [1]  tcp_lro_rx
> >              100.0%  [1]  ixgbe_rxeof @ /boot/kernel/if_ixgbe.ko
> >               100.0%  [1]  ixgbe_msix_que
> >                100.0%  [1]  intr_event_execute_handlers @ /boot/kernel/kernel
> >     05.13%  [36]  tcp_output
> >      100.0%  [36]  tcp_do_segment
> >       100.0%  [36]  tcp_input
> >        100.0%  [36]  ip_input
> >         100.0%  [36]  netisr_dispatch_src
> >          100.0%  [36]  ether_demux
> >           100.0%  [36]  ether_nh_input
> >            100.0%  [36]  netisr_dispatch_src
> >             100.0%  [36]  ixgbe_rxeof @ /boot/kernel/if_ixgbe.ko
> >              91.67%  [33]  ixgbe_msix_que
> >               100.0%  [33]  intr_event_execute_handlers @ /boot/kernel/kernel
> >                100.0%  [33]  ithread_loop
> >              08.33%  [3]  ixgbe_handle_que @ /boot/kernel/if_ixgbe.ko
> >               100.0%  [3]  taskqueue_run_locked @ /boot/kernel/kernel
> >                100.0%  [3]  taskqueue_thread_loop
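[Editor's note] The callchain above shows roughly 10% of all sampled cycles spinning in _mtx_lock_spin_cookie, almost entirely reached from pmclog_reserve: every sampling interrupt on every core reserves log space under the same lock, which is the scaling problem Adrian alludes to. The sketch below is not hwpmc code; it is a minimal userspace illustration (hypothetical names such as shared_reserve, percpu_reserve, struct percpu_log) of why a single reservation lock in a per-sample path serializes all CPUs, whereas per-CPU buffers remove the lock from the hot path entirely.

/*
 * Toy comparison: one shared log buffer guarded by a spin lock vs.
 * a private per-thread buffer. Build with: cc -O2 -pthread sketch.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NTHREADS  12          /* mirrors the 12 CPUs in the top(1) output */
#define NSAMPLES  1000000     /* "sampling interrupts" per thread */
#define RECSIZE   64          /* bytes reserved per sample record */
#define BUFSIZE   (1u << 20)

/* Shared-buffer variant: every reservation contends on one lock. */
static pthread_spinlock_t shared_lock;
static uint32_t shared_off;
static char shared_buf[BUFSIZE];

static void *shared_reserve(void *arg)
{
	(void)arg;
	for (int i = 0; i < NSAMPLES; i++) {
		pthread_spin_lock(&shared_lock);
		uint32_t off = shared_off;
		shared_off = (off + RECSIZE) % BUFSIZE;  /* toy ring buffer */
		shared_buf[off] = 1;                     /* stand-in for the record */
		pthread_spin_unlock(&shared_lock);
	}
	return NULL;
}

/* Per-CPU variant: each thread owns its buffer, no lock in the hot path. */
struct percpu_log {
	uint32_t off;
	char     buf[BUFSIZE];
};

static void *percpu_reserve(void *arg)
{
	struct percpu_log *log = arg;
	for (int i = 0; i < NSAMPLES; i++) {
		uint32_t off = log->off;
		log->off = (off + RECSIZE) % BUFSIZE;
		log->buf[off] = 1;
	}
	return NULL;
}

static double run(void *(*fn)(void *), int percpu)
{
	pthread_t tid[NTHREADS];
	struct percpu_log *logs = calloc(NTHREADS, sizeof(*logs));
	struct timespec a, b;

	clock_gettime(CLOCK_MONOTONIC, &a);
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, fn, percpu ? &logs[i] : NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &b);
	free(logs);
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	pthread_spin_init(&shared_lock, PTHREAD_PROCESS_PRIVATE);
	printf("shared lock: %.2fs\n", run(shared_reserve, 0));
	printf("per-cpu:     %.2fs\n", run(percpu_reserve, 1));
	return 0;
}

The per-CPU variant trades the per-sample lock for a buffer hand-off step when a buffer fills and must be drained; any real fix in hwpmc would have to do that hand-off without reintroducing a global serialization point or reordering log records.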