From owner-freebsd-stable@freebsd.org Sun Sep 18 16:43:04 2016
Date: Sun, 18 Sep 2016 19:42:57 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: hiren panchasara
Cc: jch@FreeBSD.org, Konstantin Belousov, freebsd-stable@FreeBSD.org
Subject: Re: 11.0 stuck on high network load
Message-ID: <20160918164257.GF2960@zxy.spb.ru>
In-Reply-To: <20160916191155.GM9397@strugglingcoder.info>
List-Id: Production branch of FreeBSD source code

On Fri, Sep 16, 2016 at 12:11:55PM -0700,
hiren panchasara wrote:

> + jch@
>
> On 09/16/16 at 10:03P, Slawa Olhovchenkov wrote:
> > On Fri, Sep 16, 2016 at 11:30:53AM -0700, hiren panchasara wrote:
> > > On 09/16/16 at 09:18P, Slawa Olhovchenkov wrote:
> > > > On Thu, Sep 15, 2016 at 12:06:33PM +0300, Slawa Olhovchenkov wrote:
> > > > > On Thu, Sep 15, 2016 at 11:59:38AM +0300, Konstantin Belousov wrote:
> > > > > > On Thu, Sep 15, 2016 at 12:35:04AM +0300, Slawa Olhovchenkov wrote:
> > > > > > > On Sun, Sep 04, 2016 at 06:46:12PM -0700, hiren panchasara wrote:
> > > > > > > > On 09/05/16 at 12:57P, Slawa Olhovchenkov wrote:
> > > > > > > > > I am trying 11.0 on a dual E5-2620 (no X2APIC).
> > > > > > > > > Under high network load, and possibly some additional
> > > > > > > > > condition, the system goes into an unresponsive state --
> > > > > > > > > no reaction to the network or the console (USB IPMI
> > > > > > > > > emulation). INVARIANTS adds too much overhead. Is there
> > > > > > > > > some way to debug this?
> > > > > > > >
> > > > > > > > Can you panic it from the console to get to db> and get a
> > > > > > > > backtrace and other info when it goes unresponsive?
> > > > > > >
> > > > > > > The IPMI console doesn't respond (`chassis power diag` doesn't
> > > > > > > react); login on the SOL console is stuck on *tcp.
> > > > > >
> > > > > > Is the 'login' you reference the ipmi client state, or do you
> > > > > > mean login(1) on the wedged host?
> > > > >
> > > > > On the wedged host.
> > > > >
> > > > > > If the BMC stops responding simultaneously with the host, I
> > > > > > would suspect a hardware platform issue instead of a software
> > > > > > problem. Do you have a dedicated LAN port for the BMC?
> > > > >
> > > > > Yes.
> > > > > But the BMC emulates a USB keyboard, so this may be a lock inside
> > > > > the USB system. "The IPMI console doesn't respond" should be read
> > > > > as "the IPMI console is running and attached, but the system
> > > > > doesn't react to keypresses on this console".
> > > > > At the same moment the system responds to `enter` on the IPMI SOL
> > > > > console, but after entering `root` it gets stuck at login in the
> > > > > '*tcp' state (I think this is NIS related).
> > > >
> > > > ~^B doesn't break to the debugger.
> > > > But I can log in on the SOL console.
> > >
> > > You can probably use:
> > > debug.kdb.enter: set to enter the debugger
> > >
> > > or force a panic and get a vmcore:
> > > debug.kdb.panic: set to panic the kernel
> >
> > I reset this host.
> > PMC samples collected and decoded:
> >
> > @ CPU_CLK_UNHALTED_CORE [4653445 samples]
> >
> > 51.86%  [2413083]  lock_delay @ /boot/kernel.VSTREAM/kernel
> >  100.0%  [2413083]   __rw_wlock_hard
> >   100.0%  [2413083]    tcp_tw_2msl_scan
> >    99.99%  [2412958]     pfslowtimo
> >     100.0%  [2412958]      softclock_call_cc
> >      100.0%  [2412958]       softclock
> >       100.0%  [2412958]        intr_event_execute_handlers
> >        100.0%  [2412958]         ithread_loop
> >         100.0%  [2412958]          fork_exit
> >    00.01%  [125]     tcp_twstart
> >     100.0%  [125]      tcp_do_segment
> >      100.0%  [125]       tcp_input
> >       100.0%  [125]        ip_input
> >        100.0%  [125]         swi_net
> >         100.0%  [125]          intr_event_execute_handlers
> >          100.0%  [125]           ithread_loop
> >           100.0%  [125]            fork_exit
> >
> > 09.43%  [438774]  _rw_runlock_cookie @ /boot/kernel.VSTREAM/kernel
> >  100.0%  [438774]   tcp_tw_2msl_scan
> >   99.99%  [438735]    pfslowtimo
> >    100.0%  [438735]     softclock_call_cc
> >     100.0%  [438735]      softclock
> >      100.0%  [438735]       intr_event_execute_handlers
> >       100.0%  [438735]        ithread_loop
> >        100.0%  [438735]         fork_exit
> >   00.01%  [39]    tcp_twstart
> >    100.0%  [39]     tcp_do_segment
> >     100.0%  [39]      tcp_input
> >      100.0%  [39]       ip_input
> >       100.0%  [39]        swi_net
> >        100.0%  [39]         intr_event_execute_handlers
> >         100.0%  [39]          ithread_loop
> >          100.0%  [39]           fork_exit
> >
> > 08.57%  [398970]  __rw_wlock_hard @ /boot/kernel.VSTREAM/kernel
> >  100.0%  [398970]   tcp_tw_2msl_scan
> >   99.99%  [398940]    pfslowtimo
> >    100.0%  [398940]     softclock_call_cc
> >     100.0%  [398940]      softclock
> >      100.0%  [398940]       intr_event_execute_handlers
> >       100.0%  [398940]        ithread_loop
> >        100.0%  [398940]         fork_exit
> >   00.01%  [30]    tcp_twstart
> >    100.0%  [30]     tcp_do_segment
> >     100.0%  [30]      tcp_input
> >      100.0%  [30]       ip_input
> >       100.0%  [30]        swi_net
> >        100.0%  [30]         intr_event_execute_handlers
> >         100.0%  [30]          ithread_loop
> >          100.0%  [30]           fork_exit
> >
> > 05.79%  [269224]  __rw_try_rlock @ /boot/kernel.VSTREAM/kernel
> >  100.0%  [269224]   tcp_tw_2msl_scan
> >   99.99%  [269203]    pfslowtimo
> >    100.0%  [269203]     softclock_call_cc
> >     100.0%  [269203]      softclock
> >      100.0%  [269203]       intr_event_execute_handlers
> >       100.0%  [269203]        ithread_loop
> >        100.0%  [269203]         fork_exit
> >   00.01%  [21]    tcp_twstart
> >    100.0%  [21]     tcp_do_segment
> >     100.0%  [21]      tcp_input
> >      100.0%  [21]       ip_input
> >       100.0%  [21]        swi_net
> >        100.0%  [21]         intr_event_execute_handlers
> >         100.0%  [21]          ithread_loop
> >          100.0%  [21]           fork_exit
> >
> > 05.35%  [249141]  _rw_wlock_cookie @ /boot/kernel.VSTREAM/kernel
> >  99.76%  [248543]   tcp_tw_2msl_scan
> >   99.99%  [248528]    pfslowtimo
> >    100.0%  [248528]     softclock_call_cc
> >     100.0%  [248528]      softclock
> >      100.0%  [248528]       intr_event_execute_handlers
> >       100.0%  [248528]        ithread_loop
> >        100.0%  [248528]         fork_exit
> >   00.01%  [15]    tcp_twstart
> >    100.0%  [15]     tcp_do_segment
> >     100.0%  [15]      tcp_input
> >      100.0%  [15]       ip_input
> >       100.0%  [15]        swi_net
> >        100.0%  [15]         intr_event_execute_handlers
> >         100.0%  [15]          ithread_loop
> >          100.0%  [15]           fork_exit
> >  00.24%  [598]   pfslowtimo
> >   100.0%  [598]    softclock_call_cc
> >    100.0%  [598]     softclock
> >     100.0%  [598]      intr_event_execute_handlers
> >      100.0%  [598]       ithread_loop
> >       100.0%  [598]        fork_exit
> >
> > As I suspected, this looks like a hang trying to lock V_tcbinfo.
>
> I'm ccing Julien here, who worked on the WLOCK -> RLOCK transition to
> improve performance for short-lived connections. I am not too sure that
> this is the problem, but it is in a similar area, so he may be able to
> provide some insights.

I am pointing at tcp_tw_2msl_scan. I expect it to walk the V_twq_2msl list.
But I see only endless attempts to lock the first element of this list. Is this correct?