Date: Fri, 16 Sep 2016 12:11:55 -0700
From: hiren panchasara <hiren@strugglingcoder.info>
To: Slawa Olhovchenkov, jch@FreeBSD.org
Cc: Konstantin Belousov, freebsd-stable@FreeBSD.org
Subject: Re: 11.0 stuck on high network load
Message-ID: <20160916191155.GM9397@strugglingcoder.info>
In-Reply-To: <20160916190330.GG2840@zxy.spb.ru>
User-Agent: Mutt/1.5.23 (2014-03-12)
List-Id: Production branch of FreeBSD source code
jch@

On 09/16/16 at 10:03P, Slawa Olhovchenkov wrote:
> On Fri, Sep 16, 2016 at 11:30:53AM -0700, hiren panchasara wrote:
> > On 09/16/16 at 09:18P, Slawa Olhovchenkov wrote:
> > > On Thu, Sep 15, 2016 at 12:06:33PM +0300, Slawa Olhovchenkov wrote:
> > > > On Thu, Sep 15, 2016 at 11:59:38AM +0300, Konstantin Belousov wrote:
> > > > > On Thu, Sep 15, 2016 at 12:35:04AM +0300, Slawa Olhovchenkov wrote:
> > > > > > On Sun, Sep 04, 2016 at 06:46:12PM -0700, hiren panchasara wrote:
> > > > > > > On 09/05/16 at 12:57P, Slawa Olhovchenkov wrote:
> > > > > > > > I am trying 11.0 on a dual E5-2620 (no X2APIC). Under high
> > > > > > > > network load, and possibly additional conditions, the system
> > > > > > > > goes into an unresponsive state -- no reaction to network or
> > > > > > > > console (USB IPMI emulation). INVARIANTS gives too high an
> > > > > > > > overhead. Is there some way to debug this?
> > > > > > >
> > > > > > > Can you panic it from the console to get to db> for a backtrace
> > > > > > > and other info when it goes unresponsive?
> > > > > >
> > > > > > The ipmi console doesn't respond ("chassis power diag" has no
> > > > > > effect); login on the SOL console gets stuck in the *tcp state.
> > > > >
> > > > > Is the 'login' you reference the ipmi client state, or do you mean
> > > > > login(1) on the wedged host?
> > > >
> > > > login(1) on the wedged host.
> > > >
> > > > > If the BMC stops responding simultaneously with the host, I would
> > > > > suspect hardware platform issues instead of a software problem.
> > > > > Do you have a dedicated LAN port for the BMC?
> > > >
> > > > Yes. But the BMC emulates a USB keyboard, so this may be a lock
> > > > inside the USB system. "ipmi console doesn't respond" should be read
> > > > as "the ipmi console is running and attached, but the system doesn't
> > > > react to keypresses on this console". At the same moment the system
> > > > responds to `enter` on the ipmi SOL console, but after entering
> > > > `root`, login gets stuck in the '*tcp' state (I think this is NIS
> > > > related).
> > >
> > > ~^B doesn't break into the debugger, but I can log in on the SOL
> > > console.
> >
> > You can probably use:
> >   debug.kdb.enter: set to enter the debugger
> >
> > or force a panic and get a vmcore:
> >   debug.kdb.panic: set to panic the kernel
>
> I reset this host. PMC samples collected and decoded:
>
> @ CPU_CLK_UNHALTED_CORE [4653445 samples]
>
> 51.86%  [2413083]  lock_delay @ /boot/kernel.VSTREAM/kernel
>  100.0%  [2413083]   __rw_wlock_hard
>   100.0%  [2413083]    tcp_tw_2msl_scan
>    99.99%  [2412958]    pfslowtimo
>     100.0%  [2412958]     softclock_call_cc
>      100.0%  [2412958]      softclock
>       100.0%  [2412958]       intr_event_execute_handlers
>        100.0%  [2412958]        ithread_loop
>         100.0%  [2412958]         fork_exit
>    00.01%  [125]       tcp_twstart
>     100.0%  [125]       tcp_do_segment
>      100.0%  [125]        tcp_input
>       100.0%  [125]        ip_input
>        100.0%  [125]        swi_net
>         100.0%  [125]        intr_event_execute_handlers
>          100.0%  [125]        ithread_loop
>           100.0%  [125]        fork_exit
>
> 09.43%  [438774]  _rw_runlock_cookie @ /boot/kernel.VSTREAM/kernel
>  100.0%  [438774]   tcp_tw_2msl_scan
>   99.99%  [438735]   pfslowtimo
>    100.0%  [438735]    softclock_call_cc
>     100.0%  [438735]     softclock
>      100.0%  [438735]      intr_event_execute_handlers
>       100.0%  [438735]       ithread_loop
>        100.0%  [438735]        fork_exit
>   00.01%  [39]      tcp_twstart
>    100.0%  [39]      tcp_do_segment
>     100.0%  [39]       tcp_input
>      100.0%  [39]       ip_input
>       100.0%  [39]       swi_net
>        100.0%  [39]       intr_event_execute_handlers
>         100.0%  [39]       ithread_loop
>          100.0%  [39]       fork_exit
>
> 08.57%  [398970]  __rw_wlock_hard @ /boot/kernel.VSTREAM/kernel
>  100.0%  [398970]   tcp_tw_2msl_scan
>   99.99%  [398940]   pfslowtimo
>    100.0%  [398940]    softclock_call_cc
>     100.0%  [398940]     softclock
>      100.0%  [398940]      intr_event_execute_handlers
>       100.0%  [398940]       ithread_loop
>        100.0%  [398940]        fork_exit
>   00.01%  [30]      tcp_twstart
>    100.0%  [30]      tcp_do_segment
>     100.0%  [30]       tcp_input
>      100.0%  [30]       ip_input
>       100.0%  [30]       swi_net
>        100.0%  [30]       intr_event_execute_handlers
>         100.0%  [30]       ithread_loop
>          100.0%  [30]       fork_exit
>
> 05.79%  [269224]  __rw_try_rlock @ /boot/kernel.VSTREAM/kernel
>  100.0%  [269224]   tcp_tw_2msl_scan
>   99.99%  [269203]   pfslowtimo
>    100.0%  [269203]    softclock_call_cc
>     100.0%  [269203]     softclock
>      100.0%  [269203]      intr_event_execute_handlers
>       100.0%  [269203]       ithread_loop
>        100.0%  [269203]        fork_exit
>   00.01%  [21]      tcp_twstart
>    100.0%  [21]      tcp_do_segment
>     100.0%  [21]       tcp_input
>      100.0%  [21]       ip_input
>       100.0%  [21]       swi_net
>        100.0%  [21]       intr_event_execute_handlers
>         100.0%  [21]       ithread_loop
>          100.0%  [21]       fork_exit
>
> 05.35%  [249141]  _rw_wlock_cookie @ /boot/kernel.VSTREAM/kernel
>  99.76%  [248543]   tcp_tw_2msl_scan
>   99.99%  [248528]   pfslowtimo
>    100.0%  [248528]    softclock_call_cc
>     100.0%  [248528]     softclock
>      100.0%  [248528]      intr_event_execute_handlers
>       100.0%  [248528]       ithread_loop
>        100.0%  [248528]        fork_exit
>   00.01%  [15]      tcp_twstart
>    100.0%  [15]      tcp_do_segment
>     100.0%  [15]       tcp_input
>      100.0%  [15]       ip_input
>       100.0%  [15]       swi_net
>        100.0%  [15]       intr_event_execute_handlers
>         100.0%  [15]       ithread_loop
>          100.0%  [15]       fork_exit
>  00.24%  [598]     pfslowtimo
>   100.0%  [598]     softclock_call_cc
>    100.0%  [598]      softclock
>     100.0%  [598]       intr_event_execute_handlers
>      100.0%  [598]        ithread_loop
>       100.0%  [598]         fork_exit

As I suspected, this looks like a hang trying to lock V_tcbinfo. I'm
cc'ing Julien here, who worked on the WLOCK -> RLOCK transition to
improve performance for short-lived connections. I'm not sure that's
the problem, but it is in a similar area, so he may be able to provide
some insight.
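For anyone wanting to reproduce a profile like the one above, the usual
pmcstat(8) collect-then-decode workflow looks roughly like this (a
sketch; the event name depends on the CPU, and the output paths are
illustrative):

```shell
# Sample unhalted CPU cycles system-wide into a raw log file.
# Run this while the load/hang condition is present; stop with Ctrl-C
# once enough samples have accumulated. Requires hwpmc(4) loaded.
pmcstat -S CPU_CLK_UNHALTED_CORE -O /tmp/samples.out

# Decode the raw samples into a callgraph report like the one quoted,
# with percentages and sample counts per call chain.
pmcstat -R /tmp/samples.out -G /tmp/callgraph.txt
```

The callgraph report aggregates sampled stacks bottom-up, which is why
lock_delay/__rw_wlock_hard show up at the top with tcp_tw_2msl_scan and
pfslowtimo as their callers.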
Cheers,
Hiren