From: Slawa Olhovchenkov <slw@zxy.spb.ru>
Date: Mon, 26 Sep 2016 15:38:59 +0300
To: Julien Charbon
Cc: Konstantin Belousov, freebsd-stable@FreeBSD.org, hiren panchasara
Subject: Re: 11.0 stuck on high network load
Message-ID: <20160926123859.GB6177@zxy.spb.ru>
In-Reply-To: <064be3ae-8df9-fa22-06ab-346af024afd3@freebsd.org>

On Mon, Sep 26, 2016 at 01:57:03PM +0200, Julien Charbon
wrote:
>
> Hi Slawa,
>
> On 9/25/16 2:46 PM, Slawa Olhovchenkov wrote:
> > On Fri, Sep 23, 2016 at 11:01:43PM +0300, Slawa Olhovchenkov wrote:
> >> On Wed, Sep 21, 2016 at 11:25:18PM +0200, Julien Charbon wrote:
> >>>
> >>> On 9/21/16 9:51 PM, Slawa Olhovchenkov wrote:
> >>>> On Wed, Sep 21, 2016 at 09:11:24AM +0200, Julien Charbon wrote:
> >>>>> You can also use DTrace and lockstat (especially with the lockstat -s
> >>>>> option):
> >>>>>
> >>>>> https://wiki.freebsd.org/DTrace/One-Liners#Kernel_Locks
> >>>>> https://www.freebsd.org/cgi/man.cgi?query=lockstat&manpath=FreeBSD+11.0-RELEASE
> >>>>>
> >>>>> But I am less familiar with the DTrace/lockstat tools.
> >>>>
> >>>> I am still using the old kernel and got a lockup again.
> >>>> I tried using lockstat (I saved more output); the interesting part may be this:
> >>>>
> >>>> R/W writer spin on writer: 190019 events in 1.070 seconds (177571 events/sec)
> >>>>
> >>>> -------------------------------------------------------------------------------
> >>>> Count indv cuml rcnt     nsec Lock   Caller
> >>>> 140839  74%  74% 0.00   24659 tcpinp tcp_tw_2msl_scan+0xc6
> >>>>
> >>>>       nsec ------ Time Distribution ------ count Stack
> >>>>       4096 |                                 913 tcp_twstart+0xa3
> >>>>       8192 |@@@@@@@@@@@@                   58191 tcp_do_segment+0x201f
> >>>>      16384 |@@@@@@                         29594 tcp_input+0xe1c
> >>>>      32768 |@@@@                           23447 ip_input+0x15f
> >>>>      65536 |@@@                            16197
> >>>>     131072 |@                               8674
> >>>>     262144 |                                3358
> >>>>     524288 |                                 456
> >>>>    1048576 |                                   9
> >>>> -------------------------------------------------------------------------------
> >>>> Count indv cuml rcnt     nsec Lock   Caller
> >>>>  49180  26% 100% 0.00   15929 tcpinp tcp_tw_2msl_scan+0xc6
> >>>>
> >>>>       nsec ------ Time Distribution ------ count Stack
> >>>>       4096 |                                 157 pfslowtimo+0x54
> >>>>       8192 |@@@@@@@@@@@@@@@                24796 softclock_call_cc+0x179
> >>>>      16384 |@@@@@@                         11223 softclock+0x44
> >>>>      32768 |@@@@                            7426 intr_event_execute_handlers+0x95
> >>>>      65536 |@@                              3918
> >>>>     131072 |                                1363
> >>>>     262144 |                                 278
> >>>>     524288 |                                  19
> >>>> -------------------------------------------------------------------------------
> >>>
> >>> This is interesting: it seems that you have two call paths competing
> >>> for INP locks here:
> >>>
> >>> - pfslowtimo()/tcp_tw_2msl_scan(reuse=0) and
> >>>
> >>> - tcp_input()/tcp_twstart()/tcp_tw_2msl_scan(reuse=1)
> >>
> >> My current hypothesis:
> >>
> >> nginx does a write() (or maybe close()?) on the socket, the kernel
> >> locks the first inp in V_twq_2msl, the callout for pfslowtimo() fires
> >> on the same CPU core, and tcp_tw_2msl_scan spins forever on that same
> >> inp.
> >>
> >> In this case your modification can't help; before the next try we
> >> need something like yield().
> >
> > Or maybe a lock leak.
>
> Or both.
>
> Actually, one extra debug thing you can do is launch lockstat with the
> extra options below:
>
> -H     for hold-lock stats
> -P     to get the overall time
> -s 20  to get the stack trace
>
> to see who is holding the INP lock for so long. Thanks to Hiren for
> pointing out the -H option.

At the time of this graph I was also collecting output from
`lockstat -PH -s 5 sleep 1`, and I don't see anything interesting in
it -- I think a lock that was already held before lockstat started is
not detected and not shown. I can still share the collected output if
you need it; it is hundreds of lines.