From owner-freebsd-stable@freebsd.org Thu Sep 22 10:04:50 2016
Subject: Re: 11.0 stuck on high network load
To: Slawa Olhovchenkov
Cc: Konstantin Belousov, freebsd-stable@FreeBSD.org, hiren panchasara
From: Julien Charbon
Date: Thu, 22 Sep 2016 12:04:40 +0200

 Hi Slawa,

On 9/22/16 11:53 AM, Slawa Olhovchenkov wrote:
> On Wed, Sep 21, 2016 at 11:25:18PM +0200, Julien Charbon wrote:
>> On 9/21/16 9:51 PM, Slawa Olhovchenkov wrote:
>>> On Wed, Sep 21, 2016 at 09:11:24AM +0200, Julien Charbon wrote:
>>>> You can also use DTrace and lockstat (especially with the lockstat -s
>>>> option):
>>>>
>>>> https://wiki.freebsd.org/DTrace/One-Liners#Kernel_Locks
>>>> https://www.freebsd.org/cgi/man.cgi?query=lockstat&manpath=FreeBSD+11.0-RELEASE
>>>>
>>>> But I am less familiar with the DTrace/lockstat tools.
>>>
>>> I am still using the old kernel and got the lockup again.
>>> I tried using lockstat (I saved more output); the interesting part may
>>> be this:
>>>
>>> R/W writer spin on writer: 190019 events in 1.070 seconds (177571 events/sec)
>>>
>>> -------------------------------------------------------------------------------
>>> Count  indv cuml rcnt     nsec Lock   Caller
>>> 140839  74%  74% 0.00    24659 tcpinp tcp_tw_2msl_scan+0xc6
>>>
>>>       nsec ------ Time Distribution ------ count     Stack
>>>       4096 |                               913       tcp_twstart+0xa3
>>>       8192 |@@@@@@@@@@@@                   58191     tcp_do_segment+0x201f
>>>      16384 |@@@@@@                         29594     tcp_input+0xe1c
>>>      32768 |@@@@                           23447     ip_input+0x15f
>>>      65536 |@@@                            16197
>>>     131072 |@                              8674
>>>     262144 |                               3358
>>>     524288 |                               456
>>>    1048576 |                               9
>>> -------------------------------------------------------------------------------
>>> Count  indv cuml rcnt     nsec Lock   Caller
>>> 49180   26% 100% 0.00    15929 tcpinp tcp_tw_2msl_scan+0xc6
>>>
>>>       nsec ------ Time Distribution ------ count     Stack
>>>       4096 |                               157       pfslowtimo+0x54
>>>       8192 |@@@@@@@@@@@@@@@                24796     softclock_call_cc+0x179
>>>      16384 |@@@@@@                         11223     softclock+0x44
>>>      32768 |@@@@                           7426      intr_event_execute_handlers+0x95
>>>      65536 |@@                             3918
>>>     131072 |                               1363
>>>     262144 |                               278
>>>     524288 |                               19
>>> -------------------------------------------------------------------------------
>>
>>  This is interesting: it seems that you have two call paths competing
>> for INP locks here:
>>
>>  - pfslowtimo()/tcp_tw_2msl_scan(reuse=0) and
>>
>>  - tcp_input()/tcp_twstart()/tcp_tw_2msl_scan(reuse=1)
>
> I think the same.
>
>>  These paths can indeed compete for the same INP lock, as both
>> tcp_tw_2msl_scan() calls always start with the first inp found in the
>> twq_2msl list.  But in both cases, this first inp should be used quickly
>> and its lock released anyway.  That could explain your situation if the
>> TCP stack is doing this all the time, for example:
>>
>>  - Let's say that you are completely and constantly running out of
>> tcptw, so that all connections transitioning to the TIME_WAIT state are
>> competing with the TIME_WAIT timeout scan that tries to free all the
>> expired tcptw.  If the stack is doing that all the time, it can appear
>> to be "live" locked.
>>
>>  This is just a hypothesis and, as usual, it might be a red herring.
>> Anyway, could you run:
>>
>> $ vmstat -z | head -2; vmstat -z | grep -E 'tcp|sock'
>
> ITEM         SIZE    LIMIT     USED    FREE       REQ FAIL SLEEP
>
> socket:       864, 4192664,   18604,  25348, 49276158,   0,    0
> tcp_inpcb:    464, 4192664,   34226,  18702, 49250593,   0,    0
> tcpcb:       1040, 4192665,   18424,  18953, 49250593,   0,    0
> tcptw:         88,   16425,   15802,    623, 14526919,   8,    0
> tcpreass:      40,   32800,      15,   2285,   632381,   0,    0
>
> In the normal case tcptw is about 16425/600/900.
>
> And after `sysctl -a | grep tcp` the system got stuck on the serial
> console and I reset it.
>
>>  Ideally, run it once when everything is OK, and once when you have the
>> issue, to see the differences (if any).
>>
>>  If it appears you are quite low on tcptw, and if you have enough
>> memory, could you try increasing the tcptw limit using sysctl?
>
> I think this will not eliminate the stall, it may just make it less
> frequent.

 You are right, it would just be a big hint that the tcp_tw_2msl_scan()
contention hypothesis is the right one.  As I see you have plenty of
memory on your server, could you try with:

net.inet.tcp.maxtcptw=4192665
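 For example, something like this (just a sketch, untested here; the value
above is only a generous upper bound given your RAM, and the
/etc/sysctl.conf line is only needed if you want the change to survive a
reboot):

# sysctl net.inet.tcp.maxtcptw=4192665
# echo 'net.inet.tcp.maxtcptw=4192665' >> /etc/sysctl.conf

 Afterwards, watching the FAIL column of the tcptw zone should show
whether the allocations stop failing under load:

$ vmstat -z | head -2; vmstat -z | grep tcptw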
 And see what happens, just to validate this hypothesis.  Thanks.

--
Julien