From owner-freebsd-stable@freebsd.org Thu Sep 22 10:20:48 2016
Date: Thu, 22 Sep 2016 13:20:45 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Julien Charbon
Cc: Konstantin Belousov, freebsd-stable@FreeBSD.org, hiren panchasara
Subject: Re: 11.0 stuck on high network load
Message-ID: <20160922102045.GC2840@zxy.spb.ru>
In-Reply-To: <67862b33-63c0-2f23-d254-5ddc55dbb554@freebsd.org>

On Thu, Sep 22, 2016 at 12:04:40PM +0200, Julien Charbon wrote:

> >> These paths can indeed compete for the same INP lock, as both
> >> tcp_tw_2msl_scan() calls always start with the first inp found in the
> >> twq_2msl list. But in both cases this first inp should be used
> >> quickly and its lock released anyway, so that could explain your
> >> situation if the TCP stack is doing this all the time, for example:
> >>
> >> - Let's say you are completely and constantly running out of tcptw
> >> structures; then all connections transitioning to TIME_WAIT state
> >> compete with the TIME_WAIT timeout scan that tries to free all the
> >> expired tcptw. If the stack is doing that all the time, it can appear
> >> "live" locked.
> >>
> >> This is just a hypothesis and, as usual, might be a red herring.
> >> Anyway, could you run:
> >>
> >> $ vmstat -z | head -2; vmstat -z | grep -E 'tcp|sock'
> >
> > ITEM        SIZE    LIMIT   USED   FREE       REQ FAIL SLEEP
> > socket:      864, 4192664, 18604, 25348, 49276158,   0,    0
> > tcp_inpcb:   464, 4192664, 34226, 18702, 49250593,   0,    0
> > tcpcb:      1040, 4192665, 18424, 18953, 49250593,   0,    0
> > tcptw:        88,   16425, 15802,   623, 14526919,   8,    0
> > tcpreass:     40,   32800,    15,  2285,   632381,   0,    0
> >
> > In the normal case tcptw is about 16425/600/900 (LIMIT/USED/FREE).
> >
> > And after `sysctl -a | grep tcp` the system got stuck on the serial
> > console and I reset it.
> >
> >> Ideally, run it once when everything is OK, and once when you have
> >> the issue, to see the differences (if any).
> >>
> >> If it appears you are quite low on tcptw, and if you have enough
> >> memory, could you try increasing the tcptw limit using sysctl?
> >
> > I don't think this will eliminate the hang; it may just make it less
> > frequent.
>
> You are right, but it would be a big hint that the tcp_tw_2msl_scan()
> contention hypothesis is the right one. As I can see you have plenty of
> memory on your server, could you try with:
>
> net.inet.tcp.maxtcptw=4192665
>
> and see what happens? Just to validate this hypothesis.

This is a bad way to validate it: with maxtcptw=16384 the hang happens at
random, and we could wait a month for it. With maxtcptw=4192665 I don't
know how long we would need to wait to verify the hypothesis.

More frequently (maybe 3-5 times per day) there are less severe traffic
drops (traffic does not fall to zero for minutes, as in the full hang).
These may also be caused by contention in tcp_tw_2msl_scan(), but they
resolve quickly (a stochastic process). By eating CPU power, nginx cannot
service connections, so clients close their connections; that creates
more TIME_WAIT states and can trigger tcp_tw_2msl_scan(reuse=1). After
that we can get a live lock.

Maybe after I learn to catch and diagnose these events, the validation
will be more accurate.
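
For example, to have zone samples from just before a hang, I could leave
something like this running (a rough sketch, assuming a plain /bin/sh
loop; the log path and the 5-second interval are arbitrary choices of
mine, nothing we agreed on):

    #!/bin/sh
    # Sample the TCP-related UMA zones every 5 seconds, so the last
    # records before a hang show whether tcptw USED had reached LIMIT.
    while :; do
        date
        vmstat -z | head -2
        vmstat -z | grep -E 'tcp|sock'
        echo
        sleep 5
    done >> /var/tmp/tcptw-watch.log

If the machine survives the event, the tail of the log should show
whether the tcptw zone was exhausted right before the drop.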
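And for reference, when I do try the larger limit, it can be applied at
runtime and kept across reboots like this (a sketch; net.inet.tcp.maxtcptw
is the sysctl you named above, the rest is just standard /etc/sysctl.conf
plumbing):

    # apply immediately, as root
    sysctl net.inet.tcp.maxtcptw=4192665

    # keep the setting across reboots
    echo 'net.inet.tcp.maxtcptw=4192665' >> /etc/sysctl.conf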