From owner-freebsd-stable@freebsd.org Tue Oct 11 12:11:55 2016
Date: Tue, 11 Oct 2016 15:11:45 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Julien Charbon
Cc: Konstantin Belousov, freebsd-stable@FreeBSD.org, hiren panchasara
Subject: Re: 11.0 stuck on high network load
Message-ID: <20161011121145.GJ6177@zxy.spb.ru>
In-Reply-To: <8143cd8f-c007-2378-b004-b2b037402d03@freebsd.org>

On Tue, Oct 11, 2016 at 09:20:17AM +0200, Julien Charbon wrote:
> Then threads are competing for the INP_WLOCK lock. For the example,
> let's say thread A wants to run tcp_input()/in_pcblookup_mbuf() and is
> racing for this INP_WLOCK:
>
> https://github.com/freebsd/freebsd/blob/release/11.0.0/sys/netinet/in_pcb.c#L1964
>
> and thread B wants to run tcp_timer_2msl()/tcp_close()/in_pcbdrop() and
> is racing for this INP_WLOCK:
>
> https://github.com/freebsd/freebsd/blob/release/11.0.0/sys/netinet/tcp_timer.c#L323
>
> That leads to two cases:
>
> o Thread A wins the race:
>
> Thread A continues tcp_input() as usual: the INP_DROPPED flag is not
> set and the inp is still in the TCP hash table.
> Thread B waits for thread A to release INP_WLOCK after finishing its
> tcp_input() processing, and then continues its
> tcp_timer_2msl()/tcp_close()/in_pcbdrop() processing.
>
> o Thread B wins the race:
>
> Thread B runs tcp_timer_2msl()/tcp_close()/in_pcbdrop(): the inp's
> INP_DROPPED flag is set and the inp is removed from the TCP hash table.
> In parallel, thread A has found the inp in the TCP hash table before it
> was removed, and is waiting on that inp's INP_WLOCK.
> Once thread B releases INP_WLOCK, thread A gets the lock, sees the
> INP_DROPPED flag and does "goto findpcb", but because the inp is no
> longer in the TCP hash table it will not be found again by
> in_pcblookup_mbuf().
>
> Hopefully I am clear enough here.

Thanks, very clear. Small question: when both threads run on the same
CPU core, will blocking on INP_WLOCK cause a reschedule?
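For reference, a minimal sketch of the relookup pattern described above.
This is illustrative only, not the actual sys/netinet/tcp_input.c code;
in the real code the write lock is taken inside in_pcblookup_mbuf() via
the INPLOOKUP_WLOCKPCB flag:

findpcb:
    /*
     * Thread A: look up the inp in the TCP hash table.  With
     * INPLOOKUP_WLOCKPCB the lookup also acquires the inp's INP_WLOCK,
     * so thread A blocks here while thread B holds that lock in
     * tcp_timer_2msl()/tcp_close()/in_pcbdrop().
     */
    inp = in_pcblookup_mbuf(&V_tcbinfo, ip->ip_src, th->th_sport,
        ip->ip_dst, th->th_dport, INPLOOKUP_WILDCARD | INPLOOKUP_WLOCKPCB,
        m->m_pkthdr.rcvif, m);
    if (inp == NULL)
        goto dropwithreset;             /* no longer in the hash table */

    if (inp->inp_flags & INP_DROPPED) {
        /*
         * Thread B won the race: in_pcbdrop() already removed the inp
         * from the hash table, so the lookup retried after "goto
         * findpcb" will not find it again.
         */
        INP_WUNLOCK(inp);
        goto findpcb;
    }

In that case the retried lookup returns NULL and the segment is treated
as belonging to no connection.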
As I remember, the race was created by a call to tcp_twstart() while
tcp_close() was finishing, on the sofree()->tcp_usr_detach() path, which
led to an unexpected INP_TIMEWAIT state in tcp_usr_detach().
INP_TIMEWAIT is set in tcp_twstart().

After checking the source code I found invocations of tcp_twstart() in
sys/netinet/tcp_stacks/fastpath.c, sys/netinet/tcp_input.c,
sys/dev/cxgb/ulp/tom/cxgb_cpl_io.c and sys/dev/cxgbe/tom/t4_cpl_io.c.
The invocations from sys/netinet/tcp_stacks/fastpath.c and
sys/netinet/tcp_input.c are guarded by the INP_WLOCK taken in
tcp_input(), so they should now be OK.
The invocations from sys/dev/cxgb/ulp/tom/cxgb_cpl_io.c and
sys/dev/cxgbe/tom/t4_cpl_io.c are not clear to me; I see an independent
INP_WLOCK there. Is this OK? Can thread A run do_peer_close() directly
from the chelsio IRQ handler, bypassing tcp_input()?
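Roughly, the shape of the TOE path in question (a simplified sketch, not
the actual sys/dev/cxgbe/tom/t4_cpl_io.c code; the INP_INFO lock handling
is omitted):

static int
do_peer_close(struct sge_iq *iq, const struct rss_header *rss, struct mbuf *m)
{
    /* CPL handler called from the driver's interrupt path, not tcp_input(). */
    const struct cpl_peer_close *cpl = (const void *)(rss + 1);
    struct toepcb *toep = lookup_tid(iq->adapter, GET_TID(cpl));
    struct inpcb *inp = toep->inp;
    struct tcpcb *tp;

    INP_WLOCK(inp);                     /* independent of tcp_input()'s lock */
    tp = intotcpcb(inp);

    if (tp->t_state == TCPS_FIN_WAIT_2) {
        /* tcp_twstart() sets INP_TIMEWAIT and releases the inp lock itself. */
        tcp_twstart(tp);
        return (0);
    }

    INP_WUNLOCK(inp);
    return (0);
}

If that is the shape of it, the question is whether such an independent
INP_WLOCK is enough to serialize against tcp_input() and tcp_close().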