From owner-freebsd-stable@freebsd.org Wed Oct 12 08:40:49 2016
Date: Wed, 12 Oct 2016 11:40:45 +0300
From: Slawa Olhovchenkov <slw@zxy.spb.ru>
To: Julien Charbon
Cc: Konstantin Belousov, freebsd-stable@FreeBSD.org, hiren panchasara
Subject: Re: 11.0 stuck on high network load
Message-ID: <20161012084045.GA57714@zxy.spb.ru>

On Wed, Oct 12, 2016 at 10:18:18AM +0200, Julien Charbon wrote:
> 
>  Hi Slawa,
> 
> On 10/11/16 2:11 PM, Slawa Olhovchenkov wrote:
> > On Tue, Oct 11, 2016 at 09:20:17AM +0200, Julien Charbon wrote:
> >>  Then threads are competing for the INP_WLOCK lock.  For example,
> >> let's say thread A wants to run tcp_input()/in_pcblookup_mbuf() and is
> >> racing for this INP_WLOCK:
> >> 
> >> https://github.com/freebsd/freebsd/blob/release/11.0.0/sys/netinet/in_pcb.c#L1964
> >> 
> >>  And thread B wants to run tcp_timer_2msl()/tcp_close()/in_pcbdrop() and is
> >> racing for this INP_WLOCK:
> >> 
> >> https://github.com/freebsd/freebsd/blob/release/11.0.0/sys/netinet/tcp_timer.c#L323
> >> 
> >>  That leads to two cases:
> >> 
> >>  o Thread A wins the race:
> >> 
> >>  Thread A continues tcp_input() as usual, the INP_DROPPED flag is
> >> not set and the inp is still in the TCP hash table.
> >>  Thread B waits for thread A to release INP_WLOCK after finishing
> >> tcp_input() processing, and then thread B continues with
> >> tcp_timer_2msl()/tcp_close()/in_pcbdrop() processing.
> >> 
> >>  o Thread B wins the race:
> >> 
> >>  Thread B runs tcp_timer_2msl()/tcp_close()/in_pcbdrop(), the inp's
> >> INP_DROPPED flag is set and the inp is removed from the TCP hash table.
> >>  In parallel, thread A has found the inp in the TCP hash table before
> >> it was removed, and is waiting on that inp's INP_WLOCK.
> >>  Once thread B has released the INP_WLOCK, thread A gets the lock,
> >> sees the INP_DROPPED flag and does "goto findpcb"; but because the
> >> inp is no longer in the TCP hash table, it will not be found again by
> >> in_pcblookup_mbuf().
> >> 
> >>  Hopefully I am clear enough here.
> > 
> > Thanks, very clear.
> > Small question: when both threads run on the same CPU core, will
> > INP_WLOCK be rescheduled?
> 
>  Hmm, a thread can be rescheduled, but not a lock.  Thus I am not sure I
> understand your question here. :)

I don't know how INP_WLOCK works in this case (all on the same CPU):

thread1: INP_WLOCK
--interrupt--
thread2: INP_WLOCK

If INP_WLOCK behaves like a spin lock -- this is a deadlock.
If INP_WLOCK behaves like a mutex -- thread1 is rescheduled.
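Looking at sys/netinet/in_pcb.h in 11.0, INP_WLOCK seems to map to
rw_wlock(), i.e. a sleepable rwlock rather than a spin lock, so I would
expect the mutex-like behaviour.  A rough picture of what I mean (the
macro lines are copied from in_pcb.h, the comments are only my assumption
about the single-core behaviour, not verified):

/* sys/netinet/in_pcb.h (stable/11): INP_WLOCK is a rwlock, not a spin lock */
#define	INP_WLOCK(inp)		rw_wlock(&(inp)->inp_lock)
#define	INP_WUNLOCK(inp)	rw_wunlock(&(inp)->inp_lock)

/*
 * Assumed behaviour when both threads share one CPU:
 *
 * thread1: INP_WLOCK(inp);     owns the write lock
 *          --interrupt/preempt--
 * thread2: INP_WLOCK(inp);     blocks and sleeps, scheduler runs thread1 again
 * thread1: INP_WUNLOCK(inp);   thread2 is woken up and acquires the lock
 */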
> > As I remember, the race is created by a call to tcp_twstart() at the
> > end of tcp_close(), on the path sofree()->tcp_usr_detach(), with an
> > unexpected INP_TIMEWAIT state in tcp_usr_detach().  INP_TIMEWAIT is
> > set in tcp_twstart().
> 
>  Exactly, thus the current fix is:  if you already have the INP_DROPPED
> flag set you are not allowed to call tcp_twstart(); actually it is a
> good candidate for a new INVARIANT.  Let me add that.
> 
> > After checking the source code I found invocations of tcp_twstart() in
> > sys/netinet/tcp_stacks/fastpath.c, sys/netinet/tcp_input.c,
> > sys/dev/cxgb/ulp/tom/cxgb_cpl_io.c and sys/dev/cxgbe/tom/t4_cpl_io.c.
> > 
> > The invocations from sys/netinet/tcp_stacks/fastpath.c and
> > sys/netinet/tcp_input.c are guarded by INP_WLOCK in tcp_input(), and
> > will now be OK.
> > 
> > The invocations from sys/dev/cxgb/ulp/tom/cxgb_cpl_io.c and
> > sys/dev/cxgbe/tom/t4_cpl_io.c are not clear to me; I see an
> > independent INP_WLOCK.  Is this OK?
> > 
> > Can it be that thread A runs do_peer_close() directly from the chelsio
> > IRQ handler, bypassing tcp_input()?
> 
>  If you look carefully, INP_WLOCK is used in cxgb_cpl_io.c and
> t4_cpl_io.c before calling tcp_twstart().

Yes, and remember:

sys/netinet/tcp_subr.c

1535 struct tcpcb *
1536 tcp_close(struct tcpcb *tp)
1537 {
...
1569         INP_WUNLOCK(inp);
1570         ACCEPT_LOCK();
1571         SOCK_LOCK(so);
1572         so->so_state &= ~SS_PROTOREF;
1573         sofree(so);
1574         return (NULL);

sofree() calls tcp_usr_detach(), and in tcp_usr_detach() we get an
unexpected INP_TIMEWAIT.
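For completeness, the INVARIANT Julien mentions would presumably be
something along these lines near the top of tcp_twstart() in
sys/netinet/tcp_timewait.c (just a sketch of the idea, not the committed
change):

void
tcp_twstart(struct tcpcb *tp)
{
	struct inpcb *inp = tp->t_inpcb;
	...
	INP_WLOCK_ASSERT(inp);

	/*
	 * A caller that has already dropped the inp (INP_DROPPED set,
	 * inp removed from the TCP hash table) must not move it to
	 * TIME_WAIT.
	 */
	KASSERT((inp->inp_flags & INP_DROPPED) == 0,
	    ("tcp_twstart: inp %p already INP_DROPPED", inp));
	...
}

With INVARIANTS enabled this would panic at the bad call site instead of
leaving a stuck connection, which should make the remaining offending
paths (e.g. the TOE drivers) much easier to find.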