Date: Fri, 30 May 2014 12:12:21 +0800
From: k simon <chio1990@gmail.com>
To: freebsd-net@freebsd.org
Subject: Re: TCP stack lock contention with short-lived connections
Message-ID: <53880525.6000203@gmail.com>
In-Reply-To: <53861209.2000306@verisign.com>
References: <op.w51mxed6ak5tgc@fri2jcharbon-m1.local> <op.w56mamc0ak5tgc@dul1rjacobso-l3.vcorp.ad.vrsn.com> <len481$sfv$2@ger.gmane.org> <537F39DF.1090900@verisign.com> <537FB51D.2060401@verisign.com> <53861209.2000306@verisign.com>
Hi,

Is there any plan to commit this and MFC it to 10-stable?

Regards,
Simon

On 14-5-29 0:42, Julien Charbon wrote:
>
>  Hi,
>
> On 23/05/14 22:52, Julien Charbon wrote:
>> On 23/05/14 14:06, Julien Charbon wrote:
>>> On 27/02/14 11:32, Julien Charbon wrote:
>>>> On 07/11/13 14:55, Julien Charbon wrote:
>>>>> On Mon, 04 Nov 2013 22:21:04 +0100, Julien Charbon
>>>>> <jcharbon@verisign.com> wrote:
>>>>>> I have put technical and how-to-repeat details in the PR below:
>>>>>>
>>>>>> kern/183659: TCP stack lock contention with short-lived connections
>>>>>> http://www.freebsd.org/cgi/query-pr.cgi?pr=183659
>>>>>>
>>>>>> We are currently working on this performance improvement effort; it
>>>>>> will impact only the TCP locking strategy, not the TCP stack logic
>>>>>> itself. We will share on freebsd-net the patches we made for review
>>>>>> and improvement proposals; in any case, this change might also
>>>>>> require enough eyeballs to avoid introducing tricky race conditions
>>>>>> into the TCP stack.
>>
>>  Attached are the two cumulative patches (tcp-scale-inp-list-v1.patch
>> and tcp-scale-pcbinfo-rlock-v1.patch) we discussed the most at
>> BSDCan 2014.
>
>  At BSDCan 2014 we were also asked to provide flame graphs [1][2] to
> highlight the impact of these TCP changes. The DTrace sampling was done
> on a core bound to a NIC receive queue IRQ.
>
>  o First, a CPU flame graph on 10.0-RELENG at 40k TCP connections/sec:
>
> https://googledrive.com/host/0BwwgoN552srvQi1JWG42TklfQ28/releng10-40k.html
>
>  Note:
>
>  - The __rw_wlock_hard contention on ipi_lock is clear, as usual.
>
>  o Second, the same test with all our patches applied (thus from the
> 10.0-next branch [3]):
>
> https://googledrive.com/host/0BwwgoN552srvQi1JWG42TklfQ28/tcp-scale-40k.html
>
>  Note:
>
>  - Almost all of the __rw_wlock_hard contention on ipi_lock is converted
> into idle time.
>
>  o Third, still on the 10.0-next branch, the flame graph when doubling
> the rate to 80k TCP connections/sec:
>
> https://googledrive.com/host/0BwwgoN552srvQi1JWG42TklfQ28/tcp-scale-80k.html
>
>  My 2 cents.
>
> [1] http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
> [2] https://wiki.freebsd.org/201405DevSummit/NetworkStack
> [3] https://github.com/verisign/freebsd/commits/share/10.0-next
>
> --
> Julien
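
For anyone wanting to reproduce this kind of measurement, the flame graphs
above are built from DTrace kernel stack samples. The D script below is a
minimal sketch following the standard CPU flame graph recipe from [1], not
the exact script used for the graphs above; the sampling rate (997 Hz),
duration (60 s), and the idea of restricting sampling to the single CPU the
NIC receive queue IRQ is bound to (passed as macro argument $1) are
assumptions for illustration.

  /* sample_kstacks.d -- sample on-CPU kernel stacks on one CPU.
   * Illustrative sketch only, not the script used for the graphs above.
   * Usage (CPU id as macro argument $1):
   *   dtrace -s sample_kstacks.d 2 -o out.stacks
   */
  #pragma D option stackframes=100

  profile-997
  /arg0 && cpu == $1/   /* arg0 != 0: kernel was interrupted; cpu == $1: chosen core only */
  {
          @[stack()] = count();
  }

  tick-60s
  {
          exit(0);
  }

The aggregated output in out.stacks can then be folded and rendered with
stackcollapse.pl and flamegraph.pl from the FlameGraph tools linked in [1],
e.g. "stackcollapse.pl out.stacks | flamegraph.pl > cpu2.svg".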
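To see which code paths actually fall into the rw write-lock slow path
(__rw_wlock_hard) before and after the patches, a quick aggregation on the
DTrace fbt provider can complement the flame graphs. This is only an
illustrative sketch, assuming the fbt provider is loaded; it counts call
paths rather than identifying the specific lock (such as ipi_lock) being
contended.

  /* rwlock_hard.d -- count kernel call paths entering the rw
   * write-lock slow path; illustrative sketch only.
   * Usage: dtrace -s rwlock_hard.d
   */
  #pragma D option stackframes=100

  fbt::__rw_wlock_hard:entry
  {
          @slowpath[stack()] = count();
  }

  tick-10s
  {
          /* keep only the 10 hottest call paths, then print and exit */
          trunc(@slowpath, 10);
          exit(0);
  }

The same structure with an :entry timestamp and a matching :return probe
would report time spent in the slow path instead of a simple call count.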