Date: Wed, 3 Feb 2016 13:34:03 -0800
From: Adrian Chadd <adrian.chadd@gmail.com>
To: "Meyer, Wolfgang" <wolfgang.meyer@hob.de>
Cc: "freebsd-net@FreeBSD.org" <freebsd-net@freebsd.org>,
    "freebsd-performance@FreeBSD.org" <freebsd-performance@freebsd.org>
Subject: Re: ixgbe: Network performance tuning (#TCP connections)
Message-ID: <CAJ-Vmo=Jy1wDqb-7cPt=9DC8Rfb9o8d0APwqpG-NH-vBKg8prQ@mail.gmail.com>
In-Reply-To: <EC88118611AE564AB0B10C6A4569004D0137D57AEB@HOBEX11.hob.de>
References: <EC88118611AE564AB0B10C6A4569004D0137D57AEB@HOBEX11.hob.de>
hi,

can you share your testing program source?


-a


On 3 February 2016 at 05:37, Meyer, Wolfgang <wolfgang.meyer@hob.de> wrote:
> Hello,
>
> we are evaluating network performance on a Dell server (PowerEdge R930 with 4 sockets, hw.model: Intel(R) Xeon(R) CPU E7-8891 v3 @ 2.80GHz) with 10 GbE cards. We use programs in which the server side accepts connections on an IP address + port from the client side; after the connection is established, data is sent in turns between server and client in a predefined pattern (the server side sends more data than the client side), with sleeps between the send phases. The test set-up is chosen such that every client process initiates 500 connections handled in threads, and on the server side each process, representing one IP/port pair, likewise handles 500 connections in threads.
>
> The number of connections is then increased and the overall network throughput is observed using nload. On FreeBSD (on the server side), errors begin to occur at roughly 50,000 connections and the overall throughput won't increase further with more connections. With Linux on the server side it is possible to establish more than 120,000 connections, and at 50,000 connections the overall throughput is double that of FreeBSD with the same sending pattern. Furthermore, system load on FreeBSD is much higher, with 50 % system usage on each core and 80 % interrupt usage on the 8 cores handling the interrupt queues for the NIC. In comparison, at 50,000 connections Linux shows <10 % system usage, <10 % user usage and about 15 % interrupt usage on the 16 cores handling the network interrupts.
>
> Varying the number of NIC interrupt queues doesn't change the performance (if anything it worsens the situation). Disabling Hyper-Threading (utilising 40 cores) degrades the performance. Increasing MAXCPU to utilise all 80 cores brings no improvement over 64 cores; atkbd and uart had to be disabled to avoid kernel panics with the increased MAXCPU (thanks to Andre Oppermann for investigating this). Initially the tests were made on 10.2-RELEASE; later I switched to 10-STABLE (later with ixgbe driver version 3.1.0), but that didn't change the numbers.
>
> Some sysctl tunables were modified along the lines of the network performance guidelines found on the net (e.g. https://calomel.org/freebsd_network_tuning.html, https://www.freebsd.org/doc/handbook/configtuning-kernel-limits.html, https://pleiades.ucsc.edu/hyades/FreeBSD_Network_Tuning), but most of them didn't have any measurable impact. The final sysctl.conf and loader.conf settings are below. In fact the only tunables that provided any improvement were hw.ix.txd and hw.ix.rxd, which were reduced (!) to the minimum value of 64, and hw.ix.tx_process_limit and hw.ix.rx_process_limit, which were set to -1.
>
> Any ideas what tunables might be changed to get a higher number of TCP connections? (It's not a question of overall throughput, as changing the sending pattern allows me to fully utilise the 10 Gb bandwidth.) How can I determine where the kernel is spending its time that causes the high CPU load? Any pointers are highly appreciated; I can't believe that there is such a blatant difference in network performance compared to Linux.
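>
> For illustration, the server side is essentially a thread-per-connection loop along the following lines. This is a simplified sketch, not our actual test program; the port, buffer sizes and sleep interval are placeholders, and error handling is cut to the minimum:
>
> #include <arpa/inet.h>
> #include <netinet/in.h>
> #include <pthread.h>
> #include <stdint.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <sys/socket.h>
> #include <unistd.h>
>
> #define SEND_SZ  4096      /* server sends more ...        */
> #define RECV_SZ  512       /* ... than the client replies  */
> #define SLEEP_US 100000    /* pause between send phases    */
>
> static void *conn_loop(void *arg)
> {
>     int fd = (int)(intptr_t)arg;
>     char out[SEND_SZ], in[RECV_SZ];
>
>     memset(out, 'x', sizeof(out));
>     for (;;) {
>         if (write(fd, out, sizeof(out)) <= 0)   /* server burst */
>             break;
>         if (read(fd, in, sizeof(in)) <= 0)      /* client reply */
>             break;
>         usleep(SLEEP_US);
>     }
>     close(fd);
>     return (NULL);
> }
>
> int main(int argc, char **argv)
> {
>     int lfd = socket(AF_INET, SOCK_STREAM, 0);
>     int one = 1;
>     struct sockaddr_in sa;
>
>     setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
>     memset(&sa, 0, sizeof(sa));
>     sa.sin_family = AF_INET;
>     sa.sin_addr.s_addr = htonl(INADDR_ANY);
>     sa.sin_port = htons(argc > 1 ? atoi(argv[1]) : 12345);
>     if (bind(lfd, (struct sockaddr *)&sa, sizeof(sa)) != 0 ||
>         listen(lfd, 1024) != 0) {
>         perror("bind/listen");
>         return (1);
>     }
>     for (;;) {              /* one detached thread per accepted connection */
>         int fd = accept(lfd, NULL, NULL);
>         pthread_t t;
>
>         if (fd < 0)
>             continue;
>         if (pthread_create(&t, NULL, conn_loop, (void *)(intptr_t)fd) == 0)
>             pthread_detach(t);
>     }
> }
>
> Each server process runs one such listener for its IP/port pair; the client side mirrors this with connect() instead of accept(), 500 threads per process.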
>
> Regards,
> Wolfgang
>
> <loader.conf>:
> cc_htcp_load="YES"
> hw.ix.txd="64"
> hw.ix.rxd="64"
> hw.ix.tx_process_limit="-1"
> hw.ix.rx_process_limit="-1"
> hw.ix.num_queues="8"
> #hw.ix.enable_aim="0"
> #hw.ix.max_interrupt_rate="31250"
>
> #net.isr.maxthreads="16"
>
> <sysctl.conf>:
> kern.ipc.soacceptqueue=1024
>
> kern.ipc.maxsockbuf=16777216
> net.inet.tcp.sendbuf_max=16777216
> net.inet.tcp.recvbuf_max=16777216
>
> net.inet.tcp.tso=0
> net.inet.tcp.mssdflt=1460
> net.inet.tcp.minmss=1300
>
> net.inet.tcp.nolocaltimewait=1
> net.inet.tcp.syncache.rexmtlimit=0
>
> #net.inet.tcp.syncookies=0
> net.inet.tcp.drop_synfin=1
> net.inet.tcp.fast_finwait2_recycle=1
>
> net.inet.tcp.icmp_may_rst=0
> net.inet.tcp.msl=5000
> net.inet.tcp.path_mtu_discovery=0
> net.inet.tcp.blackhole=1
> net.inet.udp.blackhole=1
>
> net.inet.tcp.cc.algorithm=htcp
> net.inet.tcp.cc.htcp.adaptive_backoff=1
> net.inet.tcp.cc.htcp.rtt_scaling=1
>
> net.inet.ip.forwarding=1
> net.inet.ip.fastforwarding=1
> net.inet.ip.rtexpire=1
> net.inet.ip.rtminexpire=1
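For the "where is the kernel spending its time" question: a flat kernel profile is the usual first step. Assuming hwpmc(4) supports that CPU, something like

  kldload hwpmc
  pmcstat -TS instructions -w 1

gives a top(1)-style view of the hot kernel functions, and the DTrace profile provider can aggregate whole kernel stacks:

  dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }'

That should at least tell us whether the cycles go into locking, the socket layer or the driver.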