Date: Wed, 3 Feb 2016 13:37:21 +0000
From: "Meyer, Wolfgang" <wolfgang.meyer@hob.de>
To: "'freebsd-net@FreeBSD.org'" <freebsd-net@FreeBSD.org>
Cc: "'freebsd-performance@FreeBSD.org'" <freebsd-performance@FreeBSD.org>
Subject: ixgbe: Network performance tuning (#TCP connections)
Message-ID: <EC88118611AE564AB0B10C6A4569004D0137D57AEB@HOBEX11.hob.de>
Hello,

we are evaluating network performance on a Dell server (PowerEdge R930 with 4 sockets, hw.model: Intel(R) Xeon(R) CPU E7-8891 v3 @ 2.80GHz) with 10 GbE cards. We use programs in which the server side accepts connections on an IP address/port from the client side; after the connection is established, data is sent in turns between server and client in a predefined pattern (the server side sends more data than the client side), with sleeps between the send phases. The test set-up is chosen such that every client process initiates 500 connections handled in threads, and on the server side each process, representing one IP/port pair, likewise handles 500 connections in threads.

The number of connections is then increased and the overall network throughput is observed using nload. On FreeBSD (on the server side), errors begin to occur at roughly 50,000 connections and the overall throughput won't increase further with more connections. With Linux on the server side it is possible to establish more than 120,000 connections, and at 50,000 connections the overall throughput is double that of FreeBSD with the same sending pattern. Furthermore, system load on FreeBSD is much higher: 50 % system usage on each core and 80 % interrupt usage on the 8 cores handling the interrupt queues for the NIC. In comparison, Linux shows <10 % system usage, <10 % user usage, and about 15 % interrupt usage on the 16 cores handling the network interrupts for 50,000 connections.

Varying the number of NIC interrupt queues doesn't change the performance (if anything, it worsens the situation). Disabling Hyper-Threading (utilising 40 cores) degrades the performance. Increasing MAXCPU to utilise all 80 cores brings no improvement over 64 cores; atkbd and uart had to be disabled to avoid kernel panics with increased MAXCPU (thanks to Andre Oppermann for investigating this).
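For reference, the send/sleep pattern described above looks roughly like the following sketch (this is an illustrative reconstruction, not our actual test program; chunk sizes, port, round count and sleep time are made-up placeholders):

```python
# Hypothetical sketch of the test pattern: per-connection server threads
# exchange data in turns with a client (server sends more than the client),
# sleeping between send phases. All sizes/ports are placeholders.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 47000   # placeholder IP/port pair
SERVER_CHUNK = 4096               # server side sends more data ...
CLIENT_CHUNK = 512                # ... than the client side
ROUNDS = 3
SLEEP = 0.01                      # pause between send phases

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def handle(conn):
    """One server thread per connection, as in the described set-up."""
    with conn:
        for _ in range(ROUNDS):
            conn.sendall(b"S" * SERVER_CHUNK)   # server's larger send
            recv_exact(conn, CLIENT_CHUNK)      # wait for client's turn
            time.sleep(SLEEP)

def server(ready, nconn):
    with socket.socket() as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((HOST, PORT))
        s.listen(1024)
        ready.set()
        threads = []
        for _ in range(nconn):
            conn, _addr = s.accept()
            t = threading.Thread(target=handle, args=(conn,))
            t.start()
            threads.append(t)
        for t in threads:
            t.join()

def client():
    """Returns total bytes received from the server."""
    with socket.create_connection((HOST, PORT)) as c:
        total = 0
        for _ in range(ROUNDS):
            total += len(recv_exact(c, SERVER_CHUNK))
            c.sendall(b"c" * CLIENT_CHUNK)
        return total

if __name__ == "__main__":
    ready = threading.Event()
    srv = threading.Thread(target=server, args=(ready, 2))
    srv.start()
    ready.wait()
    print(client())   # 12288 (= ROUNDS * SERVER_CHUNK)
    print(client())
    srv.join()
```

In the real tests each client process runs 500 such connections concurrently and the sizes are larger; the sketch only shows the turn-taking shape of the traffic.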
Initially the tests were made on 10.2-RELEASE; later I switched to 10-STABLE (later with ixgbe driver version 3.1.0), but that didn't change the numbers.

Some sysctl configurables were modified along the network performance guidelines found on the net (e.g. https://calomel.org/freebsd_network_tuning.html, https://www.freebsd.org/doc/handbook/configtuning-kernel-limits.html, https://pleiades.ucsc.edu/hyades/FreeBSD_Network_Tuning), but most of them didn't have any measurable impact. See below for the final sysctl.conf and loader.conf settings. Actually, the only tunables identified as providing any improvement were hw.ix.txd and hw.ix.rxd, which were reduced (!) to the minimum value of 64, and hw.ix.tx_process_limit and hw.ix.rx_process_limit, which were set to -1.

Any ideas what tunables might be changed to get a higher number of TCP connections? (It's not a question of overall throughput, as changing the sending pattern allows me to fully utilise the 10 Gb bandwidth.) How can I determine where the kernel is spending its time that causes the high CPU load? Any pointers are highly appreciated; I can't believe that there is such a blatant difference in network performance compared to Linux.
Regards,
Wolfgang

<loader.conf>:
cc_htcp_load="YES"
hw.ix.txd="64"
hw.ix.rxd="64"
hw.ix.tx_process_limit="-1"
hw.ix.rx_process_limit="-1"
hw.ix.num_queues="8"
#hw.ix.enable_aim="0"
#hw.ix.max_interrupt_rate="31250"
#net.isr.maxthreads="16"

<sysctl.conf>:
kern.ipc.soacceptqueue=1024
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.tso=0
net.inet.tcp.mssdflt=1460
net.inet.tcp.minmss=1300
net.inet.tcp.nolocaltimewait=1
net.inet.tcp.syncache.rexmtlimit=0
#net.inet.tcp.syncookies=0
net.inet.tcp.drop_synfin=1
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.icmp_may_rst=0
net.inet.tcp.msl=5000
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.blackhole=1
net.inet.udp.blackhole=1
net.inet.tcp.cc.algorithm=htcp
net.inet.tcp.cc.htcp.adaptive_backoff=1
net.inet.tcp.cc.htcp.rtt_scaling=1
net.inet.ip.forwarding=1
net.inet.ip.fastforwarding=1
net.inet.ip.rtexpire=1
net.inet.ip.rtminexpire=1