From owner-freebsd-net@freebsd.org Wed Feb 10 14:18:16 2016
Date: Thu, 11 Feb 2016 01:18:14 +1100
Subject: Slow performance in high latency situation on FreeNAS / FreeBSD 9
From: Adam Baxter <voltagex@voltagex.org>
To: freebsd-net@freebsd.org
List-Id: Networking and TCP/IP with FreeBSD

Hi all,

I've got a new FreeNAS 9.3 box which is getting very, very slow transfers once the latency to the remote host goes over 200ms.

The system is based on a SuperMicro A1SRi-2758F board - see http://www.supermicro.com/products/motherboard/Atom/X10/A1SRi-2758F.cfm. FreeNAS boots fine on it once you tell it to load the xhci driver on boot.

uname -a says:

FreeBSD freenas.local 9.3-RELEASE-p31 FreeBSD 9.3-RELEASE-p31 #0 r288272+33bb475: Wed Feb 3 02:19:35 PST 2016 root@build3.ixsystems.com:/tank/home/stable-builds/FN/objs/os-base/amd64/tank/home/stable-builds/FN/FreeBSD/src/sys/FREENAS.amd64 amd64

The network card is new to me; apparently it's an Intel i354 / C2000 integrated part - there are 4 gigabit ports on the back of the machine and a 5th for IPMI.

I realise I'm limiting myself by staying on an OS based on FreeBSD 9, but I don't feel confident enough with FreeBSD to jump to 10 yet. Please let me know if I've missed any critical information.

The iperf server was a Linux VM on a Windows host, using a VirtualBox bridged interface to a Broadcom NIC.
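For reference, the simulated latency in the tests below was added on the Linux VM with tc qdisc. Roughly like this - netem and the eth0 interface name are my best reconstruction of the invocation, adjust to your setup:

```shell
# Add 300 ms of delay on the VM's NIC (run as root; "eth0" is an example)
tc qdisc add dev eth0 root netem delay 300ms

# Check that the qdisc is in place
tc qdisc show dev eth0

# Remove the delay again when done testing
tc qdisc del dev eth0 root
```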
0.5ms latency - standard LAN transfer to FreeNAS:

[  3] local 10.1.1.2 port 40116 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.01 GBytes   871 Mbits/sec

Which looks fine-ish.

The problem occurs when I crank up the latency (using tc qdisc on the VM). This matches the transfer rates I see from remote hosts once the latency hits 200-300ms (common for Australia->UK).

300ms simulated latency - Linux VM -> FreeNAS:

[voltagex@freenas ~]$ iperf -c 10.1.1.111
------------------------------------------------------------
Client connecting to 10.1.1.111, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 10.1.1.2 port 33023 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.3 sec  3.75 MBytes  3.06 Mbits/sec

Whereas Linux VM -> Linux VM fares quite a lot better, even with the added latency:

voltagex@devbox:~$ iperf -c 10.1.1.111
------------------------------------------------------------
Client connecting to 10.1.1.111, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.1.112 port 51790 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  23.5 MBytes  19.6 Mbits/sec

Cranking up the window size on FreeNAS/FreeBSD doesn't seem to help, either:
[voltagex@freenas ~]$ iperf -w 85k -c 10.1.1.111
------------------------------------------------------------
Client connecting to 10.1.1.111, TCP port 5001
TCP window size: 86.3 KByte (WARNING: requested 85.0 KByte)
------------------------------------------------------------
[  3] local 10.1.1.2 port 15033 connected with 10.1.1.111 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.6 sec  2.38 MBytes  1.88 Mbits/sec

I also tried booting the machine from a LiveCD of Ubuntu 15.10 - the numbers are what you'd expect, except when capturing the no-latency test with tshark, the throughput dropped to around 200 megabits.

With simulated latency:

ubuntu@ubuntu:~$ iperf -c 10.1.1.115
------------------------------------------------------------
Client connecting to 10.1.1.115, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.1.1.2 port 56184 connected with 10.1.1.115 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.4 sec  16.0 MBytes  12.9 Mbits/sec

Without simulated latency + tshark running:

[  3] local 10.1.1.2 port 56192 connected with 10.1.1.115 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   250 MBytes   209 Mbits/sec

"Normal" throughput in Ubuntu is about 730 megabits.
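For what it's worth, these numbers look consistent with the bandwidth-delay product: at 300ms RTT an ~85 KByte window can't carry much more than ~2.3 Mbit/s, which is about what I'm seeing, and filling a gigabit pipe at that RTT would need a window of roughly 37 MB. A quick back-of-envelope check (my arithmetic, not anything from the captures):

```shell
# Bandwidth-delay product: window needed to fill 1 Gbit/s at 300 ms RTT,
# and the throughput ceiling imposed by an 85 KByte window at that RTT.
awk 'BEGIN {
  bw_bits = 1e9           # 1 Gbit/s link
  rtt     = 0.3           # 300 ms round trip
  win     = 85 * 1024     # 85 KByte window, in bytes
  printf "window needed to fill the pipe: %.1f MB\n", bw_bits * rtt / 8 / 1e6
  printf "85 KByte window caps throughput at: %.1f Mbit/s\n", win * 8 / rtt / 1e6
}'
```

That ceiling of ~2.3 Mbit/s lines up with the 1.88-2.38 Mbit/s I measured with -w 85k above.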
The info I can pull from the card itself:

igb0: port 0xe0c0-0xe0df mem 0xdf260000-0xdf27ffff,0xdf30c000-0xdf30ffff irq 20 at device 20.0 on pci0
igb0: Using MSIX interrupts with 9 vectors
igb0: Ethernet address: 0c:c4:7a:6b:bf:34
igb0: Bound queue 0 to cpu 0
igb0: Bound queue 1 to cpu 1
igb0: Bound queue 2 to cpu 2
igb0: Bound queue 3 to cpu 3
igb0: Bound queue 4 to cpu 4
igb0: Bound queue 5 to cpu 5
igb0: Bound queue 6 to cpu 6
igb0: Bound queue 7 to cpu 7
igb0: promiscuous mode enabled
igb0: link state changed to DOWN
igb0: link state changed to UP

igb0@pci0:0:20:0: class=0x020000 card=0x1f4115d9 chip=0x1f418086 rev=0x03 hdr=0x00
    vendor   = 'Intel Corporation'
    class    = network
    subclass = ethernet

I am not running pf (yet), and running 'ifconfig igb0 -tso' seemed to have no impact. I have not yet had a chance to try FreeBSD 10 in live mode.

Packet captures are available at http://static.voltagex.org/freebsd-troubleshooting/iperf.tar.xz in pcapng format (unpacks to about 750MB, sorry!)

Thanks in advance,
Adam
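P.S. The next thing I plan to try is raising the FreeBSD TCP socket buffer ceilings so autotuning can grow the window toward the ~37 MB the path needs. A sketch of the sysctls involved - the values here are illustrative guesses, not something I've tested on this box yet:

```shell
# Raise the hard ceiling on socket buffer size (run as root)
sysctl kern.ipc.maxsockbuf=16777216

# Let TCP buffer autotuning grow send/receive buffers further
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216

# Confirm window scaling and autotuning are enabled (the defaults)
sysctl net.inet.tcp.rfc1323=1
sysctl net.inet.tcp.sendbuf_auto=1
sysctl net.inet.tcp.recvbuf_auto=1
```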