From: Outback Dingo <outbackdingo@gmail.com>
To: net@freebsd.org
Date: Wed, 3 Jul 2013 00:28:58 -0400
Subject: Terrible ix performance

I've got a high-end storage server here, and iperf shows decent network I/O:

iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
------------------------------------------------------------
Client connecting to 10.0.96.1, TCP port 5001
TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
------------------------------------------------------------
[  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
[  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
[  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec

The card is connected with a 3-meter Cisco twinax cable through a Fujitsu switch. We have tweaked various networking and kernel sysctls, but over sftp and NFS I can't get better than 100 MB/s out of a zpool with 8 mirrored vdevs. An identical box with a 1-meter Cisco twinax cable reads at 1.4 Gb/s and writes at 2.4 Gb/s. Does anyone have an idea of what the bottleneck could be?

This is a shared storage array with dual LSI controllers connected to 32 drives via an enclosure, and local dd and other tests show the zpool performs quite well. However, as soon as we introduce any protocol (sftp, Samba, NFS), performance plummets. I'm quite puzzled and have run out of ideas, so now curiosity has me wondering: the machine is loading the ix driver and it works, just not at full speed. Is it possible it should be using the ixgbe driver instead?
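For reference, the sysctl tuning we have been experimenting with is along these general lines. The values below are illustrative 10GbE buffer-size settings, not necessarily the exact ones currently on this box:

# /etc/sysctl.conf fragment (illustrative buffer tuning, not the exact production values)
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144

pciconf -lv identifies the two ports as: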
ix0@pci0:2:0:0: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
ix1@pci0:2:0:1: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
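In case it helps with diagnosis: my understanding is that ix is simply the interface name used by the ixgbe(4) driver, but I'd welcome confirmation. This is how I've been checking which driver the interfaces are attached to (standard commands only, nothing box-specific):

# description reported by the driver that attached to the device
sysctl dev.ix.0.%desc
# check whether the ixgbe module is loaded or compiled into the kernel
kldstat -v | grep -i ixgbe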