From owner-freebsd-questions@FreeBSD.ORG Tue Jul 2 15:04:04 2013
Date: Tue, 2 Jul 2013 11:04:03 -0400
Subject: Terrible ix performance
From: Outback Dingo <outbackdingo@gmail.com>
To: freebsd-questions@freebsd.org

I've got a high-end storage server here, and iperf shows decent network I/O:

iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
------------------------------------------------------------
Client connecting to 10.0.96.1, TCP port 5001
TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
------------------------------------------------------------
[  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
[  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
[  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec

The card has a 3-meter Cisco twinax cable connected to it, going through a
Fujitsu switch. We have tweaked various networking and kernel sysctls, but
over sftp and NFS I can't get better than 100 MB/s from a zpool with 8
mirrored vdevs. We also have an identical box with a 1-meter Cisco twinax
cable that writes at 2.4 Gb/s but reads at only 1.4 Gb/s... does anyone have
an idea of what the bottleneck could be?

This is a shared storage array with dual LSI controllers connected to 32
drives via an enclosure. Local dd and other tests show the zpool performs
quite well; however, as soon as we introduce any type of protocol (sftp,
Samba, NFS), performance plummets. I'm quite puzzled and have run out of
ideas.
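For reference, the sort of sysctl values we've been experimenting with look
like this (the numbers are illustrative, taken from common 10GbE tuning
guides rather than exactly what's on the box right now):

    # /etc/sysctl.conf -- illustrative 10GbE tuning values, not gospel
    kern.ipc.maxsockbuf=16777216        # allow socket buffers up to 16 MB
    net.inet.tcp.sendbuf_max=16777216   # ceiling for TCP send buffer autotuning
    net.inet.tcp.recvbuf_max=16777216   # ceiling for TCP receive buffer autotuning
    net.inet.tcp.sendbuf_auto=1         # let the stack grow send buffers on demand
    net.inet.tcp.recvbuf_auto=1         # likewise for receive buffers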
ix0@pci0:2:0:0: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
ix1@pci0:2:0:1: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
    class      = network
    subclass   = ethernet
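For completeness, the local dd test showing the pool performs well was along
these lines (the pool mount point and sizes here are made up for the example):

    dd if=/dev/zero of=/tank/ddtest bs=1m count=32768   # ~32 GB sequential write
    dd if=/tank/ddtest of=/dev/null bs=1m               # sequential read back

One caveat: if compression is enabled on the dataset, writing and reading
zeros will overstate real throughput, so treat those local numbers with some
skepticism.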