From owner-freebsd-questions@FreeBSD.ORG Wed Jul 3 04:20:05 2013
From: Outback Dingo <outbackdingo@gmail.com>
To: freebsd-questions@freebsd.org
Date: Wed, 3 Jul 2013 00:20:04 -0400
Subject: Re: Terrible ix performance
List-Id: User questions

On Tue, Jul 2, 2013 at 11:04 AM, Outback Dingo wrote:

> I've got a high-end storage server here; iperf shows
> decent network I/O:
>
> iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
> ------------------------------------------------------------
> Client connecting to 10.0.96.1, TCP port 5001
> TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
> ------------------------------------------------------------
> [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
> [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
> [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
>
> The card has a 3 meter Cisco twinax cable connected to it, going
> through a Fujitsu switch. We have tweaked various networking and
> kernel sysctls; however, from an sftp or NFS session I can't get
> better than 100 MB/s from a zpool with 8 mirrored vdevs. We also
> have an identical box, with a 1 meter Cisco twinax cable, that
> writes at 2.4 Gb/s but reads at only 1.4 Gb/s...
>
> Does anyone have an idea of what the bottleneck could be? This is a
> shared storage array with dual LSI controllers connected to 32 drives
> via an enclosure; local dd and other tests show the zpool performs
> quite well. However, as soon as we introduce any type of protocol
> (sftp, Samba, NFS), performance plummets. I'm quite puzzled and have
> run out of ideas.
>
> ix0@pci0:2:0:0: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
>     vendor   = 'Intel Corporation'
>     device   = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
>     class    = network
>     subclass = ethernet
> ix1@pci0:2:0:1: class=0x020000 card=0x000c8086 chip=0x10fb8086 rev=0x01 hdr=0x00
>     vendor   = 'Intel Corporation'
>     device   = '82599EB 10-Gigabit SFI/SFP+ Network Connection'
>     class    = network
>     subclass = ethernet

Okay, so now curiosity has me... it's loading the ix driver and
working, but not up to speed. Is it feasible that it should be using
the ixgbe driver?