From: Stephen Sanders <ssanders@softhammer.net>
To: freebsd-performance@freebsd.org
Date: Wed, 21 Apr 2010 10:32:58 -0400
Subject: FreeBSD 8.0 ixgbe Poor Performance

I am running speed tests on a pair of systems equipped with Intel 10 Gbps cards and am getting poor performance. iperf and tcpdump testing indicates that the cards top out at roughly 2.5 Gbps transmit/receive.

My attempts at fiddling with netisr, polling, and varying the buffer sizes have been fruitless. I'm sure there is something I'm missing, so I'm hoping for suggestions.

The two systems are connected head to head via a crossover cable, and both have the same hardware configuration.
The hardware is as follows:

2x Intel E5430 (quad core) @ 2.66 GHz
Intel S5000PAL motherboard
16 GB memory

My iperf command line for the client is:

iperf -t 10 -c 169.0.0.1 -w 2.5M -l 2.5M

My tcpdump test command lines are:

tcpdump -i ix0 -w /dev/null
tcpreplay -i ix0 -t -l 0 -K ./test.pcap

Thanks

From: Brandon Gooch <jamesbrandongooch@gmail.com>
To: Stephen Sanders
Cc: freebsd-performance@freebsd.org
Date: Wed, 21 Apr 2010 10:04:09 -0500
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

On Wed, Apr 21, 2010 at 9:32 AM, Stephen Sanders wrote:
> [...]

If you're running 8.0-RELEASE, you might try updating to 8-STABLE.
Jack Vogel recently committed updated Intel NIC driver code:

http://svn.freebsd.org/viewvc/base/stable/8/sys/dev/ixgbe/

-Brandon

From: Jack Vogel <jfvogel@gmail.com>
To: Brandon Gooch
Cc: freebsd-performance@freebsd.org, Stephen Sanders
Date: Wed, 21 Apr 2010 11:13:53 -0700
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

When you get into the 10G world, your performance will only be as good as your weakest link. What I mean is that if you connect to something that has less-than-stellar bus and/or memory performance, it is going to throttle everything.

Running back to back with two good systems, you should be able to get near line rate (the 9 Gbps range). Things that can affect that: a 64-bit kernel, TSO, LRO, and how many queues are in use come to mind. The default driver config should get you there, so tell me more about your hardware/OS config?

Jack

On Wed, Apr 21, 2010 at 8:04 AM, Brandon Gooch wrote:
> [...]
From: Stephen Sanders <ssanders@softhammer.net>
To: Jack Vogel
Cc: Brandon Gooch, freebsd-performance@freebsd.org
Date: Wed, 21 Apr 2010 15:52:35 -0400
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

I'd be most pleased to get near 9 Gbps.

I'm running FreeBSD 8.0 amd64 on both of the test hosts.
I've reset the configurations to the system defaults, as I was getting nowhere with sysctl and loader.conf settings.

The motherboards have been configured to use MSI interrupts. The S5000PAL has a BIOS setting for mapping MSI to old-style interrupts that confuses the driver's interrupt setup.

The 10 Gbps cards should be plugged into the x8 PCI-E slots on both hosts. I'm double-checking that claim right now and will get back later.

Thanks

On 4/21/2010 2:13 PM, Jack Vogel wrote:
> [...]

From: Jack Vogel <jfvogel@gmail.com>
To: Stephen Sanders
Cc: Brandon Gooch, freebsd-performance@freebsd.org
Date: Wed, 21 Apr 2010 13:53:21 -0700
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

Use my new driver and it will tell you what the slot speed is when it comes up, and if it's substandard it will SQUAWK loudly at you :)

I think the S5000PAL only has Gen1 PCIe slots, which is going to limit you somewhat. I would recommend a current-generation (X58 or 5520 chipset) system if you want the full benefit of 10G.

BTW, you don't say which adapter, 82598 or 82599, you are using?

Jack

On Wed, Apr 21, 2010 at 12:52 PM, Stephen Sanders wrote:
> [...]
From: Stephen Sanders <ssanders@softhammer.net>
To: Jack Vogel
Cc: Brandon Gooch, freebsd-performance@freebsd.org
Date: Wed, 21 Apr 2010 17:34:10 -0400
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

According to pciconf, the card is an "82598EB 10 Gigabit AF Dual Port Network Connection".

It looks to me like the card is plugged into a x4 PCIe slot. I'm sure this means we're not going to reach 10 Gbps, but I would imagine that we should get north of 5 Gbps.

Is there a URL to pick up the latest code from, or is the latest code the most recent STABLE check-in for ixgbe?

Thanks.

On 4/21/2010 4:53 PM, Jack Vogel wrote:
> [...]
From: Jia-Shiun Li <jiashiun@gmail.com>
To: Stephen Sanders
Cc: Brandon Gooch, freebsd-performance@freebsd.org, Jack Vogel
Date: Thu, 22 Apr 2010 18:30:44 +0800
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

On Thu, Apr 22, 2010 at 5:34 AM, Stephen Sanders wrote:
> It looks to me like the card is plugged into a x4 PCIe slot. I'm sure
> this means we're not going to reach 10 Gbps, but I would imagine that
> we should get north of 5 Gbps.

Some suggestions:

- 'pciconf -lc' can also be used to check the PCIe link width; look for the PCI-Express capability in the output.
- IIRC the kernel auto-tunes the TCP window size, so you should not need to set it manually.
- Set an MTU greater than the default 1500; it helps reduce CPU load.
- Do not use a file as the source or destination unless you are sure the disks and I/O paths are fast enough. 10 GbE means 1000+ MB/s, and only storage beasts can handle that. BTW, is FreeBSD able to cache and buffer it all in memory?
- Finally, use 'top' to make sure nothing is eating up all the CPU cycles.

Cheers,
Jia-Shiun.
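[Editor's note: the checklist above maps onto a handful of FreeBSD commands. The sketch below is illustrative only; the interface name ix0 and the 16 MB maxsockbuf value are taken from elsewhere in this thread, not general recommendations.]

```shell
# Check the negotiated PCIe link width: look for the PCI-Express
# capability line, e.g. "link x8(x8)" versus a degraded "link x4(x8)".
pciconf -lc | grep -B1 'PCI-Express'

# Raise the MTU to cut per-packet CPU cost (both ends must match).
ifconfig ix0 mtu 9000

# Make sure the TSO and LRO offloads are enabled on the interface.
ifconfig ix0 tso lro

# Allow larger socket buffers (the value tried later in this thread).
sysctl kern.ipc.maxsockbuf=16777216

# While a test runs, watch for a single CPU-bound thread.
top -SH
```

These commands touch a live system, so treat them as a checklist to adapt rather than a script to paste.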
From: Stephen Sanders <ssanders@softhammer.net>
To: Jack Vogel
Cc: Brandon Gooch, freebsd-performance@freebsd.org
Date: Thu, 22 Apr 2010 11:41:28 -0400
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

I believe that "pciconf -lvc" showed that the cards are in the correct slots. I'm not sure what all of the output means, but I'm guessing that

cap 10[a0] = PCI-Express 2 endpoint max data 128(256) link x8(x8)

means that the card is an 8-lane card and is using all 8 lanes.

Setting kern.ipc.maxsockbuf to 16777216 got a better result with iperf TCP testing: the rate went from ~2.5 Gbps to ~5.5 Gbps. Running iperf in UDP test mode is still yielding ~2.5 Gbps.
Running tcpreplay tests is also yielding ~2.5Gbps as well. Command lines for iperf testing are: ipref -t 10 -w 2.5m -l 2.5m -c 169.1.0.2 iperf -s -w 2.5m -B 169.1.0.2 iperf -t 10 -w 2.5m -c 169.1.0.2 -u iperf -s -w 2.5m -B 169.1.0.2 -u For the tcpdump test, I'm sending output to /dev/null and using the cache flag on tcpreplay in order to avoid limiting my network interface throughput to the disk speed. Commands lines for this test are: tcpdump -i ix1 -w /dev/null tcpreplay -i ix1 -t -l 0 -K ./rate.pcap Please forgive my lack of kernel building prowess but I'm guessing that the latest driver needs to be built in a FreeBSD STABLE tree. I ran into an undefined symbol "drbr_needs_enqueue" in the ixgbe code I downloaded. Thanks for all the help. On 4/21/2010 4:53 PM, Jack Vogel wrote: > Use my new driver and it will tell you when it comes up with the slot > speed is, > and if its substandard it will SQUAWK loudly at you :) > > I think the S5000PAL only has Gen1 PCIE slots which is going to limit you > somewhat. Would recommend a current generation (x58 or 5520 chipset) > system if you want the full benefit of 10G. > > BTW, you dont way what adapter, 82598 or 82599, you are using? > > Jack > > > On Wed, Apr 21, 2010 at 12:52 PM, Stephen Sanders > > wrote: > > I'd be most pleased to get near 9k. > > I'm running FreeBSD 8.0 amd64 on both of the the test hosts. I've > reset > the configurations to system default as I was getting no where with > sysctl and loader.conf settings. > > The motherboards have been configured to do MSI interrupts. The > S5000PAL has a MSI to old style interrupt BIOS setting that > confuses the > driver interrupt setup. > > The 10Gbps cards should be plugged into the 8x PCI-E slots on both > hosts. I'm double checking that claim right now and will get back > later. 
> > Thanks > > > On 4/21/2010 2:13 PM, Jack Vogel wrote: > > When you get into the 10G world your performance will only be > as good > > as your weakest link, what I mean is if you connect to something > that has > > less than stellar bus and/or memory performance it is going to > throttle > > everything. > > > > Running back to back with two good systems you should be able to get > > near line rate (9K range). Things that can affect that: 64 bit > kernel, > > TSO, LRO, how many queues come to mind. The default driver config > > should get you there, so tell me more about your hardware/os > config?? > > > > Jack > > > > > > > > On Wed, Apr 21, 2010 at 8:04 AM, Brandon Gooch > > >wrote: > > > > > >> On Wed, Apr 21, 2010 at 9:32 AM, Stephen Sanders > >> > wrote: > >> > >>> I am running speed tests on a pair of systems equipped with > Intel 10Gbps > >>> cards and am getting poor performance. > >>> > >>> iperf and tcpdump testing indicates that the card is running > at roughly > >>> 2.5Gbps max transmit/receive. > >>> > >>> My attempts at fiddling with netisr, polling, and > varying the > >>> buffer sizes have been fruitless. I'm sure there is something > that I'm > >>> missing so I'm hoping for suggestions. > >>> > >>> There are two systems that are connected head to head via > cross over > >>> cable. The two systems have the same hardware configuration. The > >>> hardware is as follows: > >>> > >>> 2 Intel E5430 (Quad core) @ 2.66 GHz > >>> Intel S5000PAL Motherboard > >>> 16GB Memory > >>> > >>> My iperf command line for the client is: > >>> > >>> iperf -t 10 -c 169.0.0.1 -w 2.5M -l 2.5M > >>> > >>> My TCP dump test command lines are: > >>> > >>> tcpdump -i ix0 -w /dev/null > >>> tcpreplay -i ix0 -t -l 0 -K ./test.pcap > >>> > >> If you're running 8.0-RELEASE, you might try updating to 8-STABLE. 
> >> Jack Vogel recently committed updated Intel NIC driver code: > >> > >> http://svn.freebsd.org/viewvc/base/stable/8/sys/dev/ixgbe/ > >> > >> -Brandon > >> _______________________________________________ > >> freebsd-performance@freebsd.org > mailing list > >> http://lists.freebsd.org/mailman/listinfo/freebsd-performance > >> To unsubscribe, send any mail to " > >> freebsd-performance-unsubscribe@freebsd.org > " > >> > >> > > _______________________________________________ > > freebsd-performance@freebsd.org > mailing list > > http://lists.freebsd.org/mailman/listinfo/freebsd-performance > > To unsubscribe, send any mail to > "freebsd-performance-unsubscribe@freebsd.org > " > > > > > > From owner-freebsd-performance@FreeBSD.ORG Thu Apr 22 16:39:27 2010 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C353B106566B for ; Thu, 22 Apr 2010 16:39:27 +0000 (UTC) (envelope-from jfvogel@gmail.com) Received: from mail-ww0-f54.google.com (mail-ww0-f54.google.com [74.125.82.54]) by mx1.freebsd.org (Postfix) with ESMTP id 4E3428FC0A for ; Thu, 22 Apr 2010 16:39:26 +0000 (UTC) Received: by wwa36 with SMTP id 36so5599736wwa.13 for ; Thu, 22 Apr 2010 09:39:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:received:message-id:subject:from:to:cc:content-type; bh=MlTnhghRfMIJP2lc7mpqOmE6TncqKk5TNDIrGv9vI+A=; b=bIAu+U2rFDzfEohJ/j04K2gME8DWEwJe3qALrez3AKUpDhgDhXjvcLO63cZm7pg2ZC 9zNVe3z+7J3mbcawYaExcGAW/ew5rY93FP6Fb9A/9kESFJxn0y+55aZGc/6xUGtQEme2 jmIGqDpV5obNautXejakPZMGhYb4+TUL8QtB8= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=RrN+N/UaWRwmnlF+QL3BlspCSg3c2jn+vi62hsp5pfdBrBXVBBMmn5fhgM1Rmt6Ud8 
t8R5krhB8XiVyQ4YppqIjl47nthk+HX6qjXMTXGT4qkQ6b9/im7Z8InM9t6aiBRbHCvC gd9II6x/Bp+BnrJ2eiQsobM/56RBOujH7zQL0= MIME-Version: 1.0 Received: by 10.216.11.8 with HTTP; Thu, 22 Apr 2010 09:39:25 -0700 (PDT) In-Reply-To: <4BD06E28.3060609@softhammer.net> References: <4BCF0C9A.10005@softhammer.net> <4BCF5783.9050007@softhammer.net> <4BD06E28.3060609@softhammer.net> Date: Thu, 22 Apr 2010 09:39:25 -0700 Received: by 10.216.85.198 with SMTP id u48mr142987wee.225.1271954366053; Thu, 22 Apr 2010 09:39:26 -0700 (PDT) Message-ID: From: Jack Vogel To: Stephen Sanders Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: Brandon Gooch , freebsd-performance@freebsd.org Subject: Re: FreeBSD 8.0 ixgbe Poor Performance X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Apr 2010 16:39:27 -0000

A couple more things that come to mind: make sure you increase the mbuf pool; nmbclusters up to at least 262144, and the driver uses 4K clusters if you go to jumbo frames (nmbjumbop). Some workloads will benefit from increasing the various sendspace and recvspace parameters; maxsockets and maxfiles are other candidates. Another item: look in /var/log/messages to see if you are getting any interrupt storm messages. If you are, that can throttle the irq and reduce performance; there is an intr_storm_threshold that you can increase to keep that from happening. Finally, it is sometimes not possible to fully utilize the hardware from a single process; you can get limited by the socket layer, stack, scheduler, whatever, so you might want to use multiple test processes. I believe iperf has a built-in way to do this. Run more threads and look at your cumulative. 
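[Editor's note: the tunables Jack lists above map onto loader.conf/sysctl.conf entries roughly as sketched below. Only nmbclusters=262144 and maxsockbuf=16777216 come from this thread; the other values are illustrative placeholders, not recommendations from the posters.]

```shell
# /boot/loader.conf -- boot-time tunables
kern.ipc.nmbclusters="262144"     # mbuf cluster pool, per Jack's suggestion
kern.ipc.nmbjumbop="131072"       # page-size (4K) clusters used with jumbo frames; value illustrative

# /etc/sysctl.conf -- runtime tunables
kern.ipc.maxsockbuf=16777216      # the setting Stephen reported raising TCP rates to ~5.5Gbps
net.inet.tcp.sendspace=1048576    # illustrative; "various sendspace and recvspace parameters"
net.inet.tcp.recvspace=1048576    # illustrative
hw.intr_storm_threshold=9000      # raise if /var/log/messages shows interrupt storm warnings; value illustrative
```

Current values can be inspected with `sysctl kern.ipc.nmbclusters`, and `netstat -m` will show whether the mbuf pool is being exhausted during a test run.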
Good luck, Jack On Thu, Apr 22, 2010 at 8:41 AM, Stephen Sanders wrote: > [quoted message trimmed; it appears in full earlier in this digest]

From owner-freebsd-performance@FreeBSD.ORG Thu Apr 22 18:06:11 2010 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7B9A9106564A for ; Thu, 22 Apr 2010 18:06:11 +0000 (UTC) (envelope-from ssanders@softhammer.net) Received: from smtp-hq2.opnet.com (smtp-hq2.opnet.com [192.104.65.247]) by mx1.freebsd.org (Postfix) with ESMTP id F388A8FC08 for ; Thu, 22 Apr 2010 18:06:10 +0000 (UTC) Received: from [172.16.12.251] (wtn12251.opnet.com [172.16.12.251]) by smtp.opnet.com (Postfix) with ESMTPSA id E813221100A3; Thu, 22 Apr 2010 14:06:09 -0400 (EDT) Message-ID: <4BD09011.6000104@softhammer.net> Date: Thu, 22 Apr 2010 14:06:09 -0400 From: Stephen Sanders User-Agent: Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.9.1.9) Gecko/20100317 Lightning/1.0b1 Thunderbird/3.0.4 MIME-Version: 1.0 To: Jack Vogel References: <4BCF0C9A.10005@softhammer.net> <4BCF5783.9050007@softhammer.net> <4BD06E28.3060609@softhammer.net> In-Reply-To: Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: Brandon Gooch , freebsd-performance@freebsd.org Subject: Re: FreeBSD 8.0 ixgbe Poor Performance X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Apr 2010 18:06:11 -0000

Adding "-P 2" to the iperf client got the rate up to what it should be. Also, running multiple tcpreplay instances pushed the rate up as well. Thanks again for the pointers. On 4/22/2010 12:39 PM, Jack Vogel wrote: > [quoted thread trimmed; the full messages appear earlier in this digest]

From owner-freebsd-performance@FreeBSD.ORG Thu Apr 22 18:20:54 2010 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4682A1065670 for ; Thu, 22 Apr 2010 18:20:54 +0000 (UTC) (envelope-from jfvogel@gmail.com) Received: from mail-wy0-f182.google.com (mail-wy0-f182.google.com [74.125.82.182]) by mx1.freebsd.org (Postfix) with ESMTP id A568D8FC28 for ; Thu, 22 Apr 2010 18:20:53 +0000 (UTC) Received: by wye20 with SMTP id 20so2040228wye.13 for ; Thu, 22 Apr 2010 11:20:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references 
:date:received:message-id:subject:from:to:cc:content-type; bh=gSIcUiO+Zy2C/Osxdkvh+sxfbGvstmfrow4xuBdhSPM=; b=WiX1fDMham087lsX0H27hYKFWElB/402sj2hP8x4BWql2cC3gFb2mMGwsm6Cw8bvMb rXfdTPNs4sE/io4XQVAeDKOFlIAD33s1ajFyky0GyDaE4hJ9NHb1C5alMC+W+IyP3p6/ evvKKwSAmeAQZQ3y9uLgJ5atfUrky/IzA2ftY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=XiErfe1UZHepQuvGiPPtfKaopk1US5OVA6bgRR/hiEoEYgr9IHK1pArO2EV0ly7XYo 0qS0AkEybYWZMn2Qyqf1nrY8sdWBrVl4attumq+lIjD/Yu3mbkTlGevF6k3KZd1tJNyQ DrT2FPsrSZMVLqBC9SCVaWohDjeFmHeI2+hzA= MIME-Version: 1.0 Received: by 10.216.11.8 with HTTP; Thu, 22 Apr 2010 11:20:51 -0700 (PDT) In-Reply-To: <4BD09011.6000104@softhammer.net> References: <4BCF0C9A.10005@softhammer.net> <4BCF5783.9050007@softhammer.net> <4BD06E28.3060609@softhammer.net> <4BD09011.6000104@softhammer.net> Date: Thu, 22 Apr 2010 11:20:51 -0700 Received: by 10.216.177.82 with SMTP id c60mr1122768wem.25.1271960452394; Thu, 22 Apr 2010 11:20:52 -0700 (PDT) Message-ID: From: Jack Vogel To: Stephen Sanders Content-Type: text/plain; charset=ISO-8859-1 X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: Brandon Gooch , freebsd-performance@freebsd.org Subject: Re: FreeBSD 8.0 ixgbe Poor Performance X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 22 Apr 2010 18:20:54 -0000 Welcome, glad to have helped. Jack On Thu, Apr 22, 2010 at 11:06 AM, Stephen Sanders wrote: > Adding "-P 2 " to the iperf client got the rate up to what it should be. > Also, running multiple tcpreplay's pushed the rate up as well. > > Thanks again for the pointers. 
> > [remainder of quoted thread trimmed; the full messages appear earlier in this digest]
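[Editor's note: the change that resolved the thread was running parallel streams. A minimal reproduction of that test, using the addresses and window sizes from the thread (the stream count is whatever saturates your hardware; the thread found -P 2 sufficient):]

```shell
# server side: bind to the 10G interface address used in the thread
iperf -s -w 2.5m -B 169.1.0.2

# client side: -P runs parallel streams; with more than one stream,
# iperf reports a [SUM] line giving the cumulative rate
iperf -t 10 -w 2.5m -c 169.1.0.2 -P 2
```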