From owner-freebsd-performance@FreeBSD.ORG Thu Apr 22 18:06:11 2010
Message-ID: <4BD09011.6000104@softhammer.net>
Date: Thu, 22 Apr 2010 14:06:09 -0400
From: Stephen Sanders <ssanders@softhammer.net>
To: Jack Vogel
Cc: Brandon Gooch, freebsd-performance@freebsd.org
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

Adding "-P 2" to the iperf client got the rate up to what it should be.
Also, running multiple tcpreplay instances pushed the rate up as well.

Thanks again for the pointers.

On 4/22/2010 12:39 PM, Jack Vogel wrote:
> A couple more things that come to mind:
>
> Make sure you increase the mbuf pool: nmbclusters up to at least
> 262144, and note that the driver uses 4K clusters (nmbjumbop) if you
> go to jumbo frames. Some workloads will benefit from increasing the
> various sendspace and recvspace parameters; maxsockets and maxfiles
> are other candidates.
>
> Another item: look in /var/log/messages to see if you are getting any
> interrupt storm messages. If you are, that can throttle the irq and
> reduce performance; there is an intr_storm_threshold that you can
> increase to keep that from happening.
>
> Finally, it is sometimes not possible to fully utilize the hardware
> from a single process; you can get limited by the socket layer, stack,
> scheduler, whatever, so you might want to use multiple test processes.
> I believe iperf has a built-in way to do this. Run more threads and
> look at your cumulative throughput.
>
> Good luck,
>
> Jack
>
>
> On Thu, Apr 22, 2010 at 8:41 AM, Stephen Sanders wrote:
>
>> I believe that "pciconf -lvc" showed that the cards were in the
>> correct slot. I'm not sure what all of the output means, but I'm
>> guessing that "cap 10[a0] = PCI-Express 2 endpoint max data 128(256)
>> link x8(x8)" means that the card is an 8-lane card and is using all
>> 8 lanes.
>>
>> Setting kern.ipc.maxsockbuf to 16777216 got a better result with
>> iperf TCP testing. The rate went from ~2.5Gbps to ~5.5Gbps.
>>
>> Running iperf in UDP test mode is still yielding ~2.5Gbps. Running
>> tcpreplay tests is yielding ~2.5Gbps as well.
>> Command lines for iperf testing are:
>>
>> iperf -t 10 -w 2.5m -l 2.5m -c 169.1.0.2
>> iperf -s -w 2.5m -B 169.1.0.2
>>
>> iperf -t 10 -w 2.5m -c 169.1.0.2 -u
>> iperf -s -w 2.5m -B 169.1.0.2 -u
>>
>> For the tcpdump test, I'm sending output to /dev/null and using the
>> cache flag on tcpreplay in order to avoid limiting my network
>> interface throughput to the disk speed. Command lines for this test
>> are:
>>
>> tcpdump -i ix1 -w /dev/null
>> tcpreplay -i ix1 -t -l 0 -K ./rate.pcap
>>
>> Please forgive my lack of kernel-building prowess, but I'm guessing
>> that the latest driver needs to be built in a FreeBSD STABLE tree.
>> I ran into an undefined symbol, "drbr_needs_enqueue", in the ixgbe
>> code I downloaded.
>>
>> Thanks for all the help.
>>
>> On 4/21/2010 4:53 PM, Jack Vogel wrote:
>>> Use my new driver and it will tell you, when it comes up, what the
>>> slot speed is, and if it's substandard it will SQUAWK loudly at
>>> you :)
>>>
>>> I think the S5000PAL only has Gen1 PCIe slots, which is going to
>>> limit you somewhat. I would recommend a current-generation (X58 or
>>> 5520 chipset) system if you want the full benefit of 10G.
>>>
>>> BTW, you don't say which adapter, 82598 or 82599, you are using?
>>>
>>> Jack
>>>
>>>
>>> On Wed, Apr 21, 2010 at 12:52 PM, Stephen Sanders wrote:
>>>
>>>> I'd be most pleased to get near 9k.
>>>>
>>>> I'm running FreeBSD 8.0 amd64 on both of the test hosts. I've
>>>> reset the configurations to system defaults, as I was getting
>>>> nowhere with sysctl and loader.conf settings.
>>>>
>>>> The motherboards have been configured to do MSI interrupts. The
>>>> S5000PAL has an MSI-to-old-style-interrupt BIOS setting that
>>>> confuses the driver's interrupt setup.
>>>>
>>>> The 10Gbps cards should be plugged into the x8 PCI-E slots on
>>>> both hosts. I'm double-checking that claim right now and will get
>>>> back later.
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On 4/21/2010 2:13 PM, Jack Vogel wrote:
>>>>> When you get into the 10G world your performance will only be as
>>>>> good as your weakest link; what I mean is, if you connect to
>>>>> something that has less than stellar bus and/or memory
>>>>> performance, it is going to throttle everything.
>>>>>
>>>>> Running back to back with two good systems you should be able to
>>>>> get near line rate (the 9K range). Things that can affect that:
>>>>> 64-bit kernel, TSO, LRO, and how many queues, come to mind. The
>>>>> default driver config should get you there, so tell me more
>>>>> about your hardware/OS config?
>>>>>
>>>>> Jack
>>>>>
>>>>>
>>>>> On Wed, Apr 21, 2010 at 8:04 AM, Brandon Gooch wrote:
>>>>>
>>>>>> On Wed, Apr 21, 2010 at 9:32 AM, Stephen Sanders wrote:
>>>>>>
>>>>>>> I am running speed tests on a pair of systems equipped with
>>>>>>> Intel 10Gbps cards and am getting poor performance.
>>>>>>>
>>>>>>> iperf and tcpdump testing indicates that the card is running
>>>>>>> at roughly 2.5Gbps max transmit/receive.
>>>>>>>
>>>>>>> My attempts at fiddling with netisr, polling, and varying the
>>>>>>> buffer sizes have been fruitless. I'm sure there is something
>>>>>>> I'm missing, so I'm hoping for suggestions.
>>>>>>>
>>>>>>> There are two systems connected head to head via a crossover
>>>>>>> cable. The two systems have the same hardware configuration.
>>>>>>> The hardware is as follows:
>>>>>>>
>>>>>>> 2 Intel E5430 (quad core) @ 2.66 GHz
>>>>>>> Intel S5000PAL motherboard
>>>>>>> 16GB memory
>>>>>>>
>>>>>>> My iperf command line for the client is:
>>>>>>>
>>>>>>> iperf -t 10 -c 169.0.0.1 -w 2.5M -l 2.5M
>>>>>>>
>>>>>>> My tcpdump test command lines are:
>>>>>>>
>>>>>>> tcpdump -i ix0 -w /dev/null
>>>>>>> tcpreplay -i ix0 -t -l 0 -K ./test.pcap
>>>>>>
>>>>>> If you're running 8.0-RELEASE, you might try updating to
>>>>>> 8-STABLE. Jack Vogel recently committed updated Intel NIC
>>>>>> driver code:
>>>>>>
>>>>>> http://svn.freebsd.org/viewvc/base/stable/8/sys/dev/ixgbe/
>>>>>>
>>>>>> -Brandon
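
For readers following this thread, here is a minimal sketch of the tuning
knobs Jack mentions (mbuf pool, socket buffers, interrupt storm threshold).
The values are illustrative only, not the exact settings the posters used,
and should be sized to your own hardware and FreeBSD version:

    # /boot/loader.conf -- boot-time tunables (illustrative values)
    kern.ipc.nmbclusters="262144"    # mbuf cluster pool
    kern.ipc.nmbjumbop="262144"      # 4K clusters used with jumbo frames
    kern.ipc.maxsockets="262144"     # more sockets for parallel streams
    kern.maxfiles="262144"           # raise the file descriptor ceiling

    # /etc/sysctl.conf -- runtime sysctls (illustrative values)
    kern.ipc.maxsockbuf=16777216     # permits large iperf -w windows
    net.inet.tcp.sendspace=262144    # default per-socket send buffer
    net.inet.tcp.recvspace=262144    # default per-socket receive buffer
    hw.intr_storm_threshold=9000     # raise if /var/log/messages shows
                                     # "interrupt storm" warnings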
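
And a sketch of the multi-stream iperf run that resolved the issue for the
original poster: -P is iperf's built-in way to run parallel client streams,
and the cumulative rate is reported on the [SUM] line. The addresses and
window sizes simply mirror the ones quoted above:

    # receiver
    iperf -s -w 2.5m -B 169.1.0.2

    # sender: two parallel TCP streams for 10 seconds
    iperf -c 169.1.0.2 -t 10 -w 2.5m -P 2

If the [SUM] figure keeps scaling as -P is increased while each individual
stream stays flat, the limit is per-connection (socket buffers, stack,
scheduler) rather than the NIC itself, which matches Jack's point about
using multiple test processes.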