From: Jack Vogel <jfvogel@gmail.com>
To: Stephen Sanders
Cc: Brandon Gooch, freebsd-performance@freebsd.org
Date: Thu, 22 Apr 2010 11:20:51 -0700 (PDT)
Subject: Re: FreeBSD 8.0 ixgbe Poor Performance

Welcome, glad to have helped.

Jack

On Thu, Apr 22, 2010 at 11:06 AM, Stephen Sanders wrote:
> Adding "-P 2" to the iperf client got the rate up to what it should be.
> Also, running multiple tcpreplays pushed the rate up as well.
>
> Thanks again for the pointers.
>
> On 4/22/2010 12:39 PM, Jack Vogel wrote:
>> A couple more things that come to mind:
>>
>> Make sure you increase the mbuf pool: nmbclusters up to at least
>> 262144, and note that the driver uses 4K clusters (nmbjumbop) if you
>> go to jumbo frames. Some workloads will benefit from increasing the
>> various sendspace and recvspace parameters; maxsockets and maxfiles
>> are other candidates.
>>
>> Another item: look in /var/log/messages to see whether you are
>> getting any interrupt storm messages. If you are, that can throttle
>> the IRQ and reduce performance; there is an intr_storm_threshold that
>> you can increase to keep that from happening.
>>
>> Finally, it is sometimes not possible to fully utilize the hardware
>> from a single process; you can get limited by the socket layer,
>> stack, scheduler, whatever, so you might want to use multiple test
>> processes. I believe iperf has a built-in way to do this. Run more
>> threads and look at your cumulative total.
>>
>> Good luck,
>>
>> Jack
>>
>> On Thu, Apr 22, 2010 at 8:41 AM, Stephen Sanders wrote:
>>> I believe that "pciconf -lvc" showed that the cards were in the
>>> correct slot. I'm not sure what all of the output means, but I'm
>>> guessing that "cap 10[a0] = PCI-Express 2 endpoint max data 128(256)
>>> link x8(x8)" means the card is an 8-lane card and is using all 8
>>> lanes.
>>>
>>> Setting kern.ipc.maxsockbuf to 16777216 got a better result with
>>> iperf TCP testing.
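[The tunables Jack lists would typically be set in /boot/loader.conf
(boot-time) and /etc/sysctl.conf (runtime). A sketch for reference; the
262144 and 16777216 figures come from the thread itself, while the other
values are illustrative examples, not recommendations for this hardware:]

```
# /boot/loader.conf -- boot-time tunables
kern.ipc.nmbclusters="262144"   # mbuf cluster pool, per the advice above
kern.ipc.nmbjumbop="262144"     # page-size (4K) clusters used with jumbo frames
kern.ipc.maxsockets="65536"     # example value
kern.maxfiles="65536"           # example value

# /etc/sysctl.conf -- runtime settings
kern.ipc.maxsockbuf=16777216     # the value that improved the TCP numbers
net.inet.tcp.sendspace=262144    # example value
net.inet.tcp.recvspace=262144    # example value
hw.intr_storm_threshold=9000     # raise if interrupt storm messages appear
```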
>>> The rate went from ~2.5Gbps to ~5.5Gbps.
>>>
>>> Running iperf in UDP test mode is still yielding ~2.5Gbps. Running
>>> tcpreplay tests is yielding ~2.5Gbps as well.
>>>
>>> Command lines for iperf testing are:
>>>
>>> iperf -t 10 -w 2.5m -l 2.5m -c 169.1.0.2
>>> iperf -s -w 2.5m -B 169.1.0.2
>>>
>>> iperf -t 10 -w 2.5m -c 169.1.0.2 -u
>>> iperf -s -w 2.5m -B 169.1.0.2 -u
>>>
>>> For the tcpdump test, I'm sending output to /dev/null and using the
>>> cache flag on tcpreplay in order to avoid limiting my network
>>> interface throughput to the disk speed. Command lines for this test
>>> are:
>>>
>>> tcpdump -i ix1 -w /dev/null
>>> tcpreplay -i ix1 -t -l 0 -K ./rate.pcap
>>>
>>> Please forgive my lack of kernel-building prowess, but I'm guessing
>>> that the latest driver needs to be built in a FreeBSD STABLE tree. I
>>> ran into an undefined symbol "drbr_needs_enqueue" in the ixgbe code
>>> I downloaded.
>>>
>>> Thanks for all the help.
>>>
>>> On 4/21/2010 4:53 PM, Jack Vogel wrote:
>>>> Use my new driver and it will tell you, when it comes up, what the
>>>> slot speed is, and if it's substandard it will SQUAWK loudly at
>>>> you :)
>>>>
>>>> I think the S5000PAL only has Gen1 PCIe slots, which is going to
>>>> limit you somewhat. I would recommend a current-generation (X58 or
>>>> 5520 chipset) system if you want the full benefit of 10G.
>>>>
>>>> BTW, you don't say which adapter, 82598 or 82599, you are using?
>>>>
>>>> Jack
>>>>
>>>> On Wed, Apr 21, 2010 at 12:52 PM, Stephen Sanders
>>>> <ssanders@softhammer.net> wrote:
>>>>> I'd be most pleased to get near 9k.
>>>>>
>>>>> I'm running FreeBSD 8.0 amd64 on both of the test hosts. I've
>>>>> reset the configurations to system defaults as I was getting
>>>>> nowhere with sysctl and loader.conf settings.
>>>>>
>>>>> The motherboards have been configured to do MSI interrupts. The
>>>>> S5000PAL has a MSI-to-legacy-interrupt BIOS setting that confuses
>>>>> the driver interrupt setup.
>>>>>
>>>>> The 10Gbps cards should be plugged into the x8 PCI-E slots on both
>>>>> hosts.
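[Jack's point about Gen1 slots can be sanity-checked with arithmetic:
PCIe Gen1 signals at 2.5 GT/s per lane with 8b/10b encoding, so an x8
slot carries at most about 16 Gbit/s of data before protocol overhead;
enough for one 10GbE port, but with limited headroom. The numbers below
are standard PCIe figures, not measurements from this thread:]

```shell
# PCIe Gen1: 2.5 GT/s per lane, 8b/10b encoding (80% efficient), 8 lanes.
awk 'BEGIN { printf "%.0f Gbit/s raw for Gen1 x8\n", 2.5 * 8 * 0.8 }'
# The "link x8(x8)" in Stephen's pciconf output indicates the card
# negotiated all 8 of its 8 supported lanes.
```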
>>>>> I'm double-checking that claim right now and will get back later.
>>>>>
>>>>> Thanks
>>>>>
>>>>> On 4/21/2010 2:13 PM, Jack Vogel wrote:
>>>>>> When you get into the 10G world your performance will only be as
>>>>>> good as your weakest link. What I mean is, if you connect to
>>>>>> something that has less-than-stellar bus and/or memory
>>>>>> performance, it is going to throttle everything.
>>>>>>
>>>>>> Running back to back with two good systems you should be able to
>>>>>> get near line rate (the 9K range). Things that can affect that: a
>>>>>> 64-bit kernel, TSO, LRO, and how many queues come to mind. The
>>>>>> default driver config should get you there, so tell me more about
>>>>>> your hardware/OS config??
>>>>>>
>>>>>> Jack
>>>>>>
>>>>>> On Wed, Apr 21, 2010 at 8:04 AM, Brandon Gooch wrote:
>>>>>>> On Wed, Apr 21, 2010 at 9:32 AM, Stephen Sanders wrote:
>>>>>>>> I am running speed tests on a pair of systems equipped with
>>>>>>>> Intel 10Gbps cards and am getting poor performance.
>>>>>>>>
>>>>>>>> iperf and tcpdump testing indicates that the card is running at
>>>>>>>> roughly 2.5Gbps max transmit/receive.
>>>>>>>>
>>>>>>>> My attempts at fiddling with netisr, polling, and varying the
>>>>>>>> buffer sizes have been fruitless. I'm sure there is something
>>>>>>>> I'm missing, so I'm hoping for suggestions.
>>>>>>>>
>>>>>>>> There are two systems connected head to head via a crossover
>>>>>>>> cable. The two systems have the same hardware configuration.
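[The TSO and LRO items on Jack's checklist show up in the "options"
line of ifconfig output, and can be pinned per interface in
/etc/rc.conf. A hypothetical fragment; the ix0 name and 169.1.0.1
address are assumptions taken from the thread, not verified settings:]

```
# /etc/rc.conf fragment (hypothetical): keep TSO and LRO enabled on the
# 10G interface; "ifconfig ix0" should then list TSO4 and LRO under options.
ifconfig_ix0="inet 169.1.0.1 netmask 255.255.255.0 tso lro"
```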
>>>>>>>> The hardware is as follows:
>>>>>>>>
>>>>>>>> 2 Intel E5430 (quad core) @ 2.66 GHz
>>>>>>>> Intel S5000PAL motherboard
>>>>>>>> 16GB memory
>>>>>>>>
>>>>>>>> My iperf command line for the client is:
>>>>>>>>
>>>>>>>> iperf -t 10 -c 169.0.0.1 -w 2.5M -l 2.5M
>>>>>>>>
>>>>>>>> My tcpdump test command lines are:
>>>>>>>>
>>>>>>>> tcpdump -i ix0 -w /dev/null
>>>>>>>> tcpreplay -i ix0 -t -l 0 -K ./test.pcap
>>>>>>>
>>>>>>> If you're running 8.0-RELEASE, you might try updating to
>>>>>>> 8-STABLE. Jack Vogel recently committed updated Intel NIC driver
>>>>>>> code:
>>>>>>>
>>>>>>> http://svn.freebsd.org/viewvc/base/stable/8/sys/dev/ixgbe/
>>>>>>>
>>>>>>> -Brandon

_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
To unsubscribe, send any mail to "freebsd-performance-unsubscribe@freebsd.org"
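[Jack's advice about multiple test processes is what Stephen's "-P 2"
confirmed at the top of the thread: with parallel streams, the figure
that matters is the cumulative total, not any single stream. A sketch;
the per-stream rates below are made-up illustrative numbers, and iperf
prints a [SUM] line itself when run with -P:]

```shell
# A single stream can be limited by the socket layer or scheduler, so
# run several in parallel:
#   iperf -c 169.1.0.2 -t 10 -P 4     # needs a live server at 169.1.0.2
# Summing hypothetical per-stream rates the way iperf's [SUM] line does:
printf '2.41\n2.47\n2.39\n2.45\n' |
  awk '{ sum += $1 } END { printf "%.2f Gbit/s cumulative\n", sum }'
```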