From: Mike Tancsa <mike@sentex.net>
To: "Jack Vogel"
Cc: freebsd-stable@freebsd.org
Date: Tue, 30 Jan 2007 13:58:17 -0500
Subject: Re: Intel EM tuning (PT1000 adaptors)

At 12:30 PM 1/30/2007, Jack Vogel wrote:

>Performance tuning is not something that I have yet had time to focus
>on, our Linux team is able to do a lot more of that. Just at a glance,
>try increasing your mbuf pool size and the number of receive descriptors
>for a start. Oh, and try increasing your processing limit to 200 and see
>what effect that has.

Hi, thanks for the info. What is the limit on the processing limit, and
apart from crashing the box, how do I know if I have set it too high? ;-)

I am not sure which mbuf setting you mean. From netstat -m, I don't seem
to be hitting any of the max values:

# netstat -m
838/2237/3075 mbufs in use (current/cache/total)
836/578/1414/25600 mbuf clusters in use (current/cache/total/max)
836/572 mbuf+clusters out of packet secondary zone in use (current/cache)
0/0/0/0 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/0 9k jumbo clusters in use (current/cache/total/max)
0/0/0/0 16k jumbo clusters in use (current/cache/total/max)
1881K/1715K/3596K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/5/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

As for hw.em.rxd, how do I know what this chip can handle? The
description says the current default is 256, but I don't know what I can
set it to for this adaptor.
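In case it helps show what I mean, this is roughly what I have been
poking at so far. The numbers below are only placeholders for
illustration, not values I know to be right for this adaptor:

# sysctl kern.ipc.nmbclusters
kern.ipc.nmbclusters: 25600

(so the 25600 "max" column above is kern.ipc.nmbclusters), and then in
/boot/loader.conf something like:

kern.ipc.nmbclusters="65536"
hw.em.rxd="1024"
hw.em.txd="1024"
hw.em.rx_process_limit="200"

That last line assumes hw.em.rx_process_limit is the processing limit you
meant, and I am also assuming the driver will reject or clamp descriptor
counts the chip cannot handle rather than doing something bad.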
WRT hw.em.rx_int_delay:

     This value delays the generation of receive interrupts in units of
     1.024 microseconds.  The default value is 0, since adapters may hang
     with this feature being enabled.

Do you know which adaptors have this issue?

Also, for hw.em.rx_abs_int_delay:

     If hw.em.rx_int_delay is non-zero, this tunable limits the maximum
     delay in which a receive interrupt is generated.

I take it this is for interrupt moderation? Am I right in thinking that
if my rx buffers are filling, the box is not processing interrupts fast
enough, so I should move this value closer to zero? How do I find what
the current value is?

Thanks for any pointers you can provide.

        ---Mike
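P.S. In case it is useful, this is how I have been trying to see the
current values. I am guessing at the names here, so please correct me if
these are not the right knobs:

# sysctl dev.em.0 | grep -i delay
# kenv | grep hw.em

The first is on the theory that the driver exports the per-interface
delay settings under dev.em.N, and the second just shows any hw.em
loader tunables that were set at boot.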