From owner-freebsd-performance@FreeBSD.ORG Mon Apr 27 00:02:49 2009 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6BD6D106564A; Mon, 27 Apr 2009 00:02:49 +0000 (UTC) (envelope-from pieter@degoeje.nl) Received: from smtp.utwente.nl (smtp2.utsp.utwente.nl [130.89.2.9]) by mx1.freebsd.org (Postfix) with ESMTP id CE43D8FC15; Mon, 27 Apr 2009 00:02:48 +0000 (UTC) (envelope-from pieter@degoeje.nl) Received: from nox.student.utwente.nl (nox.student.utwente.nl [130.89.165.91]) by smtp.utwente.nl (8.12.10/SuSE Linux 0.7) with ESMTP id n3QNoWMA027686; Mon, 27 Apr 2009 01:50:32 +0200 From: Pieter de Goeje To: freebsd-hackers@freebsd.org Date: Mon, 27 Apr 2009 01:50:31 +0200 User-Agent: KMail/1.9.10 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200904270150.31912.pieter@degoeje.nl> X-UTwente-MailScanner-Information: Scanned by MailScanner. Contact servicedesk@icts.utwente.nl for more information. X-UTwente-MailScanner: Found to be clean X-UTwente-MailScanner-From: pieter@degoeje.nl X-Spam-Status: No Cc: freebsd-performance@freebsd.org Subject: ACPI-fast default timecounter, but HPET 83% faster X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 27 Apr 2009 00:02:49 -0000 Dear hackers, While fiddling with the sysctl kern.timecounter.hardware, I found out that on my system HPET is significantly faster than ACPI-fast. Using the program below I measured the number of clock_gettime() calls the system can execute per second. 
I ran the program 10 times for each configuration and here are the results:

x ACPI-fast
+ HPET
+-------------------------------------------------------------------------+
|x                                                                       +|
|x                                                                       +|
|x                                                                      ++|
|x                                                                      ++|
|x                                                                      ++|
|x                                                                      ++|
|A                                                                      |A|
+-------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x  10        822032        823752        823551      823397.8     509.43254
+  10       1498348       1506862       1502830     1503267.4     2842.9779
Difference at 95.0% confidence
        679870 +/- 1918.94
        82.5688% +/- 0.233052%
        (Student's t, pooled s = 2042.31)

System details: Intel(R) Core(TM)2 Duo CPU E6750 @ 2.66GHz (3200.02-MHz 686-class CPU), Gigabyte P35-DS3R motherboard running i386 -CURRENT updated today.

Unfortunately I only have one system with a HPET timecounter, so I cannot verify these results on another system. If similar results are obtained on other machines, I think the HPET timecounter quality needs to be increased beyond that of ACPI-fast.

Regards,

Pieter de Goeje

----- 8< ----- clock_gettime.c ----- 8< ------
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define COUNT 1000000

int main() {
        struct timespec ts_start, ts_stop, ts_read;
        double time;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &ts_start);
        for(i = 0; i < COUNT; i++) {
                clock_gettime(CLOCK_MONOTONIC, &ts_read);
        }
        clock_gettime(CLOCK_MONOTONIC, &ts_stop);

        time = (ts_stop.tv_sec - ts_start.tv_sec) + (ts_stop.tv_nsec -
ts_start.tv_nsec) * 1E-9;
        printf("%.0f\n", COUNT / time);
}

From owner-freebsd-performance@FreeBSD.ORG Mon Apr 27 03:00:31 2009 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E2DEB106566C; Mon, 27 Apr 2009 03:00:31 +0000 (UTC) (envelope-from yanefbsd@gmail.com) Received: from mail-gx0-f218.google.com (mail-gx0-f218.google.com [209.85.217.218]) by mx1.freebsd.org (Postfix) with ESMTP id 73DE48FC1A; Mon, 27 Apr 2009 03:00:31 +0000 (UTC) (envelope-from yanefbsd@gmail.com) Received: by gxk18 with SMTP id 18so1848438gxk.19
for ; Sun, 26 Apr 2009 20:00:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type :content-transfer-encoding; bh=pUsvqDt7M19zmTn73Ua5JkFaCAcDPz5upV4j6KHS6gU=; b=EqBhSoTUo8WleHQEQGXUYWp492f6G4Sz1J1zyEIYTpKJ/YO2x3NScnkxXTs0dgYR+B wC1+KIctsOikschyc8WRFj4rbR4cwqso7EnhGTsYXuwFkCwenx2lQRe3Nb7oLLc1e6F3 yS88oojYEv0k4azpzErgpA9o/OB9se1MrCfFk= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type:content-transfer-encoding; b=rbtBMdtv6Z/FGX7UUBBjW1dyUnXlVLffL7WNEVR3qjvYOFnTkBtKEsKiXbIpZVJKyr n9MmiG22AoaUIfNJHbt7WQ3ZxG42hdBPaOufyBEMgvXjmqpXfG08VNDErnDA3NoTWGwQ iXPu/kxbCoKWBjayHhvlMKVMx0MM+UKsLuVi8= MIME-Version: 1.0 Received: by 10.151.137.5 with SMTP id p5mr7844582ybn.223.1240799262908; Sun, 26 Apr 2009 19:27:42 -0700 (PDT) In-Reply-To: <200904270150.31912.pieter@degoeje.nl> References: <200904270150.31912.pieter@degoeje.nl> Date: Sun, 26 Apr 2009 19:27:42 -0700 Message-ID: <7d6fde3d0904261927s1a67cf85jc982c1a68e30e081@mail.gmail.com> From: Garrett Cooper To: Pieter de Goeje Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: quoted-printable X-Mailman-Approved-At: Mon, 27 Apr 2009 03:14:33 +0000 Cc: acpi , freebsd-hackers@freebsd.org, freebsd-performance@freebsd.org Subject: Re: ACPI-fast default timecounter, but HPET 83% faster X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 27 Apr 2009 03:00:32 -0000

On Sun, Apr 26, 2009 at 4:50 PM, Pieter de Goeje wrote:
> Dear hackers,
>
> While fiddling with the sysctl kern.timecounter.hardware, I found out that on
> my system HPET is significantly faster than ACPI-fast. Using the program
> below I measured the number of clock_gettime() calls the system can execute
> per second.
> [ministat results, system details and clock_gettime.c quoted in full in the
> original message above; trimmed here]

I'm seeing similar results.

[root@orangebox /usr/home/gcooper]# dmesg | grep 'Timecounter "'
Timecounter "i8254" frequency 1193182 Hz quality 0
Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
Timecounter "HPET" frequency 14318180 Hz quality 900
[root@orangebox /usr/home/gcooper]# ./cgt
1369355
[root@orangebox /usr/home/gcooper]# sysctl kern.timecounter.hardware="ACPI-fast"
kern.timecounter.hardware: HPET -> ACPI-fast
[root@orangebox /usr/home/gcooper]# ./cgt
772289

Why's the default ACPI-fast? For power-saving functionality or because of the `quality' factor? What are the criteria that determine the `quality' of a clock as reported above (I know what determines the quality of a clock visually on an oscilloscope =])?
Thanks, -Garrett From owner-freebsd-performance@FreeBSD.ORG Mon Apr 27 08:43:44 2009 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 9A80B1065674; Mon, 27 Apr 2009 08:43:44 +0000 (UTC) (envelope-from raykinsella78@gmail.com) Received: from mail-bw0-f213.google.com (mail-bw0-f213.google.com [209.85.218.213]) by mx1.freebsd.org (Postfix) with ESMTP id B98468FC16; Mon, 27 Apr 2009 08:43:43 +0000 (UTC) (envelope-from raykinsella78@gmail.com) Received: by bwz9 with SMTP id 9so2064852bwz.43 for ; Mon, 27 Apr 2009 01:43:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:mime-version:received:in-reply-to:references :date:message-id:subject:from:to:cc:content-type; bh=EgL9OfuikxPUuaOQkhhv6hBEs0YxgajW0ZnJZENa4n0=; b=sxaFGPdkm8nw0CSPO4PIhXmMNaMuSrYigaP8I5J37ESCivA7rCooT4PRs5wJZ/6ZlO acbSdBSQSHhJuc5U5tVVO8I1JJN9HT1GFb90DVWtPQx6Z7l9sMZDO8fQxpg09tk7yauF MxR1DgC0VkageEToiqhi4w3JfxocQOhRMaSvU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=nHU478wC8tRjwFlDyJ3jSb3PRQ/fNhULLI4bcbOMBzLmGgDL2GzgzHPQzQU3Er4i+O XKmr7z/bQe3qr/IELHoBSGP/CLeEYdkwAGRfTkgSDLSUm0SQmFw2pmmoW4GdsR8sTkik RA1DNdacQ+iU0BA+j9w0Dodurg6d3tj7TxdpI= MIME-Version: 1.0 Received: by 10.239.172.18 with SMTP id y18mr253998hbe.72.1240820320304; Mon, 27 Apr 2009 01:18:40 -0700 (PDT) In-Reply-To: <40bb871a0904241542o3f4d6c6ap62ff71876074bbea@mail.gmail.com> References: <40bb871a0904241542o3f4d6c6ap62ff71876074bbea@mail.gmail.com> Date: Mon, 27 Apr 2009 09:18:40 +0100 Message-ID: <584ec6bb0904270118v37795ee2k24c9262d4c1abd80@mail.gmail.com> From: Ray Kinsella To: Joseph Kuan Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: freebsd-net@freebsd.org, 
freebsd-performance@freebsd.org, freebsd-threads@freebsd.org Subject: Re: FreeBSD 7.1 taskq em performance X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 27 Apr 2009 08:43:45 -0000

Joseph,

I would recommend that you start with pmcstat to figure out where the bottleneck is. Given that you have two threads and your CPU is at 100%, my a priori guess would be contention for a spinlock, so I might also try LOCK_PROFILING to get a handle on this.

Regards
Ray Kinsella

On Fri, Apr 24, 2009 at 11:42 PM, Joseph Kuan wrote:
> Hi all,
> I have been hitting some barrier with FreeBSD 7.1 network performance. I
> have written an application which contains two kernel threads that take
> mbufs directly from a network interface and forward them to another network
> interface. The idea is to simulate different network environments.
>
> I have been using FreeBSD 6.4 amd64 and tested with an Ixia box
> (specialised hardware firing very high packet rates). The PC was a Core2 2.6
> GHz with a dual-port Intel PCIE Gigabit network card. It can manage up to 1.2
> million pps.
>
> I have a higher spec PC with FreeBSD 7.1 amd64, a Quadcore 2.3 GHz and a
> PCIE Gigabit network card. The performance can only achieve up to 600k pps.
> I notice 'taskq em0' and 'taskq em1' are solid 100% CPU, but they are not in
> FreeBSD 6.4.
>
> Any advice?
> > Many thanks in advance > > Joe > _______________________________________________ > freebsd-performance@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-performance > To unsubscribe, send any mail to " > freebsd-performance-unsubscribe@freebsd.org" > From owner-freebsd-performance@FreeBSD.ORG Thu Apr 30 21:41:23 2009 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BFD82106567E; Thu, 30 Apr 2009 21:41:23 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from cyrus.watson.org (cyrus.watson.org [65.122.17.42]) by mx1.freebsd.org (Postfix) with ESMTP id 902F38FC22; Thu, 30 Apr 2009 21:41:23 +0000 (UTC) (envelope-from jhb@freebsd.org) Received: from bigwig.baldwin.cx (66.111.2.69.static.nyinternet.net [66.111.2.69]) by cyrus.watson.org (Postfix) with ESMTPSA id 4507146B09; Thu, 30 Apr 2009 17:41:23 -0400 (EDT) Received: from jhbbsd.hudson-trading.com (unknown [209.249.190.8]) by bigwig.baldwin.cx (Postfix) with ESMTPA id 294918A023; Thu, 30 Apr 2009 17:41:22 -0400 (EDT) From: John Baldwin To: freebsd-acpi@freebsd.org Date: Thu, 30 Apr 2009 08:46:41 -0400 User-Agent: KMail/1.9.7 References: <200904270150.31912.pieter@degoeje.nl> <7d6fde3d0904261927s1a67cf85jc982c1a68e30e081@mail.gmail.com> In-Reply-To: <7d6fde3d0904261927s1a67cf85jc982c1a68e30e081@mail.gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200904300846.41576.jhb@freebsd.org> X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.0.1 (bigwig.baldwin.cx); Thu, 30 Apr 2009 17:41:22 -0400 (EDT) X-Virus-Scanned: clamav-milter 0.95 at bigwig.baldwin.cx X-Virus-Status: Clean X-Spam-Status: No, score=-0.6 required=4.2 tests=AWL,BAYES_00, DATE_IN_PAST_06_12,RDNS_NONE autolearn=no version=3.2.5 X-Spam-Checker-Version: SpamAssassin 3.2.5 
(2008-06-10) on bigwig.baldwin.cx X-Mailman-Approved-At: Thu, 30 Apr 2009 22:28:08 +0000 Cc: acpi , Garrett Cooper , freebsd-performance@freebsd.org, freebsd-hackers@freebsd.org, Pieter de Goeje Subject: Re: ACPI-fast default timecounter, but HPET 83% faster X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Apr 2009 21:41:24 -0000

On Sunday 26 April 2009 10:27:42 pm Garrett Cooper wrote:
> I'm seeing similar results.
>
> [root@orangebox /usr/home/gcooper]# dmesg | grep 'Timecounter "'
> Timecounter "i8254" frequency 1193182 Hz quality 0
> Timecounter "ACPI-fast" frequency 3579545 Hz quality 1000
> Timecounter "HPET" frequency 14318180 Hz quality 900
> [root@orangebox /usr/home/gcooper]# ./cgt
> 1369355
> [root@orangebox /usr/home/gcooper]# sysctl
> kern.timecounter.hardware="ACPI-fast"
> kern.timecounter.hardware: HPET -> ACPI-fast
> [root@orangebox /usr/home/gcooper]# ./cgt
> 772289
>
> Why's the default ACPI-fast? For power-saving functionality or because
> of the `quality' factor? What are the criteria that determine the
> `quality' of a clock as reported above (I know what determines the
> quality of a clock visually on an oscilloscope =])?

I suspect that the quality of the HPET driver is lower simply because no one had measured it previously and HPET is newer and less "proven".
-- John Baldwin From owner-freebsd-performance@FreeBSD.ORG Thu Apr 30 21:52:52 2009 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 0E2E3106564A; Thu, 30 Apr 2009 21:52:52 +0000 (UTC) (envelope-from bruce@cran.org.uk) Received: from muon.cran.org.uk (brucec-1-pt.tunnel.tserv4.nyc4.ipv6.he.net [IPv6:2001:470:1f06:c09::2]) by mx1.freebsd.org (Postfix) with ESMTP id BBF3B8FC28; Thu, 30 Apr 2009 21:52:51 +0000 (UTC) (envelope-from bruce@cran.org.uk) Received: from muon.cran.org.uk (localhost [127.0.0.1]) by muon.cran.org.uk (Postfix) with ESMTP id 9B2B11900F; Thu, 30 Apr 2009 22:52:54 +0000 (GMT) X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on muon X-Spam-Level: X-Spam-Status: No, score=-2.6 required=8.0 tests=AWL,BAYES_00,NO_RELAYS autolearn=ham version=3.2.5 Received: from gluon.draftnet (unknown [IPv6:2a01:348:10f:0:240:f4ff:fe57:9871]) (using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) by muon.cran.org.uk (Postfix) with ESMTPSA; Thu, 30 Apr 2009 22:52:54 +0000 (GMT) Date: Thu, 30 Apr 2009 22:52:45 +0100 From: Bruce Cran To: John Baldwin Message-ID: <20090430225245.538d073e@gluon.draftnet> In-Reply-To: <200904300846.41576.jhb@freebsd.org> References: <200904270150.31912.pieter@degoeje.nl> <7d6fde3d0904261927s1a67cf85jc982c1a68e30e081@mail.gmail.com> <200904300846.41576.jhb@freebsd.org> X-Mailer: Claws Mail 3.7.1 (GTK+ 2.14.7; i386-portbld-freebsd7.2) Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-Mailman-Approved-At: Thu, 30 Apr 2009 22:28:19 +0000 Cc: freebsd-hackers@freebsd.org, freebsd-acpi@freebsd.org, Pieter de Goeje, Garrett Cooper, freebsd-performance@freebsd.org Subject: Re: ACPI-fast default timecounter, but HPET 83% faster X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id:
Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 30 Apr 2009 21:52:52 -0000 On Thu, 30 Apr 2009 08:46:41 -0400 John Baldwin wrote: > On Sunday 26 April 2009 10:27:42 pm Garrett Cooper wrote: > > Why's the default ACPI-fast? For power-saving functionality or > > because of the `quality' factor? What is the criteria that > > determines the `quality' of a clock as what's being reported above > > (I know what determines the quality of a clock visually from a > > oscilloscope =])? > > I suspect that the quality of the HPET driver is lower simply because > no one had measured it previously and HPET is newer and less "proven". > http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/dev/acpica/acpi_hpet.c shows some of the history behind the decision. Apparently it used to be slower but it was hoped it would get faster as systems supported it better. I guess that's happening now. -- Bruce Cran From owner-freebsd-performance@FreeBSD.ORG Fri May 1 03:20:33 2009 Return-Path: Delivered-To: freebsd-performance@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4B617106566C for ; Fri, 1 May 2009 03:20:33 +0000 (UTC) (envelope-from rmosher@he.net) Received: from humid.lightning.net (humid.lightning.net [209.51.160.9]) by mx1.freebsd.org (Postfix) with SMTP id DEAF68FC15 for ; Fri, 1 May 2009 03:20:32 +0000 (UTC) (envelope-from rmosher@he.net) Received: (qmail 15499 invoked from network); 1 May 2009 02:54:04 -0000 Received: from traffic.lightning.net (HELO ?192.168.1.229?) 
(209.51.160.8) by humid.lightning.net with SMTP; 1 May 2009 02:54:04 -0000 Message-ID: <49FA643A.1070505@he.net> Date: Thu, 30 Apr 2009 22:53:46 -0400 From: Rob Mosher User-Agent: Thunderbird 2.0.0.21 (Windows/20090302) MIME-Version: 1.0 To: freebsd-performance@freebsd.org Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Poll packet loss from tunnel traffic X-BeenThere: freebsd-performance@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Performance/tuning List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Fri, 01 May 2009 03:20:33 -0000

Hi,

I'm seeing packet loss with polling enabled, and it appears to be caused by the tunnel driver. Any information that can help me resolve this will be greatly appreciated.

I'll start with a little background. This is an IPv6 tunnel server with 1500 gif interfaces configured. There is also one teredo tunnel configured. After upgrading to 7.1 and enabling polling, the loss only shows up when there is tunnel traffic passing through. If I disable polling, the tunnel driver (used by miredo) starts dropping packets at a high rate: there is no packet loss to the machine directly when using interrupts, but the teredo tunnel introduces loss to traffic being sent through it.

During the test below, there was about 5000pps of tunnel traffic going to the box. I generated about 65kpps of packets to send to the box, and you can see the input errors skyrocket. During this test I filtered the tunnel traffic to the machine; the input errors disappeared and it was receiving 67kpps without any issues. As soon as I restored the traffic, the errors started again.

Does anyone have any input on why tunnel traffic is causing polling drops? I have modified if.h to change IFQ_MAXLEN to 1000, since 50 was not enough for the teredo tunnel. The same problem existed before this change. This has also been tested at 1000 Hz with no difference.

My settings are below.
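(For reference, the if.h change mentioned above amounts to a one-line edit; a sketch against a 7.x tree, where the stock value is 50:)

```c
/* sys/net/if.h (FreeBSD 7.x) -- default interface queue depth.
 * Stock value is 50; raised as described above because 50 was not
 * enough for the teredo tunnel. */
#define IFQ_MAXLEN      1000    /* was 50 */
```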
If I left anything off please let me know.

netstat -I bge0 -w 1
            input          (bge0)           output
   packets  errs      bytes    packets  errs      bytes colls
     54867 13988    3404372        492     0      97696     0
     53407 15668    3306569        444     0      73218     0
     56372 12560    3478105        494     0      77937     0
     56765 12512    3492866        392     0      64412     0
     57100 11230    3519765        373     0      65237     0
     55354 13617    3409092        477     0      77209     0
     67886   789    4089003        220     0      17898     0
     67489     0    4068003        240     0      20439     0
     67488     0    4062953        211     0      17574     0
     67430     0    4056830        213     0      16777     0
     67482     0    4064063        241     0      17981     0
     67434     0    4057161        150     0      12777     0
     67318     0    4048591        163     0      12296     0
     67485     0    4057702        229     0      16279     0
     67863     0    4082756        220     0      17994     0
     56381 24676    3490003        297     0      51457     0
     47338 35006    2918266        323     0      51448     0
     56400 19233    3465354        307     0      48882     0
     56364 12603    3467025        342     0      57955     0
     57968 12039    3562743        284     0      53378     0
     53421 11315    3350914        324     0      47294     0
     33038 22232    2019793        171     0      29951     0
     50857 33673    3103288        289     0      41463     0

[root@tserv3 /usr/src/sys/net]# netstat -nr | wc -l   # this is high because teredo adds routes
   36444

[root@tserv3 /usr/src/sys/net]# sysctl -a kern
kern.ostype: FreeBSD kern.osrelease: 7.1-RELEASE-p4 kern.osrevision: 199506 kern.version: FreeBSD 7.1-RELEASE-p4 #8: Mon Dec 31 16:32:13 PST 2001 root@:/usr/obj/usr/src/sys/GENERIC kern.maxvnodes: 100000 kern.maxproc: 6164 kern.maxfiles: 12328 kern.argmax: 262144 kern.securelevel: -1 kern.hostname: tserv3.fmt2.ipv6.he.net kern.hostid: 2180312168 kern.clockrate: { hz = 2000, tick = 500, profhz = 2000, stathz = 133 } kern.posix1version: 200112 kern.ngroups: 16 kern.job_control: 1 kern.saved_ids: 0 kern.boottime: { sec = 1241138975, usec = 851402 } Thu Apr 30 17:49:35 2009 kern.domainname: kern.osreldate: 701000 kern.bootfile: /boot/kernel/kernel kern.maxfilesperproc: 11095 kern.maxprocperuid: 5547 kern.ipc.maxsockbuf: 1000000 kern.ipc.sockbuf_waste_factor: 8 kern.ipc.somaxconn: 128 kern.ipc.max_linkhdr: 16 kern.ipc.max_protohdr: 60 kern.ipc.max_hdr: 76 kern.ipc.max_datalen: 128 kern.ipc.nmbjumbo16: 3200 kern.ipc.nmbjumbo9: 6400 kern.ipc.nmbjumbop: 12800
kern.ipc.nmbclusters: 100000 kern.ipc.piperesizeallowed: 1 kern.ipc.piperesizefail: 0 kern.ipc.pipeallocfail: 0 kern.ipc.pipefragretry: 0 kern.ipc.pipekva: 32768 kern.ipc.maxpipekva: 16777216 kern.ipc.msgseg: 2048 kern.ipc.msgssz: 8 kern.ipc.msgtql: 40 kern.ipc.msgmnb: 2048 kern.ipc.msgmni: 40 kern.ipc.msgmax: 16384 kern.ipc.semaem: 16384 kern.ipc.semvmx: 32767 kern.ipc.semusz: 92 kern.ipc.semume: 10 kern.ipc.semopm: 100 kern.ipc.semmsl: 60 kern.ipc.semmnu: 30 kern.ipc.semmns: 60 kern.ipc.semmni: 10 kern.ipc.semmap: 30 kern.ipc.shm_allow_removed: 0 kern.ipc.shm_use_phys: 0 kern.ipc.shmall: 8192 kern.ipc.shmseg: 128 kern.ipc.shmmni: 192 kern.ipc.shmmin: 1 kern.ipc.shmmax: 33554432 kern.ipc.maxsockets: 25600 kern.ipc.zero_copy.send: 1 kern.ipc.zero_copy.receive: 1 kern.ipc.numopensockets: 43 kern.ipc.nsfbufsused: 0 kern.ipc.nsfbufspeak: 27 kern.ipc.nsfbufs: 6656 kern.dummy: 0 kern.ps_strings: 3217031152 kern.usrstack: 3217031168 kern.logsigexit: 1 kern.iov_max: 1024 kern.hostuuid: 00020003-0004-0005-0006-000700080009 kern.cam.cam_srch_hi: 0 kern.cam.scsi_delay: 5000 kern.cam.cd.changer.max_busy_seconds: 15 kern.cam.cd.changer.min_busy_seconds: 5 kern.cam.da.da_send_ordered: 1 kern.cam.da.default_timeout: 60 kern.cam.da.retry_count: 4 kern.dcons.poll_hz: 100 kern.disks: ad6 ad4 kern.geom.collectstats: 1 kern.geom.debugflags: 0 kern.geom.label.debug: 0 kern.elf32.fallback_brand: -1 kern.init_shutdown_timeout: 120 kern.init_path: /sbin/init:/sbin/oinit:/sbin/init.bak:/rescue/init:/stand/sysinstall kern.acct_suspended: 0 kern.acct_configured: 0 kern.acct_chkfreq: 15 kern.acct_resume: 4 kern.acct_suspend: 2 kern.cp_times: 17030 0 127675 277305 457322 19681 0 201650 179434 478628 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 kern.cp_time: 36711 0 329325 456739 935950 kern.openfiles: 94 kern.kq_calloutmax: 4096 kern.ps_arg_cache_limit: 256 kern.stackprot: 7 kern.randompid: 0 
kern.lastpid: 76148 kern.ktrace.request_pool: 100 kern.ktrace.genio_size: 4096 kern.module_path: /boot/kernel;/boot/modules kern.malloc_count: 247 kern.fallback_elf_brand: -1 kern.features.compat_freebsd6: 1 kern.features.compat_freebsd5: 1 kern.features.compat_freebsd4: 1 kern.maxusers: 384 kern.ident: GENERIC kern.polling.idlepoll_sleeping: 1 kern.polling.stalled: 11236 kern.polling.suspect: 625362 kern.polling.phase: 0 kern.polling.enable: 0 kern.polling.handlers: 1 kern.polling.residual_burst: 0 kern.polling.pending_polls: 0 kern.polling.lost_polls: 5605120 kern.polling.short_ticks: 2031 kern.polling.reg_frac: 1 kern.polling.user_frac: 1 kern.polling.idle_poll: 0 kern.polling.each_burst: 1000 kern.polling.burst_max: 1000 kern.polling.burst: 29 kern.kstack_pages: 2 kern.shutdown.kproc_shutdown_wait: 60 kern.shutdown.poweroff_delay: 5000 kern.sync_on_panic: 0 kern.corefile: %N.core kern.nodump_coredump: 0 kern.coredump: 1 kern.sugid_coredump: 0 kern.sigqueue.alloc_fail: 0 kern.sigqueue.overflow: 0 kern.sigqueue.preallocate: 1024 kern.sigqueue.max_pending_per_proc: 128 kern.forcesigexit: 1 kern.fscale: 2048 kern.timecounter.tick: 2 kern.timecounter.choice: TSC(-100) ACPI-safe(850) i8254(0) dummy(-1000000) kern.timecounter.hardware: ACPI-safe kern.timecounter.nsetclock: 3 kern.timecounter.ngetmicrotime: 700031 kern.timecounter.ngetnanotime: 12814 kern.timecounter.ngetbintime: 0 kern.timecounter.ngetmicrouptime: 510350 kern.timecounter.ngetnanouptime: 3766 kern.timecounter.ngetbinuptime: 20127 kern.timecounter.nmicrotime: 808701 kern.timecounter.nnanotime: 6634 kern.timecounter.nbintime: 815336 kern.timecounter.nmicrouptime: 37590443 kern.timecounter.nnanouptime: 119 kern.timecounter.nbinuptime: 40052546 kern.timecounter.stepwarnings: 0 kern.timecounter.tc.i8254.mask: 65535 kern.timecounter.tc.i8254.counter: 4041 kern.timecounter.tc.i8254.frequency: 1193182 kern.timecounter.tc.i8254.quality: 0 kern.timecounter.tc.ACPI-safe.mask: 4294967295 
kern.timecounter.tc.ACPI-safe.counter: 2482401764 kern.timecounter.tc.ACPI-safe.frequency: 3579545 kern.timecounter.tc.ACPI-safe.quality: 850 kern.timecounter.tc.TSC.mask: 4294967295 kern.timecounter.tc.TSC.counter: 487861987 kern.timecounter.tc.TSC.frequency: 2593518990 kern.timecounter.tc.TSC.quality: -100 kern.timecounter.smp_tsc: 0 kern.threads.virtual_cpu: 2 kern.threads.max_threads_hits: 0 kern.threads.max_threads_per_proc: 1500 kern.ccpu: 0 kern.sched.preemption: 1 kern.sched.topology: 0 kern.sched.steal_thresh: 1 kern.sched.steal_idle: 1 kern.sched.steal_htt: 1 kern.sched.balance_interval: 133 kern.sched.balance: 1 kern.sched.tryself: 1 kern.sched.affinity: 3 kern.sched.pick_pri: 1 kern.sched.preempt_thresh: 64 kern.sched.interact: 30 kern.sched.slice: 13 kern.sched.name: ULE kern.devstat.version: 6 kern.devstat.generation: 129 kern.devstat.numdevs: 2 kern.kobj_methodcount: 140 kern.log_wakeups_per_second: 5 kern.msgbuf_clear: 0 kern.msgbuf: kern.always_console_output: 0 kern.log_console_output: 1 kern.smp.forward_roundrobin_enabled: 1 kern.smp.forward_signal_enabled: 1 kern.smp.cpus: 2 kern.smp.disabled: 0 kern.smp.active: 1 kern.smp.maxcpus: 16 kern.smp.maxid: 15 kern.nselcoll: 0 kern.tty_nout: 12126931 kern.tty_nin: 1974280 kern.drainwait: 300 kern.constty_wakeups_per_second: 5 kern.consmsgbuf_size: 8192 kern.consmute: 0 kern.console: consolectl,dcons,/dcons,consolectl,ttyd0, kern.minvnodes: 25000 kern.metadelay: 28 kern.dirdelay: 29 kern.filedelay: 30 kern.chroot_allow_open_directories: 1 kern.rpc.invalid: 0 kern.rpc.unexpected: 0 kern.rpc.timeouts: 0 kern.rpc.request: 0 kern.rpc.retries: 0 kern.random.yarrow.gengateinterval: 10 kern.random.yarrow.bins: 10 kern.random.yarrow.fastthresh: 192 kern.random.yarrow.slowthresh: 256 kern.random.yarrow.slowoverthresh: 2 kern.random.sys.seeded: 1 kern.random.sys.harvest.ethernet: 1 kern.random.sys.harvest.point_to_point: 1 kern.random.sys.harvest.interrupt: 1 kern.random.sys.harvest.swi: 0 [root@tserv3 
/usr/src/sys/net]# sysctl -a net
net.local.stream.recvspace: 100000
net.local.stream.sendspace: 100000
net.local.dgram.recvspace: 100000
net.local.dgram.maxdgram: 100000
net.local.recycled: 0
net.local.taskcount: 0
net.local.inflight: 0
net.inet.ip.portrange.randomtime: 45
net.inet.ip.portrange.randomcps: 10
net.inet.ip.portrange.randomized: 1
net.inet.ip.portrange.reservedlow: 0
net.inet.ip.portrange.reservedhigh: 1023
net.inet.ip.portrange.hilast: 65535
net.inet.ip.portrange.hifirst: 49152
net.inet.ip.portrange.last: 65535
net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.lowfirst: 1023
net.inet.ip.forwarding: 1
net.inet.ip.redirect: 1
net.inet.ip.ttl: 64
net.inet.ip.rtexpire: 3600
net.inet.ip.rtminexpire: 10
net.inet.ip.rtmaxcache: 128
net.inet.ip.sourceroute: 0
net.inet.ip.intr_queue_maxlen: 1024
net.inet.ip.intr_queue_drops: 0
net.inet.ip.accept_sourceroute: 0
net.inet.ip.keepfaith: 0
net.inet.ip.gifttl: 30
net.inet.ip.same_prefix_carp_only: 0
net.inet.ip.subnets_are_local: 0
net.inet.ip.fastforwarding: 1
net.inet.ip.maxfragpackets: 3125
net.inet.ip.maxfragsperpacket: 16
net.inet.ip.fragpackets: 0
net.inet.ip.check_interface: 0
net.inet.ip.random_id: 0
net.inet.ip.sendsourcequench: 0
net.inet.ip.process_options: 1
net.inet.icmp.maskrepl: 0
net.inet.icmp.icmplim: 200
net.inet.icmp.bmcastecho: 0
net.inet.icmp.quotelen: 8
net.inet.icmp.reply_from_interface: 0
net.inet.icmp.reply_src:
net.inet.icmp.icmplim_output: 1
net.inet.icmp.log_redirect: 0
net.inet.icmp.drop_redirect: 0
net.inet.icmp.maskfake: 0
net.inet.tcp.rfc1323: 1
net.inet.tcp.mssdflt: 512
net.inet.tcp.keepidle: 7200000
net.inet.tcp.keepintvl: 75000
net.inet.tcp.sendspace: 32768
net.inet.tcp.recvspace: 65536
net.inet.tcp.keepinit: 75000
net.inet.tcp.delacktime: 100
net.inet.tcp.v6mssdflt: 1024
net.inet.tcp.hostcache.purge: 0
net.inet.tcp.hostcache.prune: 300
net.inet.tcp.hostcache.expire: 3600
net.inet.tcp.hostcache.count: 3
net.inet.tcp.hostcache.bucketlimit: 30
net.inet.tcp.hostcache.hashsize: 512
net.inet.tcp.hostcache.cachelimit: 15360
net.inet.tcp.recvbuf_max: 262144
net.inet.tcp.recvbuf_inc: 16384
net.inet.tcp.recvbuf_auto: 1
net.inet.tcp.insecure_rst: 0
net.inet.tcp.rfc3390: 1
net.inet.tcp.rfc3042: 1
net.inet.tcp.drop_synfin: 0
net.inet.tcp.delayed_ack: 1
net.inet.tcp.blackhole: 0
net.inet.tcp.log_in_vain: 0
net.inet.tcp.sendbuf_max: 262144
net.inet.tcp.sendbuf_inc: 8192
net.inet.tcp.sendbuf_auto: 1
net.inet.tcp.tso: 1
net.inet.tcp.newreno: 1
net.inet.tcp.local_slowstart_flightsize: 4
net.inet.tcp.slowstart_flightsize: 1
net.inet.tcp.path_mtu_discovery: 1
net.inet.tcp.reass.overflows: 0
net.inet.tcp.reass.maxqlen: 48
net.inet.tcp.reass.cursegments: 0
net.inet.tcp.reass.maxsegments: 6250
net.inet.tcp.sack.globalholes: 0
net.inet.tcp.sack.globalmaxholes: 65536
net.inet.tcp.sack.maxholes: 128
net.inet.tcp.sack.enable: 1
net.inet.tcp.inflight.stab: 20
net.inet.tcp.inflight.max: 1073725440
net.inet.tcp.inflight.min: 6144
net.inet.tcp.inflight.rttthresh: 10
net.inet.tcp.inflight.debug: 0
net.inet.tcp.inflight.enable: 1
net.inet.tcp.isn_reseed_interval: 0
net.inet.tcp.icmp_may_rst: 1
net.inet.tcp.pcbcount: 21
net.inet.tcp.do_tcpdrain: 1
net.inet.tcp.tcbhashsize: 512
net.inet.tcp.log_debug: 0
net.inet.tcp.minmss: 216
net.inet.tcp.syncache.rst_on_sock_fail: 1
net.inet.tcp.syncache.rexmtlimit: 3
net.inet.tcp.syncache.hashsize: 512
net.inet.tcp.syncache.count: 0
net.inet.tcp.syncache.cachelimit: 15360
net.inet.tcp.syncache.bucketlimit: 30
net.inet.tcp.syncookies_only: 0
net.inet.tcp.syncookies: 1
net.inet.tcp.timer_race: 0
net.inet.tcp.finwait2_timeout: 60000
net.inet.tcp.fast_finwait2_recycle: 0
net.inet.tcp.always_keepalive: 1
net.inet.tcp.rexmit_slop: 200
net.inet.tcp.rexmit_min: 30
net.inet.tcp.msl: 30000
net.inet.tcp.nolocaltimewait: 0
net.inet.tcp.maxtcptw: 5120
net.inet.udp.checksum: 1
net.inet.udp.maxdgram: 100000
net.inet.udp.recvspace: 100000
net.inet.udp.soreceive_dgram_enabled: 0
net.inet.udp.blackhole: 0
net.inet.udp.log_in_vain: 0
net.inet.sctp.enable_sack_immediately: 0
net.inet.sctp.udp_tunneling_port: 0
net.inet.sctp.udp_tunneling_for_client_enable: 0
net.inet.sctp.mobility_fasthandoff: 0
net.inet.sctp.mobility_base: 0
net.inet.sctp.default_frag_interleave: 1
net.inet.sctp.default_cc_module: 0
net.inet.sctp.log_level: 0
net.inet.sctp.max_retran_chunk: 30
net.inet.sctp.min_residual: 1452
net.inet.sctp.strict_data_order: 0
net.inet.sctp.abort_at_limit: 0
net.inet.sctp.hb_max_burst: 4
net.inet.sctp.do_sctp_drain: 1
net.inet.sctp.max_chained_mbufs: 5
net.inet.sctp.abc_l_var: 1
net.inet.sctp.nat_friendly: 1
net.inet.sctp.auth_disable: 0
net.inet.sctp.asconf_auth_nochk: 0
net.inet.sctp.early_fast_retran_msec: 250
net.inet.sctp.early_fast_retran: 0
net.inet.sctp.cwnd_maxburst: 1
net.inet.sctp.cmt_pf: 0
net.inet.sctp.cmt_use_dac: 0
net.inet.sctp.cmt_on_off: 0
net.inet.sctp.outgoing_streams: 10
net.inet.sctp.add_more_on_output: 1452
net.inet.sctp.path_rtx_max: 5
net.inet.sctp.assoc_rtx_max: 10
net.inet.sctp.init_rtx_max: 8
net.inet.sctp.valid_cookie_life: 60000
net.inet.sctp.init_rto_max: 60000
net.inet.sctp.rto_initial: 3000
net.inet.sctp.rto_min: 1000
net.inet.sctp.rto_max: 60000
net.inet.sctp.secret_lifetime: 3600
net.inet.sctp.shutdown_guard_time: 180
net.inet.sctp.pmtu_raise_time: 600
net.inet.sctp.heartbeat_interval: 30000
net.inet.sctp.asoc_resource: 10
net.inet.sctp.sys_resource: 1000
net.inet.sctp.sack_freq: 2
net.inet.sctp.delayed_sack_time: 200
net.inet.sctp.chunkscale: 10
net.inet.sctp.min_split_point: 2904
net.inet.sctp.pcbhashsize: 256
net.inet.sctp.tcbhashsize: 1024
net.inet.sctp.maxchunks: 3200
net.inet.sctp.maxburst: 4
net.inet.sctp.peer_chkoh: 256
net.inet.sctp.strict_init: 1
net.inet.sctp.loopback_nocsum: 1
net.inet.sctp.strict_sacks: 0
net.inet.sctp.ecn_nonce: 0
net.inet.sctp.ecn_enable: 1
net.inet.sctp.auto_asconf: 1
net.inet.sctp.recvspace: 233016
net.inet.sctp.sendspace: 233016
net.inet.raw.recvspace: 100000
net.inet.raw.maxdgram: 100000
net.inet.accf.unloadable: 0
net.link.generic.system.ifcount: 1509
net.link.ether.inet.log_arp_permanent_modify: 1
net.link.ether.inet.log_arp_movements: 1
net.link.ether.inet.log_arp_wrong_iface: 1
net.link.ether.inet.proxyall: 0
net.link.ether.inet.useloopback: 1
net.link.ether.inet.maxtries: 5
net.link.ether.inet.max_age: 1200
net.link.ether.ipfw: 0
net.link.stf.route_cache: 1
net.link.gif.parallel_tunnels: 0
net.link.gif.max_nesting: 1
net.link.log_link_state_change: 1
net.link.tun.devfs_cloning: 1
net.inet6.ip6.forwarding: 1
net.inet6.ip6.redirect: 1
net.inet6.ip6.hlim: 64
net.inet6.ip6.maxfragpackets: 25000
net.inet6.ip6.accept_rtadv: 0
net.inet6.ip6.keepfaith: 0
net.inet6.ip6.log_interval: 5
net.inet6.ip6.hdrnestlimit: 15
net.inet6.ip6.dad_count: 1
net.inet6.ip6.auto_flowlabel: 1
net.inet6.ip6.defmcasthlim: 1
net.inet6.ip6.gifhlim: 30
net.inet6.ip6.kame_version: FreeBSD
net.inet6.ip6.use_deprecated: 1
net.inet6.ip6.rr_prune: 5
net.inet6.ip6.v6only: 1
net.inet6.ip6.rtexpire: 3600
net.inet6.ip6.rtminexpire: 10
net.inet6.ip6.rtmaxcache: 128
net.inet6.ip6.use_tempaddr: 0
net.inet6.ip6.temppltime: 86400
net.inet6.ip6.tempvltime: 604800
net.inet6.ip6.auto_linklocal: 1
net.inet6.ip6.prefer_tempaddr: 0
net.inet6.ip6.use_defaultzone: 0
net.inet6.ip6.maxfrags: 25000
net.inet6.ip6.mcast_pmtu: 0
net.inet6.icmp6.rediraccept: 1
net.inet6.icmp6.redirtimeout: 600
net.inet6.icmp6.nd6_prune: 1
net.inet6.icmp6.nd6_delay: 5
net.inet6.icmp6.nd6_umaxtries: 3
net.inet6.icmp6.nd6_mmaxtries: 3
net.inet6.icmp6.nd6_useloopback: 1
net.inet6.icmp6.nodeinfo: 3
net.inet6.icmp6.errppslimit: 100
net.inet6.icmp6.nd6_maxnudhint: 0
net.inet6.icmp6.nd6_debug: 0
net.inet6.icmp6.nd6_maxqueuelen: 1
net.inet6.icmp6.nd6_onlink_ns_rfc4861: 0
net.bpf.maxinsns: 512
net.bpf.maxbufsize: 524288
net.bpf.bufsize: 4096
net.isr.swi_count: 67634604
net.isr.drop: 0
net.isr.queued: 183166
net.isr.deferred: 41476140
net.isr.directed: 41346528
net.isr.count: 82819191
net.isr.direct: 1
net.raw.recvspace: 100000
net.raw.sendspace: 100000
net.my_fibnum: 0
net.add_addr_allfibs: 1
net.fibs: 1
net.route.netisr_maxqlen: 256
net.wlan.recv_bar: 1
net.wlan.debug: 0
[root@tserv3 /usr/src/sys/net]# sysctl -a dev.bge.0
dev.bge.0.%desc: Broadcom NetXtreme Gigabit Ethernet Controller, ASIC rev. 0x2100
dev.bge.0.%driver: bge
dev.bge.0.%location: slot=3 function=0
dev.bge.0.%pnpinfo: vendor=0x14e4 device=0x1648 subvendor=0x15d9 subdevice=0x1648 class=0x020000
dev.bge.0.%parent: pci2
dev.bge.0.stats.FramesDroppedDueToFilters: 0
dev.bge.0.stats.DmaWriteQueueFull: 2997701
dev.bge.0.stats.DmaWriteHighPriQueueFull: 0
dev.bge.0.stats.NoMoreRxBDs: 0
dev.bge.0.stats.InputDiscards: 287023
dev.bge.0.stats.InputErrors: 0
dev.bge.0.stats.RecvThresholdHit: 33654645
dev.bge.0.stats.DmaReadQueueFull: 434236
dev.bge.0.stats.DmaReadHighPriQueueFull: 0
dev.bge.0.stats.SendDataCompQueueFull: 0
dev.bge.0.stats.RingSetSendProdIndex: 31463597
dev.bge.0.stats.RingStatusUpdate: 64933072
dev.bge.0.stats.Interrupts: 75393
dev.bge.0.stats.AvoidedInterrupts: 64857679
dev.bge.0.stats.SendThresholdHit: 0
dev.bge.0.stats.rx.Octets: 79971511
dev.bge.0.stats.rx.Fragments: 1
dev.bge.0.stats.rx.UcastPkts: 33966088
dev.bge.0.stats.rx.MulticastPkts: 0
dev.bge.0.stats.rx.FCSErrors: 0
dev.bge.0.stats.rx.AlignmentErrors: 0
dev.bge.0.stats.rx.xonPauseFramesReceived: 0
dev.bge.0.stats.rx.xoffPauseFramesReceived: 0
dev.bge.0.stats.rx.ControlFramesReceived: 0
dev.bge.0.stats.rx.xoffStateEntered: 0
dev.bge.0.stats.rx.FramesTooLong: 0
dev.bge.0.stats.rx.Jabbers: 0
dev.bge.0.stats.rx.UndersizePkts: 0
dev.bge.0.stats.rx.inRangeLengthError: 0
dev.bge.0.stats.rx.outRangeLengthError: 0
dev.bge.0.stats.tx.Octets: 142844970
dev.bge.0.stats.tx.Collisions: 0
dev.bge.0.stats.tx.XonSent: 0
dev.bge.0.stats.tx.XoffSent: 0
dev.bge.0.stats.tx.flowControlDone: 0
dev.bge.0.stats.tx.InternalMacTransmitErrors: 0
dev.bge.0.stats.tx.SingleCollisionFrames: 0
dev.bge.0.stats.tx.MultipleCollisionFrames: 0
dev.bge.0.stats.tx.DeferredTransmissions: 0
dev.bge.0.stats.tx.ExcessiveCollisions: 0
dev.bge.0.stats.tx.LateCollisions: 0
dev.bge.0.stats.tx.UcastPkts: 31339486
dev.bge.0.stats.tx.MulticastPkts: 0
dev.bge.0.stats.tx.BroadcastPkts: 7
dev.bge.0.stats.tx.CarrierSenseErrors: 0
dev.bge.0.stats.tx.Discards: 0
dev.bge.0.stats.tx.Errors: 0
-- 
Rob Mosher
Network Engineer
Hurricane Electric / AS6939