From: Mario Lobo <lobo@bsd.com.br>
To: bf1783@gmail.com
Cc: freebsd-questions@freebsd.org
Date: Wed, 10 Aug 2011 19:43:01 -0300
Subject: Re: High interrupt rate

On Monday 08 August 2011 21:30:41 b. f. wrote:

>> I'll wait for your views on those before disabling polling on the kernel
>> and hz=100.
>
> It looks like your interrupt rate, while probably higher than needed,
> is not unexpectedly high for your configuration. But you can lower it
> if you want to do so.
>
> You are using a system before the introduction of the new eventtimer
> code. If you use 9.x, which has the new code and some other
> timer-related improvements, and you are not performing polling, then
> you can achieve a large reduction in the number of timer interrupts
> when the system isn't busy. You can still achieve a reduction on 8.x,
> but the reduction usually won't be as large as on 9.x under similar
> conditions.
>
> To reduce timer interrupts on an idle system running 8.x or 9.x, if
> you do not need to poll (most systems do not), remove DEVICE_POLLING
> from your kernel, and lower kern.hz to a suitable value -- 100 or 250,
> for example. For many workloads, a lower value is not only adequate,
> but may also be better in some ways.
>
> Also, you may want to consider using your TSC as the system
> timecounter, because it is usually more efficient to do so. This may
> not work for SMP, because if there are multiple TSCs on your system,
> they may not be synchronized. In 9.x, there is a test for
> synchronization, and the TSCs are preferred to the ACPI-safe timer if
> they satisfy this test and meet some other requirements. In 8.x, the
> user has to tell the system that it is safe to use the TSCs by adding:
>
> kern.timecounter.smp_tsc="1"
>
> to /boot/loader.conf. If you are not putting your cores into the C3
> state, then you could try setting this via the loader command line,
> booting, and then seeing if kern.timecounter.tc.TSC.quality is
> positive, kern.timecounter.hardware is TSC, and everything is working
> as expected. If the results are satisfactory, then you could add the
> above entry to /boot/loader.conf. But it would be better to do this
> on 9.x, where there are some added safeguards.
>
> b.

b.;
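Before anything else, let me write down how I read your recipe, so I
have it in one place for when I try it. I haven't actually run any of
this yet, so treat it as my notes on your advice rather than a tested
procedure:

  # 1) Rebuild the kernel without polling support: remove this line
  #    from the kernel config and do the usual buildkernel /
  #    installkernel dance:
  #
  #      options  DEVICE_POLLING
  #
  # 2) Lower the timer frequency; in /boot/loader.conf:
  kern.hz="100"

  # 3) Try the TSC once, from the loader "OK" prompt, without
  #    committing anything:
  #
  #      set kern.timecounter.smp_tsc=1
  #      boot
  #
  # 4) After booting, check what was picked and what it did:
  #
  #      sysctl kern.timecounter.hardware         (should say TSC)
  #      sysctl kern.timecounter.tc.TSC.quality   (should be positive)
  #      vmstat -i                                (watch the interrupt rate)
  #
  # 5) Only if all of that looks good, make it stick in
  #    /boot/loader.conf:
  kern.timecounter.smp_tsc="1"

If I've garbled any of that, corrections welcome.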
Now, something really odd happened. After I sent you the data, while
waiting for your reply, I changed the Lusca cache to use 64M of RAM
instead of the 256M it had. 256M was 1/8th of the machine's RAM, so I
just decided to give it less. Well, I swear to you, this was the ONLY
thing I did!

Since then, the system has been running at around 97% idle, 98% of the
time! During load hours there are only short (1 s) dips to around 75%
idle, spaced far apart. And web performance is actually a little
better! The overall response of the system has improved as well. That's
why I waited a couple of days to reply, so I could confirm this
behavior.

I don't know. Maybe with more RAM, Lusca was spawning too many threads
and thus loading the CPU, but this is just a guess. I will take the
Lusca memory back to 256 for the sake of checking, but I want to find
out whether this newfound stability is here to stay, so I'll wait a
little longer before doing that.

Your suggestions will be kept handy, just in case.

Thanks for everything.

--
Mario Lobo
http://www.mallavoodoo.com.br
FreeBSD since 2.2.8 [not Pro-Audio.... YET!!]
(99% winblows FREE)
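P.S.: For the record, the Lusca change I mentioned was just this one
directive (Lusca uses squid.conf syntax; I'm quoting it from memory, so
the exact line may differ a bit from my config):

  # previously: cache_mem 256 MB
  cache_mem 64 MB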