Date:      Wed, 26 Dec 2012 21:24:46 +0200
From:      Alexander Motin <mav@FreeBSD.org>
To:        Marius Strobl <marius@alchemy.franken.de>
Cc:        Davide Italiano <davide@freebsd.org>, FreeBSD Current <freebsd-current@freebsd.org>, freebsd-arch@freebsd.org
Subject:   Re: [RFC/RFT] calloutng
Message-ID:  <50DB4EFE.2020600@FreeBSD.org>
In-Reply-To: <20121225232126.GA47692@alchemy.franken.de>
References:  <50CCAB99.4040308@FreeBSD.org> <50CE5B54.3050905@FreeBSD.org> <50D03173.9080904@FreeBSD.org> <20121225232126.GA47692@alchemy.franken.de>

On 26.12.2012 01:21, Marius Strobl wrote:
> On Tue, Dec 18, 2012 at 11:03:47AM +0200, Alexander Motin wrote:
>> Experiments with dummynet showed ineffective support for very short
>> tick-based callouts. The new version fixes that, allowing as many
>> tick-based callout events as the hz value permits, while still being
>> able to aggregate events and generate a minimum of interrupts.
>>
>> This version also modifies the system load average calculation to fix
>> some cases present in the HEAD and 9 branches that can now be addressed
>> with the new direct callout functionality.
>>
>> http://people.freebsd.org/~mav/calloutng_12_17.patch
>>
>> Given the several important changes made last time, I am going to delay
>> the commit to HEAD for another week to do more testing. Comments and new
>> test cases are welcome. Thanks for staying tuned and commenting.
>
> FYI, I gave both calloutng_12_15_1.patch and calloutng_12_17.patch a
> try on sparc64 and it at least survives a buildworld there. However,
> with the patched kernels, buildworld times seem to increase slightly but
> reproducibly, by 1-2% (I only did four runs, but typically buildworld
> times are rather stable and don't vary more than a minute for the
> same kernel and source here). Is this an expected trade-off (system
> time as such doesn't seem to increase)?

I don't think the build process uses a significant number of callouts, so 
it is unlikely to affect the results directly. I think this additional 
time could be the result of the deeper next-event lookup done by the new 
code, which is practically useless on sparc64 since it effectively has no 
cpu_idle() routine. That work wouldn't affect system time and wouldn't 
show up in any statistics (except PMC or something alike) because it is 
executed inside the timer hardware interrupt handler. If my guess is 
right, that is a part that could probably still be optimized. I'll look 
into it. Thanks.
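
To illustrate what I mean, here is a rough, simplified sketch (not the 
actual calloutng code; the bucket layout, sizes and names are invented) 
of the kind of next-event scan a one-shot timer handler has to do: walk 
the callout wheel buckets ahead of the current time to find the earliest 
pending callout, so the hardware timer can be programmed to fire exactly 
then:

#include <stdint.h>

#define WHEEL_SIZE	1024		/* hypothetical number of buckets */

struct callout_bucket {
	int		cb_nonempty;	/* bucket has pending callouts */
	uint64_t	cb_first_time;	/* earliest expiration in bucket */
};

static struct callout_bucket wheel[WHEEL_SIZE];

/*
 * Return the earliest pending expiration within 'max_lookahead' ticks
 * of 'now', or UINT64_MAX if nothing is scheduled in that window.
 * The bucket for time 'now + i' holds the callouts expiring then.
 */
static uint64_t
next_event_lookup(uint64_t now, int max_lookahead)
{
	int i, idx;

	for (i = 0; i < max_lookahead; i++) {
		idx = (int)((now + i) & (WHEEL_SIZE - 1));
		if (wheel[idx].cb_nonempty)
			return (wheel[idx].cb_first_time);
	}
	return (UINT64_MAX);
}

On a platform whose cpu_idle() can't use that answer to sleep longer, the 
deeper the scan, the more pure overhead it adds to the interrupt path, 
which would surface as slightly longer wall-clock times without inflating 
system time, matching what you measured.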

> Is there anything specific to test?

Since most of the code is MI, for sparc64 I would mostly look at the 
related MD parts (eventtimers and timecounters) to make sure they work 
reliably under more stressful conditions.  I still have some worries 
about a possible deadlock on hardware where IPIs are used to fetch the 
present time from another CPU.
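
To make the worry concrete, the pattern I have in mind looks roughly like 
the following (the names and stubs are invented for the sketch; this is 
not the sparc64 code): one CPU asks the time-keeping CPU for the current 
counter via an IPI and busy-waits for the reply, while the target CPU is 
itself spinning with interrupts disabled on a lock the requester holds, 
so the reply never arrives:

#include <stdint.h>

/* Hypothetical stand-ins so the sketch compiles; not real MD interfaces. */
#define TIME_KEEPER_CPU	0
#define IPI_READ_TIMER	1
static void	send_ipi(int cpu, int ipi) { (void)cpu; (void)ipi; }
static uint64_t	read_hw_timer(void) { return (0); }

static volatile int	 reply_ready;
static volatile uint64_t remote_time;

/* Runs on the requesting CPU, possibly while holding a spin lock. */
static uint64_t
ipi_get_timecount(void)
{
	reply_ready = 0;
	send_ipi(TIME_KEEPER_CPU, IPI_READ_TIMER);
	while (!reply_ready)
		;	/* hangs if the target CPU never services the IPI */
	return (remote_time);
}

/*
 * IPI handler on the time-keeping CPU.  If that CPU is spinning with
 * interrupts disabled on a lock held by the requester, this never runs
 * and both CPUs are stuck.
 */
static void
ipi_read_timer_handler(void)
{
	remote_time = read_hw_timer();
	reply_ready = 1;
}

A stress test on such hardware is the quickest way to see whether that 
window can actually be hit with the new code.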

Here is a small tool we are using to test the correctness and performance 
of different user-level APIs: http://people.freebsd.org/~mav/testsleep.c
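
For reference, a minimal sketch in the spirit of that tool (not the 
actual testsleep.c) just requests a short sleep through a few user-level 
APIs and reports how long each one really took, which exposes the timer 
granularity and event-aggregation behaviour the patch changes:

#include <sys/select.h>
#include <poll.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double
now_us(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (ts.tv_sec * 1e6 + ts.tv_nsec / 1e3);
}

int
main(void)
{
	const long want_us = 300;	/* request 300 microsecond sleeps */
	struct timespec ts = { 0, want_us * 1000 };
	struct timeval tv = { 0, want_us };
	double t0;

	t0 = now_us();
	nanosleep(&ts, NULL);
	printf("nanosleep: asked %ld us, got %.1f us\n", want_us, now_us() - t0);

	t0 = now_us();
	usleep(want_us);
	printf("usleep:    asked %ld us, got %.1f us\n", want_us, now_us() - t0);

	t0 = now_us();
	select(0, NULL, NULL, NULL, &tv);
	printf("select:    asked %ld us, got %.1f us\n", want_us, now_us() - t0);

	t0 = now_us();
	poll(NULL, 0, 1);		/* poll() is millisecond-granular */
	printf("poll:      asked 1000 us, got %.1f us\n", now_us() - t0);

	return (0);
}

On an unpatched kernel the measured times tend to round up to the next 
tick; comparing the output before and after the patch shows how much 
closer the kernel gets to the requested intervals.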

-- 
Alexander Motin


