Date:      Mon, 06 Feb 2012 18:29:14 +0200
From:      Alexander Motin <mav@FreeBSD.org>
To:        Alexander Best <arundel@freebsd.org>
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: [RFT][patch] Scheduling for HTT and not only
Message-ID:  <4F2FFFDA.2080608@FreeBSD.org>
In-Reply-To: <20120206160136.GA35918@freebsd.org>
References:  <4F2F7B7F.40508@FreeBSD.org> <20120206160136.GA35918@freebsd.org>

On 02/06/12 18:01, Alexander Best wrote:
> On Mon Feb  6 12, Alexander Motin wrote:
>> I've analyzed the scheduler's behavior and I think I've found the
>> problem with HTT. SCHED_ULE knows about HTT and does the right thing
>> when it load-balances once a second. Unfortunately, if some other
>> thread gets in the way, a process can easily be pushed out to another
>> CPU, where it will stay for another second because of CPU affinity,
>> possibly sharing a physical core with something else without need.
>>
>> I've made a patch, reworking SCHED_ULE affinity code, to fix that:
>> http://people.freebsd.org/~mav/sched.htt.patch
>>
>> This patch does three things:
>>   - Disables the strict affinity optimization when HTT is detected,
>> letting the more sophisticated code take the load of the other
>> logical core(s) into account.
>>   - Adds affinity support to the sched_lowest() function, so that it
>> prefers the specified (last-used) CPU (and the CPU groups it belongs
>> to) in case of equal load. The previous code always selected the
>> first valid CPU among equals, which caused threads to migrate to
>> lower-numbered CPUs without need.
>>   - If the current CPU group has no CPU where the thread, at its
>> priority, can run now, sequentially checks the parent CPU groups
>> before doing a global search. That should improve affinity for the
>> next cache levels.

>> Who wants to do independent testing to verify my results or do some more
>> interesting benchmarks? :)
>
> i don't have any benchmarks to offer, but i'm seeing a massive
> increase in responsiveness with your patch. with an unpatched kernel,
> opening xterm while unrar'ing some huge archive could take up to 3
> minutes! with your patch the time it takes for xterm to start is
> never > 10 seconds!

Thank you for the report. I can suggest an explanation for this. The 
original code does only one pass, looking for a CPU where the thread 
can run immediately. That pass is limited to the first level of the CPU 
topology (for HTT systems, one physical core). If it sees no good 
candidate, it just picks the CPU with the minimal load, ignoring thread 
priority. I suppose that may lead to a priority violation: the thread 
gets scheduled onto a CPU where a higher-priority thread is running, 
and may wait there for a very long time, even though some other CPU is 
running only a minimal-priority thread. My patch does more searches, 
which allows it to handle priorities better.

Unluckily, in my newer tests of context-switch-intensive workloads 
(such as serving 40K MySQL requests per second) I've found about a 3% 
slowdown because of these additional searches. I'll finish some more 
tests and try to find a compromise solution.

-- 
Alexander Motin


