Date: Tue, 10 Apr 2012 14:46:10 -0400
From: Arnaud Lacombe <lacombar@gmail.com>
To: Alexander Motin <mav@freebsd.org>
Cc: freebsd-hackers@freebsd.org, Florian Smeets <flo@freebsd.org>,
    Jeff Roberson <jroberson@jroberson.net>, Andriy Gapon <avg@freebsd.org>,
    FreeBSD current <freebsd-current@freebsd.org>
Subject: Re: [RFT][patch] Scheduling for HTT and not only
Message-ID: <CACqU3MXBGotDuTupZP3njEfCAWhdcC3fow+7QAdtRr=YVMiy3Q@mail.gmail.com>
In-Reply-To: <4F8473B7.9080000@FreeBSD.org>
References: <4F2F7B7F.40508@FreeBSD.org> <4F366E8F.9060207@FreeBSD.org>
    <4F367965.6000602@FreeBSD.org> <4F396B24.5090602@FreeBSD.org>
    <alpine.BSF.2.00.1202131012270.2020@desktop> <4F3978BC.6090608@FreeBSD.org>
    <alpine.BSF.2.00.1202131108460.2020@desktop> <4F3990EA.1080002@FreeBSD.org>
    <4F3C0BB9.6050101@FreeBSD.org> <alpine.BSF.2.00.1202150949480.2020@desktop>
    <4F3E807A.60103@FreeBSD.org>
    <CACqU3MWEC4YYguPQF_d+_i_CwTc=86hG+PbxFgJQiUS-=AHiRw@mail.gmail.com>
    <4F3E8858.4000001@FreeBSD.org>
    <CACqU3MWZj503xN_-wr6s+XOB7JGhhBgaWW0gOX60KJvU3Y=Rig@mail.gmail.com>
    <4F7DE863.6080607@FreeBSD.org> <4F833F3D.7070106@FreeBSD.org>
    <CACqU3MXo__hiKf+s31c5WFZmVO_T8mJgu4A=KkMF=MWp8VoW4w@mail.gmail.com>
    <4F846B74.9080504@FreeBSD.org> <4F8473B7.9080000@FreeBSD.org>
Hi,

On Tue, Apr 10, 2012 at 1:53 PM, Alexander Motin <mav@freebsd.org> wrote:
> On 04/10/12 20:18, Alexander Motin wrote:
>> On 04/10/12 19:58, Arnaud Lacombe wrote:
>>> 2012/4/9 Alexander Motin <mav@freebsd.org>:
>>>>
>>>> [...]
>>>>
>>>> I have a strong feeling that while this test may be interesting for
>>>> profiling, its results in the first place depend not on how fast the
>>>> scheduler is, but on the pipes' capacity and other such things. Can
>>>> somebody hint me what, except pipe capacity and the context switch to
>>>> the unblocked receiver, prevents the sender from sending all data in
>>>> one batch and the receiver from then receiving it all in one batch?
>>>> If different OSes have different policies there, I think the results
>>>> could be incomparable.
>>>>
>>> Let me disagree with your conclusion. If OS A does a task in X seconds,
>>> and OS B does the same task in Y seconds, and Y > X, then OS B is just
>>> not performing well enough. A difference in the internal implementation
>>> of the task cannot be waived away as an excuse when comparing results.
>>
>> Sure, numbers are always numbers, but the question is what they are
>> showing. Understanding the test results is even more important for
>> purely synthetic tests like this, especially when one test run gives
>> 25 seconds while another gives 50. This test is not completely clear
>> to me, and that is what I said.
>
> A small illustration of my point. Simple scheduler tuning affects the
> thread preemption policy and changes this test's results threefold:
>
> mav@test:/test/hackbench# ./hackbench 30 process 1000
> Running with 30*40 (== 1200) tasks.
> Time: 9.568
>
> mav@test:/test/hackbench# sysctl kern.sched.interact=0
> kern.sched.interact: 30 -> 0
> mav@test:/test/hackbench# ./hackbench 30 process 1000
> Running with 30*40 (== 1200) tasks.
> Time: 5.163
>
> mav@test:/test/hackbench# sysctl kern.sched.interact=100
> kern.sched.interact: 0 -> 100
> mav@test:/test/hackbench# ./hackbench 30 process 1000
> Running with 30*40 (== 1200) tasks.
> Time: 3.190
>
> I think it affects the balance between pipe latency and bandwidth, while
> the test measures only the latter. Clearly, what conclusion to draw from
> these numbers depends on what we want to have.
>
I don't really care about this point; I am only testing default values,
or, more precisely, whatever values the developers thought would make
good defaults.

Btw, you are testing three different configurations, so different results
are expected. What worries me more is the huge instability on the *same*
configuration, say on a pipe/thread/70 groups/600 iterations run, where
results range from 2.7s[0] to 7.4s, or a socket/thread/20 groups/1400
iterations run, where results range from 2.4s to 4.5s.

 - Arnaud

[0]: numbers extracted from a recent run of 9.0-RELEASE on a Xeon E5-1650
platform.
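For reference, a minimal sketch of the pipe ping-pong being discussed (my
own illustration, not hackbench itself; the 100-byte message size and
1000-message loop count are assumptions mirroring the invocation above):
a single sender/receiver pair where write(2) blocks once the pipe buffer
fills, forcing a switch to the receiver, which is why pipe capacity,
context-switch cost and the preemption policy all end up in the timing.

/*
 * Sketch of one hackbench-style sender/receiver pair over a pipe.
 * The sender can push data in batch only until the in-kernel pipe
 * buffer fills; after that write(2) blocks and the receiver must be
 * scheduled to drain it.
 */
#include <err.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
        int fds[2];
        char buf[100];          /* assumed hackbench-like message size */
        pid_t pid;

        if (pipe(fds) == -1)
                err(1, "pipe");
        memset(buf, 0, sizeof(buf));

        pid = fork();
        if (pid == -1)
                err(1, "fork");
        if (pid == 0) {
                /* Receiver: drain the pipe until EOF. */
                close(fds[1]);
                while (read(fds[0], buf, sizeof(buf)) > 0)
                        ;
                _exit(0);
        }
        /* Sender: 1000 messages; blocks whenever the pipe is full. */
        close(fds[0]);
        for (int i = 0; i < 1000; i++)
                if (write(fds[1], buf, sizeof(buf)) != sizeof(buf))
                        err(1, "write");
        close(fds[1]);          /* EOF for the receiver */
        waitpid(pid, NULL, 0);
        return (0);
}

How often the two processes trade the CPU here is exactly the knob that
kern.sched.interact appears to move in the runs quoted above, so the
measured time reflects that policy as much as raw scheduler speed.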