Date: Fri, 11 Mar 2011 16:42:04 +0100
From: Martin Matuska <mm@FreeBSD.org>
To: Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc: freebsd-performance@FreeBSD.org, freebsd-current@FreeBSD.org
Subject: Re: FreeBSD Compiler Benchmark: gcc-base vs. gcc-ports vs. clang
Message-ID: <4D7A42CC.8020807@FreeBSD.org>
In-Reply-To: <90325.1299852096@critter.freebsd.dk>
References: <90325.1299852096@critter.freebsd.dk>
I don't take this personally and fully understand your point. But even if all
the conditions you described are met, I am still not able to say "this is
better", because I am not doing a microbenchmark. The +x% score is just an
average of all test scores, each weighted by a factor of 1 - this does not
reflect any real application out there, as these applications don't use the
tested functions in that exact weighting ratio. If one function had a score of
0%, the program would actually stall forever when executing that function, but
the averaged score would still look promising :-)

What I can say, e.g. for the Intel Atom processor, is that if there are
performance gains in all but one test (which falls 2% behind), generic Perl
code (the routines benchmarked) is very likely to run faster on this processor
with that setup. On the other hand, if clang-generated code falls short in all
tests, I can say it is very likely to run slower. But again, I am benchmarking
just a subset of generic Perl functions.

Cheers,
mm

On 11.03.2011 at 15:01, Poul-Henning Kamp wrote:
> In message <4D7943B1.1030604@FreeBSD.org>, Martin Matuska writes:
>
>> More information, detailed test results and test configuration are at
>> our blog:
>> http://blog.vx.sk/archives/25-FreeBSD-Compiler-Benchmark-gcc-base-vs-gcc-ports-vs-clang.html
>
> Please don't take this personally Martin, but you have triggered
> my periodic rant about proper running, evaluation and reporting of
> benchmarks.
>
> These results are not published at a level of detail that allows
> anybody to draw any kind of conclusions from them.
>
> In particular, your use of "overall best" result selection is totally
> bogus from a statistical point of view.
>
> At the very least, we need to see standard deviations on your numbers,
> and preferably, when you claim that "X is N% better than Y", you should
> also provide the confidence interval on that judgment, "Student's T"
> being the canonical test.
>
> The ministat(1) program does both of these things, and is now in
> FreeBSD/src, so there is absolutely no excuse for not using it.
>
> In practice this means that you have to run each test at least three
> times, to get a standard deviation, and you have to make sure that
> your test conditions are as identical as possible.
>
> Therefore, proper benchmarking procedure is something like:
>
>     (boot machine single-user       // improves reproducibility)
>     (mount md(4)/malloc filesystem  // ditto)
>     (newfs test-partition           // ditto)
>     for at least 4 iterations:
>         run test A
>         run test B
>         run test C
>         ...
>     throw the first result away for all tests
>     run the remaining results through ministat(1)
>
> This was a public service announcement.
>
> Poul-Henning
>
> PS: Recommended reading:
> http://www.larrygonick.com/html/pub/books/sci7.html
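A minimal sh sketch of that procedure might look like the following. It is
only a sketch under stated assumptions: the run_bench wrapper, the compiler
labels and the /mnt/bench mount point are placeholders, not anything from the
original benchmark, and the machine is assumed to already be booted
single-user with a malloc-backed md(4) filesystem newfs'ed and mounted on
/mnt/bench.

    #!/bin/sh
    # Sketch only: run_bench, the compiler labels and /mnt/bench are
    # hypothetical placeholders, not part of the original benchmark.
    set -e
    cd /mnt/bench

    # At least 4 iterations of every test, interleaved.
    for i in 1 2 3 4; do
        for cc in gcc-base gcc-ports clang; do
            # run_bench is a hypothetical wrapper that prints one number
            # (e.g. wall-clock seconds) per run of that compiler's build.
            ./run_bench "$cc" >> "result.$cc.raw"
        done
    done

    # Throw the first (warm-up) result away for each series.
    for cc in gcc-base gcc-ports clang; do
        tail -n +2 "result.$cc.raw" > "result.$cc"
    done

    # ministat(1) prints mean and standard deviation for each file and,
    # given two files, reports whether the difference is statistically
    # significant (Student's t) at the requested confidence level.
    ministat -c 95 result.gcc-base result.clang

The point of the last step is that ministat refuses to call a difference real
unless it is larger than the run-to-run noise, which is exactly the check the
"overall best" selection skips.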