Date:      Fri, 15 Jun 2001 21:37:39 +0300
From:      Giorgos Keramidas <keramida@ceid.upatras.gr>
To:        Rajappa Iyer <rsi@panix.com>
Cc:        hackers@FreeBSD.ORG
Subject:   Re: Sysadmin article
Message-ID:  <20010615213739.B12591@hades.hell.gr>
In-Reply-To: <200106150223.f5F2NLW08368@panix1.panix.com>; from rsi@panix.com on Thu, Jun 14, 2001 at 10:23:21PM -0400
References:  <200106150223.f5F2NLW08368@panix1.panix.com>

On Thu, Jun 14, 2001 at 10:23:21PM -0400, Rajappa Iyer wrote:
> http://www.sysadminmag.com/articles/2001/0107/0107a/0107a.htm
> 
> Any obvious reasons why FreeBSD performed so poorly for these people?

Yes, it's not very difficult to guess why.  If you read the tuning(7)
manpage on recent 4.x FreeBSD systems you will notice that even the
order in which you lay out the partitions on the disks during
installation can play a significant role in filesystem speed.
Softupdates are disabled by default, and for a good reason too
(reliability is more important than raw speed to the people who
install FreeBSD for the first time; if it isn't, they can always enable
softupdates later on).  Write-back caching is disabled on the disks,
even if they support it.  This is yet another step towards making the
default installation of FreeBSD as reliable a system as it can be.
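To make that concrete, here is roughly what enabling those two things
looks like on a 4.x box; the device names below are only examples, and
you should of course check tuning(7) and tunefs(8) yourself before
touching a real system:

    # Enable softupdates on a filesystem (it must be unmounted, or
    # mounted read-only, for tunefs to change it):
    umount /dev/ad0s1f
    tunefs -n enable /dev/ad0s1f
    mount /dev/ad0s1f

    # Re-enable ATA write-back caching via a loader tunable,
    # by adding this line to /boot/loader.conf and rebooting:
    hw.ata.wc="1"

Both of these trade some crash-time reliability for speed, which is
exactly why the installer leaves them off by default.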

Installing an operating system (be it FreeBSD, Linux, Windows or
whatever else) and failing to tune the system to perform as well as
possible for the application is no decent way of doing a benchmark.
And when it comes to benchmarks, you have to tune ALL the systems that
are involved.  You have to perform the test on identical hardware (if
such a thing is ever possible[1]).

When doing benchmarks, you have to present a lot more data than a
simple bar or line graph with the results, for the benchmarks to be of
any practical value to somebody else:

  - an exact description of the hardware involved;
  - details about the installation of the software, and the tuning
    decisions and tweaks made during installation to make the software
    perform better for a given application;
  - what application you are interested in and are testing with this
    benchmark;
  - post-installation tuning;
  - what software you used for running the benchmark: whether it was
    compiled by you or somebody else, what compiler and tools were
    used to build it, and what special options you gave, if any;
  - and finally what the benchmark was, how long it took, whether it
    finished successfully or failed, and those infamous charts with
    the results.
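Most of that environment data can be captured mechanically when the
benchmark runs.  A minimal sketch (the file name and the exact set of
commands here are my own illustration, not anything from the article):

```shell
#!/bin/sh
# Record the environment a benchmark report should describe,
# so the numbers can be interpreted (and reproduced) later.
{
  echo "== OS and kernel =="
  uname -a
  echo "== Compiler =="
  cc -v 2>&1 || echo "cc not available"
  echo "== Mounted filesystems and mount options =="
  mount
} > benchmark-env.txt
```

Attaching a file like this to the published results costs nothing and
answers most of the questions listed above.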

You see, there's more to a benchmark than just a few charts, and the
authors of the articles in question have given us an account of none
of that.

-giorgos

[1] Even disks of the same manufacturer, and the same declared size,
    speed, characteristics, etc. sometimes have slight differences.

[-- Sorry about this long rant, but the whole story about this article
    is starting to get on my nerves :P --]
