Date:      Sat, 1 Sep 2001 21:02:51 -0600
From:      Mike Porter <mupi@mknet.org>
To:        "Ted Mittelstaedt" <tedm@toybox.placo.com>, "Sean Chittenden" <sean@chittenden.org>, "Bsd Newbie" <bsdneophyte@yahoo.com>
Cc:        <freebsd-questions@freebsd.org>
Subject:   Re: overclocking and FreeBSD stablity...
Message-ID:  <200109020302.f8232pl07186@c1828785-a.saltlk1.ut.home.com>
In-Reply-To: <00dc01c1329d$b0b523c0$1401a8c0@tedm.placo.com>
References:  <00dc01c1329d$b0b523c0$1401a8c0@tedm.placo.com>

On Friday 31 August 2001 10:22 pm, Ted Mittelstaedt wrote:
> >-----Original Message-----
>
> From: owner-freebsd-questions@FreeBSD.ORG
>
> >[mailto:owner-freebsd-questions@FreeBSD.ORG]On Behalf Of Sean Chittenden
> >
> >Slowaris wasn't meant to be a performance system and probably chokes
> >when it runs at speeds above 400Mhz.
>
> Solaris runs fine on our Compaq 550Mhz system.
>
> My $0.02 is that the base of the troubles is the machine code that the
> compiler produces.  I suspect that when a CPU is overclocked that unless
> the parts are good that the CPU is unable to execute SOME of it's opcodes,
> opcodes that produce certain electrical patterns inside of the CPU that
> may ring and generate electrical wave colissions.  While I'm not an EE
> I do know that lengths of traces and such inside of a CPU are held to
> precise tolerances in order to deal with clock propagations and such.  It's
> not just the cooling but when you overclock the CPU you can have signals
> arriving at internal parts of the CPU earlier than the designer intended.
>
What you fail to realize here is two things.  First, processor designers 
measure everything in terms of clock cycles rather than other, more 
objective, standards.  So for a signal to arrive at its destination 
"earlier than intended," the various parts of the CPU would have to be 
operating at different clock speeds.  That only becomes an issue at die 
sizes much smaller than those currently in use and at clock speeds much 
higher than those currently in use.  Eventually, yes, it will be a problem, 
but not until clock frequencies have wavelengths smaller than the internal 
pathways of the chips (hint: we ain't there yet...you'll fry your chip (or 
turn it into something resembling the Vegas strip, at least) before you 
reach that threshold).  
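To put a rough number on that wavelength argument, here's a back-of-the-envelope check in Python (free-space speed of light; on-die signals actually propagate slower, but the order of magnitude is the point):

```python
# Compare the wavelength at typical clock speeds against on-die path
# lengths.  Free-space propagation assumed; real on-die signals are
# slower, but only by a small constant factor.

C = 3.0e8  # speed of light, m/s

def wavelength_mm(freq_hz):
    """Wavelength in millimetres for a given frequency in Hz."""
    return C / freq_hz * 1000

print(wavelength_mm(450e6))  # ~667mm at 450MHz
print(wavelength_mm(1e9))    # ~300mm at 1GHz -- still vastly longer
                             # than on-die paths (a few millimetres)
```

Even at 1GHz the wavelength is hundreds of millimetres, while paths inside the chip are millimetres at most, so the "signals arriving early" failure mode is nowhere in sight at these speeds.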

Second, and this is the primary reason people overclock, there is the 
*method* used to determine what clock speed a chip is capable of.  (This 
varies by product and manufacturer, of course, but we'll stick with Intel 
for the time being.)  When Intel makes a chip, it is part of a wafer, which 
holds several (as many as 10-12, depending on what they are building) 
chips.  The entire wafer is tested to determine the maximum clock speed at 
which every chip on the wafer will run reliably.  This is based on a number 
of factors, including the maximum clock speed they are currently building 
for that product line, and other stuff.  If EVERY chip on the wafer passes 
at the maximum clock speed, then the entire wafer is packaged as that clock 
speed.  If one (or more) of the chips FAILS, however, they step down to the 
next clock speed and try again.  If every chip on the wafer passes at that 
clock speed, they mark the WAFER as that clock speed.  But then only 
one in twelve (if there are 12 chips on a wafer) of those chips is the one 
that actually can't run at the fastest clock speed for that chip design.  
This process continues until they reach a clock speed at which all of the 
chips pass, or they hit a "bottom" threshold where the cost of producing 
the chips exceeds the revenue they would bring in, and they throw the wafer 
away.  So for any given clock speed marked below the maximum for that 
family, you have a pretty decent chance of having a chip which can run 
significantly faster than the marked speed, up to the maximum speed for 
which they are marking chips.  (Of course, you may be able to go faster 
than that, but in that case you really are taking a chance.)  
The other wrinkle in this scheme is that Intel is completely free, if the 
demand is there, to remark their OWN chips to a LOWER speed.  So if demand 
spikes for a 300MHz Celeron, and they have a pile of 450MHz Celerons sitting 
on the shelf, there is nothing illegal, immoral, or fattening about calling 
them 300MHz Celerons and pricing them accordingly.  (After all, they passed 
as 450MHz Celerons...and don't forget, any given chip in the lot of 450s 
has a one-in-ten or so chance of being capable of speeds much faster than 
even 450MHz.)
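To make the whole-wafer binning process concrete, here's a toy sketch in Python (the speed grades, the cost floor, and the 12-chips-per-wafer figure are made-up illustrative numbers, not Intel's actual ones):

```python
# Toy model of the binning loop described above: step down through the
# speed grades until EVERY chip on the wafer passes, or scrap the wafer
# once the economical floor is reached.  All numbers are assumptions.

SPEED_GRADES_MHZ = [450, 400, 366, 333, 300]  # fastest grade first
COST_FLOOR_MHZ = 300  # below this, producing the wafer loses money

def bin_wafer(chip_max_speeds):
    """Return the speed the whole wafer is marked at, or None if scrapped.

    chip_max_speeds: the true maximum reliable speed of each chip on the
    wafer, as discovered by testing.
    """
    for grade in SPEED_GRADES_MHZ:
        if all(speed >= grade for speed in chip_max_speeds):
            return grade  # every chip passed; mark the WHOLE wafer
        if grade == COST_FLOOR_MHZ:
            break  # even the slowest sellable grade failed
    return None  # scrap the wafer

# One marginal chip drags the other eleven down with it:
wafer = [450] * 11 + [310]   # eleven 450-capable chips, one weak one
print(bin_wafer(wafer))      # marked 300, though 11 of 12 could do 450
```

The example shows exactly the effect described above: a single weak die gets eleven perfectly good 450-capable chips sold at the 300 grade, which is why the overclocking odds are in the buyer's favor.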

(this second argument, BTW, pretty much nullifies the "design" argument by 
itself, since all chips are "designed" to be the fastest speed in their 
family)

Of course, your mileage may vary, and if your luck is like mine, you will get 
the *one* 300MHz part that made the wafer fail, and be unable to overclock at 
all, while everyone else sails along at 600MHz or so with no problems....

The other wrinkle is that your motherboard may not be able to handle 
overclocking correctly:  if the board follows Intel's instructions properly, 
for example, you can't change the multiplier without considerable effort.  
This means that to overclock, you must also run the motherboard faster than 
intended.  And the distances involved on a motherboard *are* longer than a 
100MHz wavelength, which can cause all sorts of problems, if your 
motherboard will even let you try.  Then all of your peripherals have to 
support the higher clock speeds, because all the motherboard does is count 
3 100MHz clocks and produce a 33MHz clock for your PCI bus....but if you are 
counting 3 109MHz clocks, suddenly you get a 36MHz clock, and THOSE 
components may or may not support running that fast.  If your network card, 
for example, relies on a 33MHz PCI clock to generate the 20MHz 10Base-T 
carrier, and your 33MHz clock is off....you might not be able to talk on the 
network.  (Fortunately, most network cards don't do this; they have their 
own 20MHz crystal for that, which is more reliable...but if your network 
card heats up more than normal because it is running faster than normal 
(more clocks = more work = more heat), then that will throw off the crystal 
too, and might leave you unable to talk on the network...or worse, able to 
talk on the network when you fire up your computer in the morning, but not 
when you come back from lunch in the afternoon, until you shut your computer 
off overnight and it cools down and starts talking again.....try 
troubleshooting THAT one!)
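The divide-by-3 arithmetic above is easy to check; here's a quick sketch (the divider value and FSB speeds are the ones used in the example, and generalizing them is an assumption on my part, since real boards use a handful of fixed divider settings):

```python
# How the derived PCI clock scales when you push the front-side bus:
# the board just counts `divider` FSB ticks per PCI tick, so the PCI
# clock is dragged out of spec along with the FSB.

def pci_clock_mhz(fsb_mhz, divider=3):
    """PCI clock (MHz, 1 decimal) derived by dividing the FSB clock."""
    return round(fsb_mhz / divider, 1)

print(pci_clock_mhz(100))  # 33.3 -- in spec for 33MHz PCI parts
print(pci_clock_mhz(109))  # 36.3 -- out of spec; your cards may balk
```

So a modest 9% bump on the FSB silently overclocks every PCI peripheral by the same 9%, which is where the flaky-network-card scenario above comes from.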



> Certainly, you can overclock to a certain extent because most electrical
> parts are derated somewhat.  But there are just so many variables that
> you can't just make blanket statements about overclocking.
>
That is certainly true.  Even with identical hardware, as I mentioned above, 
because of quality control, you may or may not get the same results as the 
next guy.  HOWEVER....software shouldn't affect it all that much except in 
one area (granted, this is a big concern for overclockers anyway): heat.  If 
your compiler produces better code than the other guy's, it will run more 
efficiently on your hardware and generate less heat.  If you are 
overclocking to the point of borderline failure (say, running a processor 
above the speed at which it actually failed Intel's tests, but it works OK), 
this can make a difference.  If your code is less efficient than the other 
guy's, then you just might push the processor over the edge into total 
failure.

The moral of the story:  if you can't afford to replace your processor, don't 
overclock.  If you have money to burn, then it's your business, but I might 
suggest either 1) buying a faster processor to start with and/or 2) 
contributing to the FreeBSD project <(};

mike


To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-questions" in the body of the message



