Date:      Tue, 2 May 2000 18:51:23 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        BHechinger@half.com (Brian Hechinger)
Cc:        tlambert@primenet.com ('Terry Lambert'), BHechinger@half.com (Brian Hechinger), dillon@apollo.backplane.com, jgowdy@home.com, smp@csn.net, jim@thehousleys.net, freebsd-smp@FreeBSD.ORG
Subject:   Re: hlt instructions and temperature issues
Message-ID:  <200005021851.LAA22601@usr01.primenet.com>
In-Reply-To: <F997095BF6F8D3119E540090276AE53015D60E@exchange01.half.com> from "Brian Hechinger" at Apr 28, 2000 11:45:43 PM

> >No.  The point is that the CPUs run hotter, which means shorter
> >battery life, more power consumption, higher cooling requirements,
> >and limitations on installation in close spaces, as a result.
> 
> so there is no super-critical need for CPU idling.

Not unless you have power or heat dissipation issues for a
particular use case.  For the vast majority of users, it's
meaningless, unless they have philosophical instead of
technical reasons (e.g. they are environmentalists, etc.).


> >about a six instruction latency overall, counting the code in
> >the scheduler and the halt and wakeup overhead.  This will vary
> >from processor to processor, based on the voodoo necessary to
> >make it work.
> 
> but very acceptable for the gains.

If the gains are purely thermal, perhaps not.  Halting does
introduce additional context switch latency when leaving the
scheduler: the CPU sending the wakeup IPI is penalized while it
wakes the receiving CPU out of its halt.  But I think that if
this is done correctly, the cost will be practically unmeasurable.


> >> would there be a significant increase in speed if we could
> >> avoid this?  
> >
> >Hotter processors run fractionally slower.  All in all, it's
> >about a wash, in terms of processor effectiveness.  The real
> >wins are heat dissipation and power consumption.
> 
> so those super-cryo CPU cooling units are hype. :)

Not if they actually cool the CPU.


> so no "real" usefulness for such a beast, only overly complicated code?

IMO, the utility is more in the ability to prepare the kernel
for further change in the direction of per-CPU run queues.  This
will require an interlock and IPI for process migration from
CPU #1's run queue to CPU #N's run queue.

The benefit to doing this is processor affinity for processes.

Right now, if a CPU comes into the scheduler, the process at
the top of the run queue gets the CPU.  This results in cache
busting.  Consider that in an 8 processor system, there is only
a 12.5% probability of getting the same CPU you were using last
time, and thus there is an 87.5% probability of a cache bust.


People who don't know any better commonly claim that SMP
scaling on shared memory multiprocessor architectures is
limited to about 4 processors before you hit a wall of
diminishing returns for additional processors.  This is not
true unless you are doing a lot of interprocessor
communication, and that will not commonly happen in practice
unless you do the wrong things and let it happen through bad
design.  If all of your engines are work-to-do engines, then
they are practically identical except for cache contents, and
there is little or no need to communicate between them.

For example, there's little reason that an HTTP server with 8
engines cannot run one engine per processor, with each engine
keeping persistent connections (HTTP 1.1 spec.) and operating
almost wholly independently of the others.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.





