Date:      Thu, 26 Jan 2006 10:11:38 +0000
From:      Brian Candler <B.Candler@pobox.com>
To:        Poul-Henning Kamp <phk@phk.freebsd.dk>
Cc:        Peter Jeremy <PeterJeremy@optushome.com.au>, current@freebsd.org, arch@freebsd.org
Subject:   Re: [TEST/REVIEW] CPU accounting patches
Message-ID:  <20060126101138.GA40773@uk.tiscali.com>
In-Reply-To: <56988.1138220896@critter.freebsd.dk>
References:  <20060125201450.GE25397@cirb503493.alcatel.com.au> <56988.1138220896@critter.freebsd.dk>

On Wed, Jan 25, 2006 at 09:28:16PM +0100, Poul-Henning Kamp wrote:
> Right, so we bill users in "full speed CPU second equivalents"

How about "BogoMIPS-seconds"?

<ducks/>

Seriously... don't forget that the *other* usage of CPU-second accounting is
for system administrators to assess the amount of CPU resource used by a
particular task, in order to plan when the machine is going to need
upgrading.

In this case, the administrator is not so much interested in the absolute
amount of work done as in the amount of work done as a proportion of the
total work capacity of a particular machine. That is, if task X uses 1200
CPU-seconds over a period of one hour, that's a third of the total available
capacity on that machine [1].
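
The proportion-of-capacity view above can be sketched in a few lines
(hedged: the helper name and signature are illustrative only, not any
FreeBSD accounting interface):

```python
# Illustrative sketch: CPU usage as a fraction of wall-clock capacity.
# utilization() is a hypothetical helper, not part of any real API.

def utilization(cpu_seconds: float, wall_seconds: float, ncpus: int = 1) -> float:
    """Fraction of the machine's total CPU capacity consumed over the interval."""
    return cpu_seconds / (wall_seconds * ncpus)

# Task X: 1200 CPU-seconds over one hour on a single-CPU box.
print(utilization(1200, 3600))  # 0.333... -> one third of capacity
```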

If the CPU were then cranked down to 1/3rd of its clock speed, this task
would be using the full CPU capacity - and observing that this process is
now using 3600 CPU-seconds in an hour is a useful view of the real
situation, rather than some mythical 1200 CPU-seconds which it *would have*
used *if* it had been running on a different machine (i.e. a machine similar
to this one, but running at a faster clock speed). The machine is maxed out
on CPU, and that's what matters.
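
To make the clock-scaling argument concrete, here is a minimal sketch,
assuming a nominal 3 GHz clock (the function name and the clock figure are
my own illustrative assumptions): the same fixed quantity of work, measured
in cycles, costs three times the CPU-seconds at one third of the clock.

```python
# Hypothetical sketch: a fixed amount of work (a cycle count) takes more
# CPU-seconds on a slower clock, which is exactly what the proportional
# view reports.

def cpu_seconds_for_work(cycles: float, clock_hz: float) -> float:
    """Wall-clock CPU time needed to execute a fixed number of cycles."""
    return cycles / clock_hz

full_speed = 3e9          # assumed 3 GHz clock (illustrative)
work = 1200 * full_speed  # cycles executed in 1200 s at full speed

print(cpu_seconds_for_work(work, full_speed))      # 1200.0 -> 1/3 of an hour
print(cpu_seconds_for_work(work, full_speed / 3))  # 3600.0 -> the whole hour
```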

Another way of looking at this is that if the CPU is running at 1/3rd speed
then CPU cycles are three times as rare, and therefore three times as
expensive. That's not good from the point of view of a timeshare user who
pays for CPU seconds, as they end up paying three times as much for the same
amount of work [2][3]. But it's realistic, especially if the end user owns,
runs and pays for the whole asset (which I suggest is more common than the
timeshare user these days).

Regards,

Brian.

[1] Of course a dual-CPU box has a capacity of 7200 CPU-seconds per hour, so
1200 CPU-seconds would be one sixth. I don't see a need to normalise that,
even if that means I'm taking a slightly inconsistent position :-) Admins
are used to thinking of a 4-CPU box as a kind-of cluster of 4 machines.
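
The footnote arithmetic, as a throwaway sketch (helper name is made up):

```python
# Capacity scales with CPU count, so the same 1200 CPU-seconds is a
# smaller fraction of a dual-CPU box. Hypothetical helper, not a real API.

def capacity_cpu_seconds(wall_seconds: float, ncpus: int) -> float:
    """Total CPU-seconds available on the machine over the interval."""
    return wall_seconds * ncpus

hour = 3600
print(capacity_cpu_seconds(hour, 2))         # 7200 CPU-seconds per hour
print(1200 / capacity_cpu_seconds(hour, 2))  # 0.1666... -> one sixth
```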

[2] If today CPU cycles are three times as expensive as normal, because the
sysadmin needed to reduce the clock speed (e.g. air conditioning failure?)
then the user can always choose to run their application on a different day
instead.

[3] On a multi-CPU machine, bottlenecks such as RAM I/O may mean that the
same sequence of instructions takes more cycles (and hence time) to execute
than on a single CPU machine, even at the same clock speed. The timeshare
user may also feel unfairly penalised for this - but I don't see there's
much that can be done about it. That is, it's very difficult to charge the
timeshare user for absolute work done, completely independent of the
platform their application runs on. I think it's reasonable to charge them
based on the proportion of resource they've used on the actual machine
they've chosen to run it on, at the time they've chosen to run it.


