Date: Fri, 26 Oct 2001 15:25:55 -0700 (PDT)
From: Matthew Dillon <dillon@apollo.backplane.com>
To: Julian Elischer <julian@elischer.org>
Cc: John Baldwin <jhb@FreeBSD.ORG>, Poul-Henning Kamp <phk@critter.freebsd.dk>,
    arch@FreeBSD.ORG, Peter Wemm <peter@wemm.org>, Bakul Shah <bakul@bitblocks.com>
Subject: Re: 64 bit times revisited..
Message-ID: <200110262225.f9QMPta39239@apollo.backplane.com>
References: <Pine.BSF.4.21.0110261622450.11653-100000@InterJet.elischer.org>
:
:trouble is, that ticks are:
:1: not guaranteed to be constant
:2/ inaccurate.
:
:also,
:you can represent ticks in terms of 1/(2^64) units, certainly to the
:accuracy of the crystals that we use for timekeeping at this time.
It doesn't work. That is, it *might* appear to work if a tick is
an 8254 (in the microsecond range), but you wind up with a completely
non-deterministic error creep that depends entirely on the frequency.
The higher the frequency, the more pronounced the error. If you are
trying to sync a microtime style aggregation by using the 1/(2^64)
fractional format you wind up adding an error, even if it is small,
every single time you call microtime(). The more often you call
microtime(), the more pronounced the cumulative error. What happens if
microtime() gets called in a tight loop?
Oh, wait, I seem to recall that it has already been demonstrated that
calling microtime in a tight loop screws things up! This is just more
of the same, just with more bits to try to hide the problem. But it
doesn't work if you have more processors, or faster processors.
Adding more precision does NOT solve the problem. You would have to go
to a 128 bit fractional quantity and that is just plain crazy. We are
moving towards 10GHz in the next 10 years (probably less). With clusters
one might need a unique timestamp and move to an offset counter mechanism
    (so each host is guaranteed completely unique timestamps) which is
    roughly 100GHz or 1THz virtual resolution. A cumulative error of 1 part
    in 1E10 per call, when a cpu may be making millions of calls, is simply
    not acceptable. It is not a timing mechanism that will carry us forward.
What happens if in the next 10 years platforms are phase-locked to
    each other? Think that's spacey? Gigabit ethernet already has to do it.
    I am not being totally wild here; I am being pragmatic. Error-prone
    representations are a bad base to work from.
For kernel time keeping the only representation that is not prone to
non-deterministic error creep is to store the time in the native
counter format -- ticks at X frequency, or ticks at X*(constant)
frequency. From there you can use a baseline cache conversion mechanism
to convert it, with NO cumulative error, into some other format (if you
need it in some other format). You wind up with a non-cumulative
deterministic error no matter how often the routine is called, no matter
what the frequency of the counter, etc etc etc.
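
    A minimal sketch of that conversion (ticks_to_ns and its parameters are
    hypothetical names for illustration, not existing kernel code): the time
    base stays in native ticks, and the split multiply/divide converts on
    demand without overflowing 64 bits. The truncation error is bounded by
    one nanosecond per conversion and never compounds across calls.

```c
#include <stdint.h>

/*
 * Hypothetical sketch: time is kept in native counter ticks; conversion
 * to nanoseconds happens only when another format is needed.  Splitting
 * the value into whole seconds and a remainder keeps the intermediate
 * multiply from overflowing uint64_t for realistic frequencies.
 */
uint64_t
ticks_to_ns(uint64_t ticks, uint64_t freq_hz)
{
    uint64_t sec = ticks / freq_hz;     /* whole seconds of ticks */
    uint64_t rem = ticks % freq_hz;     /* leftover ticks */

    return sec * 1000000000ULL + rem * 1000000000ULL / freq_hz;
}
```

    However often this is called, the result is a deterministic function of
    the tick count alone, so repeated calls cannot drift the clock.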
-Matt
