Date:      Sat, 15 Aug 1998 21:03:55 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        ac199@hwcn.org (Tim Vanderhoek)
Cc:        grog@lemis.com, mph@pobox.com, brawley@camtech.com.au, hackers@FreeBSD.ORG
Subject:   Re: 64-bit time_t
Message-ID:  <199808152103.OAA22129@usr01.primenet.com>
In-Reply-To: <19980815110445.A2355@zappo> from "Tim Vanderhoek" at Aug 15, 98 11:04:45 am

> Why the hell would you want to know how many seconds it has been since
> your grandfather was born?
> 
> The whole idea of measuring the current time in seconds (or useconds,
> or nanoseconds) since some epoch is bogus.

Or years since some epoch... 8-) 8-).

The value of a single epoch (positive *or* negative relative, so it
is merely a baseline, not a limit before which there was no time)
is the ability to do calendar mathematics quickly and easily.
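
For example (a minimal C sketch; the date is invented, and whether
mktime() accepts pre-epoch dates is implementation-dependent), a
single signed baseline makes "seconds since your grandfather was
born" a single subtraction:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Build a calendar date and convert it to seconds since the
     * epoch.  A pre-1970 date yields a negative time_t on
     * implementations that permit it. */
    struct tm birth = {0};
    birth.tm_year  = 1923 - 1900;   /* years since 1900 */
    birth.tm_mon   = 5;             /* June; months are 0-based */
    birth.tm_mday  = 1;
    birth.tm_isdst = -1;            /* let the library decide DST */

    time_t then = mktime(&birth);
    time_t now  = time(NULL);

    /* One subtraction answers "how long between X and Y". */
    printf("seconds elapsed: %.0f\n", difftime(now, then));
    return 0;
}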

> If some physicist performing an experiment needs to record time
> passed, he needs his own clock to track seconds passed, but comparing
> seconds (or any time) passed to the current time as recorded by some
> central clock isn't necessarily meaningful.

That would be Cesium cycles, most likely, for most interesting recent
physics.  8-).

However, to deal with audio Doppler shift for a train in your
standard Newtonian universe, the only real guarantee necessary
is that the clock monotonically increases (ie: it never goes
backwards).

For an event with a duration of N ticks, so long as N is sufficiently
large that the tick duration does not impact the significant digits
obtained (ie: there are more significant digits than the required
precision), there's no problem.
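
A sketch of that in modern POSIX terms (clock_gettime() and
CLOCK_MONOTONIC postdate this message, but they are exactly the
guarantee described: a clock that never goes backwards):

#include <stdio.h>
#include <time.h>

/* Convert a timespec to nanoseconds for easy arithmetic. */
static long long ts_ns(struct timespec t)
{
    return (long long)t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void)
{
    struct timespec res, start, end;

    /* The tick duration: the measurement's precision. */
    clock_getres(CLOCK_MONOTONIC, &res);

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... the event being timed goes here ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    long long dur  = ts_ns(end) - ts_ns(start);
    long long tick = ts_ns(res) ? ts_ns(res) : 1;

    /* The result is only trustworthy down to the tick size: an
     * event spanning N ticks has roughly log10(N) significant
     * digits. */
    printf("duration %lld ns over ~%lld ticks\n", dur, dur / tick);
    return 0;
}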

The difference between accuracy and precision is often hard to grasp.
Most measurements need only repeatable precision, and can be scaled
to obtain accuracy by measuring the precision's interval against an
accurate base.  If my clock always ticks at the same rate, it is
precise.  If my clock always ticks at the same rate and always tells
the correct time, it's accurate.

The problem here is that tickadj and friends are abstracted in such
a way that it looks like we are trying to make time_t accurate,
when those things which use time_t need only be precise.

The internal clock should go at whatever rate the internal clock
goes, to avoid "stretch seconds" in stored values, and deltas should
be measured against whatever timebase is available.

The value of time_t is as a monoclock (monotonic clock) value; ie:
what it calls "seconds" are actually "ticks", and it is useful to
know "how many ticks between X and Y" for things like making
makefiles work.
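
A sketch of that use (the filenames are placeholders): make(1)
compares two tick values for ordering, and never needs them to
agree with a wall clock:

#include <stdio.h>
#include <sys/stat.h>

/* Decide whether "target" must be rebuilt from "source", the way
 * make(1) does: only the ordering of the two time_t values
 * matters, not their accuracy. */
int out_of_date(const char *source, const char *target)
{
    struct stat s, t;

    if (stat(target, &t) != 0)
        return 1;               /* no target yet: build it */
    if (stat(source, &s) != 0)
        return 0;               /* no source: nothing to build */

    return s.st_mtime > t.st_mtime;     /* ticks, not wall time */
}

int main(void)
{
    printf("rebuild: %d\n", out_of_date("prog.c", "prog.o"));
    return 0;
}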

The scaling of "ticks" to "timebase delta" needs to be performed
based on (1) a known tick count at the time a timebase reference
sample occurred, and (2) a known tick count vs. the expected tick
count at the time a timebase delta sample occurred.
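
A hedged sketch of that scaling (the sample structure and the
numbers are invented for illustration):

#include <stdio.h>

/* One sample pairing the local tick counter with an accurate
 * external timebase reading taken at the same instant. */
struct sample {
    long long ticks;    /* local tick count at sample time */
    double    seconds;  /* timebase reading at that instant */
};

/* Scale a raw tick delta into timebase seconds: the observed
 * rate is actual ticks over the timebase interval, ie: the
 * actual vs. expected tick count between the two samples. */
double ticks_to_seconds(struct sample a, struct sample b,
                        long long dticks)
{
    double rate = (b.ticks - a.ticks) / (b.seconds - a.seconds);
    return (double)dticks / rate;
}

int main(void)
{
    /* A clock that believes it ticks at 100 Hz but runs fast:
     * 100.1 ticks per true second. */
    struct sample a = { 0,      0.0    };
    struct sample b = { 100100, 1000.0 };

    printf("%f s\n", ticks_to_seconds(a, b, 5005));  /* ~50.0 */
    return 0;
}

Note that the tick counter itself is never adjusted; accuracy is
applied only at conversion time.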

Let's deal first with the time_t overflow (which only approximately
coincides with 2038, since a tick interval only approximately
coincides with a second) given the physical constraints we have on
inode size and layout for existing systems, and then use some of our
spare fields to deal with subsecond timing when it becomes important
to the operation of the system.
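
For concreteness (a minimal sketch, assuming a signed 32-bit count
of true SI seconds, which as noted a tick counter only
approximates):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* The largest value a 32-bit signed time_t can hold. */
    time_t limit = 0x7fffffff;          /* 2147483647 */

    /* With a 32-bit time_t this is Tue Jan 19 03:14:07 2038 UTC;
     * one more tick wraps the counter negative. */
    printf("32-bit time_t ends at: %s", asctime(gmtime(&limit)));
    printf("time_t here is %zu bytes\n", sizeof(time_t));
    return 0;
}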


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.



