Date:      Mon, 17 Aug 1998 10:42:45 +0800
From:      Joerg Micheel <joerg@krdl.org.sg>
To:        Tim Vanderhoek <ac199@hwcn.org>
Cc:        Greg Lehey <grog@lemis.com>, Matthew Hunt <mph@pobox.com>, Ivan Brawley <brawley@camtech.com.au>, hackers@FreeBSD.ORG
Subject:   Re: 64-bit time_t
Message-ID:  <19980817104245.20871@krdl.org.sg>
In-Reply-To: <19980815110445.A2355@zappo>; from Tim Vanderhoek on Sat, Aug 15, 1998 at 11:04:45AM -0400
References:  <199808131721.KAA00864@antipodes.cdrom.com> <199808140040.KAA14156@mad.ct> <19980814000605.A25012@astro.psu.edu> <19980814135919.U1921@freebie.lemis.com> <19980814114525.B4001@zappo> <19980815120445.C21662@lemis.com> <19980815110445.A2355@zappo>

On Sat, Aug 15, 1998 at 11:04:45AM -0400, Tim Vanderhoek wrote:
> On Sat, Aug 15, 1998 at 12:04:45PM +0930, Greg Lehey wrote:
> > 
> > One problem UNIX has is that there is no standardized format for
> > representing times.  There is no sensible reason to use one format for
> > representing system times, one (inconvenient) format for representing
> > times more accurately (down to only 1 microsecond, when it could have
> > been down to a nanosecond), and one format broken down into
> > representations of the individual units of time.  None are of any use
> > when I want to know "How many seconds has it been since my grandfather
> > was born?".  time_t *will* answer the question "How many seconds has
> 
> Why the hell would you want to know how many seconds it has been since
> your grandfather was born?

Your sense of humor is astonishing.

I don't think it makes much sense to be able to express times several billion
years ahead (or back), although something on the order of centuries back
might make sense for some applications (like statistics on mankind).
What bothers me much more is resolution. Many things of interest now happen
in small fractions of a millisecond. Look at the clock frequencies of
computers or the line clocks of networks (SONET, etc.). Nanosecond
granularity is a must if you'd like to accurately describe the interval it
takes to compute or transmit a certain piece of information. Please note
that granularity does not mean accuracy (it never has).
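
To make that distinction concrete, here is a minimal sketch using the
POSIX.1b clock interface (illustrative only, not code from our toolkit;
the workload being timed is a placeholder). The timestamps carry
nanosecond fields, which is granularity, while clock_getres() reports how
coarse the underlying hardware clock really is, which bounds the accuracy:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, t0, t1;
    long long ns;

    clock_getres(CLOCK_REALTIME, &res);     /* how accurate can it be? */
    clock_gettime(CLOCK_REALTIME, &t0);
    /* ... the work to be timed goes here ... */
    clock_gettime(CLOCK_REALTIME, &t1);

    ns = (long long)(t1.tv_sec - t0.tv_sec) * 1000000000LL
        + (t1.tv_nsec - t0.tv_nsec);
    printf("interval: %lld ns (clock resolution: %ld ns)\n",
           ns, res.tv_nsec);
    return 0;
}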

The fact that it is silly to compute how long it has been since your
grandfather was born doesn't invalidate the statement. Quite the opposite
is true. As Greg mentioned, it makes you nervous to have all these different
"time" facilities for different "purposes" that do no good for the programmer
or the user. A single *handle* that works in all cases is cool, because it
spares us from having to rewrite applications after some years, reconvert
databases and the like (which is very costly, as we know from the Y2K
problem).
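
The fragmentation is easy to see side by side. A small standalone sketch
(written here for illustration, not taken from any shipped code) that
touches all three facilities in one program:

#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main(void)
{
    time_t now = time(NULL);        /* seconds since the Epoch */
    struct timeval tv;
    struct tm *bd;

    gettimeofday(&tv, NULL);        /* seconds plus microseconds */
    bd = localtime(&now);           /* broken-down calendar time */

    printf("time_t:    %ld s\n", (long)now);
    printf("timeval:   %ld s + %ld us\n",
           (long)tv.tv_sec, (long)tv.tv_usec);
    printf("struct tm: %04d-%02d-%02d %02d:%02d:%02d\n",
           bd->tm_year + 1900, bd->tm_mon + 1, bd->tm_mday,
           bd->tm_hour, bd->tm_min, bd->tm_sec);
    return 0;
}

Each of the three answers a slightly different question, and converting
between them is exactly where application bugs creep in.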

We have addressed this very issue in our network test toolkit. For us it
is important to have a signed 64-bit integer, so that simple +/-
computations can be done very efficiently on today's computers; we use
'long long' in C. We have settled on nanosecond granularity (not
necessarily accuracy, which depends on the hardware being used). A signed
64-bit nanosecond count covers roughly +/- 292 years (2^63 ns is about
9.2 * 10^9 s), far more than required for any long-term measurement. Of
course, for UNIX this period would need to be extended to something more
generous, so granularity would have to suffer. Still, 10 or 100 ns might
be perfect for most applications.
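
In code, the whole scheme is just integer arithmetic. A sketch of the idea
(the type and helper names here are made up for illustration, not the
toolkit's actual identifiers):

#include <stdio.h>

typedef long long nstime_t;         /* signed 64-bit nanosecond count */

#define NS_PER_SEC 1000000000LL

/* build a timestamp from seconds and nanoseconds */
static nstime_t ns_make(long long sec, long long nsec)
{
    return sec * NS_PER_SEC + nsec;
}

int main(void)
{
    nstime_t start = ns_make(5, 999999000);     /* 5.999999000 s  */
    nstime_t end   = ns_make(6, 1500);          /* 6.000001500 s  */
    nstime_t delta = end - start;               /* one subtraction */

    printf("interval: %lld ns (%lld.%09lld s)\n",
           delta, delta / NS_PER_SEC, delta % NS_PER_SEC);
    return 0;
}

Intervals, comparisons and accumulation all reduce to ordinary 64-bit
integer operations, which is why a single handle is so cheap.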

Regards,
	Joerg

-- 
Joerg B. Micheel			Email: <joerg@krdl.org.sg>
SingAREN Technology Center		Phone: +65 7705577
Kent Ridge Digital Labs	(pron: curdle)	Fax:   +65 7795966
11 Science Park Road			Pager: +65 96016020
Singapore Science Park II		Plan:  Troubleshooting ATM
117685 Singapore			       Networks and Applications
