Date:      Mon, 26 Feb 2001 13:55:21 +0000 (GMT)
From:      Terry Lambert <tlambert@primenet.com>
To:        kris@FreeBSD.ORG (Kris Kennaway)
Cc:        tlambert@primenet.com (Terry Lambert), arch@FreeBSD.ORG
Subject:   Re: cvs commit: ports/astro/xglobe/files patch-random
Message-ID:  <200102261355.GAA17234@usr05.primenet.com>
In-Reply-To: <20010226045855.A34109@hub.freebsd.org> from "Kris Kennaway" at Feb 26, 2001 04:58:55 AM

Not to belabor the point, but...

> > > Me? No, but others have done so.  Terry, the existing rand() is a bad
> > > algorithm just about any way you look at it.
> > 
> > It's useful because it creates repeatable results with the
> > same seed, which are the same for the same seed on other
> > platforms.
> 
> Well, so does Andrey's replacement.

So if I run the same program, compiled on a Solaris box and
compiled on a FreeBSD box, each linked against its platform's
libc, I will get the same results from both machines, without
having to carry the random number generator code over to the
new platform along with my program?

I didn't think so.
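
To make that concrete, here is a minimal sketch (mine, not code
from either tree).  The output is a function of the seed *and*
the algorithm, so two libcs agree only if they implement the
same generator:

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        int i;

        srand(42);                  /* same seed on every platform */
        for (i = 0; i < 5; i++)     /* same output only if the libc
                                     * algorithm is also the same */
            printf("%d\n", rand());
        return (0);
    }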

How do the two algorithms benchmark against each other for
100,000 numbers each, on the same machine?

If the "improved" code takes longer to run, that would be a
problem in itself.
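
A crude harness like this would answer the question (my sketch;
it times whichever generator ends up linked in, and is not a
rigorous benchmark):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N   100000

    int
    main(void)
    {
        volatile int sink;          /* defeat dead-code elimination */
        clock_t t0, t1;
        int i;

        srand(1);
        t0 = clock();
        for (i = 0; i < N; i++)
            sink = rand();
        t1 = clock();
        (void)sink;
        printf("%d calls: %.3f ms\n", N,
            (double)(t1 - t0) * 1000.0 / CLOCKS_PER_SEC);
        return (0);
    }

Build it twice, once against each implementation, and compare.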


> > At least let it be a compile time option, set in make.conf.
> 
> This has been requested -- I don't object, so those who are
> using unrandom PRNGs can maintain compatibility as a
> transition mechanism.

I think that the new code should not be turned on by default;
people who worry about such things will be willing to turn it
on, if they feel the need.

I'd call it a compatibility mechanism, not a transition
mechanism.  I know of several programs that work only because
they were tested with particular seeds; in other words, they
depend on the algorithm to produce the same deterministic data
set on each run, rather than saving a large pseudo-random data
set to a file and reading it back.  That's probably a "bad use"
in your opinion, but in mine it's a very clever hack.
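
Roughly, the hack looks like this (my reconstruction, with a
placeholder check standing in for whatever the program really
verifies):

    #include <assert.h>
    #include <stdlib.h>

    #define TEST_SEED   12345       /* seed the tests were blessed with */
    #define TEST_CASES  100000

    static void
    check_one(int input)            /* placeholder for the real test */
    {
        assert(input >= 0);
    }

    int
    main(void)
    {
        int i;

        srand(TEST_SEED);           /* "loads" the data set */
        for (i = 0; i < TEST_CASES; i++)
            check_one(rand());      /* identical inputs on every run
                                     * -- until the algorithm changes */
        return (0);
    }

The fixed seed is effectively a compressed copy of the whole
test data set.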

Someone has already pointed out that their network simulation
data becomes invalid if the algorithm changes, since they can
no longer compare the same fictional network against new code.

I have some physics code designed to run on a Cray, but which
gets maintained and tested with new constraints on small
machines, like FreeBSD boxes.  I know several physicists who
do similar work, in the same way.

FreeBSD stops being useful for this if 100,000 generations on
the FreeBSD box don't match the same 100,000 on the Cray (or
Hitachi, for one of them): that comparison is what ensures the
code is behaving sanely on the big iron before the iterations
get cranked up to a hundred million or more events.
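
The sanity check amounts to something like this (a sketch of
the idea, not the actual harness): fold the first 100,000 draws
into a single checksum, run it on both machines, and compare.

    #include <stdio.h>
    #include <stdlib.h>

    #define N   100000

    int
    main(void)
    {
        unsigned long sum = 0;
        int i;

        srand(1);
        for (i = 0; i < N; i++)     /* order-sensitive hash, masked to
                                     * 32 bits so word size is moot */
            sum = (sum * 31 + (unsigned long)rand()) & 0xffffffffUL;
        printf("checksum after %d draws: %lu\n", N, sum);
        return (0);
    }

If the checksums differ, the generator (or the code around it)
is suspect, before any expensive run is wasted.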

Finally... if the new code is not going to be cryptographically
strong due to interface constraints, then what is the benefit,
other than it being newer than the old code?  I'm not seeing
anything other than what looks like change for the sake of
change, and some unhappy consequences.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.
