From: Terry Lambert
Message-Id: <200102261355.GAA17234@usr05.primenet.com>
Subject: Re: cvs commit: ports/astro/xglobe/files patch-random
To: kris@FreeBSD.ORG (Kris Kennaway)
Date: Mon, 26 Feb 2001 13:55:21 +0000 (GMT)
Cc: tlambert@primenet.com (Terry Lambert), arch@FreeBSD.ORG
In-Reply-To: <20010226045855.A34109@hub.freebsd.org> from "Kris Kennaway"
    at Feb 26, 2001 04:58:55 AM

Not to belabor the point, but...

> > > Me?  No, but others have done so.  Terry, the existing rand() is
> > > a bad algorithm just about any way you look at it.
> >
> > It's useful because it creates repeatable results with the same
> > seed, which are the same for the same seed on other platforms.
>
> Well, so does Andrey's replacement.

So if I take the same program, compile it on a Solaris box and on a
FreeBSD box, both linked against the platform libc, will I get the
same results from both machines, without having to carry the random
number generator code over to the new platform with my program?

I didn't think so.

What's the benchmark of the two algorithms for 100,000 numbers each,
on the same machine?  If the "improved" code takes longer to run,
that would be a problem in itself.

> > At least let it be a compile time option, set in make.conf.
>
> This has been requested -- I don't object, so those who are using
> unrandom PRNGs can maintain compatibility as a transition mechanism.

I think the new code should not be turned on by default; people who
worry about such things will be willing to turn it on if they feel
the need.  And I'd call it a compatibility mechanism, not a
transition mechanism.

I know of several programs that work only because they were tested
with particular seeds; in other words, they depend on the algorithm
to produce a deterministic set of data each time they run, instead
of saving a large pseudo-random data set to a file and using that.
That's probably a "bad use" in your opinion, but it's a very clever
hack in mine.

Someone has already pointed out that their network simulation data
becomes invalid if the algorithm changes, since they can't compare
the same fictional network with new code.

I have some physics code designed to run on a Cray, but which gets
maintained and tested, with new constraints, on small machines like
FreeBSD boxes.  I know several physicists who do similar work, in
the same way.  FreeBSD stops being useful for this if 100,000
generations don't match up between the Cray (or Hitachi, for one of
them) and the FreeBSD box; that comparison is what ensures the code
is behaving sanely on the big iron, before cranking the iterations
up to a hundred million or more events.
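To make that concrete, here's the sort of one-file check I have in
mind.  This is only a sketch: the seed, the value count, and the
running checksum are arbitrary choices of mine, and any digest of the
output stream would serve as well.  It times the loop too, which
answers the benchmark question at the same time.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N	100000			/* values per run */

int
main(void)
{
	unsigned long sum = 0;
	clock_t t0, t1;
	int i;

	srand(42);			/* fixed, arbitrary seed */
	t0 = clock();
	for (i = 0; i < N; i++)
		sum = sum * 31UL + (unsigned long)rand();
	t1 = clock();
	printf("seed 42, %d values: checksum %08lx, %.3f sec\n",
	    N, sum & 0xffffffffUL,
	    (double)(t1 - t0) / CLOCKS_PER_SEC);
	return (0);
}

Compile that against the native libc on each platform; if the
checksums disagree, the generators disagree, and every seed-dependent
result is suspect.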
Finally... if the new code is not going to be cryptographically
strong, due to interface constraints, then what is the benefit, other
than it being newer than the old code?  I'm not seeing anything here
other than what looks like change for the sake of change, and some
unhappy consequences.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.