Date:      Sun, 5 Oct 1997 02:15:54 +1000
From:      Bruce Evans <bde@zeta.org.au>
To:        bde@zeta.org.au, rcarter@consys.com
Cc:        freebsd-current@freebsd.org
Subject:   Re: xlock: caught signal 8 while running galaxy mode.
Message-ID:  <199710041615.CAA31787@godzilla.zeta.org.au>

>}---
>}/* Intel prefers long real (53 bit) precision */
>}#define	__iBCS_NPXCW__		0x262
>}/* wfj prefers temporary real (64 bit) precision */
>}#define	__386BSD_NPXCW__	0x362
>}/*
>} * bde prefers 53 bit precision and all exceptions masked.
>}---
>}
>Interesting, in the long run 0x200 is the most "standard",
>if Sun has its way.  I did not realize that it was possible

I think you mean 0x137f?  0x00 in the low 8 bits would unmask all
exceptions.  The 0x40 and 0x1000 bits are forced to 1 by the h/w.
The 0x0300 bits give the precision.
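
For reference, the control word can be read and written like this
(an untested sketch, not from npx.h; get_cw()/set_cw() are just
illustrative names):

	#include <stdio.h>

	/* Read the current i387 control word with fnstcw. */
	static unsigned short
	get_cw(void)
	{
		unsigned short cw;

		__asm__ __volatile__("fnstcw %0" : "=m" (cw));
		return (cw);
	}

	/* Load a new control word with fldcw. */
	static void
	set_cw(unsigned short cw)
	{
		__asm__ __volatile__("fldcw %0" : : "m" (cw));
	}

	int
	main(void)
	{
		printf("old cw: %#x\n", get_cw());
		set_cw(0x137f);	/* 53-bit precision, all exceptions masked */
		printf("new cw: %#x\n", get_cw());
		return (0);
	}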

>to control the internal precision of arithmetic operations,
>silly me.  A lot of the debate on "Pure" java fp focuses on
>the (unmodifiable) 80 bit internal representation of x87 operands
>stored on the fp stack, but this flag apparently renders that 
>problem moot.  Oddly, Sun has been
>insisting that the only way to make x87 fp "Pure" is to store
>the result of EVERY fp operation to main memory and read it
>back in again.  That way of course every arithmetic operation
>gets performed with 53b precision operands.
>Surely they know about this flag... no no I won't be cynical ;-)

Neither way is completely pure - there are some problems with
double rounding that could not possibly be fixed by running with
64-bit precision and reducing to 53-bit precision by storing.
I believe they aren't fixed by running with 53-bit precision
either, at least for division - the FPU apparently first rounds
to 64 bits.  For transcendental functions, storing is the only
way.
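
A concrete instance of double rounding (my own numbers, easy to
check by hand): the exact product below is 2^65 + 2^12 + 1.  Rounded
once to 53 bits it goes up to 2^65 + 2^13; rounded first to 64 bits
it becomes 2^65 + 2^12, which is then a round-to-even tie for the
store and comes out as plain 2^65.

	#include <stdio.h>

	int
	main(void)
	{
		volatile double x = 1848874847.0;	/* exact in double */
		volatile double y = 19954562207.0;	/* exact in double */

		/*
		 * Exact product:            36893488147419107329
		 * one rounding to 53 bits:  36893488147419111424
		 * 64-bit rounding + store:  36893488147419103232
		 */
		printf("%.17g\n", x * y);
		return (0);
	}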

>However, your comment in npx.h opines that "64-bit precision often
>gives bad results", and that is not true at all.  More accurately, 
>(oops, I punned:) if computing intermediate values to higher 
>precision causes *different* final results, then in all but the 
>most highly contrived cases the problem lies with the code, 
>or in your terminology it's a bug :).  (in that case the 
>algorithm is unstable wrt precision).  Not to say counterexamples 
>don't exist but they are uncommon enough to be addressed in 
>Kahan's SIAM talk this year.

Counterexamples are well known.  Many can be found in Kahan's old
"paranioa" program.  Bugfeatures in gcc provide many more counterexamples.
Double-precision arithmetic operations operations done at
compile time are always done in 53-bit precision, but the same
computations done at runtime are done partly in 53-bit precision
and partly in the h/w precision, where the parts depend on the
optimization level.  The only sure way to get consistent results is
to store the result of EVERY fp operation to main memory ...
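
E.g. (untested sketch; which expressions gcc actually folds and
where it spills depend on the version and -O level):

	#include <float.h>
	#include <stdio.h>

	int
	main(void)
	{
		/* volatile defeats constant folding; h is 2^-53 */
		volatile double h = DBL_EPSILON / 2.0;
		double runtime = (1.0 + h) + h;
		double folded = (1.0 + DBL_EPSILON / 2.0) + DBL_EPSILON / 2.0;

		/*
		 * Folded in 53-bit precision, 1 + 2^-53 is a round-to-even
		 * tie and collapses to 1, twice, so folded == 1.  At
		 * runtime in 64-bit precision the first sum is held
		 * exactly in a register and runtime == 1 + 2^-52, unless
		 * the intermediate gets spilled to memory, which takes
		 * you back to 1.  At 53-bit runtime precision both ways
		 * agree.
		 */
		printf("folded == 1: %d, runtime == 1: %d\n",
		    folded == 1.0, runtime == 1.0);
		return (0);
	}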

>So there is an inconsistency here: on the one hand your preferences
>yield compatibility for buggy programs when the bug is instability
>of the program algorithm when subjected to (increased) intermediate
>precision; OTOH if the "bug" is manifested by something that
>generates an exception, FreeBSD by default calls it out.

My precision preference yields compatibility for non-buggy programs
compiled by buggy compilers (provided the programs only use double
precision - float precision has the same bugs).

>Java is consistent, there is one fp format and as far as fp exceptions 
>go mask 'em all!

This is the best default behaviour.

Bruce


