Date:      Mon, 13 Oct 1997 09:15:55 -0700
From:      "Russell L. Carter" <rcarter@consys.com>
To:        Bruce Evans <bde@zeta.org.au>
Cc:        freebsd-current@freebsd.org
Subject:   Re: xlock: caught signal 8 while running galaxy mode. 
Message-ID:  <199710131615.JAA03647@dnstoo.consys.com>
In-Reply-To: Your message of "Sun, 05 Oct 1997 02:15:54 +1000." <199710041615.CAA31787@godzilla.zeta.org.au> 


bde@zeta.org.au said:


}>to control the internal precision of arithmetic operations,
}>silly me.  A lot of the debate on "Pure" java fp focuses on
}>the (unmodifiable) 80-bit internal representation of x87 operands
}>stored on the fp stack, but this flag apparently renders that 
}>problem moot.  Oddly, Sun has been
}>insisting that the only way to make x87 fp "Pure" is to store
}>the result of EVERY fp operation to main memory and read it
}>back in again.  That way of course every arithmetic operation
}>gets performed with 53-bit operands.
}>Surely they know about this flag... no no I won't be cynical ;-)
}
}Neither way is completely pure - there are some problems with
}double rounding that could not possibly be fixed by running with
}64-bit precision and reducing to 53-bit precision by storing.
}I believe they aren't fixed by running with 53-bit precision
}either, at least for division - the FPU apparently first rounds
}to 64 bits.  For transcendental functions, storing is the only
}way.
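[Editor's sketch, not part of the original thread: the double-rounding hazard Bruce describes can be simulated in software with exact rationals. `round_sig` below is a made-up helper modelling round-to-nearest-even at a p-bit significand; it is not any real FPU interface.]

```python
from fractions import Fraction

def round_sig(x, p):
    """Round a positive rational x to a p-bit significand, round-to-nearest-even.
    A software stand-in for the FPU's rounding, not a real FPU call."""
    e = 0
    while x >= 2:
        x /= 2
        e += 1
    while x < 1:
        x *= 2
        e -= 1
    # significand is now in [1, 2); Python's round() is round-half-to-even
    return Fraction(round(x * 2**(p - 1))) * Fraction(2)**(e - (p - 1))

# A value just above the 53-bit halfway point, by less than a 64-bit half-ulp:
x = 1 + Fraction(1, 2**53) + Fraction(1, 2**65)

direct = round_sig(x, 53)                 # one rounding, straight to 53 bits
twice  = round_sig(round_sig(x, 64), 53)  # 64-bit register, then 53-bit store

assert direct == 1 + Fraction(1, 2**52)   # the 2^-65 bit breaks the tie upward
assert twice  == 1                        # the tie-breaking bit was already lost
assert direct != twice
```

In x87 terms, the inner `round_sig(x, 64)` plays the role of the 64-bit register result and the outer one the store to a double; the same value rounded once lands one ulp higher.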

Point taken, but...

}
}>However, your comment in npx.h opines that "64-bit precision often
}>gives bad results", and that is not true at all.  More accurately, 
}>(oops, I punned:) if computing intermediate values to higher 
}>precision causes *different* final results, then in all but the 
}>most highly contrived cases the problem lies with the code, 
}>or in your terminology it's a bug :).  (in that case the 
}>algorithm is unstable wrt precision).  Not to say counterexamples 
}>don't exist but they are uncommon enough to be addressed in 
}>Kahan's SIAM talk this year.
}
}Counterexamples are well known.  Many can be found in Kahan's old
}"paranoia" program.  Bugfeatures in gcc provide many more counterexamples.
}Double-precision arithmetic operations done at
}compile time are always done in 53-bit precision, but the same
}computations done at runtime are done partly in 53-bit precision
}and partly in the h/w precision, where the parts depend on the
}optimization level.  The only sure way to get consistent results is
}to store the result of EVERY fp operation to main memory ...

Paranoia mishandles the 64-bit case and reports a "flaw".
Paranoia is not the law, and its author chooses the
"flaw" over the alternative.

I don't know what these bugfeatures in gcc are; perhaps
they are showstoppers?  Linux, though, apparently
doesn't mind them.

My original statement said there are "counterexamples"
to the general rule that more accuracy is better for
application codes.  After a lot of thought and rereading
of papers in my archive I can think of only two,
very tiny, classes of codes that 53-bit precision
benefits more than 64-bit precision (given identical
computation costs).  Those are test programs with bugs,
like paranoia, and iterative algorithms that require
a bit of noise to be coaxed to converge.  Are there
more?  At any rate, "counterexamples" was much too 
charitable.
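[Editor's sketch of the common case where more intermediate precision simply wins: a sum whose small terms all fall below a double's half-ulp. `round_sig` is a made-up software model of rounding to a 64-bit significand, not an FPU call; the 53-bit path just uses ordinary Python floats, which are 53-bit doubles.]

```python
from fractions import Fraction

def round_sig(x, p):
    """Round a positive rational x to a p-bit significand, round-to-nearest-even
    (hypothetical helper: a software model of extended precision)."""
    e = 0
    while x >= 2:
        x /= 2
        e += 1
    while x < 1:
        x *= 2
        e -= 1
    return Fraction(round(x * 2**(p - 1))) * Fraction(2)**(e - (p - 1))

term, n = Fraction(1, 2**56), 1024
exact = 1 + n * term                      # = 1 + 2^-46, representable as a double

# 53-bit intermediates: each term is below the half-ulp of 1.0 and vanishes
s53 = 1.0
for _ in range(n):
    s53 += float(term)

# 64-bit intermediates: the terms survive in the 11 extra significand bits
s64 = Fraction(1)
for _ in range(n):
    s64 = round_sig(s64 + term, 64)

assert s53 == 1.0                  # all 1024 additions were lost
assert float(s64) == float(exact)  # even the final 53-bit store is exact here
```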

On the other hand, there are some really fundamental
algorithms like iterative refinement that greatly
improve with extended precision.  Kahan is definitely
willing to live with the possible double rounding bugaboo
in exchange for extended precision.
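[Editor's sketch of iterative refinement in a nutshell: solve in working precision, but compute the residual in extended precision. The 2x2 system and the `solve2` helper are made up for illustration, and exact rationals stand in for the extended-precision residual.]

```python
from fractions import Fraction

def solve2(A, b):
    """Solve a 2x2 system by Gaussian elimination in plain 53-bit doubles."""
    m = A[1][0] / A[0][0]
    piv = A[1][1] - m * A[0][1]
    x1 = (b[1] - m * b[0]) / piv
    x0 = (b[0] - A[0][1] * x1) / A[0][0]
    return [x0, x1]

# A mildly ill-conditioned system with exact solution [1, 1]; eps is chosen
# so that rounding 1+eps and 2+eps to doubles loses different low bits.
eps = Fraction(1501201, 2**52)
A = [[Fraction(1), Fraction(1)], [Fraction(1), 1 + eps]]
b = [Fraction(2), 2 + eps]

Af = [[float(v) for v in row] for row in A]
x = solve2(Af, [float(v) for v in b])
err0 = max(abs(Fraction(xi) - 1) for xi in x)

# One refinement step: residual in *extended* (here: exact) arithmetic,
# correction solved in ordinary double precision.
r = [b[i] - A[i][0] * Fraction(x[0]) - A[i][1] * Fraction(x[1]) for i in (0, 1)]
d = solve2(Af, [float(ri) for ri in r])
x = [x[0] + d[0], x[1] + d[1]]
err1 = max(abs(Fraction(xi) - 1) for xi in x)

assert err0 > Fraction(1, 10**8)    # the plain solve loses about 7 digits
assert err1 < Fraction(1, 10**12)   # one extended-residual step recovers them
```

The point is exactly the one above: the working-precision solve is unchanged; only the residual needs the extra bits, and the error contracts by roughly a factor of cond(A) times the unit roundoff per step.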

}
}>So there is an inconsistency here: on the one hand your preferences
}>yield compatibility for buggy programs when the bug is instability
}>of the program algorithm when subjected to (increased) intermediate
}>precision; OTOH if the "bug" is manifested by something that
}>generates an exception, FreeBSD by default calls it out.
}
}My precision preference yields compatibility for non-buggy programs
}compiled by buggy compilers (provided the programs only use double
}precision - float precision has the same bugs).

Compatible with what?  Most cpus are x86 and 68xxx with
extended precision enabled by default.  Most flops are (were?)
done on Crays, which are not compatible model to model and
aside from the T3D are not even close to IEEE 754.  A big chunk
of flops is now done on the RS/6000, which has extended precision
via fused multiply-add.  And apparently, 53-bit x86 isn't
compatible either...

Russell



