Date:      Sun, 29 Jul 2012 15:59:49 +1000 (EST)
From:      Bruce Evans <brde@optusnet.com.au>
To:        Stephen Montgomery-Smith <stephen@missouri.edu>
Cc:        freebsd-bugs@FreeBSD.org, FreeBSD-gnats-submit@FreeBSD.org, Stephen Montgomery-Smith <stephen@FreeBSD.org>, Bruce Evans <brde@optusnet.com.au>
Subject:   Re: bin/170206: complex arcsinh, log, etc.
Message-ID:  <20120729150858.F1594@besplex.bde.org>
In-Reply-To: <501471C7.8000102@missouri.edu>
References:  <201207270247.q6R2lkeR021134@wilberforce.math.missouri.edu> <20120727233939.A7820@besplex.bde.org> <5012A96E.9090400@missouri.edu> <20120728142915.K909@besplex.bde.org> <50137C24.1060004@missouri.edu> <20120728171345.T1911@besplex.bde.org> <50140956.1030603@missouri.edu> <50141018.3040203@missouri.edu> <501471C7.8000102@missouri.edu>

On Sat, 28 Jul 2012, Stephen Montgomery-Smith wrote:

> On 07/28/2012 11:15 AM, Stephen Montgomery-Smith wrote:
>> On 07/28/2012 10:46 AM, Stephen Montgomery-Smith wrote:
>>> OK.  This clog really seems to work.
>>> 
>>> x*x + y*y - 1 is computed with a ULP less than 0.8.  The rest of the
>>> errors seem to be due to the implementation of log1p.  The ULP of the
>>> final answer seems to be never bigger than a little over 2.
> ...
> And I should add that I just realized that ULP isn't quite the same as 
> relative error, so an extra factor of up to 2 could make its way into the 
> calculations.

Yes, this is tricky.  For denormals, it is easy to be off by a factor of
2**MANT_DIG or infinity instead of only 2.

For normals, the most interesting cases are near powers of 2 (say 1).
One ulp is twice as large for values in [1, 2) as it is for values
in [0.5, 1).  Even to determine which one to use, you need to know
if the infinitely precise result is >= 1 or < 1, else you may be
off by a factor of 2 in the error checking.  If the factor is 1/2,
then it hides errors, and if it is 2 then it gives unexpected errors.
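
To see the factor of 2 directly, a toy program (variable names are
made up):

#include <math.h>
#include <stdio.h>

int
main(void)
{
        /* Size of an ulp just below and just above 1.0. */
        double ulp_below = 1.0 - nextafter(1.0, 0.0);   /* 0x1p-53 */
        double ulp_above = nextafter(1.0, 2.0) - 1.0;   /* 0x1p-52 */

        printf("%a %a\n", ulp_below, ulp_above);
        return (0);
}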

For denormals, the easiest case to understand is when the correctly
rounded result is the smallest strictly positive denormal.  Then the
size of an ulp is the same as the value of this denormal.  A rounding
error of < 1 ulp (but > 0.5 ulps) may give a result of 0 ulps or 2 ulps.
Such errors are to be expected.  But relatively, they are infinity and
2, respectively.  Normally you expect rounding errors near
2**-MANT_DIG, and infinity and 2 are much larger.  The relative error
should be scaled (like the size of an ulp should be) so that you don't
see such large errors.
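
Concretely, for doubles (illustrative):

#include <math.h>
#include <stdio.h>

int
main(void)
{
        double tiny = nextafter(0.0, 1.0);      /* 0x1p-1074 */

        /*
         * The neighbours of tiny are 0 and 2*tiny, so a rounding
         * error of just over half an ulp gives a result that is
         * relatively off by a factor of infinity or 2.
         */
        printf("%a %a %a\n", nextafter(tiny, 0.0), tiny,
            nextafter(tiny, 1.0));
        return (0);
}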

You might not care about denormals, but you should check a few of them
and then you don't want errors in the checking software for them hiding
errors for normals.  Without denormals, there would be no gradual
underflow, and underflow from the smallest normal to 0 really would be
essentially infinity (it would be about 2**MANT_DIG in ulps).
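
E.g., counting the rounding steps that gradual underflow provides
below DBL_MIN (a toy program):

#include <float.h>
#include <math.h>
#include <stdio.h>

int
main(void)
{
        /*
         * With gradual underflow there are about 2**52 representable
         * steps between 0 and DBL_MIN; flushing denormals to 0 would
         * collapse that whole span into a single rounding step.
         */
        double ulp_at_min = DBL_MIN - nextafter(DBL_MIN, 0.0);

        printf("%g steps\n", DBL_MIN / ulp_at_min);
        return (0);
}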

My checking software scales for denormals, but I think I got it wrong
and am off by a factor of 2.  For normals near a power of 2, it's just
impossible to determine the right scale without a reference function
with enough extra precision to determine which side it is on.  Even
the correct definition of an ulp is unclear near powers of 2 (and
perhaps in other cases).  I want my checking program to match my
definition, which
is currently just what the checking program does :-) -- don't worry
about the reference function not being precise enough.  There is the
same uncertainty about the size of an ulp for a result of 0 (and
more -- maybe the result should be -0 :-).
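
The shape of the problem, as a minimal sketch (error_in_ulps() is a
made-up name, and long double merely stands in for a reference with
enough extra precision, which is exactly the hard part):

#include <math.h>

/*
 * Taking the ulp of the rounded reference is where the power-of-2
 * (and 0) ambiguity bites: nextafter() picks one side of the binade
 * boundary, and the true scale may be the other side's.
 */
static double
error_in_ulps(double computed, long double ref)
{
        double r = (double)ref;
        double ulp = nextafter(r, INFINITY) - r;        /* crude ulp */

        return ((double)(fabsl((long double)computed - ref) / ulp));
}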

Recently I started checking extra internal precision for some functions.
This gives relative errors of < 2**-(MANT_DIG+N), where N is the extra
precision.  When the relative error is near 2**-MANT_DIG, it is hard
to tell if it is smaller than 1 ulp, since the ulp scale may be hard
to determine and the relative error doesn't map simply to ulps.  When
N >= 2, there is a factor of 2 to spare and final errors (after
rounding) that are certain to be < 1 ulp are easy to ignore.  Even
functions that don't have much extra internal precision but which
meet my design goal of a final error < 1 ulp should have N at
least 1, so their non-errors should be easy to ignore too, but I have
used this mainly for functions with N about 7.  The internal errors
for the recently committed logl() are < 2**-120 relative on sparc64,
except for denormal results.  This is obviously enough for 113-bit
precision to 0.5+epsilon ulps.  But rounding would cluster the final
errors near 2**-113 or 2**-114, and it wouldn't be obvious how this
maps to ulps.
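
The arithmetic, as a toy program (assuming the worst position in a
binade):

#include <math.h>
#include <stdio.h>

int
main(void)
{
        /*
         * An internal relative error < 2**-(MANT_DIG+N) is < 2**-N
         * ulps at the worst position in a binade, so after one more
         * rounding (0.5 ulps) the final error stays < 0.5 + 2**-N.
         */
        int n;

        for (n = 0; n <= 7; n++)
                printf("N = %d: final error < %.6f ulps\n",
                    n, 0.5 + ldexp(1.0, -n));
        return (0);
}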

Bruce


