Date: Sun, 29 Jul 2012 22:08:45 +1000 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Stephen Montgomery-Smith <stephen@missouri.edu>
Cc: freebsd-bugs@FreeBSD.org, Stephen Montgomery-Smith <stephen@FreeBSD.org>, Bruce Evans <brde@optusnet.com.au>
Subject: Re: bin/170206: complex arcsinh, log, etc.
Message-ID: <20120729195931.J2598@besplex.bde.org>
In-Reply-To: <5014A284.2060204@missouri.edu>
References: <201207281550.q6SFoBdC073014@freefall.freebsd.org> <20120729121443.M1008@besplex.bde.org> <5014A284.2060204@missouri.edu>
On Sat, 28 Jul 2012, Stephen Montgomery-Smith wrote:
> On 07/28/2012 09:31 PM, Bruce Evans wrote:
>> On Sat, 28 Jul 2012, Stephen Montgomery-Smith wrote:
>>
>>> OK. This clog really seems to work.
>>>
>>> x*x + y*y - 1 is computed with a ULP less than 0.8. The rest of the
>>> errors seem to be due to the implementation of log1p. The ULP of the
>>> final answer seems to be never bigger than a little over 2.
>>
>> I really don't like this version...
>
> I can understand your reticence. I'll let you work some more on clog, and I
> will concentrate on the catrig stuff.
>
> But did you see that I messed up my earlier error analysis of log1p(1+x)?
I didn't look at it closely, but just counted the number of operations and
multiplied by 0.5 and added a safety margin to get a similar value :-).
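As a rough illustration of the extra-precision x*x + y*y - 1
calculation quoted above (a sketch only, not either of our versions:
it uses fma() for the exact products where real code might split by
hand, and it only handles args where the subtraction of 1 is exact):

    #include <math.h>

    /*
     * Sketch: compute x*x + y*y - 1 in doubled precision so that the
     * cancellation against 1 doesn't throw away the low bits of the
     * squares.  fma(x, x, -xxh) recovers the rounding error of x*x
     * exactly.  Assumes 0 <= y <= x < 1 and x*x >= 0.5, so xxh - 1
     * is exact by Sterbenz's lemma.
     */
    static double
    xxpyym1(double x, double y)
    {
            double xxh = x * x, xxl = fma(x, x, -xxh);
            double yyh = y * y, yyl = fma(y, y, -yyh);

            return ((xxh - 1) + yyh + (xxl + yyl));
    }

The ordering of the final additions matters for getting under 1 ulp;
this sketch doesn't try hard there.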
> >> Also, I don't think the problem is due to the implementation of log1p.
> >> If you do an error analysis of log(1+x) where x is about exp(-1)-1, and
> >> x is correct to within 0.8 ULP, I suspect that about 2.5 ULP is the best
> >> you can do for the final answer:
> >>
> >> relative_error(log(1+x)) = fabs(1/((1+x) log(1+x))) * relative_error(x)
> >> = 1.58 * relative_error(x)
> >>
> >> Given that log1p has itself a ULP of about 1, and relative error in x is
> >> 0.8, and considering x=exp(-1)-1, this gives a ULP at around 1.58*0.8+1
> >> = 2.3. And that is what I observed.
log1p(x) does expand the error significantly here, since exp(-1)-1 is
too close to -1 for log1p to be a good function to use on it. The
expansion in units of ulps as x -> -1 seems to be fully exponential.
I was surprised by this. exp(x) also expands errors significantly, but
only linearly in x (or is it x*log(x)?).
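A throwaway loop shows how fast the log1p() expansion grows (just the
expansion factor from your formula above, nothing from either patch):

    #include <math.h>
    #include <stdio.h>

    /*
     * Print the error-expansion factor fabs(1 / ((1 + x) * log1p(x)))
     * as x approaches -1.  With 1 + x = 2^-n it grows like 2^n / n,
     * i.e., fully exponentially.
     */
    int
    main(void)
    {
            int n;
            double x;

            for (n = 1; n <= 52; n += 3) {
                    x = -1 + exp2(-n);
                    printf("1+x = 2^-%-2d  expansion = %.3g\n",
                        n, fabs(1 / ((1 + x) * log1p(x))));
            }
            return (0);
    }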
> >> (Here "=" means approximately equal to.)
> >
> > And I should add that I just realized that ULP isn't quite the same as
> > relative error, so an extra factor of up to 2 could make its way into
> > the calculations.
>
> In fact, I think I messed it up a bunch.
>
> So let D(f(x)) denote the absolute error in f(x).
>
> D(f(x)) = f'(x) Dx.
>
> So
>
> D(log(1+x)) = Dx/(1+x).
>
> Suppose x is a bit bigger than exp(-1)-1 = -0.63, which has ilogb = -1.
> If the ULP error in calculating x is around 0.8, then
> Dx = 0.8 * 2^(-d-1),
> where d is the number of bits in the mantissa.
>
> So D(log(1+x)) = Dx/0.37.
1/0.37 is best spelled as `e'. I think the expansion factor is exp(N)
for x = exp(-N)-1. I didn't think about the scaling of Dx.
> Since log(1+x) is a little bit bigger than -1, ilogb(log(1+x)) = -1.
>
> ULP(log(1+x)) = Dx/0.37 * 2^(d+1) = 0.8/0.37 = 2.2
Certainly there is exponential scaling. It seems to be fully exponential,
unlike for exp(x).
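As a sanity check on the 2.2: Dx = 0.8 * 2^(-d-1), D(log(1+x)) = e * Dx,
and one ulp of a result with ilogb = -1 is also 2^(-d-1), so the error
is 0.8 * e = 2.17 ulps, independent of d.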
> Now add 1 for ULP in calculating log1p, and this only gives a ULP of
> 3.2. So the observed bound is actually better than expected. If one
> could get the ULP of log1p to be as good as possible (0.5), the best ULP
> one could get is 2.7. We still do a bit better than that.
This is easier to think about in binary with log2(). Consider values
-1+2**-N. We can't get closer to -1 than with N = d (maybe off by 1).
log2(1+this) is -d. If we are off by a full ulp in the arg, and towards
0, then the wrong arg is -1+2**-d+2**-d = -1+2**-(d-1). log2(1+this) is
-(d-1). The error has expanded from 1 ulp to a lot, though not by 2**d
(in ulps). Only the relative error expanded by about 2**d (actually
by 2**d/d). It remains to calculate the error between -d and -(d-1) in
ulps. For d = 53, log2(53) is between 5 and 6, so all except the top
5 or 6 bits are wrong, so the error is between 2**47 and 2**48 ulps.
The details are more complicated for N = 1, and I didn't do them.
Your formula gives the above when modified to:
ULP(log(1+x)) = 1.0 * (exp2(N)/N)
Somehow the relative error is the same as the error in ulps (and the
approximate expansion factor).
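Both the N = d endpoint and the exp2(N)/N formula are easy to check
with a quick hack (d = 53 for double; nothing here is from the patches):

    #include <math.h>
    #include <stdio.h>

    /*
     * Perturb the arg of log2(1 + x) by one ulp near x = -1 and count
     * how many ulps the result moves: from -53 to -52, i.e., by about
     * 2**47 ulps of the result, which is in the exp2(N)/N ballpark
     * for N = 53.
     */
    int
    main(void)
    {
            double x = -1 + exp2(-53);      /* closest double above -1 */
            double xbad = x + exp2(-53);    /* off by one ulp, towards 0 */
            double r = log2(1 + x);         /* exactly -53 */
            double rbad = log2(1 + xbad);   /* exactly -52 */

            printf("r = %g, rbad = %g, error = %.3g ulps\n",
                r, rbad, fabs(rbad - r) / exp2(ilogb(r) - 52));
            return (0);
    }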
This shows that we shouldn't consider using log1p() near -1, and I
think -1+exp(-1) is too near. x above -1/2 or -1/4 should be OK.
Actually, I can't see any version doing log1p() on -1+exp(-1). The
worst case in your version seems to be when ax*ax+ay*ay == 0.7. Then
subtracting 1 gives -0.3.
My version does the extra-precision calculation of ax*ax+ay*ay in
many more cases, but its worst case for using log1p() seems to be
when subtracting 1 gives -0.5. This may be too near -1. I wouldn't
like even a factor of 2 expansion in the error. But errors of a
couple of ulps will occur for different reasons using log() instead
of log1p(). Another thing to do is to find exactly when log1p()
starts being worse than log().
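Finding that crossover could be done with a quick hack like this
(hypothetical, not from either version; long double as a partial
reference has 11 extra mantissa bits on i386/amd64, enough to resolve
errors of a few ulps):

    #include <math.h>
    #include <stdio.h>

    /* Error of a double result in ulps, against a long double reference. */
    static double
    err_ulps(double approx, long double ref)
    {
            return (fabsl((long double)approx - ref) /
                exp2l(ilogbl(ref) - 52));
    }

    int
    main(void)
    {
            long double ref;
            double s;

            /*
             * s - 1 is exact for s in [0.5, 1) (Sterbenz), so log()
             * and log1p() see exactly the same value.
             */
            for (s = 0.5; s < 1; s += 0.03125) {
                    ref = logl((long double)s);
                    printf("s = %.5f  log: %.3g ulps  log1p: %.3g ulps\n",
                        s, err_ulps(log(s), ref),
                        err_ulps(log1p(s - 1), ref));
            }
            return (0);
    }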
Bruce
