Date:      Sun, 29 Jul 2012 10:25:04 -0500
From:      Stephen Montgomery-Smith <stephen@missouri.edu>
To:        Bruce Evans <brde@optusnet.com.au>
Cc:        freebsd-bugs@FreeBSD.org, Stephen Montgomery-Smith <stephen@FreeBSD.org>
Subject:   Re: bin/170206: complex arcsinh, log, etc.
Message-ID:  <501555D0.1090102@missouri.edu>
In-Reply-To: <20120729195931.J2598@besplex.bde.org>
References:  <201207281550.q6SFoBdC073014@freefall.freebsd.org> <20120729121443.M1008@besplex.bde.org> <5014A284.2060204@missouri.edu> <20120729195931.J2598@besplex.bde.org>

On 07/29/2012 07:08 AM, Bruce Evans wrote:
> On Sat, 28 Jul 2012, Stephen Montgomery-Smith wrote:
>
>> On 07/28/2012 09:31 PM, Bruce Evans wrote:
>>> On Sat, 28 Jul 2012, Stephen Montgomery-Smith wrote:
>>>
>>>> OK.  This clog really seems to work.
>>>>
>>>> x*x + y*y - 1 is computed with an error of less than 0.8 ulp.  The
>>>> rest of the error seems to be due to the implementation of log1p.
>>>> The error in the final answer never seems to exceed a little over 2 ulp.
>>>
>>> I really don't like this version...
>>
>> I can understand your reluctance.  I'll let you work some more on clog,
>> and I will concentrate on the catrig stuff.
>>
>> But did you see that I messed up my earlier error analysis of log1p(1+x)?
>
> I didn't look at it closely, but just counted the number of operations and
> multiplied by 0.5 and added a safety margin to get a similar value :-).
>
>> >> Also, I don't think the problem is due to the implementation of log1p.
>> >> If you do an error analysis of log(1+x) where x is about exp(-1)-1,
>> >> and x is correct to within 0.8 ulp, I suspect that about 2.5 ulp is
>> >> the best you can do for the final answer:
>> >>
>> >> relative_error(log(1+x)) = fabs(1/((1+x) log(1+x))) * relative_error(x)
>> >>                          = 1.58 * relative_error(x)
>> >>
>> >> Given that log1p itself has an error of about 1 ulp, the relative
>> >> error in x is 0.8 ulp, and considering x = exp(-1)-1, this gives an
>> >> error of around 1.58*0.8 + 1 = 2.3 ulp.  And that is what I observed.
>
> log1p(x) does expand the error significantly here, since exp(-1)-1 is
> too close to -1 for log1p to be a good function to use on it.  The
> expansion in units of ulps as x -> -1 seems to be fully exponential.  I
> was surprised by this.  exp(x) also expands errors significantly, but
> only linearly in x (or is it x*log(x)?).
>
>> >> (Here "=" means approximately equal to.)
>> >
>> > And I should add that I just realized that ULP isn't quite the same as
>> > relative error, so an extra factor of up to 2 could make its way into
>> > the calculations.
>>
>> In fact, I think I messed it up a bunch.
>>
>> So let D(f(x)) denote the absolute error in f(x).
>>
>> D(f(x)) = f'(x) Dx.
>>
>> So
>>
>> D(log(1+x)) = Dx/(1+x).
>>
>> Suppose x is a bit bigger than exp(-1)-1 = -0.63, which has ilogb = -1.
>> If the error in calculating x is around 0.8 ulp, then
>> Dx = 0.8 * 2^(-d-1),
>> where d is the number of bits in the mantissa.
>>
>> So D(log(1+x)) = Dx/0.37.
>
> 1/0.37 is best spelled as `e'.

Duh!  Silly me.


