Date: Sun, 29 Jul 2012 22:08:45 +1000 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Stephen Montgomery-Smith
Cc: freebsd-bugs@FreeBSD.org, Stephen Montgomery-Smith, Bruce Evans
Subject: Re: bin/170206: complex arcsinh, log, etc.
Message-ID: <20120729195931.J2598@besplex.bde.org>
In-Reply-To: <5014A284.2060204@missouri.edu>
References: <201207281550.q6SFoBdC073014@freefall.freebsd.org> <20120729121443.M1008@besplex.bde.org> <5014A284.2060204@missouri.edu>

On Sat, 28 Jul 2012, Stephen Montgomery-Smith wrote:

> On 07/28/2012 09:31 PM, Bruce Evans wrote:
>> On Sat, 28 Jul 2012, Stephen Montgomery-Smith wrote:
>>
>>> OK.  This clog really seems to work.
>>>
>>> x*x + y*y - 1 is computed with a ULP less than 0.8.  The rest of the
>>> errors seem to be due to the implementation of log1p.  The ULP of the
>>> final answer seems to be never bigger than a little over 2.
>>
>> I really don't like this version...
>
> I can understand your reticence.  I'll let you work some more on clog,
> and I will concentrate on the catrig stuff.
>
> But did you see that I messed up my earlier error analysis of
> log1p(1+x)?

I didn't look at it closely, but just counted the number of operations
and multiplied by 0.5 and added a safety margin to get a similar value
:-).

> >> Also, I don't think the problem is due to the implementation of log1p.
> >> If you do an error analysis of log(1+x) where x is about exp(-1)-1, and
> >> x is correct to within 0.8 ULP, I suspect that about 2.5 ULP is the best
> >> you can do for the final answer:
> >>
> >> relative_error(log(1+x)) = fabs(1/((1+x) log(1+x))) * relative_error(x)
> >> = 1.58 * relative_error(x)
> >>
> >> Given that log1p has itself a ULP of about 1, and relative error in x is
> >> 0.8, and considering x=exp(-1)-1, this gives a ULP at around 1.58*0.8+1
> >> = 2.3.  And that is what I observed.

log1p(x) does expand the error significantly here, since exp(-1)-1 is
too close to -1 for log1p to be a good function to use on it.  The
expansion in units of ulps as x -> -1 seems to be fully exponential.  I
was surprised by this.  exp(x) also expands errors significantly, but
only linearly in x (or is it x*log(x)?).

> >> (Here "=" means approximately equal to.)
> >
> > And I should add that I just realized that ULP isn't quite the same as
> > relative error, so an extra factor of up to 2 could make its way into
> > the calculations.
>
> In fact, I think I messed it up a bunch.
>
> So let D(f(x)) denote the absolute error in f(x).
>
> D(f(x)) = f'(x) Dx.
>
> So
>
> D(log(1+x)) = Dx/(1+x).
>
> If x is a bit bigger than exp(-1)-1 = -0.63, which has ilogb = -1.  If
> the ULP in calculating x is around 0.8, then
> Dx = 0.8 * 2^(-d-1),
> where d is the number of bits in the mantissa.
>
> So D(log(1+x)) = Dx/0.37.

1/0.37 is best spelled as `e'.  I think exp(N) is right for
x = exp(-N)-1.  I didn't think about the scaling of Dx.
> log(1+x) is a little bit bigger than -1, and so ilogb(log(1+x)) = -1.
>
> ULP(log(1+x)) = Dx/0.37 * 2^{d+1} = 0.8/0.37 = 2.2

Certainly there is exponential scaling.  It seems to be fully
exponential, unlike for exp(x).

> Now add 1 for the ULP in calculating log1p, and this only gives a ULP
> of 3.2.  So the observed bound is actually better than expected.  If
> one could get the ULP of log1p to be as good as possible (0.5), the
> best ULP one could get is 2.7.  We still do a bit better than that.

This is easier to think about in binary with log2().  Consider values
-1+2**-N.  We can't get closer to -1 than with N = d (maybe off by 1).
log2(1+this) is -d.  If we are off by a full ulp in the arg, and
towards 0, then the wrong arg is -1+2**-d+2**-d = -1+2**-(d-1).
log2(1+this) is -(d-1).  A 1-ulp error in the arg has expanded into a
huge one in the result.  Not by 2**d though (in ulps).  Only the
relative error expanded by about 2**d (actually by 2**d/d).

It remains to calculate the error between -d and -(d-1) in ulps.  For
d = 53, log2(53) is between 5 and 6, so all except the top 5 or 6 bits
are wrong, so the error is between 2**47 and 2**48 ulps.  The details
are more complicated for N = 1, and I didn't do them.

Your formula gives the above when modified to:

    ULP(log(1+x)) = 1.0 * (exp2(N)/N)

Somehow the relative error is the same as the error in ulps (and the
approximate expansion factor).

This shows that we shouldn't consider using log1p() near -1, and I
think -1+exp(-1) is too near.  x above -1/2 or -1/4 should be OK.

log1p() on -1+exp(-1) doesn't actually seem to happen; I can't see
that any version does this.  The worst case in your version seems to
be when ax*ax+ay*ay == 0.7.  Then subtracting 1 gives -0.3.  My
version does the extra-precision calculation of ax*ax+ay*ay in many
more cases, but its worst case for using log1p() seems to be when
subtracting 1 gives -0.5.  This may be too near -1.  I wouldn't like
even a factor of 2 expansion in the error.
But errors of a couple of ulps will occur for different reasons using
log() instead of log1p().  Another thing to do is to find exactly when
log1p() starts being worse than log().

Bruce