Date:      Tue, 10 Jul 2012 13:53:10 -0700
From:      Steve Kargl <sgk@troutmask.apl.washington.edu>
To:        Stephen Montgomery-Smith <stephen@missouri.edu>
Cc:        freebsd-current@freebsd.org
Subject:   Re: Use of C99 extra long double math functions after r236148
Message-ID:  <20120710205310.GA95746@troutmask.apl.washington.edu>
In-Reply-To: <4FFC705A.6070403@missouri.edu>
References:  <210816F0-7ED7-4481-ABFF-C94A700A3EA0@bsdimp.com> <4FF9DA46.2010502@missouri.edu> <20120708235848.GB53462@troutmask.apl.washington.edu> <4FFA25EA.5090705@missouri.edu> <20120709020107.GA53977@troutmask.apl.washington.edu> <4FFA52F8.2080700@missouri.edu> <20120709050238.GA54634@troutmask.apl.washington.edu> <4FFC5ADF.2010601@missouri.edu> <20120710165045.GA94795@troutmask.apl.washington.edu> <4FFC705A.6070403@missouri.edu>

On Tue, Jul 10, 2012 at 01:11:38PM -0500, Stephen Montgomery-Smith wrote:
> On 07/10/2012 11:50 AM, Steve Kargl wrote:
> >On Tue, Jul 10, 2012 at 11:39:59AM -0500, Stephen Montgomery-Smith wrote:
> >>On 07/09/2012 12:02 AM, Steve Kargl wrote:
> >>
> >>>Yep.  Another example is the use of upward recursion to compute
> >>>Bessel functions where the argument is larger than the order.
> >>>The algorithm is known to be unstable.
> >>
> >>By upward recursion, do you mean equation (1) in
> >>http://mathworld.wolfram.com/BesselFunction.html?
> >
> >Yes.
> >
> >>So what do people use?  Maybe something like
> >>http://en.wikipedia.org/wiki/Bessel_function#Asymptotic_forms (second
> >>set of equations), but finding some asymptotics with a few extra terms
> >>in them?
> >
> >They use downward recursion, which is known to be stable.
> >NIST has revised Abramowitz and Stegun, and it is available
> >on line.  For Bessel function computations, look at
> >http://dlmf.nist.gov/10.74
> >and more importantly example 1 under the following link
> >http://dlmf.nist.gov/3.6#v
> 
> These algorithms are going to be subject to the same problems as using 
> Taylor's series to evaluate exp(x) for x>0.  The computation will 
> require several floating point operations.  Even if the method is not 
> unstable, I would think getting an error of about 1 ULP is next to 
> impossible, that is, unless one performs all the computations in a 
> higher precision, and then rounds at the end.

Downward recursion is used for n > 1.  j0(x) and j1(x)
are computed using two different schemes.  If x <= 2,
then the ascending series can be summed (this only
takes somewhere around 10 terms for double).  Alternatively,
one can use a minimax polynomial for x in [0:2].  For
x > 2, the Hankel expansions are used (see 10.17.3 at
http://dlmf.nist.gov/10.17).  The summations are approximated
by minimax polynomials.  See msun/src/s_j0.c.
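
As a rough illustration of the small-argument case (my own throwaway
sketch of the ascending series, not the rational approximation that
s_j0.c actually uses):

    #include <math.h>

    /* J_0(x) = sum_{k>=0} (-1)^k (x^2/4)^k / (k!)^2, summed directly.
     * Each term is -(x^2/4)/k^2 times the previous one, so for |x| <= 2
     * the sum settles down after roughly a dozen terms in double.
     * Illustrative only: no argument reduction, no special cases. */
    static double
    j0_small(double x)
    {
            double z = x * x / 4.0;
            double term = 1.0, sum = 1.0;

            for (int k = 1; fabs(term) > 1e-20; k++) {
                    term *= -z / ((double)k * k);
                    sum += term;
            }
            return (sum);
    }

For |x| <= 2 this should track libm's j0() to within a few ULP; it
says nothing about the x > 2 range, where the Hankel expansions take
over.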

I've tested j0f() and know that one can get less than 1
ULP over various ranges of x.  I also know that when x is
close to a zero of j0(x), the error is on the order of
1000000 ULP.
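
For anyone who wants to see where numbers like that come from, a crude
per-point check of j0f() against double j0() as the reference (the
reference is itself only good to double precision, so this is a sanity
check, not a proper test harness):

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
            for (float x = 0.5f; x < 10.0f; x += 0.25f) {
                    double ref = j0((double)x);
                    float got = j0f(x);
                    float r = (float)ref;
                    /* spacing of floats around the reference value */
                    double ulp = nextafterf(r, INFINITY) - (double)r;

                    printf("x = %.2f  j0f = % .8e  ulp err ~ %.1f\n",
                        x, got, fabs((double)got - ref) / ulp);
            }
            return (0);
    }

Compile with -lm.  Away from the zeros of J_0 the last column stays
small; near a zero the reference value is tiny, so the same absolute
error shows up as a huge ULP figure, which is the effect described
above.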

> Whereas, as a scientist, having a Bessel function computed to within 10 
> ULP would be just fine.

As someone who has spent a portion of my career computing
Bessel functions, I am disinclined to agree with your
statement.

> I am looking at the man page of j0 for FreeBSD-8.3.  It says it conforms 
> to IEEE Std 1003.1-2001.  Does that specify a certain ULP?

I doubt that it would specify ULP for any function other than
perhaps sqrt().  I don't have IEEE Std 1003.1-2001 handy,
but I would guess that it simply states that the math functions
return a floating-point approximation to the true value.

> I am 
> searching around in this document, and I am finding nothing.  The 
> IEEE-754 document, on the other hand, seems rather rigid, but it 
> doesn't specifically talk about math functions other than sqrt.

-- 
Steve


