Date: Thu, 26 Jul 2012 11:52:21 +1000 (EST)
From: Bruce Evans <brde@optusnet.com.au>
To: Rainer Hurling <rhurlin@gwdg.de>
Cc: David Schultz <das@FreeBSD.org>, freebsd-current@FreeBSD.org,
    Bruce Evans <bde@FreeBSD.org>,
    Steve Kargl <sgk@troutmask.apl.washington.edu>
Subject: Re: Use of C99 extra long double math functions after r236148
Message-ID: <20120726111106.W1093@besplex.bde.org>
In-Reply-To: <50102DD3.6010700@gwdg.de>
References: <20120708124047.GA44061@zim.MIT.EDU>
    <210816F0-7ED7-4481-ABFF-C94A700A3EA0@bsdimp.com>
    <20120708233624.GA53462@troutmask.apl.washington.edu>
    <4FFBF16D.2030007@gwdg.de>
    <2A1DE516-ABB4-49D7-8C3D-2C4DA2D9FCF5@bsdimp.com>
    <4FFC412B.4090202@gwdg.de>
    <20120710151115.GA56950@zim.MIT.EDU>
    <4FFC5E5D.8000502@gwdg.de>
    <20120710225801.GB58778@zim.MIT.EDU>
    <50101EDE.6030509@gwdg.de>
    <20120725170004.GA72338@troutmask.apl.washington.edu>
    <50102DD3.6010700@gwdg.de>
On Wed, 25 Jul 2012, Rainer Hurling wrote:

> On 25.07.2012 19:00 (UTC+2), Steve Kargl wrote:
>> On Wed, Jul 25, 2012 at 06:29:18PM +0200, Rainer Hurling wrote:
>>>
>>> Many thanks to you three for implementing expl() with r238722 and
>>> r238724.
>>>
>>> I am not a C programmer, but would like to ask if the following
>>> example is correct and suitable as a minimalistic test of this new
>>> C99 function?
>>
>> It's not clear to me what you mean by test.  If expl() is
>> not available in libm, then linking the code would fail.
>> So, testing for the existence of expl() (for example,
>> in a configure script) is as simple as
>
> Sorry for not being clear enough.  I didn't mean testing for the
> existence, but for some comparable output between exp() and expl(),
> on a system with expl() available in libm.

This is basically what I do to test exp() (with a few billion cases
automatically generated and compared).  It is not sufficient for
checking expl(), except for consistency.  (It is assumed that expl()
is reasonably accurate.  If it is in fact less accurate than exp(),
this tends to show up in the comparisons.)

>> #include <math.h>
>>
>> long double
>> func(long double x)
>> {
>>     return (expl(x));
>> }
>>
>>> //-----------------------------------
>>> #include <stdio.h>
>>> #include <math.h>
>>>
>>> int main(void)
>>> {
>>>     double c = 2.0;
>>>     long double d = 2.0;
>>>
>>>     double e = exp(c);
>>>     long double f = expl(d);
>>>
>>>     printf("exp(%f) is %.*f\n", c, 90, e);
>>>     printf("expl(%Lf) is %.*Lf\n", d, 90, f);
>>
>> If you mean testing that the output is correct, then
>> asking for 90 digits is of little use.  The following
>> is sufficient (and may actually produce a digit or two
>> more than is available in the number):
>
> Ok, I understand.  I printed the 90 digits to be able to take a look
> at the decimal places; I did not expect to get valid digits in this
> area.

Use binary format (%a) for manual comparison.  Don't print any more
bits than the format has.
This is DBL_MANT_DIG (53) for doubles and LDBL_MANT_DIG (64 on x86)
for long doubles.  %a format is in nybbles and tends to group the
bits into nybbles badly.  See below on reducing problems from this.

Decimal format has to print about 3 more digits than are really
meaningful, to allow recovering the original value uniquely.  For
manual comparison, you need to print these extra digits and manually
round or ignore them as appropriate.  The correct number of extra
digits is hard to determine.  For the widest type, it is DECIMAL_DIG
(21) on x86.  The corresponding number of normally-accurate decimal
digits for long doubles is given by LDBL_DIG (18).  For floats and
doubles, this corresponds to FLT_DIG (6) and DBL_DIG (15).
Unfortunately, <float.h> doesn't define anything corresponding to
DECIMAL_DIG for the smaller types.  21 is a lot of digits, and noise
digits take a long time to determine and ignore (it's worse on
sparc64, where DECIMAL_DIG is 36).  I usually add 2 extra digits to
the number of normally-accurate digits.  This is sloppy; 3 is needed
in some cases, depending on MANT_DIG and the bits in log(2) and/or
log(10).

>> troutmask:fvwm:kargl[203] diff -u a.c.orig a.c
>> --- a.c.orig    2012-07-25 09:38:31.000000000 -0700
>> +++ a.c 2012-07-25 09:40:36.000000000 -0700
>> @@ -1,5 +1,6 @@
>>  #include <stdio.h>
>>  #include <math.h>
>> +#include <float.h>
>>
>>  int main(void)
>>  {
>> @@ -9,8 +10,8 @@
>>     double e = exp(c);
>>     long double f = expl(d);
>>
>> -   printf("exp(%f) is %.*f\n", c, 90, e);
>> -   printf("expl(%Lf) is %.*Lf\n", d, 90, f);
>> +   printf("exp(%f) is %.*f\n", c, DBL_DIG+2, e);
>> +   printf("expl(%Lf) is %.*Lf\n", d, LDBL_DIG+2, f);
>>
>>     return 0;
>>  }
>
> Thanks, I was not aware of DBL_DIG and LDBL_DIG.

Steve is sloppy and adds 2 also :-).  For long doubles, it is clear
that 3 are strictly needed, since DECIMAL_DIG is 3 more.
For most long double functions on i386, you need to switch the
rounding precision to 64 bits around calls to them, and also to do
any operations on the results, except printing them.  expl() is one
of the few large functions that does the switch internally.  So the
above should work (since it only prints), but (expl(d) + 0) should
round to the default 53-bit precision and thus give the same result
as exp(d).

>> If you actually want to test expl() to see if it is producing
>> a decent result, you need a reference solution that contains
>> a higher precision.  I use mpfr with 256 bits of precision.
>>
>> troutmask:fvwm:kargl[213] ./testl -V 2
>> ULP = 0.3863
>> x = 2.000000000000000000e+00
>> libm: 7.389056098930650227e+00 0x1.d8e64b8d4ddadcc4p+2
>> mpfr: 7.389056098930650227e+00 0x1.d8e64b8d4ddadcc4p+2
>> mpfr: 7.3890560989306502272304274605750078131803155705518\
>> 47324087127822522573796079054e+00
>> mpfr: 0x7.63992e35376b730ce8ee881ada2aeea11eb9ebd93c887eb59ed77977d109f148p+0
>>
>> The 1st 'mpfr:' line is produced after converting the results
>> of mpfr_exp() to long double.  The 2nd 'mpfr:' line is
>> produced by mpfr_printf(), where the number of printed
>> digits depends on the 256-bit precision.  The last 'mpfr:'
>> line is mpfr_printf()'s hex formatting.  Unfortunately, it
>> does not normalize the hex representation to start with
>> '0x1.', which makes comparison somewhat difficult.

Starting with 0x1 is only a good (just OK) normalization when the
number of binary digits is 1 mod the number of bits in a nybble.  So
it is good for doubles but bad for long doubles on x86 and for the
256-bit format.  To reduce this problem, promote to a common type and
print all the bits of that.  (I don't know how to promote to 256 bits
to match mpfr.)  Even when the first hex nybble is not normalized,
the bits of similar results line up, unless the results are not
actually similar, or in rare half-way cases.
Another method that I like to use, with better binary formats than %a
produced by my library, is to print values twice: first rounded to
their nominal number of binary bits (53 for doubles), and then with 8
extra bits (8 is a multiple of 4, to keep the nybbles lined up).
Sometimes you have extra precision and need to see what it is.  i386
has 11 bits of extra precision for long doubles internally, in all
cases provided its rounding precision is switched to 64 bits, and in
some other cases.  Actually, I would use 12 extra bits to compare
64-bit values with 53-bit ones, since one extra nybble provides
complete info without much expansion.

Bruce