Date:      Tue, 25 Sep 2012 08:28:34 +1000 (EST)
From:      Bruce Evans <brde@optusnet.com.au>
To:        Steve Kargl <sgk@troutmask.apl.washington.edu>
Cc:        numerics@freebsd.org
Subject:   Re: LDBL_MAX broken on i386
Message-ID:  <20120925073508.M2077@besplex.bde.org>
In-Reply-To: <20120924195121.GA22138@troutmask.apl.washington.edu>
References:  <20120925035402.C1433@besplex.bde.org> <20120924195121.GA22138@troutmask.apl.washington.edu>

On Mon, 24 Sep 2012, Steve Kargl wrote:

> On Tue, Sep 25, 2012 at 05:05:42AM +1000, Bruce Evans wrote:
>>
>> I just noticed that LDBL_MAX is completely broken on i386.
>
> It seems history is repeating itself.
>
> http://lists.freebsd.org/pipermail/freebsd-hackers/2005-August/013007.html
>
> I can't find the other email exchange I had with at
> least Warner about LDBL_MAX.  I distinctly remember
> asking why LDBL_MAX on i386 was 2^64 instead of 2^53.

2**16384 less 1 ulp.  It really does go up to this and not
the double limit of 2**1024 less 1 ulp.

I should have objected more strongly when the constants were
changed prematurely in 2002:

% RCS file: /home/ncvs/src/sys/i386/include/float.h,v
% Working file: float.h
% head: 1.17
% ----------------------------
% revision 1.9
% date: 2002/10/25 07:02:52;  author: imp;  state: Exp;  lines: +10 -9
% Use the correct values for LDBL_*.  Libc doesn't completely support
% long doubles at the moment (printf truncates them to doubles).
% However, long doubles to appear to work to the ranges listed in this
% commit on both -stable (4.5) and -current.  There may be some slight
% rounding issues with long doubles, but that's an orthogonal issue to
% these constants.
% 
% I've had this in my local tree for 3 months, and in my company's local
% tree for 15 months with no ill effects.
% 
% Obtained from: NetBSD
% Not likely to like it: bde
% ----------------------------

Only the changes related to the exponents in this were correct, but
the incorrect LDBL_MAX mostly worked at the time, since gcc rounded
it in 64-bit precision so it didn't overflow.

LDBL_MAX stopped working with this change to gcc.  From
gcc/ChangeLog-2003:

% 2003-07-01  Richard Henderson  <rth@redhat.com>
% 	    (blame to: Loren James Rittle  <ljrittle@acm.org>)
% 
% 	* real.h (ieee_extended_intel_96_round_53_format): New.
% 	* real.c (ieee_extended_intel_96_round_53_format): New.
% 	* config/i386/freebsd.h (SUBTARGET_OVERRIDE_OPTIONS): Use it
% 	for XFmode and TFmode.

This was well enough discussed in FreeBSD lists before it was committed
(or even developed?), and I agreed with it at the time, but didn't
understand its full awfulness.  gcc-4.2.1 still hard-configures the
flag that enables this: from gcc/config/freebsd.h:

% /* FreeBSD sets the rounding precision of the FPU to 53 bits.  Let the
%    compiler get the contents of <float.h> and std::numeric_limits correct.  */
% #undef  TARGET_96_ROUND_53_LONG_DOUBLE
% #define TARGET_96_ROUND_53_LONG_DOUBLE (!TARGET_64BIT)

so i386 always gets it and amd64 never does.  There seems to be no way to
get 53-bit rounding for expressions without getting it for literals.

I don't understand C++ and forgot about it.  Values in headers in
/usr/include/c++ seem to depend on builtin definitions like the
following:

% #define __FLT_EVAL_METHOD__ 2

Broken.  I knew this, but I just looked at the gcc code for it, and
the code there seemed reasonable (it seems to try to define it as -1
unless SSE is used).

% #define __LDBL_MAX__ 1.1897314953572316e+4932L

This is less than the value in <float.h>, so it has a chance of not
producing infinity.  It is also rounded to DBL_DECIMAL_DIG digits.
Reminder of the value in <float.h>  with formatting mangled so that
it lines up:

% #define   LDBL_MAX   1.1897314953572317650E+4932L  (20 digits)
% #define   LDBL_MAX   1.1897314953572318E+4932L  (rounded to 17 digits)

But the builtin value isn't actually the max.  It is possible to produce
the actual max, although not with gcc literals.  Since the extended
format has 64 - 53 = 11 extra mantissa bits, altogether there are
(2**11 - 1) representable values exceeding gcc's builtin max.  Comparison
of these values with __LDBL_MAX__ would give interesting bugs.

% #define __LDBL_MAX_EXP__ 16384
% #define __LDBL_HAS_INFINITY__ 1
% #define __LDBL_MIN__ 3.3621031431120935e-4932L

This is a power of 2, and the 17-digit decimal spelling rounds back to
exactly that power of 2 in 53-bit precision.  But clang's rounding of
the literal in 64-bit precision would give a different (non-power-of-2)
value.  Spelling it with 21 decimal digits would give the correct value
for everyone.

% #define __LDBL_HAS_QUIET_NAN__ 1
% #define __LDBL_HAS_DENORM__ 1
% #define __LDBL_EPSILON__ 2.2204460492503131e-16L

Like __LDBL_MIN__ (power of 2), but only correct with the default
53-bit rounding precision.  It of course differs by a factor of
2**11 from the value in <float.h>.

% #define __LDBL_DIG__ 15
% #define __LDBL_MANT_DIG__ 53
% #define __LDBL_MIN_EXP__ (-16381)
% #define __LDBL_MAX_10_EXP__ 4932
% #define __LDBL_DENORM_MIN__ 7.4653686412953080e-4948L

Like __LDBL_MIN__ (power of 2).

% #define __LDBL_MIN_10_EXP__ (-4931)

gcc also predefines __DECIMAL_DIG__ as 17, consistently with the above.

So gcc's values are perfectly consistent, but they are only correct if
no one ever creates values with the lower 11 bits set.

Bruce

