Date:      Fri, 6 Feb 2015 19:56:15 +1100 (EST)
From:      Bruce Evans <brde@optusnet.com.au>
To:        Anuranjan Shukla <anshukla@juniper.net>
Cc:        Simon Gerraty <sjg@juniper.net>, "freebsd-arch@freebsd.org" <freebsd-arch@freebsd.org>
Subject:   Re: Buggy sbspace() on 64bit builds?
Message-ID:  <20150206183036.S1246@besplex.bde.org>
In-Reply-To: <D0F95E21.2489D%anshukla@juniper.net>
References:  <D0F95E21.2489D%anshukla@juniper.net>

On Fri, 6 Feb 2015, Anuranjan Shukla wrote:

> The way sbspace() is done today, it stores the result of subtraction of
> socket buffer variables (u_int) in longs, and returns a long. If one of
> the subtractions results in a -ve result (the other being positive), it's
> seen as a large +ve and sbspace() ends up returning the wrong value.

Old versions used bogus casts to work in practice.

>    I'm not sure if this is enough of a corner case for consumers at large
> to experience it, but at Juniper some of our implementation uses sbspace
> directly and trips up on this for amd64 builds. Any thoughts on what a fix
> should be for this?
>
> ---------------------------
> long
> sbspace(struct sockbuf *sb)
> {
>        long bleft;
>        long mleft;
>
>        if (sb->sb_flags & SB_STOP)
>                return(0);
>        bleft = sb->sb_hiwat - sb->sb_cc;

This seems to be much more than a corner case.  It is normal for high
water marks to be exceeded (perhaps not so normal for sockets).  Whenever
the high water mark is exceeded, the subtraction overflows to a large
u_int value.  Then on 32-bit arches, assignment to long overflows again
but gives the correct value; on 64-bit arches it doesn't overflow
and the result is a large positive long value.
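
A minimal standalone sketch (hiwat/cc values made up, not from any kernel)
shows both behaviours:

#include <stdio.h>

int
main(void)
{
	unsigned int hiwat = 65536, cc = 70000;	/* mark exceeded by 4464 */
	long bleft;

	/* The u_int subtraction wraps to 4294962832 (2^32 - 4464). */
	bleft = hiwat - cc;

	/*
	 * 32-bit longs: the assignment wraps again and prints -4464.
	 * 64-bit longs: no wrap; prints 4294962832, the bogus "space".
	 */
	printf("%ld\n", bleft);
	return (0);
}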

>        mleft = sb->sb_mbmax - sb->sb_mbcnt;
>        return((bleft < mleft) ? bleft : mleft);
>
> }

This isn't quite the same as in -current from 2 months ago.  -current
also has an ifdef and different style bugs.

The working version in FreeBSD-5 is:

X /*
X  * How much space is there in a socket buffer (so->so_snd or so->so_rcv)?
X  * This is problematical if the fields are unsigned, as the space might
X  * still be negative (cc > hiwat or mbcnt > mbmax).  Should detect
X  * overflow and return 0.  Should use "lmin" but it doesn't exist now.
X  */
X #define	sbspace(sb) \
X     ((long) imin((int)((sb)->sb_hiwat - (sb)->sb_cc), \
X 	 (int)((sb)->sb_mbmax - (sb)->sb_mbcnt)))

-current has the same comment with the last 2 sentences removed.  These
sentences were a poor description of the hackish fix.  They were removed
together with changing the hackish fix to a bug.  The warning about the
bug remains.

All of the sb_ fields in the above are still u_int.  This implements the
intermediate C programmer mistake of using unsigned types for values that
are known to be nonnegative.  This gives overflow and sign extension bugs
here (strictly, non-overflow that is harmful, and non-sign extension that
is harmful).  Plain int should be enough for holding nonnegative values
for anyone.  It wasn't with 16-bit ints, but FreeBSD and possibly BSD
never supported any system with 16-bit ints.  32-bit ints are enough here.

The inner bogus casts compensate for the type botch by converting to int.
It has to know that the field types are at most unsigned with the same
size, and not get sign extension bugs as in the current version by
converting to a larger size.  This depends on no overflow in the
expression occurring when everything is done with plain ints.  This is
satisfied since 32-bit ints are enough for anyone.  Buffer bloat starts
with buffers much smaller than 2G.
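
A sketch of the effect, assuming the usual 2's complement,
implementation-defined conversion (values made up as above):

	unsigned int hiwat = 65536, cc = 70000;	/* mark exceeded by 4464 */
	int good;
	long bad;

	good = (int)(hiwat - cc);	/* -4464: the bogus cast recovers the sign */
	bad = (long)(hiwat - cc);	/* 4294962832 with 64-bit longs: the bug */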

Then imin() is used to compare the values.  imin() is hard to use since
it only works for args that are either int or representable as int, but
here they are known to be _mis_representable as int if we didn't already
cast them; imin() would do the same casts automatically, and this is
what is wanted here.  We do the conversions explicitly because working
around the type errors is hard enough to understand even when it is
explicit.
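
For reference, imin() is essentially the obvious inline (as in
<sys/libkern.h>):

static __inline int imin(int a, int b) { return (a < b ? a : b); }

so passing u_int args converts them to int implicitly, with the same
implementation-defined result as the explicit casts above.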

Finally, we cast to long because that is what sbspace() always returned.
This was only needed to support 16-bit ints.  Old BSD code used longs
excessively, and this has not been fixed in many internal APIs.

Old code using corrected API:

X #define	sbspace(sb) \
X 	lmin((long)(sb)->sb_hiwat - (sb)->sb_cc, \
X 	   (long)(sb)->sb_mbmax - (sb)->sb_mbcnt)

Other bugs in the old code:
- the comment about lmin() not existing has been false for 20+ years
- using lmin() makes only a cosmetic difference.  Delicate casts are
   still needed.  Implicit ones would work, but I wrote one explicit
   one.  On 64-bit arches, this just works, but on 32-bit arches we
   start with u_int and truncate to longs; this depends on the u_int
   values being much smaller than LONG_MAX, just like they are much
   smaller than INT_MAX.
- overflow is technically impossible, so the comment shouldn't mention it
- after detecting the technically impossible overflow, there is no need
   to return 0.  sbspace() returns a count, and a negative count works
   just as well as 0, since the API is not broken by using unsigned types.

In the current version, changing the local variables from long to int
would restore the delicate conversions, slightly more obfuscated by
using redundant explicit assignments instead of redundant explicit casts.
Or more clearly, cast all the field values to int.  There is no point in
keeping the variables long and casting to long instead -- on 32-bit machines,
this is equally delicate.  But keep the long return for compatibility.
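
That is, something like (a sketch of the int-locals variant; it keeps
the long return and the current structure):

long
sbspace(struct sockbuf *sb)
{
	int bleft, mleft;

	if (sb->sb_flags & SB_STOP)
		return (0);
	/* The u_int differences wrap; assignment to int recovers the sign. */
	bleft = sb->sb_hiwat - sb->sb_cc;
	mleft = sb->sb_mbmax - sb->sb_mbcnt;
	return (bleft < mleft ? bleft : mleft);
}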

Or using only unsigned types and overflow checking, write more complicated
code, like people had to do for counting before negative numbers were
invented: after also fixing some style bugs:

long
sbspace(struct sockbuf *sb)
{
	u_int bleft, mleft;

	if (sb->sb_flags & SB_STOP)
		return (0);
	bleft = sb->sb_hiwat < sb->sb_cc ? 0 : sb->sb_hiwat - sb->sb_cc;
	mleft = sb->sb_mbmax < sb->sb_mbcnt ? 0 : sb->sb_mbmax - sb->sb_mbcnt;
	return (min(bleft, mleft));
}

This still needs range analysis to ensure that the u_int returned by min()
is representable as a long.  This follows from everything being much smaller
than INT32_MAX, so we should have used int all along.

Recent bugs from overflow in 'int ticks' were not quite the opposite of this.
'ticks' really does need to be unsigned, but some differences of it probably
need to be signed.  It is signed because that is what it was originally and
that is probably still needed to avoid sign extension bugs for differences
of it, and its overflow used to be benign.  Now -fwrapv makes its overflow
benign again and hides overflow bugs elsewhere.

I used to think that overflow in conversions gives undefined behaviour.
Actually, it gives defined behaviour with an implementation-defined result.
The bogus casts in the above depend on the result being reasonable for
conversion from a large unsigned value to a signed integer (int or long)
of the same size.  A flag like -fwrapv might be needed to stop the
compiler defining its result as weird.  In the 1's complement case, the
reasonable result for a difference of -1U is -0 = 0, but we would prefer
a result of -1.  This is not a problem since both -0 and -1 mean no space.
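
For example (assuming the usual 2's complement result for the conversion):

	unsigned int u = -1U;	/* UINT_MAX */
	int i;

	i = (int)u;		/* implementation-defined; -1 on 2's complement */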

Bruce


