Date:      Fri, 20 Sep 1996 03:43:17 +0000
From:      steve farrell <spfarrel@midway.uchicago.edu>
To:        newton@communica.com.au (Mark Newton)
Cc:        imp@village.org (Warner Losh), security@freebsd.org
Subject:   Re: comments on the SYN attack 
Message-ID:  <199609200343.DAA03778@phaedrus.uchicago.edu>
In-Reply-To: Your message of "Fri, 20 Sep 1996 15:12:43 +0930." <9609200542.AA11812@communica.com.au> 

in the statement to the freebsd list, paul traina said that shortening
your timeout on the SYN too much ran the risk of the administrator
'doing the attacker's job', since this itself will deny service.
the suggested killing of SYNs based on age has the effect of
dynamically scaling the timeout based upon the load (attack)
situation (in fact, i think the aging algorithm would probably be
much easier to implement by simply reducing the timeout based on
the size of the queue!)  if both of these statements are true, then
isn't this method *guaranteed* to actually make the attacker's job
easier? (just by means of attacking, the target OS effectively
cuts back its timeout.  you can't argue that you'd put a lower bound
on the timeout, of course, because obviously at the point this
lower bound is hit you're guaranteed to be in a full-queue,
unserviceable state, so it really doesn't matter what you do then.)
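the "reduce the timeout based on the size of the queue" idea above can be made concrete with a small sketch. all names and constants here are hypothetical, not from any actual TCP stack; it just shows the shape of the scaling:

```python
# sketch of scaling the SYN timeout by queue occupancy -- all names
# and constants are made up for illustration, not from a real stack.

BASE_TIMEOUT = 75.0   # the classic 75-second SYN timeout
MIN_TIMEOUT = 1.0     # a floor (moot under a full-queue attack, as noted)
QUEUE_LIMIT = 1000    # maximum half-open entries (hypothetical)

def effective_timeout(queue_len: int) -> float:
    """Shrink the SYN timeout linearly as the queue fills."""
    fill = queue_len / QUEUE_LIMIT
    return max(MIN_TIMEOUT, BASE_TIMEOUT * (1.0 - fill))

# an empty queue keeps the full 75s; a nearly full one times out fast
print(effective_timeout(0))     # 75.0
print(effective_timeout(500))   # 37.5
print(effective_timeout(1000))  # 1.0
```

note that this is exactly the amplification worry: the attacker's own load is what drives the timeout down.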

but what about killing randomly?   first: i think calling the killing
'random' is misleading, since the longer a packet remains on the queue,
the more likely it is to get killed  (if 1% of packets are killed every
second, then a packet which hangs out on the queue for 100 secs will
probably get killed, whereas one that hangs out for 10 secs probably
will not.)   -- so if there is an effective difference, it's
odd and probably not very important.
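the arithmetic behind the 1%-per-second example is simple: with a per-second kill probability p, survival after t seconds is (1 - p)**t. a quick check (hypothetical function name):

```python
# survival of a queue entry under a fixed per-second random kill rate:
# with kill probability p each second, survival after t seconds is
# (1 - p) ** t -- so long-lived entries are far more likely to die.

def survival(p_per_sec: float, seconds: float) -> float:
    return (1.0 - p_per_sec) ** seconds

# with 1% killed per second:
print(round(survival(0.01, 10), 3))    # ~0.904 -- a 10s entry probably lives
print(round(survival(0.01, 100), 3))   # ~0.366 -- a 100s entry probably dies
```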

conclusion:  i think both proposals can be logically reduced to
shortening the timeout on the queue, and as has been pointed out,
shortening the timeout too much does the attacker's job for him.
in the worst case, these proposals could lead to an amplification
situation where the attack is the signal, and the server's proposed
tcp/ip modifications respond to that signal with another measure
further increasing the denial of service.  either cure is (by my
late-night analysis) worse than the disease.

so now the real question: which is worse =)

my bet goes with the random-killer:

one of the assumptions for the randomness is that 'bad' and 'good'
SYNs will get equal treatment and be killed in equal proportions.  if
this is the case, then it will do nothing to the ratio of 'good'
to 'bad' packets in the queue, so i think this could be considered
a no-gainer.  in contrast, the timeout-based method would probably
kill more 'bad' packets than 'good' ones, assuming that 'good' ones
are generally fulfilled in a timely manner.

furthermore, with the random method, there will be an additional
small reduction in service to 'good' connections since some are
granted premature deaths.

some late-night thoughts from: steve farrell



>Warner Losh wrote:
>
> > However, my gut tells me that the random victim will give better
> > behavior than the shoot the oldest one.
> 
>Oh?  I believe that statistically speaking the two cases provide an
>identical chance of booting away an individual packet: Consider that the
>deterministic case is precisely equivalent to the random case with a
>pseudorandom number generator which just coincidentally always returns
>the identity of the oldest SYN packet...
>
>[ indeed, if you're looking at an individual packet, the present 
>  implementation is identical as well:  it just "randomly" drops the
>  "youngest" packet on the queue, ie: the one that has arrived at a
>  time when there aren't enough resources to keep it ]
>
> > If you have a queue length of 1000, and can deliver 500 bogus SYNs in
> > the 200mS that it takes, then you'd have a 60% chance of not dropping
> > the good SYN (.999 ^ 500 ≈ 60%).  If you can deliver 1000 bogus
> > SYNs in that time, then the deterministic method would have a 0%
> > chance, and the random method would have a 37% chance of surviving (if
> > bc on my machine for .999^1000 can be trusted).
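those quoted figures are easy to reproduce. the sketch below assumes a queue of 1000 slots where each arriving bogus SYN evicts one queued entry uniformly at random (hypothetical function name, not from any stack):

```python
# reproducing the quoted figures: with a queue of 1000, each arriving
# bogus SYN evicts a random entry, hitting any given one with
# probability 1/1000, so a good SYN survives k arrivals with
# probability (1 - 1/1000) ** k.

def survive_random_drop(queue_size: int, arrivals: int) -> float:
    return (1.0 - 1.0 / queue_size) ** arrivals

print(round(survive_random_drop(1000, 500), 2))   # ~0.61, the "60%" figure
print(round(survive_random_drop(1000, 1000), 2))  # ~0.37
```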
>
>You're assuming that a SYN is going to spend all of that 200mSec in the
>queue (remember, I provided that number as an upper-bound for the purposes
>of discussion, not as an axiomatic interval symptomatic of catastrophic
>failure!).  1000 bogus SYNs in 200mSec is something that takes on the
>order of 200kbps of bandwidth to support.  On a 200kbps channel, I'd be
>willing to wager that the vast majority of your SYNs are answered with
>SYN-ACKs (and thereby removed from this equation) within 50mSec or less,
>meaning that the oldest SYN would almost always be one with an unreachable
>return address.
>
>Keep in mind that this is making the attacker work a lot harder too -
>He only needs to send about five packets per second to keep queues 
>congested under present implementations!
>
>By throwing away the oldest SYN, you're effectively extending the present
>algorithm by adding a variable-length timeout instead of a 75-second timeout.
>The size of that variable-length timeout is dependent on the demand
>being placed on your system.  That dynamism is biased to provide newly
>arrived legitimate SYNs a fair chance of survival before they're treated
>as bogus and thrown away due to high demand.  Given that you'll end up
>throwing away good SYNs if you're short on resources anyway, I'd rather
>select which good ones get chucked on a best-efforts basis instead of a
>random basis.
>
>The only way you'd throw away a good SYN is if its source address was 
>on the other side of a link that was so slow that it was indistinguishable
>from an unreachable host for the purposes of SYN-warfare.  At that point,
>it becomes the "best choice" for droppage for a (relatively) good reason,
>rather than a random impulse.
>
>By throwing away a random SYN, you're effectively treating all packets
>as bogus until proven otherwise, *AND* creating the possibility that you'll
>randomly boot 'em up the arse before they've had a chance to provide you
>with that proof.  Sure, if you look at a single packet in isolation after
>declaring that "all things are equal" then you'll probably give an 
>individual packet a better chance of survival;  But that approach ignores
>the fact that all things *aren't* equal, and that it is highly likely
>that the oldest packet in the SYN queue is there because the SYN-ACK has
>been dropped by a router because its destination is unreachable.
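the disagreement in the two paragraphs above is easy to make concrete with a toy monte-carlo sketch. everything here is hypothetical (made-up queue size, attack rate, and service time); the point is only that when good SYNs complete faster than the queue churns, drop-oldest mostly evicts bogus entries:

```python
# toy simulation of drop-oldest vs drop-random under a SYN flood.
# all parameters are invented for illustration.  bogus SYNs never
# complete; good SYNs complete (SYN-ACK answered) after SERVICE ticks.
import random

QUEUE_SLOTS = 100   # half-open connection slots (made up)
SERVICE = 5         # ticks a good SYN needs before it completes
ATTACK_RATE = 10    # bogus SYNs arriving per tick (made up)
TICKS = 200

def simulate(policy: str, seed: int = 0) -> float:
    """Fraction of good SYNs that complete under the given drop policy."""
    rng = random.Random(seed)
    queue = []                      # entries: {"age": int, "good": bool}
    good_sent = good_done = 0
    for _ in range(TICKS):
        # one good SYN plus a burst of bogus ones arrive each tick
        good_sent += 1
        arrivals = [{"age": 0, "good": True}]
        arrivals += [{"age": 0, "good": False} for _ in range(ATTACK_RATE)]
        for entry in arrivals:
            if len(queue) >= QUEUE_SLOTS:       # full: evict before adding
                if policy == "oldest":
                    victim = max(range(len(queue)),
                                 key=lambda i: queue[i]["age"])
                else:                           # "random"
                    victim = rng.randrange(len(queue))
                queue.pop(victim)
            queue.append(entry)
        # age everything; good SYNs old enough to be answered complete
        remaining = []
        for entry in queue:
            entry["age"] += 1
            if entry["good"] and entry["age"] >= SERVICE:
                good_done += 1
            else:
                remaining.append(entry)
        queue = remaining
    return good_done / good_sent

print("drop-oldest completion rate:", simulate("oldest"))
print("drop-random completion rate:", simulate("random"))
```

with these (arbitrary) numbers the oldest entry is almost always bogus, so drop-oldest lets nearly every good SYN complete while drop-random evicts a noticeable fraction of them; crank ATTACK_RATE high enough that the queue churns faster than SERVICE and the advantage flips, which is Warner's scenario.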
>
> > Given that SYNs are
> > retransmitted, then you'd be able to get after 2 tries on the average.
>
>I'd suggest that the deterministic solution would let you in on 1 try.
>
>Usually :-)
>
>Your implementation is also computationally expensive:  Random numbers
>aren't cheap, and I wouldn't like to spend too much of my life calculating
>thousands of them per second...
>
>    - mark
>
>---
>Mark Newton                               Email: newton@communica.com.au
>Systems Engineer                          Phone: +61-8-8373-2523
>Communica Systems                         WWW:   http://www.communica.com.au
>


