From owner-freebsd-security Thu Sep 19 23:12:46 1996
Return-Path: owner-security
Received: (from root@localhost) by freefall.freebsd.org (8.7.5/8.7.3) id XAA17556 for security-outgoing; Thu, 19 Sep 1996 23:12:46 -0700 (PDT)
Received: from rover.village.org (rover.village.org [204.144.255.49]) by freefall.freebsd.org (8.7.5/8.7.3) with ESMTP id XAA17485 for ; Thu, 19 Sep 1996 23:12:37 -0700 (PDT)
Received: from rover.village.org (localhost [127.0.0.1]) by rover.village.org (8.7.5/8.6.6) with ESMTP id AAA26892; Fri, 20 Sep 1996 00:11:58 -0600 (MDT)
Message-Id: <199609200611.AAA26892@rover.village.org>
To: newton@communica.com.au (Mark Newton)
Subject: Re: comments on the SYN attack
Cc: security@FreeBSD.org
In-reply-to: Your message of Fri, 20 Sep 1996 15:12:43 +0930
Date: Fri, 20 Sep 1996 00:11:58 -0600
From: Warner Losh
Sender: owner-security@FreeBSD.org
X-Loop: FreeBSD.org
Precedence: bulk

: Warner Losh wrote:
:
: > However, my gut tells me that the random victim will give better
: > behavior than shooting the oldest one.
:
: Oh?  I believe that statistically speaking the two cases provide an
: identical chance of booting away an individual packet: consider that
: the deterministic case is precisely equivalent to the random case with
: a pseudorandom number generator which just coincidentally always
: returns the identity of the oldest SYN packet...

I don't think so.  In the deterministic case, if you get 1000 SYNs when
you have a queue that is 1000 long, you'd drop all 1000 of the queued
SYNs every time.  With "randomly kill one", once you get into the queue
you have about a 35% chance of survival after the next 1000 SYNs come
in.  That's not quite the same thing.

: [ indeed, if you're looking at an individual packet, the present
:   implementation is identical as well: it just "randomly" drops the
:   "youngest" packet on the queue, ie: the one that has arrived at a
:   time when there aren't enough resources to keep it ]

That is a possibility.
However, it won't do that every time, and SYNs are retransmitted, so
unless you always drop all of the legitimate SYNs, you'll still be able
to make connections.

: > If you have a queue length of 1000, and can deliver 500 bogus SYNs
: > in the 200ms that it takes, then you'd have a 60% chance of not
: > dropping the good SYN (.999^500 ~= 60%).  If you can deliver 1000
: > bogus SYNs in that time, then the deterministic method would have a
: > 0% chance, and the random method would have a 35% chance of
: > surviving (if bc on my machine for .999^1000 can be trusted).
:
: You're assuming that a SYN is going to spend all of that 200ms in the
: queue (remember, I provided that number as an upper bound for the
: purposes of discussion, not as an axiomatic interval symptomatic of
: catastrophic failure!).

No, I don't think that I am.  I'm saying that if, for a given time, you
have a queue length of 1000 and you get 500 packets, then you have a
60% chance of survival.  If you have 1000 packets come in, then you'll
have a 35% chance of survival.

: 1000 bogus SYNs in 200ms is something that takes of the order of
: 200kbps of bandwidth to support.

Each SYN is 40 bytes long (IP + TCP headers).  You'd need 5000 of these
packets per second, which is 200 kB/s, or 1600 kbps.  That is faster
than a T1 line, which runs at about 1544 kbps.

: On a 200kbps channel, I'd be willing to wager that the vast majority
: of your SYNs are answered with SYN-ACKs (and thereby removed from this
: equation) within 50ms or less, meaning that the oldest SYN would
: almost always be one with an unreachable return address.

Agreed.  If the above rate is going full throttle, then in 50ms you'd
have 250 packets delivered, which puts the chance of dropping a
legitimate SYN in a sea of bogons at about 22%.  Your method wouldn't
drop it at all in this case.
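For the record, the arithmetic is easy to check with a quick script.
This is only a back-of-the-envelope model (each arriving bogus SYN
evicts one of the 1000 queue entries uniformly at random); the names
here are made up for illustration, not from any kernel source:

```python
# Sanity check on the numbers above.  Model: a full queue of QLEN
# slots, where each arriving bogus SYN evicts one slot chosen
# uniformly at random.
QLEN = 1000

def survival(n_syns, qlen=QLEN):
    """Chance that a queued SYN outlives n_syns random evictions."""
    return (1.0 - 1.0 / qlen) ** n_syns

print("survive 500 SYNs:    %.0f%%" % (100 * survival(500)))           # ~61%
print("survive 1000 SYNs:   %.0f%%" % (100 * survival(1000)))          # ~37%
print("dropped in 250 SYNs: %.0f%%" % (100 * (1 - survival(250))))     # ~22%

# Bandwidth for 1000 40-byte SYNs (IP + TCP headers) every 200 ms:
pkts_per_sec = 1000 / 0.200
print("attack bandwidth: %d kbps" % (pkts_per_sec * 40 * 8 / 1000))    # 1600
```

Note that .999^1000 comes out closer to 37% than 35%, so bc was being
slightly generous to the attacker.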
: Keep in mind that this is making the attacker work a lot harder too -
: he only needs to send about five packets per second to keep queues
: congested under present implementations!

Agreed!

: By throwing away the oldest SYN, you're effectively extending the
: present algorithm by adding a variable-length timeout instead of a
: 75-second timeout.

True.  However, you may also make that timeout too short when SYNs are
coming in much faster than your queue can hold (say, a 100-slot queue
under similar conditions).

: The size of that variable-length timeout is dependent on the demand
: being placed on your system.  That dynamism is biased to provide newly
: arrived legitimate SYNs a fair chance of survival before they're
: treated as bogus and thrown away due to high demand.  Given that
: you'll end up throwing away good SYNs if you're short on resources
: anyway, I'd rather select which good ones get chucked on a
: best-efforts basis instead of a random basis.

I'm not sure I'd call it best effort.  It would be chucked on a strict
FIFO basis, which has different statistical properties than one where a
random entry is discarded.  At least according to the queueing theory
class I took years ago in college.

: By throwing away a random SYN, you're effectively treating all packets
: as bogus until proven otherwise, *AND* creating the possibility that
: you'll randomly boot 'em up the arse before they've had a chance to
: provide you with that proof.

That is exactly correct.  Trust no one.  If there are too many of 'em,
get rid of a few.  The legitimate ones will be back.  The bogons won't,
and good riddance.
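The FIFO-versus-random difference shows up clearly in a toy simulation.
This is a sketch with made-up parameters, not the actual implementation;
it only tracks where the one legitimate SYN sits in a full queue:

```python
import random

random.seed(1996)  # reproducible runs

# Toy model (not the real kernel code) of the two eviction policies:
# one legitimate SYN sits at the tail of a full QLEN-slot FIFO queue
# while FLOOD bogus SYNs arrive, each forcing one eviction.
QLEN, FLOOD, TRIALS = 1000, 1000, 1000

def survives(policy):
    good_pos = QLEN - 1  # the good SYN just arrived at the tail
    for _ in range(FLOOD):
        victim = 0 if policy == "oldest" else random.randrange(QLEN)
        if victim == good_pos:
            return False     # our SYN was the one evicted
        if victim < good_pos:
            good_pos -= 1    # entries behind the victim shift toward the head
        # the arriving bogus SYN takes the freed tail slot
    return True

for policy in ("oldest", "random"):
    rate = sum(survives(policy) for _ in range(TRIALS)) / TRIALS
    print("%-6s: %3.0f%% survival" % (policy, 100 * rate))
```

With these parameters drop-oldest gives 0% survival, matching the "drop
all 1000 every time" point above, while drop-random lands near the
.999^1000 ~= 37% figure.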
: Sure, if you look at a single packet in isolation after declaring
: that "all things are equal" then you'll probably give an individual
: packet a better chance of survival; but that approach ignores the
: fact that all things *aren't* equal, and that it is highly likely
: that the oldest packet in the SYN queue is there because the SYN-ACK
: has been dropped by a router because its destination is unreachable.

That holds only if you have queue lengths long enough to ensure that,
under load, the legitimate SYNs don't become the oldest ones on the
queue.

: Your implementation is also computationally expensive: random numbers
: aren't cheap, and I wouldn't like to spend too much of my life
: calculating thousands of them per second...

"Good enough" random numbers are likely cheap enough: a few adds and
shifts.  Most fast machines can do that quickly.  They are poor for
cryptographic purposes, but reasonably well suited for choosing
victims.  Since they change so quickly, it would be hard to time
arrival times such that the random number generator is a factor.

Thinking about it some more, I think that a queue length of 1000 may be
way too big.  The search time on it would be horrible if not hashed.

I think that someone with a more recent exposure to queueing theory
might be able to do a better job at a theoretical analysis of this
problem and might show which is better.  At this point, however, we are
in agreement on the following:

Some discarding of packets will help, not hurt, the situation.  Further
tests may be needed under fire to determine which will be better.  The
best method of discarding is a matter of debate and likely would make a
good candidate for research.

Warner
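P.S.  The sort of "a few adds and shifts" generator meant above would
look something like this sketch (illustrative only, expressed in Python
for clarity; the shift constants are ordinary shift-register folklore,
not anything taken from BSD):

```python
# Sketch of a "good enough" victim picker: a 32-bit shift-register
# scramble plus one add per call -- no multiply, no division beyond
# the final reduction.  Fine for choosing eviction victims, useless
# for cryptography.
MASK = 0xffffffff

class CheapRand:
    def __init__(self, seed=0x2545F491):
        self.state = seed & MASK

    def next(self):
        s = self.state
        s ^= (s << 13) & MASK   # three shift/xor steps stir the bits
        s ^= s >> 17
        s ^= (s << 5) & MASK
        self.state = (s + 0x9E3779B9) & MASK  # one add keeps 0 from sticking
        return self.state

    def pick_victim(self, qlen):
        # cheap modulo reduction is fine for victim selection
        return self.next() % qlen

r = CheapRand()
print([r.pick_victim(1000) for _ in range(5)])  # five indices in [0, 1000)
```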