Skip site navigation (1)Skip section navigation (2)
Date:      Mon, 24 Apr 2000 12:25:22 -0400 (EDT)
From:      Robert Watson <rwatson@FreeBSD.ORG>
To:        Jonathan Lemon <jlemon@flugsvamp.com>
Cc:        net@FreeBSD.ORG
Subject:   Re: netkill - generic remote DoS attack (fwd) 
Message-ID:  <Pine.NEB.3.96L.1000424121428.15998C-100000@fledge.watson.org>
In-Reply-To: <200004241547.KAA16081@prism.flugsvamp.com>

next in thread | previous in thread | raw e-mail | index | archive | help
On Mon, 24 Apr 2000, Jonathan Lemon wrote:

> Given a quick look at netkill, it appears that it mainly acts as
> a DoS by establishing a connection, then dropping its end, leaving
> the server to maintain the connection until TCP gives up.  The server
> will wind up with the connection in either ESTABLISHED or FIN_WAIT_1
> state, depending on the traffic patterns, and what the attacker does.

Sounds right to me, although the technique really applies to any part of
the state machine--presumably you select one of those states because they
provide the longest timeouts before automated garbage collection or
connection verification ``keepalive''.  At this point with the ESTABLISHED
case, as Louis has pointed out, it is generally assumed that the
application level will take care of detecting dead end-hosts not
responding in a timely manner.  However, if the application layer does do
this and perform a close(), the state still hangs around in the FIN_WAIT1
stage for a long time, so I don't see any purely application layer
solution helping us.

The question seems to be: is it acceptable to more agressively manage
state and timeouts, possible limited to specific circumstances, so as to
improve the performance of the FreeBSD TCP stack under conditions of load
similar to this attack (i.e., high levels of state consumption and/or
exhaustion), without substantially breaking normal behavior.

> As was pointed out, it is difficult to discern when a connection in 
> this state is the result of legitimate traffic, or the result of an 
> attack.  However, a strong indication that the machine is under 
> attack is a large number of connections from the same IP address.
> Unlike a SYN flood, the attacker must complete the TCP handshake, so 
> the server will have the IP address of the attacker (leaving the issue
> of packet sniffing aside at the moment).

I'd agree that the consistent IP address assumption is certainly valid for
a single connection, and likely to be valid for multiple connections given
the common attack environments.  You can imagine other environments, but
it's probably not worth discussing them in detail.

> RFC 2140 (which FreeBSD implements in some form) provides for a shared
> control block for TCP connections.  Would it make sense to add some kind
> of connection counter to this structure as well?  Then, armed with this
> information (and if the sysctl knob allows), the server can make decisions
> for connections from a certain host:
> 
> 	1. refuse to accept any more connections
> 	2. drop existing TCP connections
> 	3. accelerate the timeout 
> 
> This might catch most of the "ankle-biter" attacks without interfering
> with normal traffic, but also might be a problem for people who are 
> connecting through some type of NAT service, where all connections appear
> to be from a single host.

I see this as effectively a show-stopper in most environments, given the
predominence of:

1) Network Address Translation
2) Firewalls
3) Proxy caches in front of large networks.
4) Web performance testers that use a small number of machines to simulate
   high load levels for marketing foo

However, it still might be a useful piece of functionality for many
environments.  You can imagine:

sysctl -w net.inet.tcp.max_connections_per_ip=20

Or the like.  We'd have to handle the FIN states carefully, as technically
they are not connections (application level has waved goodbye on both
sides in normal situations) but state is still consumed for a substantial
period of time.

> Unfortunately, the shared information is currently stored in a 
> route entry, so this would require a little restructuring of how
> things work.

Is RTT estimate measurement currently stored per-host or per-connection? 
Ideally, per-host, in which case wherever that is stored might be the
right place to look (routing entries?).

I'm not to familiar with that aspect of our TCP stack, unfortunately, and
currently in a time crunch.

  Robert N M Watson 

robert@fledge.watson.org              http://www.watson.org/~robert/
PGP key fingerprint: AF B5 5F FF A6 4A 79 37  ED 5F 55 E9 58 04 6A B1
TIS Labs at Network Associates, Safeport Network Services



To Unsubscribe: send mail to majordomo@FreeBSD.org
with "unsubscribe freebsd-net" in the body of the message




Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?Pine.NEB.3.96L.1000424121428.15998C-100000>