Date: Thu, 19 Feb 2004 16:02:09 +0300
From: Andrew Riabtsev <resident@b-o.ru>
To: Gleb Smirnoff <glebius@cell.sick.ru>
Cc: freebsd-net@freebsd.org
Subject: Re[2]: ng_netflow: request for feature
Message-ID: <199102170052.20040219160209@b-o.ru>
In-Reply-To: <20040219121811.GB46148@cell.sick.ru>
References: <20040121114502.GC17802@cell.sick.ru> <20040218124958.GB40340@cell.sick.ru> <10796883310.20040219143402@b-o.ru> <20040219121811.GB46148@cell.sick.ru>
Hi Gleb,

Thursday, February 19, 2004, 3:18:11 PM, you wrote:

GS> On Thu, Feb 19, 2004 at 02:34:02PM +0300, Andrew Riabtsev wrote:
A>> GS> a port of ng_netflow has just been committed to the ports
A>> GS> tree. It builds both on STABLE and CURRENT, and was tested
A>> GS> to work on really busy routers.
A>> GS> As before, I'd be glad for any kind of feedback: ideas,
A>> GS> patches and anything else. Thanks.
A>>
A>> GS> (Also crossposted to -net.)
A>>
A>> A few requests:
A>>
A>> 1. Is it possible to add to the module the ability to enforce the rule
A>> (accounted = passed), or in other words (not accounted = not passed)?

GS> In most cases the answer is no. In 90% of cases ng_netflow is used on
GS> top of an ng_ether(4) node, which passes all data coming off the wire.
GS> All packet filtering with the help of ipfw or ipf is done later.
GS> You can try some workarounds using ng_bpf(4) between ng_netflow
GS> and ng_tee(4), but I have not tested such configurations.

I don't mean filtering, sorry. I'm talking about the ability to connect
ng_netflow directly to the ng_ether upper and lower hooks, so that if
something happens to a packet and it is not accounted (due to memory
exhaustion or anything else), that packet is not passed on to the upper
layer or out onto the Ethernet.

A>> 2. And there is one possible vulnerability. I tried ng_ipacct
A>> before (as I understand it, the ng_netflow source code is based on
A>> ng_ipacct) and found the following problem. No matter how much free
A>> memory the kernel has, sooner or later all of it will be filled with
A>> "garbage" if a "smart" host generates traffic like the following:
A>>
A>> 14:06:31.194057 95.18.81.203 > 81.176.66.50: icmp: echo request
A>> 14:06:31.194058 95.18.81.203 > 81.176.66.50: icmp: echo request
A>> 14:06:31.194059 95.18.81.203 > 81.176.66.50: icmp: echo request
A>> 14:06:31.194060 95.18.81.203 > 81.176.66.50: icmp: echo request
A>> 14:06:31.194061 95.18.81.203 > 81.176.66.50: icmp: echo request
A>> ...
A>> and so on
A>> ...
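For context, the topology Gleb describes (ng_netflow inserted between the ng_ether lower and upper hooks) is usually wired up with ngctl(8) roughly like this. This is only a sketch: the interface name fxp0 and the collector address 10.0.0.1:4444 are placeholders, and hook names follow the ng_netflow(4) manual page.

```shell
#!/bin/sh
# Wire ng_netflow between fxp0's lower (wire) and upper (stack) hooks,
# and export flows to a collector over UDP. Interface and collector
# address are placeholders for illustration.
/usr/sbin/ngctl -f- <<-SEQ
	mkpeer fxp0: netflow lower iface0
	name fxp0:lower netflow
	connect fxp0: netflow: upper out0
	mkpeer netflow: ksocket export inet/dgram/udp
	msg netflow:export connect inet/10.0.0.1:4444
SEQ
```

With this wiring every frame passing between the wire and the IP stack traverses the netflow node, which is exactly why Andrew asks whether an unaccounted packet could be dropped there instead of forwarded.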
GS> Either you have incorrectly described the situation, or you are not
GS> right at all. The tcpdump output you showed is the perfect situation
GS> for any traffic accounting software, because it does not generate any
GS> new entries; it just increments the byte and packet counters on one
GS> entry.

Sorry, it should look like this:

14:06:31.194057 95.18.81.203 > 81.176.66.50: icmp: echo request
14:06:31.194058 95.18.81.203 > 81.176.66.51: icmp: echo request
14:06:31.194059 95.18.81.203 > 81.176.66.52: icmp: echo request
14:06:31.194060 95.18.81.203 > 81.176.66.53: icmp: echo request
14:06:31.194061 95.18.81.203 > 81.176.66.54: icmp: echo request
                                          ^

I was writing the tcpdump output by hand and made that mistake. Each of
these packets creates a new record in the accounting table (the "garbage"
I was talking about), and it is no problem to create a huge number of such
records in a few seconds. I hope it is clear now. The point of the exploit
is not just a lot of packets, but a lot of destination addresses in a
short time.

A>> It could be ICMP echo requests, TCP SYNs, UDP, or anything else; the
A>> point is to generate as many outgoing packets as possible, and
A>> sometimes a few hosts manage it. The result is huge lag (every packet
A>> goes through a huge accounting hash table), and very soon the box
A>> becomes unable to do

GS> I am using ng_ipacct in production, and I have never faced such a
GS> situation. Maybe you do not checkpoint/clear the accounting database,
GS> and it grows to a huge size over some hours? The normal checkpoint
GS> interval is 15 minutes.

Yes, I even used a 5-minute period.

A>> any tasks, even routing. Is it possible to add the ability to limit
A>> the number of records in the accounting hash table per source address?
A>> With the policy (not accounted = not passed) this would protect the
A>> box from this kind of attack.
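The distinction the corrected tcpdump illustrates can be shown in a few lines. This is a toy model, not the kernel's data structure: the flow key here mimics what a flow-accounting table keys on (source, destination, protocol), and all names are illustrative.

```python
# Toy flow-accounting table: repeating one flow only bumps counters on
# a single record, but sweeping the destination address creates a new
# record per packet -- the "garbage" growth described above.
from collections import defaultdict

def account(table, src, dst, proto, size):
    """Increment byte/packet counters for the packet's flow record."""
    rec = table[(src, dst, proto)]
    rec["packets"] += 1
    rec["bytes"] += size

table = defaultdict(lambda: {"packets": 0, "bytes": 0})

# Same destination 1000 times: still exactly one record.
for _ in range(1000):
    account(table, "95.18.81.203", "81.176.66.50", "icmp", 84)
print(len(table))   # 1

# 1000 distinct destinations: 1000 new records in one burst.
for i in range(1000):
    account(table, "95.18.81.203", f"10.0.{i // 256}.{i % 256}", "icmp", 84)
print(len(table))   # 1001
```

The attack cost is symmetric for the sender (one small packet per record) but not for the accounting box, which pays a table entry per packet.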
A>> Limiting the amount of memory used by the accounting table, to keep
A>> it from growing into a huge laggy monster, leads to the accounting
A>> table filling up with "garbage" and no more traffic being accounted
A>> until the next checkpoint comes.

GS> Such things will lead to loss of accounting data. However, I have
GS> never faced such a problem. Maybe your problem is a slow box itself?
GS> I'm running ng_netflow on 5 FE interfaces that sometimes run at wire
GS> speed with up to 3000 simultaneous flows, and I see no real load on
GS> it. It is some Athlon XP.

-- 
Best regards,
Andrew                            mailto:resident@b-o.ru
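The safeguard Andrew is requesting can be sketched as follows. Note this is a hypothetical design, not an existing ng_netflow knob: each source address may own at most a fixed number of flow records, and under the proposed "not accounted = not passed" policy a packet that would exceed the cap is dropped rather than creating a new record. The cap value and all names are illustrative.

```python
# Hypothetical per-source record cap for a flow-accounting table.
# account() returns True if the packet was accounted (and may pass),
# False if it would exceed the source's record cap (and is dropped).
from collections import defaultdict

MAX_RECORDS_PER_SRC = 100   # illustrative limit, not a real sysctl

class FlowTable:
    def __init__(self):
        self.records = {}                # (src, dst, proto) -> counters
        self.per_src = defaultdict(int)  # record count owned by each src

    def account(self, src, dst, proto, size):
        key = (src, dst, proto)
        rec = self.records.get(key)
        if rec is None:
            if self.per_src[src] >= MAX_RECORDS_PER_SRC:
                return False             # cap reached: not accounted
            rec = self.records[key] = {"packets": 0, "bytes": 0}
            self.per_src[src] += 1
        rec["packets"] += 1
        rec["bytes"] += size
        return True

ft = FlowTable()
# A destination sweep from one source stops creating records at the cap.
passed = 0
for i in range(1000):
    if ft.account("95.18.81.203", f"10.0.{i // 256}.{i % 256}", "icmp", 84):
        passed += 1
print(passed, len(ft.records))   # 100 100
```

Packets belonging to flows already in the table still pass, so legitimate established traffic is unaffected; only the creation of new records from an abusive source is throttled. The trade-off Gleb points out remains: any hard limit means some traffic goes unaccounted.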