Date: Wed, 1 Jul 2015 14:15:09 +0100
From: Oliver Humpage <oliver@watershed.co.uk>
To: freebsd-net@freebsd.org
Subject: IPFW divert and suricata
Message-ID: <D632FEB9-4C62-451E-B2F6-333B7EDAE7C9@watershed.co.uk>
Hello,

I hope this is a good list to post this on; I have a feeling the solution is somewhere obscure in the networking layer.

I've set up an IPS system, using:

* FreeBSD 10.1 (guest OS, plenty of RAM/CPU)
* ESXi 5.5 (host OS, using Intel X520 10Gb cards; not overloaded, all graphs show it's got plenty of RAM/CPU spare at all times)
* vmxnet3 drivers
* ipfw (very small ruleset, basically just a divert rule)
* suricata, in ipfw divert mode

I'm having a couple of major issues.

The first is that every so often, even with relatively little traffic, the load on the box suddenly spikes and pings to a neighbouring router (via the divert rule) go from <1ms to >300ms. Generally this resolves itself after a few minutes, although last night it went on for an hour until I restarted ipfw and suricata.

The second is that if I do a large download, e.g. a FreeBSD ISO, the download usually hangs somewhere between 5MB and 100MB through. I can see traffic trying to get through on neighbouring routers; it's just at the interface with the divert to suricata that packets disappear into a black hole. The connection speed is around 50Mb, btw.

Now it's possible it's suricata being weird, but there's nothing untoward in its events and stats logs, and if I replay the traffic from a pcap file then suricata processes everything fine (a pcap taken over a 90s period during a slowdown is processed in under a second). So my guess is that if suricata takes slightly longer than normal to process a packet, something in the networking or ipfw divert system is tripping itself up. Maybe a queue is filling up?

I've set net.inet.ip.fw.dyn_buckets=16384 and done an ipfw flush, but net.inet.ip.fw.curr_dyn_buckets is stubbornly sticking at 256: have I done something wrong?
Other tunables I've set are:

kern.random.sys.harvest.ethernet=0
kern.random.sys.harvest.point_to_point=0
kern.random.sys.harvest.interrupt=0
kern.ipc.soacceptqueue=1024

Can anyone suggest either tests to see what might be going wrong, or tunables to help things run more smoothly? Both myself and a colleague have used FreeBSD for over 15 years, and never quite seen anything like it.

Many thanks,

Oliver.
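[For reference, the tunables listed above can be applied at runtime and then re-read to confirm they took effect; this is a sketch using the standard sysctl(8) interface, nothing beyond the values already named in the message.]

```shell
#!/bin/sh
# Apply the tunables from the message at runtime (run as root).
sysctl net.inet.ip.fw.dyn_buckets=16384
sysctl kern.ipc.soacceptqueue=1024
sysctl kern.random.sys.harvest.ethernet=0
sysctl kern.random.sys.harvest.point_to_point=0
sysctl kern.random.sys.harvest.interrupt=0

# Compare the requested bucket count with what the kernel is
# actually using; curr_dyn_buckets is the live value.
sysctl net.inet.ip.fw.dyn_buckets net.inet.ip.fw.curr_dyn_buckets

# To persist across reboots, put the same name=value lines
# in /etc/sysctl.conf.
```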