Date: Sun, 27 Jan 2008 15:49:46 +0100
From: Max Laier <max@love2party.net>
To: Stefan Lambrev <stefan.lambrev@moneybookers.com>
Cc: freebsd-current@freebsd.org
Subject: Re: FreeBSD 7, bridge, PF and syn flood = very bad performance
Message-ID: <200801271549.52791.max@love2party.net>
In-Reply-To: <479C953C.1010304@moneybookers.com>
References: <479A2389.2000802@moneybookers.com> <200801271422.23340.max@love2party.net> <479C953C.1010304@moneybookers.com>
On Sunday 27 January 2008, Stefan Lambrev wrote:
> Greetings,
>
> Max Laier wrote:
> > -cut-
> >
> >> Well I think the interesting lines from this experiment are:
> >>     max     total     wait_total    count    avg  wait_avg  cnt_hold  cnt_lock  name
> >>      39  25328476      70950955   9015860      2         7   5854948   6309848  /usr/src/sys/contrib/pf/net/pf.c:6729 (sleep mutex:pf task mtx)
> >>  936935  10645209           350        50  212904        7       110        47  /usr/src/sys/contrib/pf/net/pf.c:980 (sleep mutex:pf task mtx)
> >
> > Yeah, those two mostly are the culprit, but a quick fix is not really
> > available.  You can try to "set timeout interval" to something bigger
> > (e.g. 60 seconds), which will decrease the average hold time of the
> > second lock instance at the cost of increased peak memory usage.
>
> I'll try this.  At least memory doesn't seem to be a problem :)
>
> > I have some ideas on how to fix this, but it will take much, much more
> > time than I currently have for FreeBSD :-\  In general this requires a
> > bottom-up redesign of pf locking and some of the data structures
> > involved in the state tree handling.
> >
> > The first (= main) lock instance is also far from optimal (i.e. pf is
> > a congestion point in the bridge forwarding path).  For this I also
> > have a plan to make at least state table lookups run in parallel to
> > some extent, but again the lack of free time to spend coding prevents
> > me from doing it at the moment :-\
>
> Well, now we know where the issue is.  The same problem seems to affect
> synproxy state, btw.
> Can I expect better performance with IPFW's dynamic rules?

Not significantly better, I'd predict.  IPFW's dynamic rules are also
protected by a single mutex, leading to congestion problems similar to
pf's.  There should be a measurable constant improvement, as IPFW does
far fewer sanity checks, i.e. better performance at the expense of less
security.  It really depends on your needs which is better suited for
your setup.

> I wonder how one can protect himself on a gigabit network and service
> more than 500 pps.
> For example, in my test lab I see incoming ~400k packets per second,
> but if I activate PF, I see only 130-140k packets per second.  Is this
> expected behavior, if PF cannot handle so many packets?

As you can see from the hwpmc trace starting this thread, we don't spend
that much time in pf.  The culprit is the pf task mutex, which forces
serialization in pf, congesting the whole forwarding path.  Under
different circumstances pf can handle more pps.

> The missing 250k+ are not listed as discarded or other errors, which is
> weird.

As you slow down the forwarding, protocols like TCP will automatically
slow down.  Unless you have UDP bombs blasting at your network, this is
quite usual behavior.

-- 
/"\  Best regards,                      | mlaier@freebsd.org
\ /  Max Laier                          | ICQ #67774661
 X   http://pf4freebsd.love2party.net/  | mlaier@EFnet
/ \  ASCII Ribbon Campaign              | Against HTML Mail and News
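
[For readers who want to try the tuning discussed above, a minimal
pf.conf sketch.  The "set timeout interval" and "synproxy state"
directives are standard pf.conf(5) syntax; the interface name, port,
and state limit are illustrative assumptions, not values taken from
this thread.]

    # Illustrative pf.conf fragment -- em0, port 80 and the state limit
    # are placeholders, not values from the thread.
    ext_if = "em0"

    # Raise the state-purge interval (default: 10 seconds) to 60 seconds
    # as suggested above, so the purge scan takes the pf task mutex less
    # often, at the cost of higher peak state-table memory usage.
    set timeout interval 60

    # Leave headroom for states that now linger longer between purges.
    set limit states 200000

    # The synproxy keyword mentioned in the thread: pf completes the TCP
    # handshake itself before handing the connection to the server.
    pass in on $ext_if proto tcp from any to any port 80 \
        flags S/SA synproxy state

[As noted above, synproxy state goes through the same pf task mutex, so
it shares the serialization bottleneck rather than avoiding it.]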