Date: Wed, 10 Apr 2019 01:09:29 +0000
From: bugzilla-noreply@freebsd.org
To: net@FreeBSD.org
Subject: [Bug 237072] netgraph(4): performance issue [on HardenedBSD]?
Message-ID: <bug-237072-7501-r0F7H1pYmW@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-237072-7501@https.bugs.freebsd.org/bugzilla/>
References: <bug-237072-7501@https.bugs.freebsd.org/bugzilla/>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237072

--- Comment #14 from Larry Rosenman <ler@FreeBSD.org> ---

More information from Austin Robertson (aus on github):

Hey Larry,

I hope you don't mind me emailing you, but I've seen you've been posting about a similar issue I experienced in regards to netgraph performance with pfatt. (I had seen some freebsd.org bug report referrals in my Github repo's traffic analytics.)

In my experience, the netgraph configuration in pfatt can max out a single core when reaching gigabit speeds. In some cases, the single-core performance of the process can handle it; in other cases it cannot, and speed suffers. In the case of another user, their C2758 CPU wasn't getting full gigabit performance; upgrading to a beefier E3-1230v6 got them full line speed.

When being throttled by the CPU, I see a high percentage of interrupts (relative to core count) against the NIC via systat -vmstat. I suspect that the extra packet processing isn't hardware accelerated by the NIC and is instead being handled in kernel space by netgraph.

BSD and performance aren't my expertise, and you seem to be more savvy in those areas. I thought I'd pass along my experience. If you come up with a solution, I'd definitely like to hear it!

--
You are receiving this mail because:
You are the assignee for the bug.