From owner-freebsd-net@FreeBSD.ORG Wed Sep  8 18:53:12 2010
Message-ID: <4C87D80C.3020200@comcast.net>
Date: Wed, 08 Sep 2010 14:38:04 -0400
From: Steve Polyack <korvus@comcast.net>
To: Marcos Vinícius Buzo
Cc: freebsd-net@freebsd.org
Subject: Re: MPD5 + DUMMYNET + PF HIGH CPU USAGE
List-Id: Networking and TCP/IP with FreeBSD

On 09/08/10 13:38, Marcos Vinícius Buzo wrote:
> Hi all.
>
> I just started working at a small WISP, in place of a friend who,
> sadly, is no longer with us :(
> We're running FreeBSD 8.1 (64-bit) with MPD5 for PPPoE, IPFW+dummynet
> for traffic shaping, and PF for NAT and firewalling.
> Our hardware is a Dell PowerEdge R210 with an Intel Xeon X3430, 4GB of
> 1066MHz RAM, and a dual-port Broadcom NetXtreme II BCM5716.
> Our WAN link is 60Mbps down/up.
>
> When we have 450+ PPPoE connections and link usage is around 30Mbps,
> things get strange. CPU usage goes above 80% (I'm using cacti+snmp to
> see this), we get high-latency pings (sometimes 300ms+), and sometimes
> mpd5 stops serving altogether.
>
> I set up another server to work alongside it, which solves the problem
> for now. On that server I disabled the flowtable (sysctl
> net.inet.flowtable.enable=0), because on the old server, when I run
> top -ISH, I see the following:
>
>   22 root 44 - 0K 16K CPU2 2 236:19 100.00% flowcleaner
>
> Is this a bug?
>
> Are the following customizations right?
>
> Here are the custom kernel flags:
> ...
> kern.maxvnodes=100000000
> ...

100 million vnodes sounds like a lot for a system that is not doing I/O on
lots of files. I suppose the worst it will do is suck up some extra memory.

I can't speak much for the flowtable, but with 450+ clients you are surely
hitting the limits of the default number of entries there:

$ sysctl net.inet.ip.output_flowtable_size
net.inet.ip.output_flowtable_size: 32768
$ sysctl -d net.inet.ip.output_flowtable_size
net.inet.ip.output_flowtable_size: number of entries in the per-cpu output flow caches

With 4 CPUs, that tracks a maximum of roughly 128k flows. With 450 clients
behind it, I could see you exceeding that rapidly.
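As a quick sanity check (the 4 here is just the CPU count I'd expect from a
quad-core X3430, so verify it on your box), you can multiply the per-CPU
table size by the number of CPUs:

$ sysctl hw.ncpu
hw.ncpu: 4
$ echo $((32768 * 4))
131072

That works out to about 128k flow entries total across the four cores.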
You may want to try doubling (or tripling) this value via loader.conf on the main system and see whether that helps (the flowcleaner thread may not have to work constantly if you are not always near the maximum number of flows). I'm not sure of the specifics of the flow table, so someone else can probably chime in with more information on it (I can't find any real documentation on the feature). With such a high number of flows, you may be better off just turning it off anyway.
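For example (untested on my end; 65536 is simply "double the default" from
above, so pick whatever multiple you want to try), the size is a boot-time
tunable, so it goes in /boot/loader.conf:

net.inet.ip.output_flowtable_size="65536"

Or, if you'd rather just disable the feature like you did on the new box, the
runtime sysctl you already used works, and it can also go in /etc/sysctl.conf
so it sticks across reboots:

net.inet.flowtable.enable=0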