From: Maxim Sobolev <sobomax@FreeBSD.org>
Organization: Sippy Software, Inc.
Date: Sun, 02 Dec 2012 23:04:22 -0800
To: Alfred Perlstein
Cc: src-committers@freebsd.org, Andre Oppermann, svn-src-user@freebsd.org
Subject: Re: svn commit: r242910 - in user/andre/tcp_workqueue/sys: kern sys

Hi Alfred and Andre,

It's nice that somebody is taking care of this. The default settings are pretty much inadequate on any off-the-shelf PC hardware from the last 5 years.

We are also in quite an mbuf-hungry environment. It's not 10GigE, but we are forwarding voice traffic, which consists predominantly of very small packets (20-40 bytes). So we have a lot of small packets in flight, and they use a lot of mbufs. What happens, however, is that the network stack consistently locks up once we put more than 16-18 MB/sec onto it, which corresponds to about 350-400 Kpps. This is way below the nmbclusters/maxusers limits we have (1.5m/1500).

At half of that critical load we currently see something along these lines:

66365/71953/138318/1597440 mbuf clusters in use (current/cache/total/max)
149617K/187910K/337528K bytes allocated to network (current/cache/total)

The machine has 24GB of RAM:

vm.kmem_map_free: 24886267904
vm.kmem_map_size: 70615040
vm.kmem_size_scale: 1
vm.kmem_size_max: 329853485875
vm.kmem_size_min: 0
vm.kmem_size: 24956903424

So my question is whether there are some other limits that can cause mbuf starvation when the number of allocated clusters grows past 200-250k. I am also curious how this works as a dynamic system: since no memory is pre-allocated for mbufs, what happens if the network load increases gradually while the system is running? Is it possible to eventually hit ENOMEM, with all the memory already taken by other pools? For reference, the current memory picture from top(1):

Mem: 6283M Active, 12G Inact, 3760M Wired, 754M Cache, 2464M Buf, 504M Free
Swap: 40G Total, 6320K Used, 40G Free

Any pointers/suggestions are greatly appreciated.

-Maxim
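
P.S. In case it helps whoever looks into this: the numbers quoted above are the summary counters from netstat -m; the per-zone view of the same allocator, including the failure counts that would point at whichever limit is actually being hit, can be pulled roughly like this (the exact grep pattern is just an illustration):

    # summary mbuf/cluster counters, same source as the figures quoted above
    netstat -m

    # per-UMA-zone usage, limits and allocation failures for the mbuf zones
    vmstat -z | egrep -i 'mbuf|cluster'

    # the configured limits we are comparing against
    sysctl kern.ipc.nmbclusters kern.maxusers

A non-zero FAIL column in the vmstat -z output for the mbuf or mbuf_cluster zones would confirm that allocations are being refused at the zone level rather than by nmbclusters itself.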