From owner-svn-src-head@FreeBSD.ORG Fri Dec 7 09:37:31 2012
Subject: Re: svn commit: r243631 - in head/sys: kern sys
From: Oleksandr Tymoshenko <gonzo@id.bluezbox.com>
To: Andre Oppermann
Cc: svn-src-head@freebsd.org, svn-src-all@freebsd.org,
    src-committers@freebsd.org
Date: Fri, 7 Dec 2012 01:36:59 -0800
In-Reply-To: <201211272119.qARLJxXV061083@svn.freebsd.org>
References: <201211272119.qARLJxXV061083@svn.freebsd.org>
On 2012-11-27, at 1:19 PM, Andre Oppermann wrote:

> Author: andre
> Date: Tue Nov 27 21:19:58 2012
> New Revision: 243631
> URL: http://svnweb.freebsd.org/changeset/base/243631
>
> Log:
> Base the mbuf related limits on the available physical memory or
> kernel memory, whichever is lower. The overall mbuf related memory
> limit must be set so that mbufs (and clusters of various sizes)
> can't exhaust physical RAM or KVM.
>
> The limit is set to half of the physical RAM or KVM (whichever is
> lower) as the baseline. In any normal scenario we want to leave
> at least half of the physmem/kvm for other kernel functions and
> userspace to prevent it from swapping too easily. Via the tunable
> kern.maxmbufmem the limit can be upped to at most 3/4 of physmem/kvm.
>
> At the same time divorce maxfiles from maxusers and set maxfiles to
> physpages / 8 with a floor based on maxusers. This way busy servers
> can make use of the significantly increased mbuf limits with a much
> larger number of open sockets.
>
> Tidy up ordering in init_param2() and check up on some users of
> those values calculated here.
>
> Out of the overall mbuf memory limit, 2K clusters and 4K (page size)
> clusters get 1/4 each because these are the most heavily used mbuf
> sizes. 2K clusters are used for MTU 1500 ethernet inbound packets.
> 4K clusters are used whenever possible for sends on sockets and thus
> outbound packets. The larger cluster sizes of 9K and 16K are limited
> to 1/6 of the overall mbuf memory limit. When jumbo MTUs are used
> these large clusters will end up only on the inbound path. They are
> not used on outbound, there it's still 4K. Yes, that will stay that
> way because otherwise we run into lots of complications in the
> stack. And it really isn't a problem, so don't make a scene.
>
> Normal mbufs (256B) weren't limited at all previously. This was
> problematic as there are certain places in the kernel that on
> allocation failure of clusters try to piece together their packet
> from smaller mbufs.
>
> The mbuf limit is the number of all other mbuf sizes together plus
> some more to allow for standalone mbufs (ACKs, for example) and to
> send off a copy of a cluster. Unfortunately there isn't a way to
> set an overall limit for all mbuf memory together as UMA doesn't
> support such a limit.
>
> NB: Every cluster also has an mbuf associated with it.
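For anyone who wants to play with these numbers, here is a small
userland sketch of the sizing arithmetic described above. It is not
the committed kernel code; the names (maxmbufmem, nmbclusters, and so
on) follow the kernel's conventions, but the program itself is only
illustrative:

/*
 * Sketch of the mbuf sizing arithmetic from the log above.
 * NOT the actual kernel code; names follow kernel conventions.
 */
#include <stdint.h>
#include <stdio.h>

#define MCLBYTES        2048    /* 2K cluster */
#define MJUMPAGESIZE    4096    /* page size (4K) cluster */
#define MJUM9BYTES      9216    /* 9K jumbo cluster */
#define MJUM16BYTES     16384   /* 16K jumbo cluster */

int
main(void)
{
        uint64_t physmem = 1ULL << 30;  /* assume 1GB of RAM */
        uint64_t kvm = 1ULL << 30;      /* assume 1GB of KVM */
        uint64_t realmem, maxmbufmem;
        uint64_t nmbclusters, nmbjumbop, nmbjumbo9, nmbjumbo16;

        /* Baseline: half of physical RAM or KVM, whichever is lower. */
        realmem = physmem < kvm ? physmem : kvm;
        maxmbufmem = realmem / 2;
        /* kern.maxmbufmem may raise this, capped at 3/4 of realmem. */
        if (maxmbufmem > realmem / 4 * 3)
                maxmbufmem = realmem / 4 * 3;

        /* 2K and 4K clusters get 1/4 of the limit each... */
        nmbclusters = maxmbufmem / MCLBYTES / 4;
        nmbjumbop = maxmbufmem / MJUMPAGESIZE / 4;
        /* ...9K and 16K jumbo clusters get 1/6 each. */
        nmbjumbo9 = maxmbufmem / MJUM9BYTES / 6;
        nmbjumbo16 = maxmbufmem / MJUM16BYTES / 6;

        printf("maxmbufmem:   %ju\n", (uintmax_t)maxmbufmem);
        printf("2K clusters:  %ju\n", (uintmax_t)nmbclusters);
        printf("4K clusters:  %ju\n", (uintmax_t)nmbjumbop);
        printf("9K clusters:  %ju\n", (uintmax_t)nmbjumbo9);
        printf("16K clusters: %ju\n", (uintmax_t)nmbjumbo16);
        return (0);
}

With the 1GB figures assumed here it reproduces the cluster counts in
the 1GB example quoted below (65,536 / 32,768 / 9,709 / 5,461).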
> Two examples on the revised mbuf sizing limits:
>
> 1GB KVM:
> 512MB limit for mbufs
> 419,430 mbufs
> 65,536 2K mbuf clusters
> 32,768 4K mbuf clusters
> 9,709 9K mbuf clusters
> 5,461 16K mbuf clusters
>
> 16GB RAM:
> 8GB limit for mbufs
> 33,554,432 mbufs
> 1,048,576 2K mbuf clusters
> 524,288 4K mbuf clusters
> 155,344 9K mbuf clusters
> 87,381 16K mbuf clusters
>
> These defaults should be sufficient for even the most demanding
> network loads.

Andre, these changes along with r243631 break booting ARM kernels on
devices with 1GB of memory:

vm_thread_new: kstack allocation failed
panic: kproc_create() failed with 12
KDB: enter: panic

If I manually set the amount of memory to 512MB, it boots fine.

If you need help debugging this issue or testing possible fixes, I'll
be glad to help.

Thank you
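P.S. Since the log above names kern.maxmbufmem as a tunable, the mbuf
memory ceiling can be changed at boot time from /boot/loader.conf
without a rebuild. For example (the value is in bytes; the 256MB
figure here is arbitrary, just to show the syntax):

# /boot/loader.conf
kern.maxmbufmem="268435456"

The per-size limits that result can then be inspected at runtime with
sysctl, e.g. sysctl kern.ipc.nmbclusters.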