Date:      Fri, 7 Dec 2012 01:36:59 -0800
From:      Oleksandr Tymoshenko <gonzo@bluezbox.com>
To:        Andre Oppermann <andre@freebsd.org>
Cc:        svn-src-head@freebsd.org, svn-src-all@freebsd.org, src-committers@freebsd.org
Subject:   Re: svn commit: r243631 - in head/sys: kern sys
Message-ID:  <ABB3E29B-91F3-4C25-8FAB-869BBD7459E1@bluezbox.com>
In-Reply-To: <201211272119.qARLJxXV061083@svn.freebsd.org>
References:  <201211272119.qARLJxXV061083@svn.freebsd.org>


On 2012-11-27, at 1:19 PM, Andre Oppermann <andre@freebsd.org> wrote:

> Author: andre
> Date: Tue Nov 27 21:19:58 2012
> New Revision: 243631
> URL: http://svnweb.freebsd.org/changeset/base/243631
>
> Log:
>  Base the mbuf related limits on the available physical memory or
>  kernel memory, whichever is lower.  The overall mbuf related memory
>  limit must be set so that mbufs (and clusters of various sizes)
>  can't exhaust physical RAM or KVM.
>
>  The limit is set to half of the physical RAM or KVM (whichever is
>  lower) as the baseline.  In any normal scenario we want to leave
>  at least half of the physmem/kvm for other kernel functions and
>  userspace to prevent it from swapping too easily.  Via a tunable
>  kern.maxmbufmem the limit can be upped to at most 3/4 of physmem/kvm.
>
>  At the same time divorce maxfiles from maxusers and set maxfiles to
>  physpages / 8 with a floor based on maxusers.  This way busy servers
>  can make use of the significantly increased mbuf limits with a much
>  larger number of open sockets.
>
>  Tidy up ordering in init_param2() and check up on some users of
>  those values calculated here.
>
>  Out of the overall mbuf memory limit, 2K clusters and 4K (page size)
>  clusters get 1/4 each because these are the most heavily used mbuf
>  sizes.  2K clusters are used for MTU 1500 ethernet inbound packets.
>  4K clusters are used whenever possible for sends on sockets and thus
>  outbound packets.  The larger cluster sizes of 9K and 16K are limited
>  to 1/6 of the overall mbuf memory limit.  When jumbo MTUs are used
>  these large clusters will end up only on the inbound path.  They are
>  not used on outbound, there it's still 4K.  Yes, that will stay that
>  way because otherwise we run into lots of complications in the
>  stack.  And it really isn't a problem, so don't make a scene.
>
>  Normal mbufs (256B) weren't limited at all previously.  This was
>  problematic as there are certain places in the kernel that on
>  allocation failure of clusters try to piece together their packet
>  from smaller mbufs.
>
>  The mbuf limit is the number of all other mbuf sizes together plus
>  some more to allow for standalone mbufs (ACK for example) and to
>  send off a copy of a cluster.  Unfortunately there isn't a way to
>  set an overall limit for all mbuf memory together as UMA doesn't
>  support such a limiting.
>
>  NB: Every cluster also has an mbuf associated with it.
>
>  Two examples on the revised mbuf sizing limits:
>
>  1GB KVM:
>   512MB limit for mbufs
>   419,430 mbufs
>    65,536 2K mbuf clusters
>    32,768 4K mbuf clusters
>     9,709 9K mbuf clusters
>     5,461 16K mbuf clusters
>
>  16GB RAM:
>   8GB limit for mbufs
>   33,554,432 mbufs
>    1,048,576 2K mbuf clusters
>      524,288 4K mbuf clusters
>      155,344 9K mbuf clusters
>       87,381 16K mbuf clusters
>
>  These defaults should be sufficient for even the most demanding
>  network loads.

Andre,

these changes along with r243631 break booting ARM kernels on devices
with 1GB of memory:

vm_thread_new: kstack allocation failed
panic: kproc_create() failed with 12
KDB: enter: panic

If I manually set the amount of memory to 512MB, it boots fine.
If you need help debugging this issue or testing possible fixes, I'll be
glad to help.

Thank you


