Date:      Thu, 11 Jul 2013 07:59:48 -0700
From:      Alfred Perlstein <alfred@ixsystems.com>
To:        Andre Oppermann <andre@freebsd.org>
Cc:        "re@freebsd.org" <re@freebsd.org>, "stable@freebsd.org" <stable@freebsd.org>, Steven Hartland <killing@multiplay.co.uk>, "nonesuch@longcount.org" <nonesuch@longcount.org>, Peter Wemm <peter@wemm.org>
Subject:   Re: status of autotuning freebsd for 9.2
Message-ID:  <5D355B31-3160-44A3-ADF2-3397E8831DA2@ixsystems.com>
In-Reply-To: <51DEA947.6060605@freebsd.org>
References:  <51D90B9B.9080209@ixsystems.com> <51D92826.1070707@freebsd.org> <51D9B24B.8070303@ixsystems.com> <51DACE93.9050608@freebsd.org> <51DE6255.5000304@freebsd.org> <F438F05872C24151BE51F1F8ED3871C2@multiplay.co.uk> <51DEA947.6060605@freebsd.org>

Andre, Peter, what about i386?

Ever since I touched this Peter has been worried about i386 and said we've
broken the platform.

I'm going to boot some VMs, but maybe we ought to get some testing from Peter
on i386?

Sent from my iPhone

On Jul 11, 2013, at 5:47 AM, Andre Oppermann <andre@freebsd.org> wrote:

> On 11.07.2013 11:08, Steven Hartland wrote:
>> ----- Original Message ----- From: "Andre Oppermann" <andre@freebsd.org>
>>
>>> On 08.07.2013 16:37, Andre Oppermann wrote:
>>>> On 07.07.2013 20:24, Alfred Perlstein wrote:
>>>>> On 7/7/13 1:34 AM, Andre Oppermann wrote:
>>>>>> Can you help me with with testing?
>>>>> Yes.  Please give me your proposed changes and I'll stand up a machine
>>>>> and give feedback.
>>>>
>>>> http://people.freebsd.org/~andre/mfc-autotuning-20130708.diff
>>>=20
>>> Any feedback from testers on this?  The MFC window is closing soon.
>>=20
>> A few things I've noticed, most of which look like issues against the
>> original patch and not the MFC, but worth mentioning.
>>
>> 1. You've introduced a new tunable kern.maxmbufmem which is autosized but
>>   doesn't seem to be exposed via a sysctl, so it looks like there is no way
>>   to determine what it's actually set to?
>
> Good point.  I've made it global and exposed it as kern.ipc.maxmbufmem (RDTUN).
>
>> 2. There's a mismatch between the tunable kern.ipc.nmbufs in tunable_mbinit
>>   and the sysctl kern.ipc.nmbuf, i.e. no 's'.
>
> That's a typo, fixed.
>
>> 3. Should kern.maxmbufmem be kern.ipc.maxmbufmem to sit alongside all of
>>   the other sysctls?
>
> Yes, see above.
>
>> 4. style issues:
>> * @@ -178,11 +202,13 @@
>>  ...
>>  if (newnmbjumbo9 > nmbjumbo9&&
>
> Thanks.  All fixed in r253204.
>
>> Finally, out of interest, what made us arrive at the various defaults for
>> each type, as it looks like the ratios have changed?
>
> Before it was an arbitrary mess.  Mbufs were not limited at all and the
> others were limited to some random multiple of maxusers, with the net limit
> ending up at some 25,000 mbuf clusters by default.
>
> Now the default overall limit is set at 50% of all available
> min(physical, kmem_map) memory, to prevent mbufs from monopolizing kernel
> memory and to leave some space for other kernel structures and buffers as
> well as user-space programs.  It can be raised to 3/4 of available memory
> by the tunable.
>
> 2K and 4K (page size) mbuf clusters can each go up to 25% of this mbuf
> memory.  The former is dominantly used on the receive path and the latter
> on the send path.  9K and 16K jumbo mbuf clusters can each go up to 12.5%
> of mbuf memory.  They are only used in the receive path if large jumbo
> MTUs are active on a network interface.  Both are special in that their
> memory is contiguous in KVM and physical memory.  This becomes problematic
> due to memory fragmentation after a short amount of heavy system use.  I
> hope to deprecate them for 10.0.  Network interfaces should use 4K
> clusters instead and chain them together for larger packets.  All modern
> NICs support that.  Only the early and limited DMA engines without
> scatter-gather capabilities required contiguous physical memory, and they
> are long gone by now.
>
> The limit for mbufs itself is 12.5% of mbuf memory and should be at least
> as many as the sum of the cluster types.  Each cluster requires an mbuf to
> which it is attached.
>
> Two examples on the revised mbuf sizing limits:
>
>  1GB KVM:
>   512MB limit for mbufs
>   419,430 mbufs
>    65,536 2K mbuf clusters
>    32,768 4K mbuf clusters
>     9,709 9K mbuf clusters
>     5,461 16K mbuf clusters
>
>  16GB RAM:
>   8GB limit for mbufs
>   33,554,432 mbufs
>    1,048,576 2K mbuf clusters
>      524,288 4K mbuf clusters
>      155,344 9K mbuf clusters
>       87,381 16K mbuf clusters
>
> These defaults should be sufficient for even the most demanding network
> loads.
>
> For additional information see:
>
> http://svnweb.freebsd.org/changeset/base/243631
>=20
> --
> Andre
>
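For anyone wanting to sanity-check the sizing described above, here is a
back-of-the-envelope sketch of the arithmetic.  It is an illustration only,
not the kernel code: the function name `mbuf_limits` and the 50%/25%
fractions are taken from the explanation in this thread, the constants are
the usual values of MSIZE, MCLBYTES, and MJUMPAGESIZE on 64-bit platforms,
and the jumbo-cluster and plain-mbuf divisors are omitted because the exact
divisors live in the kernel source (see the changeset linked above).

```python
# Sketch of the revised mbuf autotuning arithmetic (assumptions noted above).

MSIZE = 256            # size of a plain mbuf, bytes
MCLBYTES = 2048        # 2K mbuf cluster, bytes
MJUMPAGESIZE = 4096    # 4K (page size) mbuf cluster, bytes

def mbuf_limits(avail_mem, fraction=0.5):
    """Given avail_mem = min(physical, kmem_map) in bytes, return
    (maxmbufmem, 2K cluster limit, 4K cluster limit).  fraction defaults
    to the 50% cap and can be raised to 0.75 via the tunable."""
    maxmbufmem = int(avail_mem * fraction)        # overall mbuf memory cap
    nmbclusters = maxmbufmem // 4 // MCLBYTES     # 2K clusters: 25% of cap
    nmbjumbop = maxmbufmem // 4 // MJUMPAGESIZE   # 4K clusters: 25% of cap
    return maxmbufmem, nmbclusters, nmbjumbop

# Reproduce the 16GB-RAM example from the thread:
cap, n2k, n4k = mbuf_limits(16 << 30)
print(cap >> 30, n2k, n4k)   # 8 (GB cap), 1048576 2K and 524288 4K clusters
```

Running it for both example machines matches the 2K/4K rows of the tables
above (65,536 and 32,768 clusters for the 1GB case; 1,048,576 and 524,288
for the 16GB case).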


