Date:      Thu, 23 Feb 2012 09:19:13 +0100
From:      Fabien Thomas <fabien.thomas@netasq.com>
To:        Jack Vogel <jfvogel@gmail.com>
Cc:        Ben Hutchings <bhutchings@solarflare.com>, FreeBSD Net <freebsd-net@freebsd.org>, Luigi Rizzo <rizzo@iet.unipi.it>, re <re@freebsd.org>, FreeBSD stable <freebsd-stable@freebsd.org>
Subject:   Re: nmbclusters: how do we want to fix this for 8.3 ?
Message-ID:  <134564BB-676B-49BB-8BDA-6B8EB8965969@netasq.com>
In-Reply-To: <CAFOYbc=BWkvGuqAOVehaYEVc7R_4b1Cq1i7Ged=-YEpCekNvfA@mail.gmail.com>
References:  <CAFOYbc=oU5DxZDZQZZe4wJhVDoP=ocVOnpDq7bT=HbVkAjffLQ@mail.gmail.com> <20120222205231.GA81949@onelab2.iet.unipi.it> <1329944986.2621.46.camel@bwh-desktop> <20120222214433.GA82582@onelab2.iet.unipi.it> <CAFOYbc=BWkvGuqAOVehaYEVc7R_4b1Cq1i7Ged=-YEpCekNvfA@mail.gmail.com>



On 22 Feb 2012, at 22:51, Jack Vogel wrote:

> On Wed, Feb 22, 2012 at 1:44 PM, Luigi Rizzo <rizzo@iet.unipi.it> wrote:
>
>> On Wed, Feb 22, 2012 at 09:09:46PM +0000, Ben Hutchings wrote:
>>> On Wed, 2012-02-22 at 21:52 +0100, Luigi Rizzo wrote:
>> ...
>>>> I have hit this problem recently, too.
>>>> Maybe the issue mostly/only exists on 32-bit systems.
>>>
>>> No, we kept hitting mbuf pool limits on 64-bit systems when we started
>>> working on FreeBSD support.
>>
>> ok never mind then, the mechanism would be the same, though
>> the limits (especially VM_LIMIT) would be different.
>>
>>>> Here is a possible approach:
>>>>
>>>> 1. nmbclusters consume the kernel virtual address space so there
>>>>   must be some upper limit, say
>>>>
>>>>        VM_LIMIT = 256000 (translates to 512MB of address space)
>>>>
>>>> 2. also you don't want the clusters to take up too much of the
>>>>   available memory. This one would only trigger for minimal-memory
>>>>   systems, or virtual machines, but still...
>>>>
>>>>        MEM_LIMIT = (physical_ram / 2) / 2048
>>>>
>>>> 3. one may try to set a suitably large, desirable number of buffers
>>>>
>>>>        TARGET_CLUSTERS = 128000
>>>>
>>>> 4. and finally we could use the current default as the absolute minimum
>>>>
>>>>        MIN_CLUSTERS = 1024 + maxusers*64
>>>>
>>>> Then at boot the system could say
>>>>
>>>>        nmbclusters = min(TARGET_CLUSTERS, VM_LIMIT, MEM_LIMIT)
>>>>
>>>>        nmbclusters = max(nmbclusters, MIN_CLUSTERS)
>>>>
>>>>
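A minimal C sketch of the sizing computation above, for reference. The
constants are Luigi's numbers; the function name, the userland framing
and the example inputs are only illustrative assumptions, not the actual
kernel tunable code:

#include <stdio.h>

#define CLUSTER_SIZE    2048UL          /* MCLBYTES, 2KB per cluster */
#define VM_LIMIT        256000          /* ~512MB of kernel address space */
#define TARGET_CLUSTERS 128000          /* desirable default */

static int
default_nmbclusters(unsigned long physical_ram, int maxusers)
{
        int mem_limit = (physical_ram / 2) / CLUSTER_SIZE;
        int min_clusters = 1024 + maxusers * 64;
        int n = TARGET_CLUSTERS;

        /* nmbclusters = min(TARGET_CLUSTERS, VM_LIMIT, MEM_LIMIT) */
        if (n > VM_LIMIT)
                n = VM_LIMIT;
        if (n > mem_limit)
                n = mem_limit;
        /* nmbclusters = max(nmbclusters, MIN_CLUSTERS) */
        if (n < min_clusters)
                n = min_clusters;
        return (n);
}

int
main(void)
{
        /* e.g. a 1GB machine with maxusers = 384 prints 128000 */
        printf("nmbclusters = %d\n", default_nmbclusters(1UL << 30, 384));
        return (0);
}
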
>>>> In turn, i believe interfaces should do their part and by default
>>>> never try to allocate more than a fraction of the total number
>>>> of buffers,
>>>=20
>>> Well what fraction should that be?  It surely depends on how many
>>> interfaces are in the system and how many queues the other =
interfaces
>>> have.
>>=20
>>>> if necessary reducing the number of active queues.
>>>=20
>>> So now I have too few queues on my interface even after I increase =
the
>>> limit.
>>>=20
>>> There ought to be a standard way to configure numbers of queues and
>>> default queue lengths.
>>=20
>> Jack raised the problem that there is a poorly chosen default for
>> nmbclusters, causing one interface to consume all the buffers.
>> If the user explicitly overrides the value then
>> the number of clusters should be what the user asks (memory permitting).
>> The next step is on devices: if there are no overrides, the default
>> for a driver is to be lean. I would say that capping the request between
>> 1/4 and 1/8 of the total buffers is surely better than the current
>> situation. Of course if there is an explicit override, then use
>> it whatever happens to the others.
>>
>> cheers
>> luigi
>>
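To make the "lean driver" default concrete, here is a rough sketch of
the kind of cap Luigi describes. The 1/8 fraction comes from his range;
the helper, its arguments and the override flag are hypothetical, not
actual igb/ixgbe code:

/*
 * Cap a driver's total cluster request at a fraction of the global
 * pool unless the admin explicitly asked for more.
 */
#define DRIVER_SHARE    8       /* at most 1/8 of nmbclusters */

static int
cap_cluster_request(int requested, int nmbclusters, int admin_override)
{
        int limit = nmbclusters / DRIVER_SHARE;

        if (admin_override || requested <= limit)
                return (requested);
        return (limit);         /* caller shrinks rings/queues to fit */
}
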
>
> Hmmm, well, I could make the default use only 1 queue or something like
> that, I was thinking more of what actual users of the hardware would want.
>

I think it is more reasonable to set up the interface with one queue by
default. Even if the cluster count does not hit the maximum, you end up
with an unbalanced setting that leaves very few mbufs for other uses.
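
For the admin-override side, the knobs being discussed would be set in
/boot/loader.conf along these lines. kern.ipc.nmbclusters is the standard
tunable; hw.igb.num_queues is the igb(4) queue-count tunable; the values
here are examples only:

        kern.ipc.nmbclusters="262144"   # enlarge the cluster pool
        hw.igb.num_queues="1"           # pin igb to a single queue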


> After the installed kernel is booted and the admin has made whatever
> post-install modifications they wish, it could be changed, along with
> nmbclusters.
>
> This was why I sought opinions, on the algorithm itself, but also from
> anyone using ixgbe and igb in heavy use: what would you find most
> convenient?
>
> Jack
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"

