Date:      Thu, 23 Feb 2012 09:48:44 +0100
From:      Andreas Nilsson <andrnils@gmail.com>
To:        Fabien Thomas <fabien.thomas@netasq.com>
Cc:        FreeBSD stable <freebsd-stable@freebsd.org>, FreeBSD Net <freebsd-net@freebsd.org>, Jack Vogel <jfvogel@gmail.com>, Ben Hutchings <bhutchings@solarflare.com>, re <re@freebsd.org>, Luigi Rizzo <rizzo@iet.unipi.it>
Subject:   Re: nmbclusters: how do we want to fix this for 8.3 ?
Message-ID:  <CAPS9+SunAmxSL68J-7zHYQnstyd+H3r2yrt2yQ_R=ZJ6L8VVSw@mail.gmail.com>
In-Reply-To: <134564BB-676B-49BB-8BDA-6B8EB8965969@netasq.com>
References:  <CAFOYbc=oU5DxZDZQZZe4wJhVDoP=ocVOnpDq7bT=HbVkAjffLQ@mail.gmail.com> <20120222205231.GA81949@onelab2.iet.unipi.it> <1329944986.2621.46.camel@bwh-desktop> <20120222214433.GA82582@onelab2.iet.unipi.it> <CAFOYbc=BWkvGuqAOVehaYEVc7R_4b1Cq1i7Ged=-YEpCekNvfA@mail.gmail.com> <134564BB-676B-49BB-8BDA-6B8EB8965969@netasq.com>

On Thu, Feb 23, 2012 at 9:19 AM, Fabien Thomas <fabien.thomas@netasq.com> wrote:

>
> > On 22 Feb 2012, at 22:51, Jack Vogel wrote:
>
> >> On Wed, Feb 22, 2012 at 1:44 PM, Luigi Rizzo <rizzo@iet.unipi.it> wrote:
> >
> >> On Wed, Feb 22, 2012 at 09:09:46PM +0000, Ben Hutchings wrote:
> >>> On Wed, 2012-02-22 at 21:52 +0100, Luigi Rizzo wrote:
> >> ...
> >>>> I have hit this problem recently, too.
> >>>> Maybe the issue mostly/only exists on 32-bit systems.
> >>>
> >>> No, we kept hitting mbuf pool limits on 64-bit systems when we started
> >>> working on FreeBSD support.
> >>
> >> ok never mind then, the mechanism would be the same, though
> >> the limits (especially VM_LIMIT) would be different.
> >>
> >>>> Here is a possible approach:
> >>>>
> >>>> 1. nmbclusters consume the kernel virtual address space so there
> >>>>   must be some upper limit, say
> >>>>
> >>>>        VM_LIMIT = 256000 (translates to 512MB of address space)
> >>>>
> >>>> 2. also you don't want the clusters to take up too much of the
> >>>>    available memory. This one would only trigger for minimal-memory
> >>>>    systems, or virtual machines, but still...
> >>>>
> >>>>        MEM_LIMIT = (physical_ram / 2) / 2048
> >>>>
> >>>> 3. one may try to set a suitably large, desirable number of buffers
> >>>>
> >>>>        TARGET_CLUSTERS = 128000
> >>>>
> >>>> 4. and finally we could use the current default as the absolute
> >>>>    minimum
> >>>>
> >>>>        MIN_CLUSTERS = 1024 + maxusers*64
> >>>>
> >>>> Then at boot the system could say
> >>>>
> >>>>        nmbclusters = min(TARGET_CLUSTERS, VM_LIMIT, MEM_LIMIT)
> >>>>
> >>>>        nmbclusters = max(nmbclusters, MIN_CLUSTERS)
> >>>>
> >>>>
> >>>> In turn, i believe interfaces should do their part and by default
> >>>> never try to allocate more than a fraction of the total number
> >>>> of buffers,
> >>>
> >>> Well what fraction should that be?  It surely depends on how many
> >>> interfaces are in the system and how many queues the other interfaces
> >>> have.
> >>
> >>>> if necessary reducing the number of active queues.
> >>>
> >>> So now I have too few queues on my interface even after I increase the
> >>> limit.
> >>>
> >>> There ought to be a standard way to configure numbers of queues and
> >>> default queue lengths.
> >>
> >> Jack raised the problem that there is a poorly chosen default for
> >> nmbclusters, causing one interface to consume all the buffers.
> >> If the user explicitly overrides the value then
> >> the number of clusters should be what the user asks (memory permitting).
> >> The next step is on devices: if there are no overrides, the default
> >> for a driver is to be lean. I would say that capping the request between
> >> 1/4 and 1/8 of the total buffers is surely better than the current
> >> situation. Of course if there is an explicit override, then use
> >> it whatever happens to the others.
> >>
> >> cheers
> >> luigi
> >>
> >
> > Hmmm, well, I could make the default use only 1 queue or something like
> > that,
> > was thinking more of what actual users of the hardware would want.
> >
>
> I think it is more reasonable to set up the interface with one queue.
> Even if the cluster count does not hit the max, you will end up with an
> unbalanced setting that leaves a very low mbuf count for other uses.
>

If interfaces have the possibility to use more queues, they should, imo, so
I'm all for raising the default size.

For those systems with very limited memory it's easily changed.


>
> > After the installed kernel is booted and the admin does whatever
> > post-install modifications they wish, it could be changed, along with
> > nmbclusters.
> >
> > This was why I sought opinions on the algorithm itself, but also: anyone
> > using ixgbe and igb in heavy use, what would you find most convenient?
> >
> > Jack
> > _______________________________________________
> > freebsd-net@freebsd.org mailing list
> > http://lists.freebsd.org/mailman/listinfo/freebsd-net
> > To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
>
>


