Date:      Sat, 24 Mar 2012 13:08:53 -0700
From:      John-Mark Gurney <jmg@funkthat.com>
To:        Juli Mallett <jmallett@freebsd.org>
Cc:        freebsd-net@freebsd.org, Ivan Voras <ivoras@freebsd.org>
Subject:   Re: nmbclusters: how do we want to fix this for 8.3 ?
Message-ID:  <20120324200853.GE2253@funkthat.com>
In-Reply-To: <CACVs6=_avBzUm0mJd+kNvPuBodmc56wHmdg_pCrAODfztVnamw@mail.gmail.com>
References:  <CAFOYbc=oU5DxZDZQZZe4wJhVDoP=ocVOnpDq7bT=HbVkAjffLQ@mail.gmail.com> <20120222205231.GA81949@onelab2.iet.unipi.it> <1329944986.2621.46.camel@bwh-desktop> <20120222214433.GA82582@onelab2.iet.unipi.it> <CAFOYbc=BWkvGuqAOVehaYEVc7R_4b1Cq1i7Ged=-YEpCekNvfA@mail.gmail.com> <134564BB-676B-49BB-8BDA-6B8EB8965969@netasq.com> <ji5ldg$8tl$1@dough.gmane.org> <CACVs6=_avBzUm0mJd+kNvPuBodmc56wHmdg_pCrAODfztVnamw@mail.gmail.com>

Juli Mallett wrote this message on Thu, Feb 23, 2012 at 08:03 -0800:
> Which sounds slightly off-topic, except that dedicating loads of mbufs
> to receive queues that will sit empty on the vast majority of systems
> and receive a few packets per second in the service of some kind of
> magical thinking or excitement about multiqueue reception may be a
> little ill-advised.  On my desktop with hardware supporting multiple
> queues, do I really want to use the maximum number of them just to
> handle a few thousand packets per second?  One core can do that just
> fine.
> 
> FreeBSD's great to drop-in on forwarding systems that will have
> moderate load, but it seems the best justification for this default is
> so users need fewer reboots to get more queues to spread what is
> assumed to be an evenly-distributed load over other cores.  In
> practice, isn't the real problem that we have no facility for changing
> the number of queues at runtime?
> 
> If the number of queues weren't fixed at boot, users could actually
> find the number that suits them best with a plausible amount of work,
> and the point about FreeBSD being "slow" goes away since it's perhaps
> one more sysctl to set (or one per-interface) or one (or one-per)
> ifconfig line to run, along with enabling forwarding, etc.
> 
> The big commitment that multi-queue drivers ask for when they use the
> maximum number of queues on boot and then demand to fill those queues
> up with mbufs is unreasonable, even if it can be met on a growing
> number of systems without much in the way of pain.  It's unreasonable,
> but perhaps it feels good to see all those interrupts bouncing around,
> all those threads running from time to time in top.  Maybe it makes
> FreeBSD seem more serious, or perhaps it's something that gets people
> excited.  It gives the appearance of doing quite a bit behind the
> scenes, and perhaps that's beneficial in and of itself, and will keep
> users from imagining that FreeBSD is slow, to your point.  We should
> be clear, though, whether we are motivated by technical or
> psychological constraints and benefits.

Sorry to wake up this thread, but I wanted to add another annoyance
I've run into with most of the ethernet drivers relating to mbufs: if,
upon packet receive, the driver can't allocate a new mbuf cluster to
replace the received one in the receive queue, it "drops" the packet
and reuses the received buffer as the replacement.

There should either be another thread, or, after the packet has been
processed, the option for the ethernet driver to get the mbuf back and
return it to the receive queue.  I've run into systems that were very
low on memory and had run out of mbufs, but you couldn't log into them
over the network because all the mbufs were busy, and each attempt to
log in was dropped.  It doesn't make much sense to keep possibly
4MB/port (or more) of memory tied up that "effectively" never gets
used, just increasing the amount of memory required to run a "quiet"
system...
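
To make the return-path idea concrete, here's a hedged userland sketch
(all names hypothetical, a plain array standing in for whatever locked
structure a real driver would need): the stack hands a processed buffer
back to the ring it came from, so refills can succeed even when the
general allocator can't provide a fresh cluster.

```c
#include <assert.h>
#include <stdlib.h>

#define RETQ_DEPTH	8	/* hypothetical per-ring return queue depth */

struct rx_ring {
	void	*retq[RETQ_DEPTH];
	int	 nret;
};

/* Called when the stack is done with a received buffer. */
static void
rx_return(struct rx_ring *r, void *buf)
{
	if (r->nret < RETQ_DEPTH)
		r->retq[r->nret++] = buf;	/* recycle into this ring */
	else
		free(buf);			/* queue full: general pool */
}

/* Refill a ring slot: prefer a returned buffer, else allocate. */
static void *
rx_refill(struct rx_ring *r, size_t clsize)
{
	if (r->nret > 0)
		return (r->retq[--r->nret]);
	return (malloc(clsize));
}
```

In the low-memory scenario above, the returned-buffer path is what
keeps a login session alive when malloc (the cluster zone, in a real
kernel) is exhausted.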

If we had some sort of tuning algorithm that kept track of the current
receive queue usage depth, and always kept enough mbufs on the queue to
handle the largest expected burst of packets (either historical, or by
looking at the largest TCP window size, etc.), it would both improve
memory usage and in general reduce the number of mbufs required on the
system...  If you have fast processors, you might be able to get away
with fewer mbufs since you can drain the receive queue faster, but on
slower systems you would use more.
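
A minimal sketch of such a tuner (the constants and the 2x headroom
margin are assumptions, not anything a driver actually implements):
size the ring to twice the largest burst observed over the last
interval, clamped between a small boot-time floor and a ceiling.

```c
#include <assert.h>

#define RXQ_MIN		128	/* hypothetical floor: boot-time allocation */
#define RXQ_MAX		4096	/* hypothetical per-ring ceiling */

/*
 * Pick a new ring target from the largest burst seen last interval.
 * 2x gives headroom over the observed peak; the clamp keeps a quiet
 * interface cheap and a busy one bounded.
 */
static int
rxq_tune_target(int largest_burst)
{
	int want = largest_burst * 2;

	if (want < RXQ_MIN)
		want = RXQ_MIN;
	if (want > RXQ_MAX)
		want = RXQ_MAX;
	return (want);
}
```

So a quiet interface seeing bursts of 10 packets sits at the 128-mbuf
floor, while one seeing 300-packet bursts grows to 600, instead of
every port pinning its maximum ring depth at boot.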

This tuning would also fix the problem of interfaces not coming up,
since at boot each interface might only allocate 128 or so mbufs, and
then dynamically grow as necessary...

Just my 2 cents.

P.S. I removed -stable from the CC list.

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."


