Date:      Mon, 11 Mar 2013 21:24:21 +0100
From:      Andre Oppermann <andre@freebsd.org>
To:        Garrett Wollman <wollman@hergotha.csail.mit.edu>
Cc:        freebsd-net@freebsd.org, rmacklem@uoguelph.ca
Subject:   Re: Limits on jumbo mbuf cluster allocation
Message-ID:  <513E3D75.7010803@freebsd.org>
In-Reply-To: <201303111605.r2BG5I6v073052@hergotha.csail.mit.edu>
References:  <1154859394.3748712.1362959165419.JavaMail.root@erie.cs.uoguelph.ca> <201303111605.r2BG5I6v073052@hergotha.csail.mit.edu>

On 11.03.2013 17:05, Garrett Wollman wrote:
> In article <513DB550.5010004@freebsd.org>, andre@freebsd.org writes:
>
>> Garrett's problem is receive side specific and NFS can't do much about it.
>> Unless, of course, NFS is holding on to received mbufs for a longer time.
>
> Well, I have two problems: one is running out of mbufs (caused, we
> think, by ixgbe requiring 9k clusters when it doesn't actually need
> them), and one is livelock.  Allowing potentially hundreds of clients
> to queue 2 MB of requests before TCP pushes back on them helps to
> sustain the livelock once it gets started, and of course those packets
> will be of the 9k jumbo variety, which makes the first problem worse
> as well.

I think that TCP, or rather the send socket buffer, currently doesn't
push back at all but simply accepts everything that gets thrown at it.
That is obviously a problem, and the NFS server seems to depend on it
somewhat by requiring atomicity on an RPC send.  I'll have to trace the
mbuf path through NFS to the socket to be sure; the code is slightly
opaque though.

-- 
Andre



