Date: Sun, 10 Mar 2013 23:06:47 +0100
From: Andre Oppermann <andre@freebsd.org>
To: Rick Macklem <rmacklem@uoguelph.ca>
Cc: jfv@freebsd.org, freebsd-net@freebsd.org, Garrett Wollman <wollman@freebsd.org>, Garrett Wollman <wollman@bimajority.org>
Subject: Re: Limits on jumbo mbuf cluster allocation
Message-ID: <513D03F7.6090206@freebsd.org>
In-Reply-To: <2050712270.3721724.1362790033662.JavaMail.root@erie.cs.uoguelph.ca>
References: <2050712270.3721724.1362790033662.JavaMail.root@erie.cs.uoguelph.ca>
On 09.03.2013 01:47, Rick Macklem wrote:
> Garrett Wollman wrote:
>> <<On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann
>> <andre@freebsd.org> said:
>>
>>> [stuff I wrote deleted]
>>> You have an amd64 kernel running HEAD or 9.x?
>>
>> Yes, these are 9.1 with some patches to reduce mutex contention on the
>> NFS server's replay "cache".
>>
> The cached replies are copies of the mbuf list done via m_copym().
> As such, the clusters in these replies won't be free'd (ref cnt -> 0)
> until the cache is trimmed (nfsrv_trimcache() gets called after the
> TCP layer has received an ACK for receipt of the reply from the client).

If these are not received mbufs but locally generated with m_getm2()
or so, they won't be jumbo mbufs > PAGE_SIZE.

> If reducing the size to 4K doesn't fix the problem, you might want to
> consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
> the increased CPU overhead (and some increased mutex contention) of
> calling nfsrv_trimcache() more frequently.
> (I'm assuming that you are using drc2.patch + drc3.patch. If you are
> using one of ivoras@'s variants of the patch, I'm not sure if the
> tunable is called the same thing, although it should have basically
> the same effect.)
>
> Good luck with it and thanks for running on the "bleeding edge" so
> these issues get identified, rick

-- 
Andre
Want to link to this message? Use this URL: <https://mail-archive.FreeBSD.org/cgi/mid.cgi?513D03F7.6090206>