Date:      Fri, 27 Jul 2018 16:53:24 -0400 (EDT)
From:      Garrett Wollman <wollman@hergotha.csail.mit.edu>
To:        ryan@ixsystems.com
Cc:        freebsd-net@freebsd.org
Subject:   Re: 9k jumbo clusters
Message-ID:  <201807272053.w6RKrO1o053565@hergotha.csail.mit.edu>
References:  <EBDE6EDD-D875-43D8-8D65-1F1344A6B817@ixsystems.com>

In article <EBDE6EDD-D875-43D8-8D65-1F1344A6B817@ixsystems.com>
ryan@ixsystems.com writes:

>I have seen some work in the direction of avoiding larger-than-page-size
>jumbo clusters in 12-CURRENT.  Many existing drivers avoid the 9k cluster
>size already.  The code for larger cluster sizes in iflib is #ifdef'd out,
>so it maxes out at page-size jumbo clusters until "CONTIGMALLOC_WORKS"
>(apparently it doesn't).
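
For context, the selection that's #ifdef'd out picks the receive-buffer
size roughly like this (a paraphrase, not an exact quote; MJUMPAGESIZE,
MJUM9BYTES, and MJUM16BYTES are the standard mbuf cluster sizes from
sys/mbuf.h, but the surrounding structure in iflib may differ):

	/*
	 * Without CONTIGMALLOC_WORKS defined, the larger cluster
	 * sizes are never chosen, so receive buffers are capped at
	 * one page.
	 */
	#ifndef CONTIGMALLOC_WORKS
		fl->ifl_buf_size = MJUMPAGESIZE;
	#else
		if (max_frame_size <= MJUMPAGESIZE)
			fl->ifl_buf_size = MJUMPAGESIZE;
		else if (max_frame_size <= MJUM9BYTES)
			fl->ifl_buf_size = MJUM9BYTES;
		else
			fl->ifl_buf_size = MJUM16BYTES;
	#endif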

My view, which I've expressed before, is that we should have a special
pool allocator that provides much larger buffers for systems with
high-speed network interfaces that can benefit from them.  On a
machine with 96 GB of RAM (a small file server in my world), it would
not hurt at all to reserve a few 2 GB pages' worth of physical memory
to be used as very large network buffers, say 64k in length, with the
constraint that all of the "very large" buffers have to be the same
length.  This could be set up in early initialization via tunables,
with the default being to reserve no space, so it doesn't affect
memory allocation on systems that aren't configured for it.  (If
you're building a high-performance file server, you are obviously
going to need to tune more than just network buffers anyway!)
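
To make the idea concrete, here is a minimal user-space sketch of such
a pool.  The bigbuf_* names and the mmap(2)-based reservation are
purely illustrative; a kernel version would carve the region out of
physical memory at early boot, size it from a loader tunable, and hand
the buffers out as external mbuf storage.

	/*
	 * Sketch of a fixed-size big-buffer pool: one contiguous
	 * region reserved up front, carved into equal 64k buffers,
	 * managed with a LIFO free list.
	 */
	#include <sys/mman.h>
	#include <stdio.h>
	#include <stdlib.h>

	#define BIGBUF_SIZE	(64 * 1024)	/* all buffers one length */

	static char  *bigbuf_base;	/* start of the reserved region */
	static void **bigbuf_freelist;	/* LIFO free list */
	static size_t bigbuf_nfree;

	/* Reserve the whole pool up front; nbufs would be a tunable. */
	static int
	bigbuf_pool_init(size_t nbufs)
	{
		size_t i;

		bigbuf_base = mmap(NULL, nbufs * BIGBUF_SIZE,
		    PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0);
		if (bigbuf_base == MAP_FAILED)
			return (-1);
		bigbuf_freelist = calloc(nbufs, sizeof(void *));
		if (bigbuf_freelist == NULL)
			return (-1);
		for (i = 0; i < nbufs; i++)
			bigbuf_freelist[bigbuf_nfree++] =
			    bigbuf_base + i * BIGBUF_SIZE;
		return (0);
	}

	/* Allocation never touches the VM system: pop the free list. */
	static void *
	bigbuf_alloc(void)
	{
		return (bigbuf_nfree > 0 ?
		    bigbuf_freelist[--bigbuf_nfree] : NULL);
	}

	/* Free is a push; no bounds check, since it's only a sketch. */
	static void
	bigbuf_free(void *buf)
	{
		bigbuf_freelist[bigbuf_nfree++] = buf;
	}

	int
	main(void)
	{
		void *b;

		if (bigbuf_pool_init(1024) != 0) {	/* 1024 * 64k = 64 MB */
			perror("bigbuf_pool_init");
			return (1);
		}
		b = bigbuf_alloc();
		printf("got buffer at %p\n", b);
		bigbuf_free(b);
		return (0);
	}

Because every buffer is the same length, alloc and free are
constant-time push/pop operations and the reserved region can never
fragment, which is exactly the property the 9k cluster allocator
lacks.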

I thought a bit about trying to implement this a few years ago when
the 9k cluster issue was really biting me, but instead I just diked
out the 9k cluster code in the NIC drivers I was using.

-GAWollman