Date: Thu, 11 Jul 2013 08:12:53 -0700
From: Adrian Chadd <adrian@freebsd.org>
To: Alfred Perlstein <alfred@ixsystems.com>
Cc: "stable@freebsd.org" <stable@freebsd.org>, "re@freebsd.org" <re@freebsd.org>,
 Peter Wemm <peter@wemm.org>, "nonesuch@longcount.org" <nonesuch@longcount.org>,
 Steven Hartland <killing@multiplay.co.uk>, Andre Oppermann <andre@freebsd.org>
Subject: Re: status of autotuning freebsd for 9.2
Message-ID: <CAJ-VmomHQ1K=xOPr34cf70FNOKkMuRgA2vfqThQKh13o6-_F=Q@mail.gmail.com>
In-Reply-To: <5D355B31-3160-44A3-ADF2-3397E8831DA2@ixsystems.com>
References: <51D90B9B.9080209@ixsystems.com> <51D92826.1070707@freebsd.org>
 <51D9B24B.8070303@ixsystems.com> <51DACE93.9050608@freebsd.org>
 <51DE6255.5000304@freebsd.org> <F438F05872C24151BE51F1F8ED3871C2@multiplay.co.uk>
 <51DEA947.6060605@freebsd.org> <5D355B31-3160-44A3-ADF2-3397E8831DA2@ixsystems.com>

Please test on VMs.
I've tested -HEAD in i386 VirtualBox all the way down to 128MB with no
panics. I'll test with 64MB soon; it's easy to do.
I think the i386 PAE stuff on ${LARGE}-memory systems is still broken. Peter?
-adrian
On 11 July 2013 07:59, Alfred Perlstein <alfred@ixsystems.com> wrote:
> Andre, Peter what about i386?
>
> Ever since I touched this, Peter has been worried about i386 and said we've broken the platform.
>
> I'm going to boot some VMs, but maybe we ought to get some testing from Peter on i386?
>
> Sent from my iPhone
>
> On Jul 11, 2013, at 5:47 AM, Andre Oppermann <andre@freebsd.org> wrote:
>
>> On 11.07.2013 11:08, Steven Hartland wrote:
>>> ----- Original Message ----- From: "Andre Oppermann" <andre@freebsd.org>
>>>
>>>> On 08.07.2013 16:37, Andre Oppermann wrote:
>>>>> On 07.07.2013 20:24, Alfred Perlstein wrote:
>>>>>> On 7/7/13 1:34 AM, Andre Oppermann wrote:
>>>>>>> Can you help me with testing?
>>>>>> Yes. Please give me your proposed changes and I'll stand up a machine and give feedback.
>>>>>
>>>>> http://people.freebsd.org/~andre/mfc-autotuning-20130708.diff
>>>>
>>>> Any feedback from testers on this? The MFC window is closing soon.
>>>
>>> A few things I've noticed, most of which look like issues against the original
>>> patch rather than the MFC, but they're worth mentioning.
>>>
>>> 1. You've introduced a new tunable kern.maxmbufmem which is autosized but
>>> doesn't seem to be exposed via a sysctl, so it looks like there is no way
>>> to determine what it's actually set to?
>>
>> Good point. I've made it global and exposed as kern.ipc.maxmbufmem (RDTUN).
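>>
>> For reference, the sysctl side of that is roughly a one-liner in kern_mbuf.c;
>> something along these lines (a sketch, not the exact committed change):
>>
>>     #include <sys/param.h>
>>     #include <sys/kernel.h>
>>     #include <sys/sysctl.h>
>>
>>     SYSCTL_DECL(_kern_ipc);
>>
>>     /* Overall cap on memory available to mbufs and clusters; filled in
>>      * from the tunable at boot, read-only afterwards (RDTUN). */
>>     u_long maxmbufmem;
>>
>>     SYSCTL_ULONG(_kern_ipc, OID_AUTO, maxmbufmem, CTLFLAG_RDTUN,
>>         &maxmbufmem, 0, "Maximum memory allocatable to mbufs and clusters");
>>
>> After that, "sysctl kern.ipc.maxmbufmem" shows what the autotuning settled on.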
>>
>>> 2. There's a mismatch between the tunable kern.ipc.nmbufs in tunable_mbinit
>>> and the sysctl kern.ipc.nmbuf, i.e. no 's'.
>>
>> That's a typo, fixed.
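>>
>> (The mismatch can creep in because the boot-time fetch and the sysctl are
>> declared separately, so the two name strings can drift apart; schematically,
>> as a pattern rather than the committed code:)
>>
>>     #include <sys/param.h>
>>     #include <sys/kernel.h>
>>     #include <sys/sysctl.h>
>>
>>     SYSCTL_DECL(_kern_ipc);
>>
>>     static int nmbufs;
>>
>>     /* Boot-time side, in tunable_mbinit(): the string handed to the
>>      * fetch macro... */
>>     static void
>>     tunable_mbinit(void *dummy __unused)
>>     {
>>         TUNABLE_INT_FETCH("kern.ipc.nmbufs", &nmbufs);
>>     }
>>
>>     /* ...must be spelled the same as the sysctl node below, or the knob
>>      * that users set and the one they read back are two different things. */
>>     SYSCTL_INT(_kern_ipc, OID_AUTO, nmbufs, CTLFLAG_RDTUN, &nmbufs, 0,
>>         "Maximum number of mbufs allowed");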
>>
>>> 3. Should kern.maxmbufmem be kern.ipc.maxmbufmem to sit along side all of
>>> the other sysctls?
>>
>> Yes, see above.
>>
>>> 4. style issues:
>>> * @@ -178,11 +202,13 @@
>>> ...
>>> if (newnmbjumbo9 > nmbjumbo9&&
>>
>> Thanks. All fixed in r253204.
>>
>>> Finally, out of interest, what made us arrive at the various defaults for each
>>> type? It looks like the ratios have changed.
>>
>> Before, it was an arbitrary mess. Mbufs were not limited at all, and the other
>> types were capped at some arbitrary multiple of maxusers, with the net limit
>> ending up at around 25,000 mbuf clusters by default.
>>
>> Now the default overall limit is set to 50% of the available memory, i.e.
>> min(physical, kmem_map), to prevent mbufs from monopolizing kernel memory and
>> to leave some space for other kernel structures and buffers as well as for
>> user-space programs. It can be raised to 3/4 of available memory with the
>> tunable.
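>>
>> So on a machine where that default is too conservative, the cap can be raised
>> at boot; for example, on a 16GB box something like this in /boot/loader.conf
>> would bump it to 12GB (assuming the tunable takes a byte count, as the sysctl
>> reports it):
>>
>>     # let mbufs and clusters use up to 3/4 of 16GB
>>     kern.ipc.maxmbufmem="12884901888"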
>>
>> 2K and 4K (page size) mbuf clusters can each go up to 25% of this mbuf memory.
>> The former are predominantly used on the receive path and the latter on the
>> send path. 9K and 16K jumbo mbuf clusters can each go up to 12.5% of mbuf
>> memory. They are only used on the receive path, and only when a large jumbo
>> MTU is active on a network interface. Both are special in that their memory
>> must be contiguous in KVM and in physical memory. That becomes problematic
>> due to memory fragmentation after even a short period of heavy system use.
>> I hope to deprecate them for 10.0. Network interfaces should use 4K clusters
>> instead and chain them together for larger packets. All modern NICs support
>> that. Only the early, limited DMA engines without scatter-gather capabilities
>> required contiguous physical memory, and those are long gone by now.
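>>
>> For driver authors, "chain page-size clusters instead of one jumbo" looks
>> roughly like the sketch below. It is only an illustration of the idea (the
>> helper name is made up and the DMA mapping is left out), not a drop-in
>> receive path:
>>
>>     #include <sys/param.h>
>>     #include <sys/systm.h>
>>     #include <sys/malloc.h>
>>     #include <sys/mbuf.h>
>>
>>     /* Build an mbuf chain of page-size clusters covering 'len' bytes,
>>      * instead of allocating one physically contiguous jumbo cluster. */
>>     static struct mbuf *
>>     alloc_rx_chain(int len)
>>     {
>>         struct mbuf *m, *top, **nextp;
>>         int left;
>>
>>         top = NULL;
>>         nextp = &top;
>>         for (left = len; left > 0; left -= MJUMPAGESIZE) {
>>             /* Only the first mbuf in the chain carries the pkthdr. */
>>             m = m_getjcl(M_NOWAIT, MT_DATA,
>>                 top == NULL ? M_PKTHDR : 0, MJUMPAGESIZE);
>>             if (m == NULL) {
>>                 m_freem(top);        /* m_freem(NULL) is a no-op */
>>                 return (NULL);
>>             }
>>             m->m_len = left < MJUMPAGESIZE ? left : MJUMPAGESIZE;
>>             *nextp = m;
>>             nextp = &m->m_next;
>>         }
>>         if (top != NULL)
>>             top->m_pkthdr.len = len;
>>         return (top);
>>     }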
>>
>> The limit for mbufs themselves is 12.5% of mbuf memory and should be at least
>> as large as the sum of the cluster limits, since each cluster requires an mbuf
>> to which it is attached.
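>>
>> Purely as an illustration of how those ratios turn into per-type caps, here
>> is a userland toy. The divisors simply mirror the percentages quoted above;
>> the committed code rounds somewhat differently, so don't expect it to
>> reproduce the figures in the examples below exactly:
>>
>>     #include <stdio.h>
>>     #include <stdlib.h>
>>
>>     /* Mbuf and cluster sizes assumed for the illustration. */
>>     #define MSIZE_     256UL
>>     #define MCL2K_     2048UL
>>     #define MCL4K_     4096UL
>>     #define MJUM9_     (9UL * 1024)
>>     #define MJUM16_    (16UL * 1024)
>>
>>     int
>>     main(int argc, char **argv)
>>     {
>>         unsigned long maxmbufmem, ncl, njp, nj9, nj16, sum, nmb;
>>
>>         if (argc < 2) {
>>             fprintf(stderr, "usage: %s maxmbufmem-bytes\n", argv[0]);
>>             return (1);
>>         }
>>         maxmbufmem = strtoul(argv[1], NULL, 0);
>>
>>         ncl  = maxmbufmem / MCL2K_ / 4;     /* 2K clusters:  ~25%   */
>>         njp  = maxmbufmem / MCL4K_ / 4;     /* 4K clusters:  ~25%   */
>>         nj9  = maxmbufmem / MJUM9_ / 8;     /* 9K clusters:  ~12.5% */
>>         nj16 = maxmbufmem / MJUM16_ / 8;    /* 16K clusters: ~12.5% */
>>         sum  = ncl + njp + nj9 + nj16;
>>         nmb  = maxmbufmem / MSIZE_ / 8;     /* mbufs: ~12.5%, but at */
>>         if (nmb < sum)                      /* least one per cluster */
>>             nmb = sum;
>>
>>         printf("mbufs %lu, 2K %lu, 4K %lu, 9K %lu, 16K %lu\n",
>>             nmb, ncl, njp, nj9, nj16);
>>         return (0);
>>     }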
>>
>> Two examples of the revised mbuf sizing limits:
>>
>> 1GB KVM:
>> 512MB overall mbuf memory limit
>> 419,430 mbufs
>> 65,536 2K mbuf clusters
>> 32,768 4K mbuf clusters
>> 9,709 9K mbuf clusters
>> 5,461 16K mbuf clusters
>>
>> 16GB RAM:
>> 8GB overall mbuf memory limit
>> 33,554,432 mbufs
>> 1,048,576 2K mbuf clusters
>> 524,288 4K mbuf clusters
>> 155,344 9K mbuf clusters
>> 87,381 16K mbuf clusters
>>
>> These defaults should be sufficient for even the most demanding network loads.
>>
>> For additional information see:
>>
>> http://svnweb.freebsd.org/changeset/base/243631
>>
>> --
>> Andre
>>
