From: Adrian Chadd <adrian.chadd@gmail.com>
To: Alfred Perlstein
Date: Thu, 11 Jul 2013 08:12:53 -0700
Subject: Re: status of autotuning freebsd for 9.2
Cc: stable@freebsd.org, re@freebsd.org, Peter Wemm, nonesuch@longcount.org, Steven Hartland, Andre Oppermann

Please test on VMs. I've tested -HEAD in i386 virtualbox all the way
down to 128MB with no panics. I'll test with 64MB soon. It's easy to do.

I think the i386 PAE stuff on ${LARGE} memory systems is still broken. Peter?

-adrian

On 11 July 2013 07:59, Alfred Perlstein wrote:
> Andre, Peter, what about i386?
>
> Ever since I touched this, Peter has been worried about i386 and said
> we've broken the platform.
>
> I'm going to boot some VMs, but maybe we ought to get some testing
> from Peter on i386?
>
> Sent from my iPhone
>
> On Jul 11, 2013, at 5:47 AM, Andre Oppermann wrote:
>
>> On 11.07.2013 11:08, Steven Hartland wrote:
>>> ----- Original Message ----- From: "Andre Oppermann"
>>>
>>>> On 08.07.2013 16:37, Andre Oppermann wrote:
>>>>> On 07.07.2013 20:24, Alfred Perlstein wrote:
>>>>>> On 7/7/13 1:34 AM, Andre Oppermann wrote:
>>>>>>> Can you help me with testing?
>>>>>>
>>>>>> Yes. Please give me your proposed changes and I'll stand up a
>>>>>> machine and give feedback.
>>>>>
>>>>> http://people.freebsd.org/~andre/mfc-autotuning-20130708.diff
>>>>
>>>> Any feedback from testers on this? The MFC window is closing soon.
>>>
>>> A few things I've noticed, most of which look like issues with the
>>> original patch rather than the MFC, but worth mentioning:
>>>
>>> 1. You've introduced a new tunable, kern.maxmbufmem, which is autosized
>>>    but doesn't seem to be exposed via a sysctl, so there looks to be no
>>>    way to determine what it's actually set to.
>>
>> Good point. I've made it global and exposed it as kern.ipc.maxmbufmem
>> (RDTUN).
>>
>>> 2. There's a mismatch between the tunable kern.ipc.nmbufs in
>>>    tunable_mbinit and the sysctl kern.ipc.nmbuf, i.e. no trailing 's'.
>>
>> That's a typo; fixed.
>>
>>> 3. Should kern.maxmbufmem be kern.ipc.maxmbufmem, to sit alongside all
>>>    of the other sysctls?
>>
>> Yes, see above.
>>
>>> 4. Style issues:
>>>    * @@ -178,11 +202,13 @@
>>>      ...
>>>      if (newnmbjumbo9 > nmbjumbo9 &&
>>
>> Thanks. All fixed in r253204.
>>
>>> Finally, out of interest, what made us arrive at the various defaults
>>> for each type? It looks like the ratios have changed.
>>
>> Before, it was an arbitrary mess. Mbufs were not limited at all, and the
>> others were limited to some random multiple of maxusers, with the net
>> limit ending up at some 25,000 mbuf clusters by default.
>>
>> Now the default overall limit is set at 50% of all available
>> min(physical, kmem_map) memory, to prevent mbufs from monopolizing
>> kernel memory and to leave some space for other kernel structures and
>> buffers as well as user-space programs. It can be raised to 3/4 of
>> available memory by the tunable.
>>
>> 2K and 4K (page size) mbuf clusters can each go up to 25% of this mbuf
>> memory. The former are predominantly used on the receive path and the
>> latter on the send path. 9K and 16K jumbo mbuf clusters can each go up
>> to 12.5% of mbuf memory. They are only used on the receive path when
>> large jumbo MTUs are active on a network interface. Both are special in
>> that their memory is contiguous in KVM and in physical memory. This
>> becomes problematic due to memory fragmentation after a short period of
>> heavy system use. I hope to deprecate them for 10.0. Network interfaces
>> should use 4K clusters instead and chain them together for larger
>> packets. All modern NICs support that. Only the early and limited DMA
>> engines without scatter-gather capabilities required contiguous
>> physical memory, and they are long gone by now.
>>
>> The limit for mbufs themselves is 12.5% of mbuf memory, and should be
>> at least as many as the sum of the cluster types, since each cluster
>> requires an mbuf to which it is attached.
>>
>> Two examples of the revised mbuf sizing limits:
>>
>> 1GB KVM:
>>   512MB limit for mbufs
>>   419,430 mbufs
>>   65,536 2K mbuf clusters
>>   32,768 4K mbuf clusters
>>   9,709 9K mbuf clusters
>>   5,461 16K mbuf clusters
>>
>> 16GB RAM:
>>   8GB limit for mbufs
>>   33,554,432 mbufs
>>   1,048,576 2K mbuf clusters
>>   524,288 4K mbuf clusters
>>   155,344 9K mbuf clusters
>>   87,381 16K mbuf clusters
>>
>> These defaults should be sufficient for even the most demanding network
>> loads.
>>
>> For additional information see:
>>
>> http://svnweb.freebsd.org/changeset/base/243631
>>
>> --
>> Andre
>
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"