From: Alan Cox <alc@rice.edu>
Date: Thu, 23 Aug 2012 14:02:53 -0500
To: "Gezeala M. Bacuño II"
Cc: alc@freebsd.org, freebsd-performance@freebsd.org, Andrey Zonov, kib@freebsd.org
Subject: Re: vm.kmem_size_max and vm.kmem_size capped at 329853485875 (~307GB)
Message-ID: <50367E5D.1020702@rice.edu>
References: <502DEAD9.6050304@zonov.org> <502EB081.3030801@rice.edu> <502FE98E.40807@rice.edu> <50325634.7090904@rice.edu> <503418C0.5000901@rice.edu>
List-Id: Performance/tuning

On 08/22/2012 12:09, Gezeala M. Bacuño II wrote:
> On Tue, Aug 21, 2012 at 4:24 PM, Alan Cox wrote:
>> On 8/20/2012 8:26 PM, Gezeala M. Bacuño II wrote:
>>> On Mon, Aug 20, 2012 at 9:07 AM, Gezeala M. Bacuño II wrote:
>>>> On Mon, Aug 20, 2012 at 8:22 AM, Alan Cox wrote:
>>>>> On 08/18/2012 19:57, Gezeala M. Bacuño II wrote:
>>>>>> On Sat, Aug 18, 2012 at 12:14 PM, Alan Cox wrote:
>>>>>>> On 08/17/2012 17:08, Gezeala M. Bacuño II wrote:
>>>>>>>> On Fri, Aug 17, 2012 at 1:58 PM, Alan Cox wrote:
>>>>>>>>> vm.kmem_size controls the maximum size of the kernel's heap, i.e.,
>>>>>>>>> the region where the kernel's slab and malloc()-like memory
>>>>>>>>> allocators obtain their memory. While this heap may occupy the
>>>>>>>>> largest portion of the kernel's virtual address space, it cannot
>>>>>>>>> occupy the entirety of the address space. There are other things
>>>>>>>>> that must be given space within the kernel's address space, for
>>>>>>>>> example, the file system buffer map.
>>>>>>>>>
>>>>>>>>> ZFS does not, however, use the regular file system buffer cache.
>>>>>>>>> The ARC takes its place, and the ARC abuses the kernel's heap like
>>>>>>>>> nothing else.
>>>>>>>>> So, if you are running a machine that only makes trivial use of a
>>>>>>>>> non-ZFS file system, like you boot from UFS, but store all of your
>>>>>>>>> data in ZFS, then you can dramatically reduce the size of the
>>>>>>>>> buffer map via boot loader tuneables and proportionately increase
>>>>>>>>> vm.kmem_size.
>>>>>>>>>
>>>>>>>>> Any further increases in the kernel virtual address space size
>>>>>>>>> will, however, require code changes. Small changes, but changes
>>>>>>>>> nonetheless.
>>>>>>>>>
>>>>>>>>> Alan
>>>>>>>>>
>>>> <>
>>>>>>> Your objective should be to reduce the value of "sysctl
>>>>>>> vfs.maxbufspace". You can do this by setting the loader.conf
>>>>>>> tuneable "kern.maxbcache" to the desired value.
>>>>>>>
>>>>>>> What does your machine currently report for "sysctl vfs.maxbufspace"?
>>>>>>>
>>>>>> Here you go:
>>>>>> vfs.maxbufspace: 54967025664
>>>>>> kern.maxbcache: 0
>>>>>
>>>>> Try setting kern.maxbcache to two billion and adding 50 billion to
>>>>> the setting of vm.kmem_size{,_max}.
>>>>>
>>> 2 : 50 ==>> is this the ratio for further tuning
>>> kern.maxbcache:vm.kmem_size? Is kern.maxbcache also in bytes?
>>>
>> No, this is not a ratio. Yes, kern.maxbcache is in bytes. Basically,
>> for every byte that you subtract from vfs.maxbufspace, through setting
>> kern.maxbcache, you can add a byte to vm.kmem_size{,_max}.
>>
>> Alan
>>
> Great! Thanks. Are there other sysctls aside from vfs.bufspace that I
> should monitor for vfs.maxbufspace usage? I just want to make sure
> that vfs.maxbufspace is sufficient for our needs.

You might keep an eye on "sysctl vfs.bufdefragcnt". If it starts rapidly
increasing, you may want to increase vfs.maxbufspace.

Alan
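For reference, a minimal sketch of the /boot/loader.conf settings being
discussed, assuming the ~330 GB cap from the subject line (329853485875
bytes) and the "two billion / fifty billion" figures suggested above; the
exact numbers are illustrative and should be sized to the machine at hand:

    # /boot/loader.conf -- tunables are read at boot, so a reboot is required
    kern.maxbcache="2000000000"        # shrink the buffer map to roughly 2 GB
    vm.kmem_size="379853485875"        # previous cap (329853485875) + 50 billion
    vm.kmem_size_max="379853485875"

After rebooting, the trade-off can be checked and then monitored with:

    sysctl vfs.maxbufspace vm.kmem_size vm.kmem_size_max
    sysctl vfs.bufspace vfs.bufdefragcnt

If vfs.bufdefragcnt starts climbing rapidly, the buffer map has been made
too small and vfs.maxbufspace (via kern.maxbcache) should be raised again.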