From owner-freebsd-performance@FreeBSD.ORG Mon Aug 27 22:23:27 2012
Date: Mon, 27 Aug 2012 15:23:05 -0700
From: Gezeala M. Bacuño II <gezeala@gmail.com>
To: Alan Cox
Cc: alc@freebsd.org, freebsd-performance@freebsd.org, Andrey Zonov, kib@freebsd.org
Subject: Re: vm.kmem_size_max and vm.kmem_size capped at 329853485875 (~307GB)

On Thu, Aug 23, 2012 at 12:02 PM, Alan Cox wrote:
> On 08/22/2012 12:09, Gezeala M. Bacuño II wrote:
>> On Tue, Aug 21, 2012 at 4:24 PM, Alan Cox wrote:
>>> On 8/20/2012 8:26 PM, Gezeala M. Bacuño II wrote:
>>>> On Mon, Aug 20, 2012 at 9:07 AM, Gezeala M. Bacuño II wrote:
>>>>> On Mon, Aug 20, 2012 at 8:22 AM, Alan Cox wrote:
>>>>>> On 08/18/2012 19:57, Gezeala M. Bacuño II wrote:
>>>>>>> On Sat, Aug 18, 2012 at 12:14 PM, Alan Cox wrote:
>>>>>>>> On 08/17/2012 17:08, Gezeala M. Bacuño II wrote:
>>>>>>>>> On Fri, Aug 17, 2012 at 1:58 PM, Alan Cox wrote:
>>>>>>>>>> vm.kmem_size controls the maximum size of the kernel's heap, i.e., the region where the kernel's slab and malloc()-like memory allocators obtain their memory. While this heap may occupy the largest portion of the kernel's virtual address space, it cannot occupy the entirety of the address space. There are other things that must be given space within the kernel's address space, for example, the file system buffer map.
>>>>>>>>>>
>>>>>>>>>> ZFS does not, however, use the regular file system buffer cache. The ARC takes its place, and the ARC abuses the kernel's heap like nothing else. So, if you are running a machine that only makes trivial use of a non-ZFS file system, like you boot from UFS, but store all of your data in ZFS, then you can dramatically reduce the size of the buffer map via boot loader tuneables and proportionately increase vm.kmem_size.
>>>>>>>>>>
>>>>>>>>>> Any further increases in the kernel virtual address space size will, however, require code changes. Small changes, but changes nonetheless.
>>>>>>>>>>
>>>>>>>>>> Alan
>>>>>>>>>>
>>>>> <>
>>>>>>>> Your objective should be to reduce the value of "sysctl vfs.maxbufspace". You can do this by setting the loader.conf tuneable "kern.maxbcache" to the desired value.
>>>>>>>>
>>>>>>>> What does your machine currently report for "sysctl vfs.maxbufspace"?
>>>>>>>>
>>>>>>> Here you go:
>>>>>>> vfs.maxbufspace: 54967025664
>>>>>>> kern.maxbcache: 0
>>>>>>
>>>>>> Try setting kern.maxbcache to two billion and adding 50 billion to the setting of vm.kmem_size{,_max}.
>>>>>>
>>>> 2 : 50 ==>> is this the ratio for further tuning kern.maxbcache:vm.kmem_size? Is kern.maxbcache also in bytes?
>>>>
>>> No, this is not a ratio. Yes, kern.maxbcache is in bytes. Basically, for every byte that you subtract from vfs.maxbufspace, through setting kern.maxbcache, you can add a byte to vm.kmem_size{,_max}.
>>>
>>> Alan
>>>
>> Great! Thanks. Are there other sysctls aside from vfs.bufspace that I should monitor for vfs.maxbufspace usage? I just want to make sure that vfs.maxbufspace is sufficient for our needs.
>
> You might keep an eye on "sysctl vfs.bufdefragcnt". If it starts rapidly increasing, you may want to increase vfs.maxbufspace.
>
> Alan
>
We seem to max out vfs.bufspace in <24hrs uptime. It has been steady at 1999273984 while vfs.bufdefragcnt stays at 0 - which I presume is good. Nevertheless, I will increase kern.maxbcache to 6GB and adjust vm.kmem_size{,_max}, vfs.zfs.arc_max accordingly. On another machine with vfs.maxbufspace auto-tuned to 7738671104 (~7.2GB), vfs.bufspace is now at 5278597120 (uptime 129 days).
vfs.maxbufspace: 1999994880
kern.maxbcache: 2000000000
vfs.hirunningspace: 16777216
vfs.lorunningspace: 11206656
vfs.bufdefragcnt: 0
vfs.buffreekvacnt: 59
vfs.bufreusecnt: 61075
vfs.hibufspace: 1999339520
vfs.lobufspace: 1999273984
vfs.maxmallocbufspace: 99966976
vfs.bufmallocspace: 0
vfs.bufspace: 1999273984
vfs.runningbufspace: 0
vfs.numdirtybuffers: 2
vfs.lodirtybuffers: 15268
vfs.hidirtybuffers: 30537
vfs.dirtybufthresh: 27483
vfs.numfreebuffers: 122068
vfs.getnewbufcalls: 1159148
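For reference, the tuning discussed above reduces to a pair of boot loader tunables. A minimal /boot/loader.conf sketch, shaped after Alan's "two billion / fifty billion" suggestion; the vm.kmem_size{,_max} and vfs.zfs.arc_max figures below are placeholders, since the thread never states the base values:

    # /boot/loader.conf -- illustrative sketch only, not the exact settings
    # used in this thread.  kern.maxbcache caps vfs.maxbufspace (both in bytes).
    kern.maxbcache="2000000000"
    # Every byte taken away from vfs.maxbufspace can be given back to the
    # kernel heap, so raise vm.kmem_size{,_max} by roughly the freed amount.
    # The numbers below are placeholders; derive them from your own settings.
    vm.kmem_size="380000000000"
    vm.kmem_size_max="380000000000"
    # Keep the ARC limit comfortably below vm.kmem_size.
    vfs.zfs.arc_max="350000000000"

Gezeala's later plan is the same trade-off with a roughly 6 GB kern.maxbcache and vm.kmem_size{,_max}/vfs.zfs.arc_max shifted by the corresponding amount.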
From owner-freebsd-performance@FreeBSD.ORG Tue Aug 28 19:07:43 2012
Date: Tue, 28 Aug 2012 14:07:40 -0500
From: Alan Cox <alc@rice.edu>
To: Gezeala M. Bacuño II
Cc: alc@freebsd.org, freebsd-performance@freebsd.org, Andrey Zonov, kib@freebsd.org
Subject: Re: vm.kmem_size_max and vm.kmem_size capped at 329853485875 (~307GB)

On 08/27/2012 17:23, Gezeala M. Bacuño II wrote:
> [...]
>
> We seem to max out vfs.bufspace in <24hrs uptime. It has been steady at 1999273984 while vfs.bufdefragcnt stays at 0 - which I presume is good. Nevertheless, I will increase kern.maxbcache to 6GB and adjust vm.kmem_size{,_max}, vfs.zfs.arc_max accordingly. On another machine with vfs.maxbufspace auto-tuned to 7738671104 (~7.2GB), vfs.bufspace is now at 5278597120 (uptime 129 days).

The buffer map is a kind of cache. Like any cache, most of the time it will be full. Don't worry.

Moreover, even when the buffer map is full, the UFS file system is caching additional file data in physical memory pages that simply aren't mapped for instantaneous access.
Essentially, limiting the size of the buffer map is only limiting the amount of modified file data that hasn't been written back to disk, not the total amount of cached data.

As long as you're making trivial use of UFS file systems, there really isn't a reason to increase the buffer map size.

Alan
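To "keep an eye on" the counters Alan mentions, a small sh loop is enough; the sysctl names below are the ones quoted in this thread, and the 60-second interval is arbitrary:

    #!/bin/sh
    # Periodically log the buffer-map counters discussed in this thread.
    while true; do
        date
        sysctl vfs.bufspace vfs.maxbufspace vfs.bufdefragcnt vfs.numdirtybuffers
        sleep 60
    done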
From owner-freebsd-performance@FreeBSD.ORG Tue Aug 28 19:47:54 2012
Date: Tue, 28 Aug 2012 12:47:29 -0700
From: Gezeala M. Bacuño II <gezeala@gmail.com>
To: Alan Cox
Cc: alc@freebsd.org, freebsd-performance@freebsd.org, Andrey Zonov, kib@freebsd.org
Subject: Re: vm.kmem_size_max and vm.kmem_size capped at 329853485875 (~307GB)

On Tue, Aug 28, 2012 at 12:07 PM, Alan Cox wrote:
> [...]
>
> The buffer map is a kind of cache. Like any cache, most of the time it will be full. Don't worry.
>
> Moreover, even when the buffer map is full, the UFS file system is caching additional file data in physical memory pages that simply aren't mapped for instantaneous access. Essentially, limiting the size of the buffer map is only limiting the amount of modified file data that hasn't been written back to disk, not the total amount of cached data.
>
> As long as you're making trivial use of UFS file systems, there really isn't a reason to increase the buffer map size.
>
> Alan
>
I see. Makes sense now. Thanks! I forgot to mention that we do have smbfs mounts from another server; are writes/modifications to files on these mounts also cached in the buffer map? All non-ZFS file systems, right? Input/output files are read from or written to these mounts.
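Assuming loader.conf changes along the lines sketched earlier are applied, a quick post-reboot check confirms the tunables took effect; kstat.zfs.misc.arcstats.size is the usual ARC-size counter, if the ZFS module exposes it on this release:

    # Confirm the new loader tunables are in effect after a reboot.
    sysctl kern.maxbcache vfs.maxbufspace vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max
    # Current ARC size, to see how much of the enlarged heap ZFS is using.
    sysctl kstat.zfs.misc.arcstats.size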