From: grarpamp <grarpamp@gmail.com>
To: freebsd-stable@freebsd.org
Cc: freebsd-fs@freebsd.org
Date: Thu, 15 Oct 2009 18:15:18 -0400
Subject: Re: ZFS repeatable reboot 8.0-RC1

Note: I tried setting vfs.zfs.arc_max=32M; ZFS memory usage still grew
past its limit and the machine rebooted.

Forwarding a note I received...

"""""
>> Your machine is starving!
> How can this be, there is over 500MiB of free RAM at all times?
I'm
sysctl vm.kmem_size_max

I've got the following in my kernel configuration file:
    options KVA_PAGES=512
and in /boot/loader.conf:
    vm.kmem_size_max=1536M
    vm.kmem_size=1536M
on two machines with 2G of RAM, both 8.0-RC1/i386.

The ZFS tuning guide gives a better idea of how to play with things
like that: http://wiki.freebsd.org/ZFSTuningGuide
"""""

Some questions...

Am I reading correctly that vm.kmem_size is how much RAM the kernel
initially allocates for itself at boot? And that vm.kmem_size_min and
vm.kmem_size_max are the range within which vm.kmem_size is allowed to
float naturally at runtime?

Is KVA_PAGES the hard maximum address space the kernel is allowed, or
capable, of using at runtime? Such that I could set vm.kmem_size_max to
the KVA_PAGES limit and vm.kmem_size would then grow into it as needed?
With the caveat, of course, that with the defaults and hardware shown
below, if I just bumped vm.kmem_size_max to 1GiB [the KVA_PAGES limit],
I'd have to add maybe another 1GiB of RAM so that this new
vm.kmem_size_max kernel reservation wouldn't conflict with userland
memory needs once vm.kmem_size grows to meet it? And is KVA_PAGES
typically, say, 1/3 of installed RAM?
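For reference, my reading of the forwarded note is that the whole tuning
it describes boils down to something like the following (just my
reconstruction, not tested here; the values are the note's, and MYKERNEL
is only a placeholder name for the kernel config):

    # kernel configuration file, e.g. sys/i386/conf/MYKERNEL (needs a rebuild)
    options         KVA_PAGES=512   # 512 * 4MB = 2GB of kernel address space
                                    # without PAE (PAE pages are 2MB, so 512
                                    # would only be 1GB there)

    # /boot/loader.conf
    vm.kmem_size="1536M"
    vm.kmem_size_max="1536M"

    # after a reboot, read back what actually took effect
    sysctl vm.kmem_size vm.kmem_size_min vm.kmem_size_max vfs.zfs.arc_max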
If vm.kmem_size starts out under vm.kmem_size_max, can user apps use the
unused space (vm.kmem_size_max - vm.kmem_size) until vm.kmem_size grows
to vm.kmem_size_max and the kernel kills them off? Or is it that
(RAM = user apps + [KVA_PAGES hard limit and/or vm.kmem_size_max]), so
user apps can only ever use what is left after the kernel's maximum
reservation?

What is the idea behind setting vm.kmem_size = vm.kmem_size_max?
Shouldn't just vm.kmem_size_max be set, leaving vm.kmem_size [unset]
free to grow up to vm.kmem_size_max, instead of allocating it all at
boot with vm.kmem_size?

Maybe someone can wikify these answers? I think I need to find more to
read, and then test one change at a time to see what happens.

With untuned defaults and 1GiB of RAM I have:

#define KVA_PAGES 256              # gives 1GiB of kernel address space
vm.kmem_size_max: 335544320        # auto-calculated by the kernel at boot?
                                   # Less than the KVA_PAGES limit?
vm.kmem_size_min: 0
vm.kmem_size: 335544320            # amount in use at runtime?

I'm still figuring out how to find and add up all the kernel memory
(a first attempt is at the end of this mail).

Here's ZFS:

vfs.zfs.arc_meta_limit: 52428800
vfs.zfs.arc_meta_used: 56241732            # greater than arc_meta_limit?
vfs.zfs.arc_min: 26214400
vfs.zfs.arc_max: 209715200
vfs.zfs.vdev.cache.size: 10485760
vfs.zfs.vdev.cache.max: 16384
kstat.zfs.misc.arcstats.p: 20589785        # was 104857600 on boot
kstat.zfs.misc.arcstats.c: 128292242       # was 209715200 on boot
kstat.zfs.misc.arcstats.c_min: 26214400
kstat.zfs.misc.arcstats.c_max: 209715200

loader(8):

     vm.kmem_size
                 Sets the size of kernel memory (bytes).  This overrides
                 the value determined when the kernel was compiled.
                 Modifies VM_KMEM_SIZE.

     vm.kmem_size_min
     vm.kmem_size_max
                 Sets the minimum and maximum (respectively) amount of
                 kernel memory that will be automatically allocated by
                 the kernel.  These override the values determined when
                 the kernel was compiled.  Modifies VM_KMEM_SIZE_MIN and
                 VM_KMEM_SIZE_MAX.

sys/i386/include/pmap.h:

/*
 * Size of Kernel address space.  This is the number of page table pages
 * (4MB each) to use for the kernel.  256 pages == 1 Gigabyte.
 * This **MUST** be a multiple of 4 (eg: 252, 256, 260, etc).
 * For PAE, the page table page unit size is 2MB.  This means that 512 pages
 * is 1 Gigabyte.  Double everything.  It must be a multiple of 8 for PAE.
 */
#ifndef KVA_PAGES
#ifdef PAE
#define KVA_PAGES       512
#else
#define KVA_PAGES       256
#endif
#endif

sys/i386/conf/NOTES:

# Change the size of the kernel virtual address space.  Due to
# constraints in loader(8) on i386, this must be a multiple of 4.
# 256 = 1 GB of kernel address space.  Increasing this also causes
# a reduction of the address space in user processes.  512 splits
# the 4GB cpu address space in half (2GB user, 2GB kernel).  For PAE
# kernels, the value will need to be double non-PAE.  A value of 1024
# for PAE kernels is necessary to split the address space in half.
# This will likely need to be increased to handle memory sizes >4GB.
# PAE kernels default to a value of 512.
#
#options        KVA_PAGES=260
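And as for adding up the kernel memory mentioned above, this is the
rough method I've been poking at so far (only a guess at a method; it
just totals the malloc(9) buckets that vmstat -m reports and ignores
the UMA zones from vmstat -z, so it will undercount):

    # Sum the MemUse column of vmstat -m (values are printed in kilobytes).
    # Type names can contain spaces, so take the first field shaped like
    # "<digits>K" on each line.
    vmstat -m | awk '
        { for (i = 1; i <= NF; i++) if ($i ~ /^[0-9]+K$/) { sum += $i; break } }
        END { printf "malloc(9) MemUse: %.1f MB\n", sum / 1024 }'

    # The ARC's own idea of its current size, in bytes (assuming arcstats
    # exports a "size" counter alongside the p/c values shown above).
    sysctl kstat.zfs.misc.arcstats.size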