From owner-freebsd-hackers@FreeBSD.ORG Wed Jan 23 08:10:45 2013
Message-ID: <50FF9AFE.9000406@freebsd.org>
Date: Wed, 23 Jan 2013 09:10:38 +0100
From: Andre Oppermann <andre@freebsd.org>
To: Artem Belevich
Cc: Matthew Fleming, FreeBSD Current, freebsd-hackers
Subject: Re: kmem_map auto-sizing and size dependencies
References: <50F96A67.9080203@freebsd.org>
 <20130121210645.GC1341@garage.freebsd.pl>

On 23.01.2013 00:22, Artem Belevich wrote:
> On Mon, Jan 21, 2013 at 1:06 PM, Pawel Jakub Dawidek wrote:
>> On Fri, Jan 18, 2013 at 08:26:04AM -0800, mdf@FreeBSD.org wrote:
>>>> Should it be set to a larger initial value based on
>>>> min(physical,KVM) space available?
>>>
>>> It needs to be smaller than the physical space, [...]
>>
>> Or larger, as the address space can get fragmented and you might not be
>> able to allocate memory even if you have physical pages available.
>
> +1 for relaxing upper limit.
>
> I routinely patch all my systems that use ZFS to allow kmem_map size
> to be larger than physical memory. Otherwise on a system where most of
> RAM goes towards ZFS ARC I used to eventually run into dreaded
> kmem_map too small panic.

During startup and VM initialization the following kernel VM maps are
created:

  kernel_map (parent)  spans the entire kernel virtual address space;
                       currently 512GB on amd64.

Out of the kernel_map a number of sub-maps are created:

  clean_map    not referenced anywhere else
  buffer_map   used in vfs_bio.c for i/o buffers
  pager_map    used in vm/vm_pager.c for paging
  exec_map     used in kern/kern_exec.c and other places for program
               startup
  pipe_map     used in kern/sys_pipe.c for pipe buffering
  kmem_map     used in kern/kern_malloc.c and vm/uma_core.c among other
               places; provides all kernel malloc and UMA zone memory
               allocations

Having the kernel eventually occupy all of physical RAM isn't pretty.

So the problem you're describing is that even though enough kernel_map
space is still available, it is too fragmented to find a sufficiently
large chunk.

If the kmem_map is larger than the available physical memory, another
mechanism has to track and limit its physical memory consumption.  This
may become an SMP bottleneck due to synchronization issues.

I haven't looked at how the maps are managed internally.  Maybe there is
a natural hook to attach such a mechanism and to allow the sub-maps to
be larger in KVM space than physical memory.  Maybe ZFS could then have
its own sub-map for the ARC too.
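For reference, a rough sketch of how kmem_map currently gets sized and
carved out of kernel_map, paraphrased from memory (roughly what
kmeminit() in kern/kern_malloc.c does on 9.x/10.x; the exact names,
tunable handling and arithmetic may differ from what is in the tree):

  /* Sketch only, not verbatim kernel code. */
  vm_kmem_size = VM_KMEM_SIZE;          /* compile-time default */
  mem_size = cnt.v_page_count;          /* physical memory, in pages */

  /* Scale with physical memory, then honor the loader tunable. */
  if (vm_kmem_size_scale > 0 &&
      (mem_size / vm_kmem_size_scale) > (vm_kmem_size / PAGE_SIZE))
          vm_kmem_size = (mem_size / vm_kmem_size_scale) * PAGE_SIZE;
  TUNABLE_ULONG_FETCH("vm.kmem_size", &vm_kmem_size);

  /*
   * Clamp the map to twice the physical memory.  This is the upper
   * limit that gets relaxed by the local ZFS patches mentioned above.
   */
  if (vm_kmem_size / 2 / PAGE_SIZE > mem_size)
          vm_kmem_size = 2 * mem_size * PAGE_SIZE;

  /* Carve the sub-map out of kernel_map. */
  kmem_map = kmem_suballoc(kernel_map, &kmembase, &kmemlimit,
      vm_kmem_size, TRUE);
  kmem_map->system_map = 1;

A dedicated ARC sub-map would presumably be created with a similar
kmem_suballoc() call, plus whatever mechanism ends up accounting for
the physical pages actually wired into it.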
-- Andre