From owner-freebsd-arch@FreeBSD.ORG Mon Mar 23 06:51:55 2015
Date: Sun, 22 Mar 2015 23:51:54 -0700
Subject: vm_reserv and VM domains (was Re: A quick dumpster dive through the busdma allocation path..)
From: Adrian Chadd <adrian.chadd@gmail.com>
To: "freebsd-arch@freebsd.org"
List-Id: Discussion related to FreeBSD architecture

[snip]

So after some more digging and discussion, here's where I got to.

The problem seen here is that the KVA allocation / vm_object representation doesn't really have any concept of VM domains; page allocations just find space in the kmem vm_object and then request a physical page to back it. If that range is already backed by a superpage, the vm_reserv code will return a page from the existing reservation rather than allocate a new one - and that page then comes from the wrong domain.

The quick (!) hack would be to just break the superpage and allow adjacent pages to come from physical memory in different VM domains. In the VM domain case we don't want to blindly allocate like this - instead, we want to be sparser with KVA allocations so the superpages don't have to be broken down. It looks like the vm_reserv code doesn't need to know about domains until it's decided to break the superpage up to meet requirements.

It's been suggested that a layer be put between the malloc routines and the calls into the kmem KVA allocation, to allocate larger (aligned) KVA regions that are multiples of the vm_reserv superpage size. That'll be fine on amd64 and other platforms where there's plenty of KVA. But I don't really want to add another layer of complexity here.

So, does anyone else have any other ideas?

-adrian