Date: Thu, 22 Mar 2018 15:19:25 -1000 (HST)
From: Jeff Roberson <jroberson@jroberson.net>
To: Cy Schubert
cc: Justin Hibbits, Jeff Roberson, src-committers, svn-src-all@freebsd.org,
    svn-src-head@freebsd.org
Subject: Re: svn commit: r331369 - head/sys/vm
In-Reply-To: <201803230059.w2N0x2fw077291@slippy.cwsent.com>
References: <201803230059.w2N0x2fw077291@slippy.cwsent.com>

On Thu, 22 Mar 2018, Cy Schubert wrote:

> It broke i386 too.
I just did

TARGET_ARCH=i386 make buildworld
TARGET_ARCH=i386 make buildkernel

This worked for me?

Jeff

>
> Index: sys/vm/vm_reserv.c
> ===================================================================
> --- sys/vm/vm_reserv.c  (revision 331399)
> +++ sys/vm/vm_reserv.c  (working copy)
> @@ -45,8 +45,6 @@
>
>  #include <sys/param.h>
>  #include <sys/kernel.h>
> -#include <sys/counter.h>
> -#include <sys/ktr.h>
>  #include <sys/lock.h>
>  #include <sys/malloc.h>
>  #include <sys/mutex.h>
> @@ -55,6 +53,8 @@
>  #include <sys/sbuf.h>
>  #include <sys/sysctl.h>
>  #include <sys/systm.h>
> +#include <sys/counter.h>
> +#include <sys/ktr.h>
>  #include <sys/vmmeter.h>
>
>  #include <vm/vm.h>
>
> This is because sys/i386/include/counter.h uses critical_enter() and
> critical_exit() which are defined in sys/systm.h.
>
> It built nicely on my amd64's though.
>
> ~cy
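For illustration, a minimal userspace sketch of the failure mode described
above: a header-like block whose inline code calls a function that only a
later block declares.  The names here are invented stand-ins, not the
FreeBSD headers; the actual fix is simply ordering <sys/systm.h> before
<sys/counter.h>, as in the patch.

  /* order.c -- compile with: cc -Wall -std=c11 order.c */
  #include <stdio.h>

  /* Stand-in for <sys/systm.h>: supplies the critical section API. */
  static int crit_depth;
  static inline void critical_enter(void) { crit_depth++; }
  static inline void critical_exit(void)  { crit_depth--; }

  /*
   * Stand-in for i386's <sys/counter.h>: its inline code calls that API.
   * If this block were moved above the declarations, the calls would be
   * implicit declarations and the build would fail, which is the i386
   * breakage reported in this thread.
   */
  static unsigned long pcpu_counter;
  static inline void counter_add(unsigned long v)
  {
          critical_enter();       /* needs critical_enter() visible already */
          pcpu_counter += v;
          critical_exit();
  }

  int main(void)
  {
          counter_add(3);
          printf("counter=%lu depth=%d\n", pcpu_counter, crit_depth);
          return (0);
  }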
>
> In message , Jeff Roberson writes:
>> Thank you, working on it.  I had done a make universe before getting
>> review feedback.
>>
>> Jeff
>>
>> On Thu, 22 Mar 2018, Justin Hibbits wrote:
>>
>>> This broke gcc builds.
>>>
>>> On Thu, Mar 22, 2018 at 2:21 PM, Jeff Roberson wrote:
>>>> Author: jeff
>>>> Date: Thu Mar 22 19:21:11 2018
>>>> New Revision: 331369
>>>> URL: https://svnweb.freebsd.org/changeset/base/331369
>>>>
>>>> Log:
>>>>   Lock reservations with a dedicated lock in each reservation.  Protect the
>>>>   vmd_free_count with atomics.
>>>>
>>>>   This allows us to allocate and free from reservations without the free lock
>>>>   except where a superpage is allocated from the physical layer, which is
>>>>   roughly 1/512 of the operations on amd64.
>>>>
>>>>   Use the counter api to eliminate cache contention on counters.
>>>>
>>>>   Reviewed by:    markj
>>>>   Tested by:      pho
>>>>   Sponsored by:   Netflix, Dell/EMC Isilon
>>>>   Differential Revision:  https://reviews.freebsd.org/D14707
>>>>
>>>> Modified:
>>>>   head/sys/vm/vm_page.c
>>>>   head/sys/vm/vm_pagequeue.h
>>>>   head/sys/vm/vm_reserv.c
>>>>   head/sys/vm/vm_reserv.h
>>>>
>>>> Modified: head/sys/vm/vm_page.c
>>>> ==============================================================================
>>>> --- head/sys/vm/vm_page.c      Thu Mar 22 19:11:43 2018        (r331368)
>>>> +++ head/sys/vm/vm_page.c      Thu Mar 22 19:21:11 2018        (r331369)
>>>> @@ -177,7 +177,6 @@ static uma_zone_t fakepg_zone;
>>>>  static void vm_page_alloc_check(vm_page_t m);
>>>>  static void vm_page_clear_dirty_mask(vm_page_t m, vm_page_bits_t pagebits);
>>>>  static void vm_page_enqueue(uint8_t queue, vm_page_t m);
>>>> -static void vm_page_free_phys(struct vm_domain *vmd, vm_page_t m);
>>>>  static void vm_page_init(void *dummy);
>>>>  static int vm_page_insert_after(vm_page_t m, vm_object_t object,
>>>>      vm_pindex_t pindex, vm_page_t mpred);
>>>> @@ -1677,10 +1676,10 @@ vm_page_alloc_after(vm_object_t object, vm_pindex_t pi
>>>>   * for the request class and false otherwise.
>>>>   */
>>>>  int
>>>> -vm_domain_available(struct vm_domain *vmd, int req, int npages)
>>>> +vm_domain_allocate(struct vm_domain *vmd, int req, int npages)
>>>>  {
>>>> +        u_int limit, old, new;
>>>>
>>>> -        vm_domain_free_assert_locked(vmd);
>>>>          req = req & VM_ALLOC_CLASS_MASK;
>>>>
>>>>          /*
>>>> @@ -1688,15 +1687,34 @@ vm_domain_available(struct vm_domain *vmd, int req, in
>>>>           */
>>>>          if (curproc == pageproc && req != VM_ALLOC_INTERRUPT)
>>>>                  req = VM_ALLOC_SYSTEM;
>>>> +        if (req == VM_ALLOC_INTERRUPT)
>>>> +                limit = 0;
>>>> +        else if (req == VM_ALLOC_SYSTEM)
>>>> +                limit = vmd->vmd_interrupt_free_min;
>>>> +        else
>>>> +                limit = vmd->vmd_free_reserved;
>>>>
>>>> -        if (vmd->vmd_free_count >= npages + vmd->vmd_free_reserved ||
>>>> -            (req == VM_ALLOC_SYSTEM &&
>>>> -            vmd->vmd_free_count >= npages + vmd->vmd_interrupt_free_min) ||
>>>> -            (req == VM_ALLOC_INTERRUPT &&
>>>> -            vmd->vmd_free_count >= npages))
>>>> -                return (1);
>>>> +        /*
>>>> +         * Attempt to reserve the pages.  Fail if we're below the limit.
>>>> +         */
>>>> +        limit += npages;
>>>> +        old = vmd->vmd_free_count;
>>>> +        do {
>>>> +                if (old < limit)
>>>> +                        return (0);
>>>> +                new = old - npages;
>>>> +        } while (atomic_fcmpset_int(&vmd->vmd_free_count, &old, new) == 0);
>>>>
>>>> -        return (0);
>>>> +        /* Wake the page daemon if we've crossed the threshold. */
>>>> +        if (vm_paging_needed(vmd, new) && !vm_paging_needed(vmd, old))
>>>> +                pagedaemon_wakeup(vmd->vmd_domain);
>>>> +
>>>> +        /* Only update bitsets on transitions. */
>>>> +        if ((old >= vmd->vmd_free_min && new < vmd->vmd_free_min) ||
>>>> +            (old >= vmd->vmd_free_severe && new < vmd->vmd_free_severe))
>>>> +                vm_domain_set(vmd);
>>>> +
>>>> +        return (1);
>>>>  }
>>>>
>>>>  vm_page_t
>>>> @@ -1723,44 +1741,34 @@ vm_page_alloc_domain_after(vm_object_t object, vm_pind
>>>>  again:
>>>>          m = NULL;
>>>>  #if VM_NRESERVLEVEL > 0
>>>> +        /*
>>>> +         * Can we allocate the page from a reservation?
>>>> +         */
>>>>          if (vm_object_reserv(object) &&
>>>> -            (m = vm_reserv_extend(req, object, pindex, domain, mpred))
>>>> -            != NULL) {
>>>> +            ((m = vm_reserv_extend(req, object, pindex, domain, mpred)) != NULL ||
>>>> +            (m = vm_reserv_alloc_page(req, object, pindex, domain, mpred)) != NULL)) {
>>>>                  domain = vm_phys_domain(m);
>>>>                  vmd = VM_DOMAIN(domain);
>>>>                  goto found;
>>>>          }
>>>>  #endif
>>>>          vmd = VM_DOMAIN(domain);
>>>> -        vm_domain_free_lock(vmd);
>>>> -        if (vm_domain_available(vmd, req, 1)) {
>>>> +        if (vm_domain_allocate(vmd, req, 1)) {
>>>>                  /*
>>>> -                 * Can we allocate the page from a reservation?
>>>> +                 * If not, allocate it from the free page queues.
>>>>                  */
>>>> +                vm_domain_free_lock(vmd);
>>>> +                m = vm_phys_alloc_pages(domain, object != NULL ?
>>>> +                    VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT, 0);
>>>> +                vm_domain_free_unlock(vmd);
>>>> +                if (m == NULL) {
>>>> +                        vm_domain_freecnt_inc(vmd, 1);
>>>>  #if VM_NRESERVLEVEL > 0
>>>> -                if (!vm_object_reserv(object) ||
>>>> -                    (m = vm_reserv_alloc_page(object, pindex,
>>>> -                    domain, mpred)) == NULL)
>>>> +                        if (vm_reserv_reclaim_inactive(domain))
>>>> +                                goto again;
>>>>  #endif
>>>> -                {
>>>> -                        /*
>>>> -                         * If not, allocate it from the free page queues.
>>>> -                         */
>>>> -                        m = vm_phys_alloc_pages(domain, object != NULL ?
>>>> -                            VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT, 0);
>>>> -#if VM_NRESERVLEVEL > 0
>>>> -                        if (m == NULL && vm_reserv_reclaim_inactive(domain)) {
>>>> -                                m = vm_phys_alloc_pages(domain,
>>>> -                                    object != NULL ?
>>>> -                                    VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT,
>>>> -                                    0);
>>>> -                        }
>>>> -#endif
>>>>                  }
>>>>          }
>>>> -        if (m != NULL)
>>>> -                vm_domain_freecnt_dec(vmd, 1);
>>>> -        vm_domain_free_unlock(vmd);
>>>>          if (m == NULL) {
>>>>                  /*
>>>>                   * Not allocatable, give up.
>>>> @@ -1775,9 +1783,7 @@ again:
>>>>           */
>>>>          KASSERT(m != NULL, ("missing page"));
>>>>
>>>> -#if VM_NRESERVLEVEL > 0
>>>>  found:
>>>> -#endif
>>>
>>> 'found' is now declared, but unused on powerpc64.
>>>
>>> - Justin
>>>
>>
>
> --
> Cheers,
> Cy Schubert
> FreeBSD UNIX:    Web:  http://www.FreeBSD.org
>
>       The need of the many outweighs the greed of the few.
>
>
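For reference, the heart of the vm_domain_allocate() change quoted above is
the lockless reservation loop: subtract npages from the shared free count
with a compare-and-swap, failing if that would dip below the request class's
limit.  Below is a standalone sketch of the same pattern using C11 atomics
rather than the kernel's atomic(9) API; the counts and limits are made-up
numbers, not kernel values.

  /* fcmpset.c -- compile with: cc -Wall -std=c11 fcmpset.c */
  #include <stdatomic.h>
  #include <stdio.h>

  static _Atomic unsigned int free_count = 1000;

  /*
   * Try to reserve npages, failing once fewer than 'limit' pages would
   * remain (the role vmd_free_reserved / vmd_interrupt_free_min play in
   * the patch).  Returns 1 on success, 0 on failure.
   */
  static int
  reserve_pages(unsigned int npages, unsigned int limit)
  {
          unsigned int old, new;

          limit += npages;
          old = atomic_load(&free_count);
          do {
                  if (old < limit)
                          return (0);     /* would go below the reserve */
                  new = old - npages;
                  /*
                   * On failure 'old' is reloaded with the current value,
                   * mirroring atomic_fcmpset_int(9); the weak variant may
                   * also fail spuriously, which the retry loop absorbs.
                   */
          } while (!atomic_compare_exchange_weak(&free_count, &old, new));
          return (1);
  }

  int main(void)
  {
          printf("got=%d remaining=%u\n", reserve_pages(64, 128),
              atomic_load(&free_count));
          printf("got=%d remaining=%u\n", reserve_pages(900, 128),
              atomic_load(&free_count));
          return (0);
  }

The payoff is that the common allocation path updates one counter with a
single atomic instead of taking the domain free lock; per the commit log,
the lock is still taken around vm_phys_alloc_pages() when a superpage must
come from the physical layer.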