From owner-svn-src-head@freebsd.org  Tue Feb 20 10:13:16 2018
Message-Id: <201802201013.w1KADDfX038137@repo.freebsd.org>
From: Konstantin Belousov <kib@FreeBSD.org>
Date: Tue, 20 Feb 2018 10:13:13 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
    svn-src-head@freebsd.org
Subject: svn commit: r329636 - in head/sys: amd64/amd64 arm/arm
    arm/nvidia/drm2 arm64/arm64 compat/linuxkpi/common/src dev/drm2/i915
    dev/drm2/ttm i386/i386 mips/mips powerpc/aim powerpc/booke riscv/riscv vm
List-Id: SVN commit messages for the src tree for head/-current

Author: kib
Date: Tue Feb 20 10:13:13 2018
New Revision: 329636
URL: https://svnweb.freebsd.org/changeset/base/329636

Log:
  vm_wait() rework.

  Make vm_wait() take a vm_object argument, which specifies the domain
  set to wait on for the min condition to pass.  If there is no object
  associated with the wait, use curthread's policy domainset.  The
  mechanics of the wait in vm_wait() and vm_wait_domain() are supplied
  by the new helper vm_wait_doms(), which directly takes the bitmask of
  the domains to wait on for the min condition to pass.

  Eliminate pagedaemon_wait().  vm_domain_clear() handles the same
  operations.

  Eliminate the VM_WAIT and VM_WAITPFAULT macros; direct function calls
  are enough.

  Eliminate several control state variables from vm_domain, unneeded
  after the vm_wait() conversion.
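  As an illustration of the conversion pattern (a sketch modeled on the
  hunks below, not code taken verbatim from any single file):

	/*
	 * No object is associated with the allocation: vm_wait(NULL)
	 * sleeps on the curthread policy domainset.
	 */
	while ((m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
	    VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL)
		vm_wait(NULL);

	/*
	 * An object-backed allocation failed: drop the object lock,
	 * sleep on the object's affinity domains, and retry.
	 */
	VM_OBJECT_WUNLOCK(obj);
	vm_wait(obj);
	VM_OBJECT_WLOCK(obj);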
  Sketched and reviewed by:	jeff
  Tested by:	pho
  Sponsored by:	The FreeBSD Foundation, Mellanox Technologies
  Differential revision:	https://reviews.freebsd.org/D14384

Modified:
  head/sys/amd64/amd64/pmap.c
  head/sys/arm/arm/pmap-v4.c
  head/sys/arm/arm/pmap-v6.c
  head/sys/arm/nvidia/drm2/tegra_bo.c
  head/sys/arm64/arm64/pmap.c
  head/sys/compat/linuxkpi/common/src/linux_page.c
  head/sys/dev/drm2/i915/i915_gem.c
  head/sys/dev/drm2/i915/i915_gem_gtt.c
  head/sys/dev/drm2/ttm/ttm_bo_vm.c
  head/sys/dev/drm2/ttm/ttm_page_alloc.c
  head/sys/i386/i386/pmap.c
  head/sys/mips/mips/pmap.c
  head/sys/mips/mips/uma_machdep.c
  head/sys/powerpc/aim/mmu_oea.c
  head/sys/powerpc/aim/mmu_oea64.c
  head/sys/powerpc/booke/pmap.c
  head/sys/riscv/riscv/pmap.c
  head/sys/vm/vm_fault.c
  head/sys/vm/vm_page.c
  head/sys/vm/vm_pageout.c
  head/sys/vm/vm_pageout.h
  head/sys/vm/vm_pagequeue.h

Modified: head/sys/amd64/amd64/pmap.c
==============================================================================
--- head/sys/amd64/amd64/pmap.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/amd64/amd64/pmap.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -2675,7 +2675,7 @@ _pmap_allocpte(pmap_t pmap, vm_pindex_t ptepindex, str
 		RELEASE_PV_LIST_LOCK(lockp);
 		PMAP_UNLOCK(pmap);
 		PMAP_ASSERT_NOT_IN_DI();
-		VM_WAIT;
+		vm_wait(NULL);
 		PMAP_LOCK(pmap);
 	}

Modified: head/sys/arm/arm/pmap-v4.c
==============================================================================
--- head/sys/arm/arm/pmap-v4.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/arm/arm/pmap-v4.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -3248,7 +3248,7 @@ do_l2b_alloc:
 		if ((flags & PMAP_ENTER_NOSLEEP) == 0) {
 			PMAP_UNLOCK(pmap);
 			rw_wunlock(&pvh_global_lock);
-			VM_WAIT;
+			vm_wait(NULL);
 			rw_wlock(&pvh_global_lock);
 			PMAP_LOCK(pmap);
 			goto do_l2b_alloc;

Modified: head/sys/arm/arm/pmap-v6.c
==============================================================================
--- head/sys/arm/arm/pmap-v6.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/arm/arm/pmap-v6.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -2478,7 +2478,7 @@ _pmap_allocpte2(pmap_t pmap, vm_offset_t va, u_int fla
 		if ((flags & PMAP_ENTER_NOSLEEP) == 0) {
 			PMAP_UNLOCK(pmap);
 			rw_wunlock(&pvh_global_lock);
-			VM_WAIT;
+			vm_wait(NULL);
 			rw_wlock(&pvh_global_lock);
 			PMAP_LOCK(pmap);
 		}

Modified: head/sys/arm/nvidia/drm2/tegra_bo.c
==============================================================================
--- head/sys/arm/nvidia/drm2/tegra_bo.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/arm/nvidia/drm2/tegra_bo.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -114,7 +114,7 @@ retry:
 		if (tries < 3) {
 			if (!vm_page_reclaim_contig(pflags, npages, low,
 			    high, alignment, boundary))
-				VM_WAIT;
+				vm_wait(NULL);
 			tries++;
 			goto retry;
 		}

Modified: head/sys/arm64/arm64/pmap.c
==============================================================================
--- head/sys/arm64/arm64/pmap.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/arm64/arm64/pmap.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1409,7 +1409,7 @@ pmap_pinit(pmap_t pmap)
 	 */
 	while ((l0pt = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
 	    VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL)
-		VM_WAIT;
+		vm_wait(NULL);
 
 	l0phys = VM_PAGE_TO_PHYS(l0pt);
 	pmap->pm_l0 = (pd_entry_t *)PHYS_TO_DMAP(l0phys);
@@ -1449,7 +1449,7 @@ _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, str
 		if (lockp != NULL) {
 			RELEASE_PV_LIST_LOCK(lockp);
 			PMAP_UNLOCK(pmap);
-			VM_WAIT;
+			vm_wait(NULL);
 			PMAP_LOCK(pmap);
 		}

Modified: head/sys/compat/linuxkpi/common/src/linux_page.c
==============================================================================
--- head/sys/compat/linuxkpi/common/src/linux_page.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/compat/linuxkpi/common/src/linux_page.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -101,7 +101,7 @@ linux_alloc_pages(gfp_t flags, unsigned int order)
 			if (flags & M_WAITOK) {
 				if (!vm_page_reclaim_contig(req,
 				    npages, 0, pmax, PAGE_SIZE, 0)) {
-					VM_WAIT;
+					vm_wait(NULL);
 				}
 				flags &= ~M_WAITOK;
 				goto retry;

Modified: head/sys/dev/drm2/i915/i915_gem.c
==============================================================================
--- head/sys/dev/drm2/i915/i915_gem.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/dev/drm2/i915/i915_gem.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1561,7 +1561,7 @@ retry:
 		i915_gem_object_unpin(obj);
 		DRM_UNLOCK(dev);
 		VM_OBJECT_WUNLOCK(vm_obj);
-		VM_WAIT;
+		vm_wait(vm_obj);
 		goto retry;
 	}
 	page->valid = VM_PAGE_BITS_ALL;

Modified: head/sys/dev/drm2/i915/i915_gem_gtt.c
==============================================================================
--- head/sys/dev/drm2/i915/i915_gem_gtt.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/dev/drm2/i915/i915_gem_gtt.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -589,7 +589,7 @@ retry:
 		if (tries < 1) {
 			if (!vm_page_reclaim_contig(req, 1, 0, 0xffffffff,
 			    PAGE_SIZE, 0))
-				VM_WAIT;
+				vm_wait(NULL);
 			tries++;
 			goto retry;
 		}

Modified: head/sys/dev/drm2/ttm/ttm_bo_vm.c
==============================================================================
--- head/sys/dev/drm2/ttm/ttm_bo_vm.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/dev/drm2/ttm/ttm_bo_vm.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -246,7 +246,7 @@ reserve:
 	if (m1 == NULL) {
 		if (vm_page_insert(m, vm_obj, OFF_TO_IDX(offset))) {
 			VM_OBJECT_WUNLOCK(vm_obj);
-			VM_WAIT;
+			vm_wait(vm_obj);
 			VM_OBJECT_WLOCK(vm_obj);
 			ttm_mem_io_unlock(man);
 			ttm_bo_unreserve(bo);

Modified: head/sys/dev/drm2/ttm/ttm_page_alloc.c
==============================================================================
--- head/sys/dev/drm2/ttm/ttm_page_alloc.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/dev/drm2/ttm/ttm_page_alloc.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -168,7 +168,7 @@ ttm_vm_page_alloc_dma32(int req, vm_memattr_t memattr)
 			return (p);
 		if (!vm_page_reclaim_contig(req, 1, 0, 0xffffffff,
 		    PAGE_SIZE, 0))
-			VM_WAIT;
+			vm_wait(NULL);
 	}
 }
@@ -181,7 +181,7 @@ ttm_vm_page_alloc_any(int req, vm_memattr_t memattr)
 		p = vm_page_alloc(NULL, 0, req);
 		if (p != NULL)
 			break;
-		VM_WAIT;
+		vm_wait(NULL);
 	}
 	pmap_page_set_memattr(p, memattr);
 	return (p);

Modified: head/sys/i386/i386/pmap.c
==============================================================================
--- head/sys/i386/i386/pmap.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/i386/i386/pmap.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1893,10 +1893,9 @@ pmap_pinit(pmap_t pmap)
 		m = vm_page_alloc(NULL, 0, VM_ALLOC_NORMAL |
 		    VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO);
 		if (m == NULL)
-			VM_WAIT;
-		else {
+			vm_wait(NULL);
+		else
 			ptdpg[i++] = m;
-		}
 	}
 
 	pmap_qenter((vm_offset_t)pmap->pm_pdir, ptdpg, NPGPTD);
@@ -1945,7 +1944,7 @@ _pmap_allocpte(pmap_t pmap, u_int ptepindex, u_int fla
 		if ((flags & PMAP_ENTER_NOSLEEP) == 0) {
 			PMAP_UNLOCK(pmap);
 			rw_wunlock(&pvh_global_lock);
-			VM_WAIT;
+			vm_wait(NULL);
 			rw_wlock(&pvh_global_lock);
 			PMAP_LOCK(pmap);
 		}

Modified: head/sys/mips/mips/pmap.c
==============================================================================
--- head/sys/mips/mips/pmap.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/mips/mips/pmap.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1050,11 +1050,11 @@
 pmap_grow_direct_page(int req)
 {
 
 #ifdef __mips_n64
-	VM_WAIT;
+	vm_wait(NULL);
 #else
 	if (!vm_page_reclaim_contig(req, 1, 0, MIPS_KSEG0_LARGEST_PHYS,
 	    PAGE_SIZE, 0))
-		VM_WAIT;
+		vm_wait(NULL);
 #endif
 }

Modified: head/sys/mips/mips/uma_machdep.c
==============================================================================
--- head/sys/mips/mips/uma_machdep.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/mips/mips/uma_machdep.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -67,13 +67,11 @@ uma_small_alloc(uma_zone_t zone, vm_size_t bytes, int
 		    0, MIPS_KSEG0_LARGEST_PHYS, PAGE_SIZE, 0))
 			continue;
 #endif
-		if (m == NULL) {
-			if (wait & M_NOWAIT)
-				return (NULL);
-			else
-				VM_WAIT;
-		} else
+		if (m != NULL)
 			break;
+		if ((wait & M_NOWAIT) != 0)
+			return (NULL);
+		vm_wait(NULL);
 	}
 
 	pa = VM_PAGE_TO_PHYS(m);

Modified: head/sys/powerpc/aim/mmu_oea.c
==============================================================================
--- head/sys/powerpc/aim/mmu_oea.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/powerpc/aim/mmu_oea.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1124,7 +1124,7 @@ moea_enter(mmu_t mmu, pmap_t pmap, vm_offset_t va, vm_
 			if ((flags & PMAP_ENTER_NOSLEEP) != 0)
 				return (KERN_RESOURCE_SHORTAGE);
 			VM_OBJECT_ASSERT_UNLOCKED(m->object);
-			VM_WAIT;
+			vm_wait(NULL);
 		}
 	}

Modified: head/sys/powerpc/aim/mmu_oea64.c
==============================================================================
--- head/sys/powerpc/aim/mmu_oea64.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/powerpc/aim/mmu_oea64.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1384,7 +1384,7 @@ moea64_enter(mmu_t mmu, pmap_t pmap, vm_offset_t va, v
 			if ((flags & PMAP_ENTER_NOSLEEP) != 0)
 				return (KERN_RESOURCE_SHORTAGE);
 			VM_OBJECT_ASSERT_UNLOCKED(m->object);
-			VM_WAIT;
+			vm_wait(NULL);
 		}

 	/*

Modified: head/sys/powerpc/booke/pmap.c
==============================================================================
--- head/sys/powerpc/booke/pmap.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/powerpc/booke/pmap.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -789,7 +789,7 @@ ptbl_alloc(mmu_t mmu, pmap_t pmap, pte_t ** pdir, unsi
 				vm_wire_sub(i);
 				return (NULL);
 			}
-			VM_WAIT;
+			vm_wait(NULL);
 			rw_wlock(&pvh_global_lock);
 			PMAP_LOCK(pmap);
 		}
@@ -1033,7 +1033,7 @@ ptbl_alloc(mmu_t mmu, pmap_t pmap, unsigned int pdir_i
 				vm_wire_sub(i);
 				return (NULL);
 			}
-			VM_WAIT;
+			vm_wait(NULL);
 			rw_wlock(&pvh_global_lock);
 			PMAP_LOCK(pmap);
 		}
@@ -1346,7 +1346,7 @@ pdir_alloc(mmu_t mmu, pmap_t pmap, unsigned int pp2d_i
 		req = VM_ALLOC_NOOBJ | VM_ALLOC_WIRED;
 		while ((m = vm_page_alloc(NULL, pidx, req)) == NULL) {
 			PMAP_UNLOCK(pmap);
-			VM_WAIT;
+			vm_wait(NULL);
 			PMAP_LOCK(pmap);
 		}
 		mtbl[i] = m;

Modified: head/sys/riscv/riscv/pmap.c
==============================================================================
--- head/sys/riscv/riscv/pmap.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/riscv/riscv/pmap.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1203,7 +1203,7 @@ pmap_pinit(pmap_t pmap)
 	 */
 	while ((l1pt = vm_page_alloc(NULL, 0xdeadbeef, VM_ALLOC_NORMAL |
 	    VM_ALLOC_NOOBJ | VM_ALLOC_WIRED | VM_ALLOC_ZERO)) == NULL)
-		VM_WAIT;
+		vm_wait(NULL);
 
 	l1phys = VM_PAGE_TO_PHYS(l1pt);
 	pmap->pm_l1 = (pd_entry_t *)PHYS_TO_DMAP(l1phys);
@@ -1252,7 +1252,7 @@ _pmap_alloc_l3(pmap_t pmap, vm_pindex_t ptepindex, str
 			RELEASE_PV_LIST_LOCK(lockp);
 			PMAP_UNLOCK(pmap);
 			rw_runlock(&pvh_global_lock);
-			VM_WAIT;
+			vm_wait(NULL);
 			rw_rlock(&pvh_global_lock);
 			PMAP_LOCK(pmap);
 		}

Modified: head/sys/vm/vm_fault.c
==============================================================================
--- head/sys/vm/vm_fault.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/vm/vm_fault.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -787,7 +787,7 @@ RetryFault:;
 		}
 		if (fs.m == NULL) {
 			unlock_and_deallocate(&fs);
-			VM_WAITPFAULT;
+			vm_waitpfault();
 			goto RetryFault;
 		}
 	}
@@ -1685,7 +1685,7 @@ again:
 		if (dst_m == NULL) {
 			VM_OBJECT_WUNLOCK(dst_object);
 			VM_OBJECT_RUNLOCK(object);
-			VM_WAIT;
+			vm_wait(dst_object);
 			VM_OBJECT_WLOCK(dst_object);
 			goto again;
 		}

Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/vm/vm_page.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -2567,7 +2567,7 @@ CTASSERT(powerof2(NRUNS));
  * Returns true if reclamation is successful and false otherwise.  Since
  * relocation requires the allocation of physical pages, reclamation may
  * fail due to a shortage of free pages.  When reclamation fails, callers
- * are expected to perform VM_WAIT before retrying a failed allocation
+ * are expected to perform vm_wait() before retrying a failed allocation
  * operation, e.g., vm_page_alloc_contig().
 *
  * The caller must always specify an allocation class through "req".
@@ -2767,15 +2767,42 @@ vm_wait_severe(void)
 u_int
 vm_wait_count(void)
 {
-	u_int cnt;
-	int i;

-	cnt = 0;
-	for (i = 0; i < vm_ndomains; i++)
-		cnt += VM_DOMAIN(i)->vmd_waiters;
-	cnt += vm_severe_waiters + vm_min_waiters;
+	return (vm_severe_waiters + vm_min_waiters);
+}

-	return (cnt);
+static void
+vm_wait_doms(const domainset_t *wdoms)
+{
+
+	/*
+	 * We use racey wakeup synchronization to avoid expensive global
+	 * locking for the pageproc when sleeping with a non-specific vm_wait.
+	 * To handle this, we only sleep for one tick in this instance.  It
+	 * is expected that most allocations for the pageproc will come from
+	 * kmem or vm_page_grab* which will use the more specific and
+	 * race-free vm_wait_domain().
+	 */
+	if (curproc == pageproc) {
+		mtx_lock(&vm_domainset_lock);
+		vm_pageproc_waiters++;
+		msleep(&vm_pageproc_waiters, &vm_domainset_lock, PVM,
+		    "pageprocwait", 1);
+		mtx_unlock(&vm_domainset_lock);
+	} else {
+		/*
+		 * XXX Ideally we would wait only until the allocation could
+		 * be satisfied.  This condition can cause new allocators to
+		 * consume all freed pages while old allocators wait.
+		 */
+		mtx_lock(&vm_domainset_lock);
+		if (DOMAINSET_SUBSET(&vm_min_domains, wdoms)) {
+			vm_min_waiters++;
+			msleep(&vm_min_domains, &vm_domainset_lock, PVM,
+			    "vmwait", 0);
+		}
+		mtx_unlock(&vm_domainset_lock);
+	}
 }

 /*
@@ -2788,6 +2815,7 @@ void
 vm_wait_domain(int domain)
 {
 	struct vm_domain *vmd;
+	domainset_t wdom;

 	vmd = VM_DOMAIN(domain);
 	vm_domain_free_assert_locked(vmd);
@@ -2797,50 +2825,40 @@ vm_wait_domain(int domain)
 		msleep(&vmd->vmd_pageout_pages_needed,
 		    vm_domain_free_lockptr(vmd), PDROP | PSWP, "VMWait", 0);
 	} else {
+		vm_domain_free_unlock(vmd);
 		if (pageproc == NULL)
 			panic("vm_wait in early boot");
-		pagedaemon_wait(domain, PVM, "vmwait");
+		DOMAINSET_ZERO(&wdom);
+		DOMAINSET_SET(vmd->vmd_domain, &wdom);
+		vm_wait_doms(&wdom);
 	}
 }

 /*
- * vm_wait:	(also see VM_WAIT macro)
+ * vm_wait:
  *
- *	Sleep until free pages are available for allocation.
- *	- Called in various places after failed memory allocations.
+ *	Sleep until free pages are available for allocation in the
+ *	affinity domains of the obj.  If obj is NULL, the domain set
+ *	for the calling thread is used.
+ *	Called in various places after failed memory allocations.
  */
 void
-vm_wait(void)
+vm_wait(vm_object_t obj)
 {
+	struct domainset *d;

+	d = NULL;
 	/*
-	 * We use racey wakeup synchronization to avoid expensive global
-	 * locking for the pageproc when sleeping with a non-specific vm_wait.
-	 * To handle this, we only sleep for one tick in this instance.  It
-	 * is expected that most allocations for the pageproc will come from
-	 * kmem or vm_page_grab* which will use the more specific and
-	 * race-free vm_wait_domain().
+	 * Carefully fetch pointers only once: the struct domainset
+	 * itself is immutable but the pointer might change.
 	 */
-	if (curproc == pageproc) {
-		mtx_lock(&vm_domainset_lock);
-		vm_pageproc_waiters++;
-		msleep(&vm_pageproc_waiters, &vm_domainset_lock, PVM,
-		    "pageprocwait", 1);
-		mtx_unlock(&vm_domainset_lock);
-	} else {
-		/*
-		 * XXX Ideally we would wait only until the allocation could
-		 * be satisfied.  This condition can cause new allocators to
-		 * consume all freed pages while old allocators wait.
-		 */
-		mtx_lock(&vm_domainset_lock);
-		if (vm_page_count_min()) {
-			vm_min_waiters++;
-			msleep(&vm_min_domains, &vm_domainset_lock, PVM,
-			    "vmwait", 0);
-		}
-		mtx_unlock(&vm_domainset_lock);
-	}
+	if (obj != NULL)
+		d = obj->domain.dr_policy;
+	if (d == NULL)
+		d = curthread->td_domain.dr_policy;
+
+	vm_wait_doms(&d->ds_mask);
 }

 /*
@@ -2877,7 +2895,7 @@ vm_domain_alloc_fail(struct vm_domain *vmd, vm_object_
 }

 /*
- * vm_waitpfault:	(also see VM_WAITPFAULT macro)
+ * vm_waitpfault:
  *
  *	Sleep until free pages are available for allocation.
  *	- Called only in vm_fault so that processes page faulting
@@ -3071,10 +3089,6 @@ vm_domain_free_wakeup(struct vm_domain *vmd)
 	 * high water mark. And wakeup scheduler process if we have
 	 * lots of memory. this process will swapin processes.
 	 */
-	if (vmd->vmd_pages_needed && !vm_paging_min(vmd)) {
-		vmd->vmd_pages_needed = false;
-		wakeup(&vmd->vmd_free_count);
-	}
 	if ((vmd->vmd_minset && !vm_paging_min(vmd)) ||
 	    (vmd->vmd_severeset && !vm_paging_severe(vmd)))
 		vm_domain_clear(vmd);

Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/vm/vm_pageout.c	Tue Feb 20 10:13:13 2018	(r329636)
@@ -1750,8 +1750,6 @@ vm_pageout_oom(int shortage)
 	}
 	sx_sunlock(&allproc_lock);
 	if (bigproc != NULL) {
-		int i;
-
 		if (vm_panic_on_oom != 0)
 			panic("out of swap space");
 		PROC_LOCK(bigproc);
@@ -1759,8 +1757,6 @@ vm_pageout_oom(int shortage)
 		sched_nice(bigproc, PRIO_MIN);
 		_PRELE(bigproc);
 		PROC_UNLOCK(bigproc);
-		for (i = 0; i < vm_ndomains; i++)
-			wakeup(&VM_DOMAIN(i)->vmd_free_count);
 	}
 }
@@ -1796,23 +1792,6 @@ vm_pageout_worker(void *arg)
 		vm_domain_free_lock(vmd);

 		/*
-		 * Generally, after a level >= 1 scan, if there are enough
-		 * free pages to wakeup the waiters, then they are already
-		 * awake.  A call to vm_page_free() during the scan awakened
-		 * them.  However, in the following case, this wakeup serves
-		 * to bound the amount of time that a thread might wait.
-		 * Suppose a thread's call to vm_page_alloc() fails, but
-		 * before that thread calls VM_WAIT, enough pages are freed by
-		 * other threads to alleviate the free page shortage.  The
-		 * thread will, nonetheless, wait until another page is freed
-		 * or this wakeup is performed.
-		 */
-		if (vmd->vmd_pages_needed && !vm_paging_min(vmd)) {
-			vmd->vmd_pages_needed = false;
-			wakeup(&vmd->vmd_free_count);
-		}
-
-		/*
 		 * Do not clear vmd_pageout_wanted until we reach our free page
 		 * target.  Otherwise, we may be awakened over and over again,
 		 * wasting CPU time.
@@ -1840,16 +1819,12 @@ vm_pageout_worker(void *arg)
 			pass++;
 		} else {
 			/*
-			 * Yes.  If threads are still sleeping in VM_WAIT
+			 * Yes.  If threads are still sleeping in vm_wait()
 			 * then we immediately start a new scan.  Otherwise,
 			 * sleep until the next wakeup or until pages need to
 			 * have their reference stats updated.
 			 */
-			if (vmd->vmd_pages_needed) {
-				vm_domain_free_unlock(vmd);
-				if (pass == 0)
-					pass++;
-			} else if (mtx_sleep(&vmd->vmd_pageout_wanted,
+			if (mtx_sleep(&vmd->vmd_pageout_wanted,
 			    vm_domain_free_lockptr(vmd), PDROP | PVM,
 			    "psleep", hz) == 0) {
 				VM_CNT_INC(v_pdwakeups);
@@ -1999,34 +1974,4 @@ pagedaemon_wakeup(int domain)
 		vmd->vmd_pageout_wanted = true;
 		wakeup(&vmd->vmd_pageout_wanted);
 	}
-}
-
-/*
- * Wake up the page daemon and wait for it to reclaim free pages.
- *
- * This function returns with the free queues mutex unlocked.
- */
-void
-pagedaemon_wait(int domain, int pri, const char *wmesg)
-{
-	struct vm_domain *vmd;
-
-	vmd = VM_DOMAIN(domain);
-	vm_domain_free_assert_locked(vmd);
-
-	/*
-	 * vmd_pageout_wanted may have been set by an advisory wakeup, but if
-	 * the page daemon is running on a CPU, the wakeup will have been lost.
-	 * Thus, deliver a potentially spurious wakeup to ensure that the page
-	 * daemon has been notified of the shortage.
-	 */
-	if (!vmd->vmd_pageout_wanted || !vmd->vmd_pages_needed) {
-		vmd->vmd_pageout_wanted = true;
-		wakeup(&vmd->vmd_pageout_wanted);
-	}
-	vmd->vmd_pages_needed = true;
-	vmd->vmd_waiters++;
-	msleep(&vmd->vmd_free_count, vm_domain_free_lockptr(vmd), PDROP | pri,
-	    wmesg, 0);
-	vmd->vmd_waiters--;
 }

Modified: head/sys/vm/vm_pageout.h
==============================================================================
--- head/sys/vm/vm_pageout.h	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/vm/vm_pageout.h	Tue Feb 20 10:13:13 2018	(r329636)
@@ -93,11 +93,8 @@ extern int vm_pageout_page_count;
  *	Signal pageout-daemon and wait for it.
  */

-void pagedaemon_wait(int domain, int pri, const char *wmesg);
 void pagedaemon_wakeup(int domain);
-#define	VM_WAIT vm_wait()
-#define	VM_WAITPFAULT vm_waitpfault()
-void vm_wait(void);
+void vm_wait(vm_object_t obj);
 void vm_waitpfault(void);
 void vm_wait_domain(int domain);
 void vm_wait_min(void);

Modified: head/sys/vm/vm_pagequeue.h
==============================================================================
--- head/sys/vm/vm_pagequeue.h	Tue Feb 20 07:30:57 2018	(r329635)
+++ head/sys/vm/vm_pagequeue.h	Tue Feb 20 10:13:13 2018	(r329636)
@@ -93,8 +93,6 @@ struct vm_domain {
 	int vmd_pageout_pages_needed;	/* page daemon waiting for pages? */
 	int vmd_pageout_deficit;	/* Estimated number of pages deficit */
-	int vmd_waiters;		/* Pageout waiters. */
-	bool vmd_pages_needed;		/* Are threads waiting for free pages? */
 	bool vmd_pageout_wanted;	/* pageout daemon wait channel */
 	bool vmd_minset;		/* Are we in vm_min_domains? */
 	bool vmd_severeset;		/* Are we in vm_severe_domains? */
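
For reference, the wait primitives after this change reduce to the
following (an illustrative summary; the comments paraphrase the
committed code):

	vm_wait(obj);		/* Sleep on obj's policy domains; with
				 * obj == NULL, fall back to the
				 * curthread policy domainset. */
	vm_wait_domain(domain);	/* Sleep on exactly one domain; entered
				 * with that domain's free lock held. */
	vm_waitpfault();	/* Page-fault path; replaces the old
				 * VM_WAITPFAULT macro. */

Internally, vm_wait_doms() puts a thread to sleep only while every
domain in the passed bitmask remains in the min condition; if any
eligible domain has free pages above the min threshold, the caller
returns immediately and can retry its allocation.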