From owner-svn-src-head@freebsd.org Thu Sep  6 19:28:55 2018
Message-Id: <201809061928.w86JSrSb067799@repo.freebsd.org>
From: Mark Johnston
Date: Thu, 6 Sep 2018 19:28:53 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r338507 - in head/sys: sys vm
Author: markj
Date: Thu Sep  6 19:28:52 2018
New Revision: 338507
URL: https://svnweb.freebsd.org/changeset/base/338507

Log:
  Avoid resource deadlocks when one domain has exhausted its memory.  Attempt
  other allowed domains if the requested domain is below the minimum paging
  threshold.  Block in fork only if all domains available to the forking
  thread are below the severe threshold rather than any.

  Submitted by:	jeff
  Reported by:	mjg
  Reviewed by:	alc, kib, markj
  Approved by:	re (rgrimes)
  Differential Revision:	https://reviews.freebsd.org/D16191

Modified:
  head/sys/sys/vmmeter.h
  head/sys/vm/vm_domainset.c
  head/sys/vm/vm_domainset.h
  head/sys/vm/vm_fault.c
  head/sys/vm/vm_glue.c
  head/sys/vm/vm_page.c
  head/sys/vm/vm_pageout.h

Modified: head/sys/sys/vmmeter.h
==============================================================================
--- head/sys/sys/vmmeter.h	Thu Sep  6 19:21:31 2018	(r338506)
+++ head/sys/sys/vmmeter.h	Thu Sep  6 19:28:52 2018	(r338507)
@@ -187,6 +187,13 @@ vm_page_count_severe(void)
 	return (!DOMAINSET_EMPTY(&vm_severe_domains));
 }
 
+static inline int
+vm_page_count_severe_set(domainset_t *mask)
+{
+
+	return (DOMAINSET_SUBSET(&vm_severe_domains, mask));
+}
+
 /*
  * Return TRUE if we are under our minimum low-free-pages threshold.
  *

Modified: head/sys/vm/vm_domainset.c
==============================================================================
--- head/sys/vm/vm_domainset.c	Thu Sep  6 19:21:31 2018	(r338506)
+++ head/sys/vm/vm_domainset.c	Thu Sep  6 19:28:52 2018	(r338507)
@@ -100,6 +100,8 @@ vm_domainset_iter_init(struct vm_domainset_iter *di, s
 		pindex += (((uintptr_t)obj) / sizeof(*obj));
 		di->di_offset = pindex;
 	}
+	/* Skip zones below min on the first pass. */
+	di->di_minskip = true;
 }
 
 static void
@@ -213,6 +215,8 @@ vm_domainset_iter_page_init(struct vm_domainset_iter *
 	*req = (di->di_flags & ~(VM_ALLOC_WAITOK | VM_ALLOC_WAITFAIL)) |
 	    VM_ALLOC_NOWAIT;
 	vm_domainset_iter_first(di, domain);
+	if (DOMAINSET_ISSET(*domain, &vm_min_domains))
+		vm_domainset_iter_page(di, domain, req);
 }
 
 int
@@ -227,8 +231,15 @@ vm_domainset_iter_page(struct vm_domainset_iter *di, i
 		return (ENOMEM);
 
 	/* If there are more domains to visit we run the iterator. */
-	if (--di->di_n != 0) {
+	while (--di->di_n != 0) {
 		vm_domainset_iter_next(di, domain);
+		if (!di->di_minskip ||
+		    !DOMAINSET_ISSET(*domain, &vm_min_domains))
+			return (0);
+	}
+	if (di->di_minskip) {
+		di->di_minskip = false;
+		vm_domainset_iter_first(di, domain);
 		return (0);
 	}
@@ -258,6 +269,8 @@ vm_domainset_iter_malloc_init(struct vm_domainset_iter
 	di->di_flags = *flags;
 	*flags = (di->di_flags & ~M_WAITOK) | M_NOWAIT;
 	vm_domainset_iter_first(di, domain);
+	if (DOMAINSET_ISSET(*domain, &vm_min_domains))
+		vm_domainset_iter_malloc(di, domain, flags);
 }
 
 int
@@ -265,8 +278,17 @@ vm_domainset_iter_malloc(struct vm_domainset_iter *di,
 {
 
 	/* If there are more domains to visit we run the iterator. */
-	if (--di->di_n != 0) {
+	while (--di->di_n != 0) {
 		vm_domainset_iter_next(di, domain);
+		if (!di->di_minskip ||
+		    !DOMAINSET_ISSET(*domain, &vm_min_domains))
+			return (0);
+	}
+
+	/* If we skipped zones below min start the search from the beginning. */
+	if (di->di_minskip) {
+		di->di_minskip = false;
+		vm_domainset_iter_first(di, domain);
 		return (0);
 	}

Modified: head/sys/vm/vm_domainset.h
==============================================================================
--- head/sys/vm/vm_domainset.h	Thu Sep  6 19:21:31 2018	(r338506)
+++ head/sys/vm/vm_domainset.h	Thu Sep  6 19:28:52 2018	(r338507)
@@ -34,9 +34,10 @@
 struct vm_domainset_iter {
 	struct domainset	*di_domain;
 	int			*di_iter;
 	vm_pindex_t		di_offset;
-	int			di_policy;
 	int			di_flags;
-	int			di_n;
+	uint16_t		di_policy;
+	domainid_t		di_n;
+	bool			di_minskip;
 };
 
 int	vm_domainset_iter_page(struct vm_domainset_iter *, int *, int *);
@@ -45,5 +46,7 @@ void	vm_domainset_iter_page_init(struct vm_domainset_i
 int	vm_domainset_iter_malloc(struct vm_domainset_iter *, int *, int *);
 void	vm_domainset_iter_malloc_init(struct vm_domainset_iter *,
 	    struct vm_object *, int *, int *);
+
+void	vm_wait_doms(const domainset_t *);
 
 #endif /* __VM_DOMAINSET_H__ */

Modified: head/sys/vm/vm_fault.c
==============================================================================
--- head/sys/vm/vm_fault.c	Thu Sep  6 19:21:31 2018	(r338506)
+++ head/sys/vm/vm_fault.c	Thu Sep  6 19:28:52 2018	(r338507)
@@ -548,6 +548,7 @@ vm_fault_hold(vm_map_t map, vm_offset_t vaddr, vm_prot
 {
 	struct faultstate fs;
 	struct vnode *vp;
+	struct domainset *dset;
 	vm_object_t next_object, retry_object;
 	vm_offset_t e_end, e_start;
 	vm_pindex_t retry_pindex;
@@ -791,7 +792,11 @@ RetryFault:;
 			 * there, and allocation can fail, causing
 			 * restart and new reading of the p_flag.
 			 */
-			if (!vm_page_count_severe() || P_KILLED(curproc)) {
+			dset = fs.object->domain.dr_policy;
+			if (dset == NULL)
+				dset = curthread->td_domain.dr_policy;
+			if (!vm_page_count_severe_set(&dset->ds_mask) ||
+			    P_KILLED(curproc)) {
 #if VM_NRESERVLEVEL > 0
 				vm_object_color(fs.object, atop(vaddr) -
 				    fs.pindex);
@@ -806,7 +811,7 @@ RetryFault:;
 			}
 			if (fs.m == NULL) {
 				unlock_and_deallocate(&fs);
-				vm_waitpfault();
+				vm_waitpfault(dset);
 				goto RetryFault;
 			}
 		}

Modified: head/sys/vm/vm_glue.c
==============================================================================
--- head/sys/vm/vm_glue.c	Thu Sep  6 19:21:31 2018	(r338506)
+++ head/sys/vm/vm_glue.c	Thu Sep  6 19:28:52 2018	(r338507)
@@ -92,6 +92,7 @@ __FBSDID("$FreeBSD$");
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -534,6 +535,7 @@ vm_forkproc(struct thread *td, struct proc *p2, struct
     struct vmspace *vm2, int flags)
 {
 	struct proc *p1 = td->td_proc;
+	struct domainset *dset;
 	int error;
 
 	if ((flags & RFPROC) == 0) {
@@ -557,9 +559,9 @@ vm_forkproc(struct thread *td, struct proc *p2, struct
 		p2->p_vmspace = p1->p_vmspace;
 		atomic_add_int(&p1->p_vmspace->vm_refcnt, 1);
 	}
-
-	while (vm_page_count_severe()) {
-		vm_wait_severe();
+	dset = td2->td_domain.dr_policy;
+	while (vm_page_count_severe_set(&dset->ds_mask)) {
+		vm_wait_doms(&dset->ds_mask);
 	}
 
 	if ((flags & RFMEM) == 0) {

Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c	Thu Sep  6 19:21:31 2018	(r338506)
+++ head/sys/vm/vm_page.c	Thu Sep  6 19:28:52 2018	(r338507)
@@ -2935,7 +2935,7 @@ vm_wait_count(void)
 	return (vm_severe_waiters + vm_min_waiters + vm_pageproc_waiters);
 }
 
-static void
+void
 vm_wait_doms(const domainset_t *wdoms)
 {
 
@@ -2961,10 +2961,10 @@ vm_wait_doms(const domainset_t *wdoms)
 		mtx_lock(&vm_domainset_lock);
 		if (DOMAINSET_SUBSET(&vm_min_domains, wdoms)) {
 			vm_min_waiters++;
-			msleep(&vm_min_domains, &vm_domainset_lock, PVM,
-			    "vmwait", 0);
-		}
-		mtx_unlock(&vm_domainset_lock);
+			msleep(&vm_min_domains, &vm_domainset_lock,
+			    PVM | PDROP, "vmwait", 0);
+		} else
+			mtx_unlock(&vm_domainset_lock);
 	}
 }
@@ -3069,15 +3069,21 @@ vm_domain_alloc_fail(struct vm_domain *vmd, vm_object_
  * this balance without careful testing first.
  */
 void
-vm_waitpfault(void)
+vm_waitpfault(struct domainset *dset)
 {
 
+	/*
+	 * XXX Ideally we would wait only until the allocation could
+	 * be satisfied.  This condition can cause new allocators to
+	 * consume all freed pages while old allocators wait.
+	 */
 	mtx_lock(&vm_domainset_lock);
-	if (vm_page_count_min()) {
+	if (DOMAINSET_SUBSET(&vm_min_domains, &dset->ds_mask)) {
 		vm_min_waiters++;
-		msleep(&vm_min_domains, &vm_domainset_lock, PUSER, "pfault", 0);
-	}
-	mtx_unlock(&vm_domainset_lock);
+		msleep(&vm_min_domains, &vm_domainset_lock, PUSER | PDROP,
+		    "pfault", 0);
+	} else
+		mtx_unlock(&vm_domainset_lock);
 }
 
 struct vm_pagequeue *

Modified: head/sys/vm/vm_pageout.h
==============================================================================
--- head/sys/vm/vm_pageout.h	Thu Sep  6 19:21:31 2018	(r338506)
+++ head/sys/vm/vm_pageout.h	Thu Sep  6 19:28:52 2018	(r338507)
@@ -96,7 +96,7 @@ extern int vm_pageout_page_count;
  */
 void vm_wait(vm_object_t obj);
-void vm_waitpfault(void);
+void vm_waitpfault(struct domainset *);
 void vm_wait_domain(int domain);
 void vm_wait_min(void);
 void vm_wait_severe(void);