From owner-svn-src-all@freebsd.org Fri May 31 21:02:44 2019
From: Doug Moore <dougm@FreeBSD.org>
Date: Fri, 31 May 2019 21:02:42 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r348484 - head/sys/vm
Message-Id: <201905312102.x4VL2gvw071809@repo.freebsd.org>

Author: dougm
Date: Fri May 31 21:02:42 2019
New Revision: 348484

URL: https://svnweb.freebsd.org/changeset/base/348484

Log:
  The function vm_phys_free_contig invokes vm_phys_free_pages for every
  power-of-two page block it frees, launching an unsuccessful search for
  a buddy to pair up with each time.  The only possible buddy-up mergers
  are across the boundaries of the freed region, so change
  vm_phys_free_contig simply to enqueue the freed interior blocks, via a
  new function vm_phys_enqueue_contig, and then call vm_phys_free_pages
  on the bounding blocks to create as big a cross-boundary block as
  possible after buddy-merging.

  The only callers of vm_phys_free_contig at the moment call it in
  situations where merging blocks across the boundary is clearly
  impossible, so just call vm_phys_enqueue_contig in those places and
  avoid trying to buddy-up at all.
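To illustrate the splitting described above: each block's order is limited
by the alignment of its starting page frame number, so the walk proceeds in
three phases (blocks of increasing, maximum, and diminishing size).  The
sketch below models this in plain userland C; it is not the kernel code.
NFREEORDER stands in for VM_NFREEORDER (13 on amd64, assumed here),
max_order mirrors the helper this commit adds, and ffsl/flsl are the
routines from FreeBSD's <strings.h>.

#include <stdio.h>
#include <strings.h>

#define	NFREEORDER	13	/* stands in for VM_NFREEORDER; an assumption */

/* Largest buddy order allowed by the alignment of pfn. */
static int
max_order(long pfn)
{
	int bit;

	/*
	 * ffsl() returns the 1-based index of the lowest set bit, or 0
	 * when pfn == 0; treat pfn == 0 and very high alignment as
	 * capable of the maximum order, as the kernel's unsigned min()
	 * trick does.
	 */
	bit = ffsl(pfn);
	if (bit == 0 || bit - 1 >= NFREEORDER - 1)
		return (NFREEORDER - 1);
	return (bit - 1);
}

int
main(void)
{
	long pfn = 3, end = 3 + 31;	/* arbitrary example range [3, 34) */
	int order;

	/* Blocks of increasing size, until pfn is maximally aligned. */
	while ((order = max_order(pfn)) < NFREEORDER - 1 &&
	    pfn + (1L << order) <= end) {
		printf("enqueue pfn %ld, order %d\n", pfn, order);
		pfn += 1L << order;
	}
	/* Blocks of maximum size. */
	while (pfn + (1L << order) <= end) {
		printf("enqueue pfn %ld, order %d\n", pfn, order);
		pfn += 1L << order;
	}
	/* Blocks of diminishing size, covering the residue. */
	while (pfn < end) {
		order = flsl(end - pfn) - 1;
		printf("enqueue pfn %ld, order %d\n", pfn, order);
		pfn += 1L << order;
	}
	return (0);
}

For the range [3, 34) this prints blocks of order 0, 2, 3, and 4, then an
order-1 block for the two-page residue; every block is naturally aligned,
which is the invariant the buddy free lists require.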
  One beneficiary of this change is in breaking reservations.  For the
  case where memory is freed in breaking a reservation with only the
  first and last pages allocated, the number of cycles consumed by the
  operation drops about 11% with this change.

  Suggested by:	alc
  Reviewed by:	alc
  Approved by:	kib, markj (mentors)
  Differential Revision:	https://reviews.freebsd.org/D16901

Modified:
  head/sys/vm/vm_page.c
  head/sys/vm/vm_phys.c
  head/sys/vm/vm_phys.h
  head/sys/vm/vm_reserv.c

Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c	Fri May 31 20:36:32 2019	(r348483)
+++ head/sys/vm/vm_page.c	Fri May 31 21:02:42 2019	(r348484)
@@ -833,7 +833,7 @@ vm_page_startup(vm_offset_t vaddr)
 		vmd = VM_DOMAIN(seg->domain);
 		vm_domain_free_lock(vmd);
-		vm_phys_free_contig(m, pagecount);
+		vm_phys_enqueue_contig(m, pagecount);
 		vm_domain_free_unlock(vmd);
 		vm_domain_freecnt_inc(vmd, pagecount);
 		vm_cnt.v_page_count += (u_int)pagecount;

Modified: head/sys/vm/vm_phys.c
==============================================================================
--- head/sys/vm/vm_phys.c	Fri May 31 20:36:32 2019	(r348483)
+++ head/sys/vm/vm_phys.c	Fri May 31 21:02:42 2019	(r348484)
@@ -1095,14 +1095,35 @@ vm_phys_free_pages(vm_page_t m, int order)
 }
 
 /*
- * Free a contiguous, arbitrarily sized set of physical pages.
+ * Return the largest possible order of a set of pages starting at m.
+ */
+static int
+max_order(vm_page_t m)
+{
+
+	/*
+	 * Unsigned "min" is used here so that "order" is assigned
+	 * "VM_NFREEORDER - 1" when "m"'s physical address is zero
+	 * or the low-order bits of its physical address are zero
+	 * because the size of a physical address exceeds the size of
+	 * a long.
+	 */
+	return (min(ffsl(VM_PAGE_TO_PHYS(m) >> PAGE_SHIFT) - 1,
+	    VM_NFREEORDER - 1));
+}
+
+/*
+ * Free a contiguous, arbitrarily sized set of physical pages, without
+ * merging across set boundaries.
  *
  * The free page queues must be locked.
  */
 void
-vm_phys_free_contig(vm_page_t m, u_long npages)
+vm_phys_enqueue_contig(vm_page_t m, u_long npages)
 {
-	u_int n;
+	struct vm_freelist *fl;
+	struct vm_phys_seg *seg;
+	vm_page_t m_end;
 	int order;
 
 	/*
@@ -1110,29 +1131,68 @@ vm_phys_free_contig(vm_page_t m, u_long npages)
 	 * possible power-of-two-sized subsets.
 	 */
 	vm_domain_free_assert_locked(vm_pagequeue_domain(m));
-	for (;; npages -= n) {
-		/*
-		 * Unsigned "min" is used here so that "order" is assigned
-		 * "VM_NFREEORDER - 1" when "m"'s physical address is zero
-		 * or the low-order bits of its physical address are zero
-		 * because the size of a physical address exceeds the size of
-		 * a long.
-		 */
-		order = min(ffsl(VM_PAGE_TO_PHYS(m) >> PAGE_SHIFT) - 1,
-		    VM_NFREEORDER - 1);
-		n = 1 << order;
-		if (npages < n)
-			break;
-		vm_phys_free_pages(m, order);
-		m += n;
+	seg = &vm_phys_segs[m->segind];
+	fl = (*seg->free_queues)[m->pool];
+	m_end = m + npages;
+	/* Free blocks of increasing size. */
+	while ((order = max_order(m)) < VM_NFREEORDER - 1 &&
+	    m + (1 << order) <= m_end) {
+		KASSERT(seg == &vm_phys_segs[m->segind],
+		    ("%s: page range [%p,%p) spans multiple segments",
+		    __func__, m_end - npages, m));
+		vm_freelist_add(fl, m, order, 1);
+		m += 1 << order;
 	}
-	/* The residual "npages" is less than "1 << (VM_NFREEORDER - 1)". */
-	for (; npages > 0; npages -= n) {
-		order = flsl(npages) - 1;
-		n = 1 << order;
-		vm_phys_free_pages(m, order);
-		m += n;
+	/* Free blocks of maximum size. */
+	while (m + (1 << order) <= m_end) {
+		KASSERT(seg == &vm_phys_segs[m->segind],
+		    ("%s: page range [%p,%p) spans multiple segments",
+		    __func__, m_end - npages, m));
+		vm_freelist_add(fl, m, order, 1);
+		m += 1 << order;
 	}
+	/* Free blocks of diminishing size. */
+	while (m < m_end) {
+		KASSERT(seg == &vm_phys_segs[m->segind],
+		    ("%s: page range [%p,%p) spans multiple segments",
+		    __func__, m_end - npages, m));
+		order = flsl(m_end - m) - 1;
+		vm_freelist_add(fl, m, order, 1);
+		m += 1 << order;
+	}
+}
+
+/*
+ * Free a contiguous, arbitrarily sized set of physical pages.
+ *
+ * The free page queues must be locked.
+ */
+void
+vm_phys_free_contig(vm_page_t m, u_long npages)
+{
+	int order_start, order_end;
+	vm_page_t m_start, m_end;
+
+	vm_domain_free_assert_locked(vm_pagequeue_domain(m));
+
+	m_start = m;
+	order_start = max_order(m_start);
+	if (order_start < VM_NFREEORDER - 1)
+		m_start += 1 << order_start;
+	m_end = m + npages;
+	order_end = max_order(m_end);
+	if (order_end < VM_NFREEORDER - 1)
+		m_end -= 1 << order_end;
+	/*
+	 * Avoid unnecessary coalescing by freeing the pages at the start and
+	 * end of the range last.
+	 */
+	if (m_start < m_end)
+		vm_phys_enqueue_contig(m_start, m_end - m_start);
+	if (order_start < VM_NFREEORDER - 1)
+		vm_phys_free_pages(m, order_start);
+	if (order_end < VM_NFREEORDER - 1)
+		vm_phys_free_pages(m_end, order_end);
 }
 
 /*

Modified: head/sys/vm/vm_phys.h
==============================================================================
--- head/sys/vm/vm_phys.h	Fri May 31 20:36:32 2019	(r348483)
+++ head/sys/vm/vm_phys.h	Fri May 31 21:02:42 2019	(r348484)
@@ -84,6 +84,7 @@ vm_page_t vm_phys_alloc_freelist_pages(int domain, int
 int vm_phys_alloc_npages(int domain, int pool, int npages, vm_page_t ma[]);
 vm_page_t vm_phys_alloc_pages(int domain, int pool, int order);
 int vm_phys_domain_match(int prefer, vm_paddr_t low, vm_paddr_t high);
+void vm_phys_enqueue_contig(vm_page_t m, u_long npages);
 int vm_phys_fictitious_reg_range(vm_paddr_t start, vm_paddr_t end,
     vm_memattr_t memattr);
 void vm_phys_fictitious_unreg_range(vm_paddr_t start, vm_paddr_t end);

Modified: head/sys/vm/vm_reserv.c
==============================================================================
--- head/sys/vm/vm_reserv.c	Fri May 31 20:36:32 2019	(r348483)
+++ head/sys/vm/vm_reserv.c	Fri May 31 21:02:42 2019	(r348484)
@@ -1066,7 +1066,7 @@ vm_reserv_break(vm_reserv_t rv)
 		else {
 			hi = NBPOPMAP * i + bitpos;
 			vm_domain_free_lock(VM_DOMAIN(rv->domain));
-			vm_phys_free_contig(&rv->pages[lo], hi - lo);
+			vm_phys_enqueue_contig(&rv->pages[lo], hi - lo);
 			vm_domain_free_unlock(VM_DOMAIN(rv->domain));
 			lo = hi;
 		}
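The boundary trimming in the reworked vm_phys_free_contig can be modeled
the same way.  The sketch below is meant to be appended to the earlier
userland program (it reuses that sketch's NFREEORDER and max_order
definitions) and only prints which sub-ranges would be enqueued without
merging and which boundary blocks would go through vm_phys_free_pages for
buddy coalescing.  It assumes npages is large enough that the two trimmed
boundary blocks do not overlap, as with the function's callers here;
free_contig_model is a hypothetical name.

/* Model of the splitting done by the new vm_phys_free_contig. */
static void
free_contig_model(long pfn, long npages)
{
	long start = pfn, end = pfn + npages;
	int order_start = max_order(start);
	int order_end = max_order(end);

	/* Trim one block at each boundary unless already maximally aligned. */
	if (order_start < NFREEORDER - 1)
		start += 1L << order_start;
	if (order_end < NFREEORDER - 1)
		end -= 1L << order_end;
	/* Interior blocks cannot have free buddies: enqueue, no merging. */
	if (start < end)
		printf("enqueue without merging: [%ld, %ld)\n", start, end);
	/* Boundary blocks may merge with free neighbors outside the range. */
	if (order_start < NFREEORDER - 1)
		printf("buddy-merge free: pfn %ld, order %d\n",
		    pfn, order_start);
	if (order_end < NFREEORDER - 1)
		printf("buddy-merge free: pfn %ld, order %d\n",
		    end, order_end);
}

Calling, say, free_contig_model(3, 63) from main() reports the interior
[4, 64) enqueued with no buddy search at all, while the order-0 block at
pfn 3 and the order-1 block at pfn 64 take the vm_phys_free_pages path,
where they may coalesce with free neighbors outside the freed range.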