From: Mark Johnston <markj@FreeBSD.org>
Date: Thu, 28 Sep 2017 15:21:47 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject: svn commit: r324084 - stable/11/sys/vm
Message-Id: <201709281521.v8SFLl3R011960@repo.freebsd.org>

Author: markj
Date: Thu Sep 28 15:21:47 2017
New Revision: 324084
URL: https://svnweb.freebsd.org/changeset/base/324084

Log:
  MFC r323290:
  Speed up vm_page_array initialization.
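The gist of the vm_page.c change below: the old startup loop called
vm_phys_add_page() once per page, taking and dropping the free queue
lock for every single page; the new loop initializes each vm_phys
segment's pages with vm_phys_init_page() and then frees each covered
segment as one contiguous run under a single lock acquisition.  A
minimal userspace sketch of that batching idea follows; every name in
it (struct seg, free_pages_one_by_one, free_pages_batched, free_lock)
is invented for illustration and is not the kernel API.

	/*
	 * Sketch only: models the locking pattern, not the buddy
	 * allocator.  Assumed names, not kernel code.
	 */
	#include <pthread.h>
	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	struct seg {
		unsigned long start;	/* first physical address */
		unsigned long end;	/* one past the last address */
	};

	static pthread_mutex_t free_lock = PTHREAD_MUTEX_INITIALIZER;
	static unsigned long free_pages;  /* stand-in for vm_cnt.v_free_count */

	/* Old scheme: one lock round trip per page. */
	static void
	free_pages_one_by_one(const struct seg *s)
	{
		unsigned long pa;

		for (pa = s->start; pa < s->end; pa += PAGE_SIZE) {
			pthread_mutex_lock(&free_lock);
			free_pages++;		/* insert a single page */
			pthread_mutex_unlock(&free_lock);
		}
	}

	/* New scheme: initialize unlocked, then one locked batch insert. */
	static void
	free_pages_batched(const struct seg *s)
	{
		unsigned long npages = (s->end - s->start) / PAGE_SIZE;

		/* (page structure initialization happens here, unlocked) */
		pthread_mutex_lock(&free_lock);
		free_pages += npages;	/* insert the whole contiguous run */
		pthread_mutex_unlock(&free_lock);
	}

	int
	main(void)
	{
		struct seg s = { 0x100000UL, 0x400000UL };

		free_pages_one_by_one(&s);	/* 768 lock round trips */
		free_pages_batched(&s);		/* 1 lock round trip */
		printf("%lu pages freed\n", free_pages);
		return (0);
	}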
Modified:
  stable/11/sys/vm/vm_page.c
  stable/11/sys/vm/vm_phys.c
  stable/11/sys/vm/vm_phys.h
Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/vm/vm_page.c
==============================================================================
--- stable/11/sys/vm/vm_page.c	Thu Sep 28 15:18:20 2017	(r324083)
+++ stable/11/sys/vm/vm_page.c	Thu Sep 28 15:21:47 2017	(r324084)
@@ -418,17 +418,15 @@ vm_page_domain_init(struct vm_domain *vmd)
 vm_offset_t
 vm_page_startup(vm_offset_t vaddr)
 {
-	vm_offset_t mapped;
-	vm_paddr_t high_avail, low_avail, page_range, size;
-	vm_paddr_t new_end;
-	int i;
-	vm_paddr_t pa;
-	vm_paddr_t last_pa;
+	struct vm_domain *vmd;
+	struct vm_phys_seg *seg;
+	vm_page_t m;
 	char *list, *listend;
-	vm_paddr_t end;
-	vm_paddr_t biggestsize;
-	int biggestone;
-	int pages_per_zone;
+	vm_offset_t mapped;
+	vm_paddr_t end, high_avail, low_avail, new_end, page_range, size;
+	vm_paddr_t biggestsize, last_pa, pa;
+	u_long pagecount;
+	int biggestone, i, pages_per_zone, segind;
 
 	biggestsize = 0;
 	biggestone = 0;
@@ -509,6 +507,8 @@ vm_page_startup(vm_offset_t vaddr)
 	vm_page_dump = (void *)(uintptr_t)pmap_map(&vaddr, new_end,
 	    new_end + vm_page_dump_size, VM_PROT_READ | VM_PROT_WRITE);
 	bzero((void *)vm_page_dump, vm_page_dump_size);
+#else
+	(void)last_pa;
 #endif
 #if defined(__aarch64__) || defined(__amd64__) || defined(__mips__)
 	/*
@@ -613,7 +613,9 @@ vm_page_startup(vm_offset_t vaddr)
 	new_end = trunc_page(end - page_range * sizeof(struct vm_page));
 	mapped = pmap_map(&vaddr, new_end, end,
 	    VM_PROT_READ | VM_PROT_WRITE);
-	vm_page_array = (vm_page_t) mapped;
+	vm_page_array = (vm_page_t)mapped;
+	vm_page_array_size = page_range;
+
 #if VM_NRESERVLEVEL > 0
 	/*
 	 * Allocate physical memory for the reservation management system's
@@ -640,33 +642,52 @@ vm_page_startup(vm_offset_t vaddr)
 		vm_phys_add_seg(phys_avail[i], phys_avail[i + 1]);
 
 	/*
-	 * Clear all of the page structures
-	 */
-	bzero((caddr_t) vm_page_array, page_range * sizeof(struct vm_page));
-	for (i = 0; i < page_range; i++)
-		vm_page_array[i].order = VM_NFREEORDER;
-	vm_page_array_size = page_range;
-
-	/*
 	 * Initialize the physical memory allocator.
 	 */
 	vm_phys_init();
 
 	/*
-	 * Add every available physical page that is not blacklisted to
-	 * the free lists.
+	 * Initialize the page structures and add every available page to the
+	 * physical memory allocator's free lists.
 	 */
 	vm_cnt.v_page_count = 0;
 	vm_cnt.v_free_count = 0;
-	for (i = 0; phys_avail[i + 1] != 0; i += 2) {
-		pa = phys_avail[i];
-		last_pa = phys_avail[i + 1];
-		while (pa < last_pa) {
-			vm_phys_add_page(pa);
-			pa += PAGE_SIZE;
+	for (segind = 0; segind < vm_phys_nsegs; segind++) {
+		seg = &vm_phys_segs[segind];
+		for (pa = seg->start; pa < seg->end; pa += PAGE_SIZE)
+			vm_phys_init_page(pa);
+
+		/*
+		 * Add the segment to the free lists only if it is covered by
+		 * one of the ranges in phys_avail.  Because we've added the
+		 * ranges to the vm_phys_segs array, we can assume that each
+		 * segment is either entirely contained in one of the ranges,
+		 * or doesn't overlap any of them.
+		 */
+		for (i = 0; phys_avail[i + 1] != 0; i += 2) {
+			if (seg->start < phys_avail[i] ||
+			    seg->end > phys_avail[i + 1])
+				continue;
+
+			m = seg->first_page;
+			pagecount = (u_long)atop(seg->end - seg->start);
+
+			mtx_lock(&vm_page_queue_free_mtx);
+			vm_phys_free_contig(m, pagecount);
+			vm_phys_freecnt_adj(m, (int)pagecount);
+			mtx_unlock(&vm_page_queue_free_mtx);
+			vm_cnt.v_page_count += (u_int)pagecount;
+
+			vmd = &vm_dom[seg->domain];
+			vmd->vmd_page_count += (u_int)pagecount;
+			vmd->vmd_segs |= 1UL << m->segind;
+			break;
 		}
 	}
 
+	/*
+	 * Remove blacklisted pages from the physical memory allocator.
+	 */
 	TAILQ_INIT(&blacklist_head);
 	vm_page_blacklist_load(&list, &listend);
 	vm_page_blacklist_check(list, listend);

Modified: stable/11/sys/vm/vm_phys.c
==============================================================================
--- stable/11/sys/vm/vm_phys.c	Thu Sep 28 15:18:20 2017	(r324083)
+++ stable/11/sys/vm/vm_phys.c	Thu Sep 28 15:21:47 2017	(r324084)
@@ -731,32 +731,28 @@ vm_phys_split_pages(vm_page_t m, int oind, struct vm_f
 }
 
 /*
- * Initialize a physical page and add it to the free lists.
+ * Initialize a physical page in preparation for adding it to the free
+ * lists.
  */
 void
-vm_phys_add_page(vm_paddr_t pa)
+vm_phys_init_page(vm_paddr_t pa)
 {
 	vm_page_t m;
-	struct vm_domain *vmd;
 
-	vm_cnt.v_page_count++;
 	m = vm_phys_paddr_to_vm_page(pa);
+	m->object = NULL;
+	m->wire_count = 0;
 	m->busy_lock = VPB_UNBUSIED;
+	m->hold_count = 0;
+	m->flags = m->aflags = m->oflags = 0;
 	m->phys_addr = pa;
 	m->queue = PQ_NONE;
+	m->psind = 0;
 	m->segind = vm_phys_paddr_to_segind(pa);
-	vmd = vm_phys_domain(m);
-	vmd->vmd_page_count++;
-	vmd->vmd_segs |= 1UL << m->segind;
-	KASSERT(m->order == VM_NFREEORDER,
-	    ("vm_phys_add_page: page %p has unexpected order %d",
-	    m, m->order));
+	m->order = VM_NFREEORDER;
 	m->pool = VM_FREEPOOL_DEFAULT;
+	m->valid = m->dirty = 0;
 	pmap_page_init(m);
-	mtx_lock(&vm_page_queue_free_mtx);
-	vm_phys_freecnt_adj(m, 1);
-	vm_phys_free_pages(m, 0);
-	mtx_unlock(&vm_page_queue_free_mtx);
 }
 
 /*
@@ -912,6 +908,7 @@ vm_phys_fictitious_init_range(vm_page_t range, vm_padd
 {
 	long i;
 
+	bzero(range, page_count * sizeof(*range));
 	for (i = 0; i < page_count; i++) {
 		vm_page_initfake(&range[i], start + PAGE_SIZE * i, memattr);
 		range[i].oflags &= ~VPO_UNMANAGED;
@@ -986,7 +983,7 @@ vm_phys_fictitious_reg_range(vm_paddr_t start, vm_padd
 alloc:
 #endif
 	fp = malloc(page_count * sizeof(struct vm_page), M_FICT_PAGES,
-	    M_WAITOK | M_ZERO);
+	    M_WAITOK);
 #ifdef VM_PHYSSEG_DENSE
 	}
 #endif

Modified: stable/11/sys/vm/vm_phys.h
==============================================================================
--- stable/11/sys/vm/vm_phys.h	Thu Sep 28 15:18:20 2017	(r324083)
+++ stable/11/sys/vm/vm_phys.h	Thu Sep 28 15:21:47 2017	(r324084)
@@ -69,7 +69,6 @@ extern int vm_phys_nsegs;
 /*
  * The following functions are only to be used by the virtual memory system.
  */
-void vm_phys_add_page(vm_paddr_t pa);
 void vm_phys_add_seg(vm_paddr_t start, vm_paddr_t end);
 vm_page_t vm_phys_alloc_contig(u_long npages, vm_paddr_t low, vm_paddr_t high,
     u_long alignment, vm_paddr_t boundary);
@@ -83,6 +82,7 @@ vm_page_t vm_phys_fictitious_to_vm_page(vm_paddr_t pa)
 void vm_phys_free_contig(vm_page_t m, u_long npages);
 void vm_phys_free_pages(vm_page_t m, int order);
 void vm_phys_init(void);
+void vm_phys_init_page(vm_paddr_t pa);
 vm_page_t vm_phys_paddr_to_vm_page(vm_paddr_t pa);
 vm_page_t vm_phys_scan_contig(u_long npages, vm_paddr_t low, vm_paddr_t high,
     u_long alignment, vm_paddr_t boundary, int options);
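
A closing note on the vm_page.c hunk above: a segment is handed to the
free lists only when some phys_avail range [lo, hi) fully contains it;
the kernel writes the test in inverted form (skip when seg->start <
phys_avail[i] || seg->end > phys_avail[i + 1]), which is the same
containment check given the either-contained-or-disjoint invariant the
comment describes.  A small userspace model of that test follows; the
helper name seg_covered is invented for illustration.

	#include <stdbool.h>
	#include <stdio.h>

	/*
	 * Model of the coverage test: true iff some phys_avail pair
	 * [lo, hi) fully contains [seg_start, seg_end).  The array is
	 * terminated by a pair of zeroes, as in the kernel.
	 */
	static bool
	seg_covered(unsigned long seg_start, unsigned long seg_end,
	    const unsigned long *phys_avail)
	{
		int i;

		for (i = 0; phys_avail[i + 1] != 0; i += 2) {
			if (seg_start >= phys_avail[i] &&
			    seg_end <= phys_avail[i + 1])
				return (true);
		}
		return (false);
	}

	int
	main(void)
	{
		unsigned long phys_avail[] =
		    { 0x1000, 0x9000, 0x10000, 0x20000, 0, 0 };

		/* 1: segment lies inside the first range. */
		printf("%d\n", seg_covered(0x2000, 0x8000, phys_avail));
		/* 0: segment straddles the gap between the ranges. */
		printf("%d\n", seg_covered(0x9000, 0x11000, phys_avail));
		return (0);
	}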