From: Jeff Roberson <jeff@FreeBSD.org>
Date: Thu, 23 Jan 2020 04:54:49 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r357017 - in head/sys: dev/spibus kern vm
Message-Id: <202001230454.00N4snwI060096@repo.freebsd.org>
X-SVN-Group: head
X-SVN-Commit-Author: jeff
X-SVN-Commit-Paths: in head/sys: dev/spibus kern vm
X-SVN-Commit-Revision: 357017
X-SVN-Commit-Repository: base
Author: jeff
Date: Thu Jan 23 04:54:49 2020
New Revision: 357017

URL: https://svnweb.freebsd.org/changeset/base/357017

Log:
  Consistently use busy and vm_page_valid() rather than touching page bits
  directly.  This improves API compliance, asserts, etc.

  Reviewed by:	kib, markj
  Differential Revision:	https://reviews.freebsd.org/D23283

Modified:
  head/sys/dev/spibus/spigen.c
  head/sys/kern/kern_kcov.c
  head/sys/kern/kern_sendfile.c
  head/sys/vm/vm_glue.c
  head/sys/vm/vm_kern.c

Modified: head/sys/dev/spibus/spigen.c
==============================================================================
--- head/sys/dev/spibus/spigen.c	Thu Jan 23 03:38:41 2020	(r357016)
+++ head/sys/dev/spibus/spigen.c	Thu Jan 23 04:54:49 2020	(r357017)
@@ -325,8 +325,9 @@ spigen_mmap_single(struct cdev *cdev, vm_ooffset_t *of
 	vm_object_reference_locked(mmap->bufobj); // kernel and userland both
 	for (n = 0; n < pages; n++) {
 		m[n] = vm_page_grab(mmap->bufobj, n,
-		    VM_ALLOC_NOBUSY | VM_ALLOC_ZERO | VM_ALLOC_WIRED);
-		m[n]->valid = VM_PAGE_BITS_ALL;
+		    VM_ALLOC_ZERO | VM_ALLOC_WIRED);
+		vm_page_valid(m[n]);
+		vm_page_xunbusy(m[n]);
 	}
 	VM_OBJECT_WUNLOCK(mmap->bufobj);
 	pmap_qenter(mmap->kvaddr, m, pages);

Modified: head/sys/kern/kern_kcov.c
==============================================================================
--- head/sys/kern/kern_kcov.c	Thu Jan 23 03:38:41 2020	(r357016)
+++ head/sys/kern/kern_kcov.c	Thu Jan 23 04:54:49 2020	(r357017)
@@ -383,8 +383,9 @@ kcov_alloc(struct kcov_info *info, size_t entries)
 	VM_OBJECT_WLOCK(info->bufobj);
 	for (n = 0; n < pages; n++) {
 		m = vm_page_grab(info->bufobj, n,
-		    VM_ALLOC_NOBUSY | VM_ALLOC_ZERO | VM_ALLOC_WIRED);
-		m->valid = VM_PAGE_BITS_ALL;
+		    VM_ALLOC_ZERO | VM_ALLOC_WIRED);
+		vm_page_valid(m);
+		vm_page_xunbusy(m);
 		pmap_qenter(info->kvaddr + n * PAGE_SIZE, &m, 1);
 	}
 	VM_OBJECT_WUNLOCK(info->bufobj);

Modified: head/sys/kern/kern_sendfile.c
==============================================================================
--- head/sys/kern/kern_sendfile.c	Thu Jan 23 03:38:41 2020	(r357016)
+++ head/sys/kern/kern_sendfile.c	Thu Jan 23 04:54:49 2020	(r357017)
@@ -388,7 +388,7 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
 		if (!vm_pager_has_page(obj, OFF_TO_IDX(vmoff(i, off)),
 		    NULL, &a)) {
 			pmap_zero_page(pa[i]);
-			pa[i]->valid = VM_PAGE_BITS_ALL;
+			vm_page_valid(pa[i]);
 			MPASS(pa[i]->dirty == 0);
 			vm_page_xunbusy(pa[i]);
 			i++;

Modified: head/sys/vm/vm_glue.c
==============================================================================
--- head/sys/vm/vm_glue.c	Thu Jan 23 03:38:41 2020	(r357016)
+++ head/sys/vm/vm_glue.c	Thu Jan 23 04:54:49 2020	(r357017)
@@ -340,10 +340,12 @@ vm_thread_stack_create(struct domainset *ds, vm_object
 	 * page of stack.
 	 */
 	VM_OBJECT_WLOCK(ksobj);
-	(void)vm_page_grab_pages(ksobj, 0, VM_ALLOC_NORMAL | VM_ALLOC_NOBUSY |
-	    VM_ALLOC_WIRED, ma, pages);
-	for (i = 0; i < pages; i++)
-		ma[i]->valid = VM_PAGE_BITS_ALL;
+	(void)vm_page_grab_pages(ksobj, 0, VM_ALLOC_NORMAL | VM_ALLOC_WIRED,
+	    ma, pages);
+	for (i = 0; i < pages; i++) {
+		vm_page_valid(ma[i]);
+		vm_page_xunbusy(ma[i]);
+	}
 	VM_OBJECT_WUNLOCK(ksobj);
 	pmap_qenter(ks, ma, pages);
 	*ksobjp = ksobj;

Modified: head/sys/vm/vm_kern.c
==============================================================================
--- head/sys/vm/vm_kern.c	Thu Jan 23 03:38:41 2020	(r357016)
+++ head/sys/vm/vm_kern.c	Thu Jan 23 04:54:49 2020	(r357017)
@@ -193,7 +193,7 @@ kmem_alloc_attr_domain(int domain, vm_size_t size, int
 	if (vmem_alloc(vmem, size, M_BESTFIT | flags, &addr))
 		return (0);
 	offset = addr - VM_MIN_KERNEL_ADDRESS;
-	pflags = malloc2vm_flags(flags) | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED;
+	pflags = malloc2vm_flags(flags) | VM_ALLOC_WIRED;
 	pflags &= ~(VM_ALLOC_NOWAIT | VM_ALLOC_WAITOK | VM_ALLOC_WAITFAIL);
 	pflags |= VM_ALLOC_NOWAIT;
 	prot = (flags & M_EXEC) != 0 ? VM_PROT_ALL : VM_PROT_RW;
@@ -223,7 +223,8 @@ retry:
 		    vm_phys_domain(m), domain));
 		if ((flags & M_ZERO) && (m->flags & PG_ZERO) == 0)
 			pmap_zero_page(m);
-		m->valid = VM_PAGE_BITS_ALL;
+		vm_page_valid(m);
+		vm_page_xunbusy(m);
 		pmap_enter(kernel_pmap, addr + i, m, prot,
 		    prot | PMAP_ENTER_WIRED, 0);
 	}
@@ -284,7 +285,7 @@ kmem_alloc_contig_domain(int domain, vm_size_t size, i
 	if (vmem_alloc(vmem, size, flags | M_BESTFIT, &addr))
 		return (0);
 	offset = addr - VM_MIN_KERNEL_ADDRESS;
-	pflags = malloc2vm_flags(flags) | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED;
+	pflags = malloc2vm_flags(flags) | VM_ALLOC_WIRED;
 	pflags &= ~(VM_ALLOC_NOWAIT | VM_ALLOC_WAITOK | VM_ALLOC_WAITFAIL);
 	pflags |= VM_ALLOC_NOWAIT;
 	npages = atop(size);
@@ -315,7 +316,8 @@ retry:
 	for (; m < end_m; m++) {
 		if ((flags & M_ZERO) && (m->flags & PG_ZERO) == 0)
 			pmap_zero_page(m);
-		m->valid = VM_PAGE_BITS_ALL;
+		vm_page_valid(m);
+		vm_page_xunbusy(m);
 		pmap_enter(kernel_pmap, tmp, m, VM_PROT_RW,
 		    VM_PROT_RW | PMAP_ENTER_WIRED, 0);
 		tmp += PAGE_SIZE;
@@ -465,7 +467,7 @@ kmem_back_domain(int domain, vm_object_t object, vm_of
 	KASSERT(object == kernel_object,
 	    ("kmem_back_domain: only supports kernel object."));
 	offset = addr - VM_MIN_KERNEL_ADDRESS;
-	pflags = malloc2vm_flags(flags) | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED;
+	pflags = malloc2vm_flags(flags) | VM_ALLOC_WIRED;
 	pflags &= ~(VM_ALLOC_NOWAIT | VM_ALLOC_WAITOK | VM_ALLOC_WAITFAIL);
 	if (flags & M_WAITOK)
 		pflags |= VM_ALLOC_WAITFAIL;
@@ -498,7 +500,8 @@ retry:
 			pmap_zero_page(m);
 		KASSERT((m->oflags & VPO_UNMANAGED) != 0,
 		    ("kmem_malloc: page %p is managed", m));
-		m->valid = VM_PAGE_BITS_ALL;
+		vm_page_valid(m);
+		vm_page_xunbusy(m);
 		pmap_enter(kernel_pmap, addr + i, m, prot,
 		    prot | PMAP_ENTER_WIRED, 0);
 #if VM_NRESERVLEVEL > 0