From owner-svn-src-all@freebsd.org  Tue Oct 8 07:14:24 2019
Message-Id: <201910080714.x987ELNt075181@repo.freebsd.org>
From: Doug Moore <dougm@FreeBSD.org>
Date: Tue, 8 Oct 2019 07:14:21 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r353298 - in head/sys: compat/linprocfs dev/hwpmc fs/procfs fs/tmpfs kern security/mac vm
X-SVN-Group: head
X-SVN-Commit-Author: dougm
X-SVN-Commit-Paths: in head/sys: compat/linprocfs dev/hwpmc fs/procfs fs/tmpfs kern security/mac vm
X-SVN-Commit-Revision: 353298
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Author: dougm
Date: Tue Oct 8 07:14:21 2019
New Revision: 353298
URL: https://svnweb.freebsd.org/changeset/base/353298

Log:
  Define macro VM_MAP_ENTRY_FOREACH for enumerating the entries in a
  vm_map.  If the implementation ever changes from using a chain of
  next pointers, changing the macro definition will be necessary, but
  changing all the files that iterate over vm_map entries will not.

  Drop a counter in vm_object.c that would have an effect only if the
  vm_map entry count was wrong.
  Discussed with:	alc
  Reviewed by:	markj
  Tested by:	pho (earlier version)
  Differential Revision:	https://reviews.freebsd.org/D21882

Modified:
  head/sys/compat/linprocfs/linprocfs.c
  head/sys/dev/hwpmc/hwpmc_mod.c
  head/sys/fs/procfs/procfs_map.c
  head/sys/fs/tmpfs/tmpfs_vfsops.c
  head/sys/kern/imgact_elf.c
  head/sys/kern/kern_proc.c
  head/sys/kern/sys_process.c
  head/sys/security/mac/mac_process.c
  head/sys/vm/swap_pager.c
  head/sys/vm/vm_map.h
  head/sys/vm/vm_object.c
  head/sys/vm/vm_pageout.c
  head/sys/vm/vm_swapout.c

Modified: head/sys/compat/linprocfs/linprocfs.c
==============================================================================
--- head/sys/compat/linprocfs/linprocfs.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/compat/linprocfs/linprocfs.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -1174,8 +1174,7 @@ linprocfs_doprocmaps(PFS_FILL_ARGS)
 	l_map_str = l32_map_str;
 	map = &vm->vm_map;
 	vm_map_lock_read(map);
-	for (entry = map->header.next; entry != &map->header;
-	    entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {
 		name = "";
 		freename = NULL;
 		if (entry->eflags & MAP_ENTRY_IS_SUB_MAP)

Modified: head/sys/dev/hwpmc/hwpmc_mod.c
==============================================================================
--- head/sys/dev/hwpmc/hwpmc_mod.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/dev/hwpmc/hwpmc_mod.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -1884,7 +1884,7 @@ pmc_log_process_mappings(struct pmc_owner *po, struct
 	map = &vm->vm_map;
 	vm_map_lock_read(map);

-	for (entry = map->header.next; entry != &map->header; entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {

 		if (entry == NULL) {
 			PMCDBG2(LOG,OPS,2, "hwpmc: vm_map entry unexpectedly "
@@ -1988,7 +1988,7 @@ pmc_log_process_mappings(struct pmc_owner *po, struct
 		 * new lookup for this entry. If there is no entry
 		 * for this address range, vm_map_lookup_entry() will
 		 * return the previous one, so we always want to go to
-		 * entry->next on the next loop iteration.
+		 * the next entry on the next loop iteration.
 		 *
 		 * There is an edge condition here that can occur if
 		 * there is no entry at or before this address. In
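
As a stand-alone sketch of the pattern the new macro wraps (a circular list of entries threaded through next pointers, with the map header acting as a sentinel), the following small program compiles and runs outside the kernel. The demo_* names are invented for this illustration and are not kernel types; only the loop shape mirrors VM_MAP_ENTRY_FOREACH.

    /* Illustration only: demo_map/demo_entry are not kernel types. */
    #include <stdio.h>

    struct demo_entry {
            struct demo_entry *next;        /* links entries in map order */
            int id;
    };

    struct demo_map {
            struct demo_entry header;       /* sentinel, not a real entry */
    };

    /*
     * Same shape as VM_MAP_ENTRY_FOREACH: start after the header,
     * stop when the walk comes back around to it.
     */
    #define DEMO_MAP_ENTRY_FOREACH(it, map)         \
            for ((it) = (map)->header.next;         \
                (it) != &(map)->header;             \
                (it) = (it)->next)

    int
    main(void)
    {
            struct demo_entry e[3] = {
                    { &e[1], 0 }, { &e[2], 1 }, { NULL, 2 }
            };
            struct demo_map map;
            struct demo_entry *it;

            map.header.next = &e[0];        /* close the circle through the sentinel */
            e[2].next = &map.header;

            DEMO_MAP_ENTRY_FOREACH(it, &map)
                    printf("entry %d\n", it->id);
            return (0);
    }

Run against the list above, this prints entry 0 through entry 2; the sentinel header itself never appears in the walk.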
Modified: head/sys/fs/procfs/procfs_map.c
==============================================================================
--- head/sys/fs/procfs/procfs_map.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/fs/procfs/procfs_map.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -118,8 +118,7 @@ procfs_doprocmap(PFS_FILL_ARGS)
 		return (ESRCH);
 	map = &vm->vm_map;
 	vm_map_lock_read(map);
-	for (entry = map->header.next; entry != &map->header;
-	    entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {

 		if (entry->eflags & MAP_ENTRY_IS_SUB_MAP)
 			continue;

Modified: head/sys/fs/tmpfs/tmpfs_vfsops.c
==============================================================================
--- head/sys/fs/tmpfs/tmpfs_vfsops.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/fs/tmpfs/tmpfs_vfsops.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -262,8 +262,7 @@ again:
 	vm_map_lock(map);
 	if (map->busy)
 		vm_map_wait_busy(map);
-	for (entry = map->header.next; entry != &map->header;
-	    entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {
 		if ((entry->eflags & (MAP_ENTRY_GUARD | MAP_ENTRY_IS_SUB_MAP |
 		    MAP_ENTRY_COW)) != 0 ||
 		    (entry->max_protection & VM_PROT_WRITE) == 0)

Modified: head/sys/kern/imgact_elf.c
==============================================================================
--- head/sys/kern/imgact_elf.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/kern/imgact_elf.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -1738,8 +1738,7 @@ each_dumpable_segment(struct thread *td, segment_callb
 	boolean_t ignore_entry;

 	vm_map_lock_read(map);
-	for (entry = map->header.next; entry != &map->header;
-	    entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {
 		/*
 		 * Don't dump inaccessible mappings, deal with legacy
 		 * coredump mode.

Modified: head/sys/kern/kern_proc.c
==============================================================================
--- head/sys/kern/kern_proc.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/kern/kern_proc.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -2239,8 +2239,7 @@ sysctl_kern_proc_ovmmap(SYSCTL_HANDLER_ARGS)

 	map = &vm->vm_map;
 	vm_map_lock_read(map);
-	for (entry = map->header.next; entry != &map->header;
-	    entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {
 		vm_object_t obj, tobj, lobj;
 		vm_offset_t addr;

@@ -2455,8 +2454,7 @@ kern_proc_vmmap_out(struct proc *p, struct sbuf *sb, s
 	error = 0;
 	map = &vm->vm_map;
 	vm_map_lock_read(map);
-	for (entry = map->header.next; entry != &map->header;
-	    entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {
 		if (entry->eflags & MAP_ENTRY_IS_SUB_MAP)
 			continue;

Modified: head/sys/kern/sys_process.c
==============================================================================
--- head/sys/kern/sys_process.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/kern/sys_process.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -382,21 +382,18 @@ ptrace_vm_entry(struct thread *td, struct proc *p, str

 	vm_map_lock_read(map);
 	do {
-		entry = map->header.next;
+		KASSERT((map->header.eflags & MAP_ENTRY_IS_SUB_MAP) == 0,
+		    ("Submap in map header"));
 		index = 0;
-		while (index < pve->pve_entry && entry != &map->header) {
-			entry = entry->next;
+		VM_MAP_ENTRY_FOREACH(entry, map) {
+			if (index >= pve->pve_entry &&
+			    (entry->eflags & MAP_ENTRY_IS_SUB_MAP) == 0)
+				break;
 			index++;
 		}
-		if (index != pve->pve_entry) {
+		if (index < pve->pve_entry) {
 			error = EINVAL;
 			break;
-		}
-		KASSERT((map->header.eflags & MAP_ENTRY_IS_SUB_MAP) == 0,
-		    ("Submap in map header"));
-		while ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) != 0) {
-			entry = entry->next;
-			index++;
 		}
 		if (entry == &map->header) {
 			error = ENOENT;
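
The ptrace_vm_entry() change above folds the old pair of loops (count up to the requested index, then skip submap entries) into a single pass over the map. The sketch below models just that selection logic with invented toy_* types and plain errno values; it is a simplified rendering, not the kernel code itself.

    /*
     * Simplified model of the new selection loop: walk the list once,
     * pass over the first 'want' entries, then take the first entry that
     * is not a submap.  toy_* types are invented for illustration.
     */
    #include <errno.h>
    #include <stdbool.h>

    struct toy_entry {
            struct toy_entry *next;
            bool is_sub_map;
    };

    struct toy_map {
            struct toy_entry header;        /* sentinel */
    };

    #define TOY_MAP_ENTRY_FOREACH(it, map)          \
            for ((it) = (map)->header.next;         \
                (it) != &(map)->header;             \
                (it) = (it)->next)

    int
    toy_select_entry(struct toy_map *map, int want, struct toy_entry **out)
    {
            struct toy_entry *entry;
            int index;

            index = 0;
            TOY_MAP_ENTRY_FOREACH(entry, map) {
                    if (index >= want && !entry->is_sub_map)
                            break;          /* found a usable entry */
                    index++;
            }
            if (index < want)
                    return (EINVAL);        /* fewer than 'want' entries */
            if (entry == &map->header)
                    return (ENOENT);        /* ran off the end skipping submaps */
            *out = entry;
            return (0);
    }

The two failure cases correspond to the EINVAL and ENOENT branches in the hunk above: either the walk never reached the requested index, or it reached the sentinel while skipping trailing submap entries.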
Modified: head/sys/security/mac/mac_process.c
==============================================================================
--- head/sys/security/mac/mac_process.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/security/mac/mac_process.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -264,7 +264,7 @@ mac_proc_vm_revoke_recurse(struct thread *td, struct u
 		return;

 	vm_map_lock(map);
-	for (vme = map->header.next; vme != &map->header; vme = vme->next) {
+	VM_MAP_ENTRY_FOREACH(vme, map) {
 		if (vme->eflags & MAP_ENTRY_IS_SUB_MAP) {
 			mac_proc_vm_revoke_recurse(td, cred,
 			    vme->object.sub_map);

Modified: head/sys/vm/swap_pager.c
==============================================================================
--- head/sys/vm/swap_pager.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/vm/swap_pager.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -2621,7 +2621,7 @@ vmspace_swap_count(struct vmspace *vmspace)
 	map = &vmspace->vm_map;
 	count = 0;

-	for (cur = map->header.next; cur != &map->header; cur = cur->next) {
+	VM_MAP_ENTRY_FOREACH(cur, map) {
 		if ((cur->eflags & MAP_ENTRY_IS_SUB_MAP) != 0)
 			continue;
 		object = cur->object.vm_object;

Modified: head/sys/vm/vm_map.h
==============================================================================
--- head/sys/vm/vm_map.h	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/vm/vm_map.h	Tue Oct 8 07:14:21 2019	(r353298)
@@ -416,6 +416,10 @@ int vm_map_lookup_locked(vm_map_t *, vm_offset_t, vm_p
 	    vm_pindex_t *, vm_prot_t *, boolean_t *);
 void vm_map_lookup_done (vm_map_t, vm_map_entry_t);
 boolean_t vm_map_lookup_entry (vm_map_t, vm_offset_t, vm_map_entry_t *);
+#define VM_MAP_ENTRY_FOREACH(it, map) \
+	for ((it) = (map)->header.next; \
+	    (it) != &(map)->header; \
+	    (it) = (it)->next)
 int vm_map_protect (vm_map_t, vm_offset_t, vm_offset_t, vm_prot_t, boolean_t);
 int vm_map_remove (vm_map_t, vm_offset_t, vm_offset_t);
 void vm_map_try_merge_entries(vm_map_t map, vm_map_entry_t prev,

Modified: head/sys/vm/vm_object.c
==============================================================================
--- head/sys/vm/vm_object.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/vm/vm_object.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -2376,29 +2376,22 @@ _vm_object_in_map(vm_map_t map, vm_object_t object, vm
 	vm_map_t tmpm;
 	vm_map_entry_t tmpe;
 	vm_object_t obj;
-	int entcount;

 	if (map == 0)
 		return 0;

 	if (entry == 0) {
-		tmpe = map->header.next;
-		entcount = map->nentries;
-		while (entcount-- && (tmpe != &map->header)) {
+		VM_MAP_ENTRY_FOREACH(tmpe, map) {
 			if (_vm_object_in_map(map, object, tmpe)) {
 				return 1;
 			}
-			tmpe = tmpe->next;
 		}
 	} else if (entry->eflags & MAP_ENTRY_IS_SUB_MAP) {
 		tmpm = entry->object.sub_map;
-		tmpe = tmpm->header.next;
-		entcount = tmpm->nentries;
-		while (entcount-- && tmpe != &tmpm->header) {
+		VM_MAP_ENTRY_FOREACH(tmpe, tmpm) {
 			if (_vm_object_in_map(tmpm, object, tmpe)) {
 				return 1;
 			}
-			tmpe = tmpe->next;
 		}
 	} else if ((obj = entry->object.vm_object) != NULL) {
 		for (; obj; obj = obj->backing_object)

Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/vm/vm_pageout.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -1783,8 +1783,7 @@ vm_pageout_oom_pagecount(struct vmspace *vmspace)
 	KASSERT(!map->system_map, ("system map"));
 	sx_assert(&map->lock, SA_LOCKED);
 	res = 0;
-	for (entry = map->header.next; entry != &map->header;
-	    entry = entry->next) {
+	VM_MAP_ENTRY_FOREACH(entry, map) {
 		if ((entry->eflags & MAP_ENTRY_IS_SUB_MAP) != 0)
 			continue;
 		obj = entry->object.vm_object;
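
The vm_map.h hunk above is the heart of the commit: every converted caller now spells the iteration the same way, so a future change to how entries are linked only has to touch this one definition. Purely as a hypothetical illustration, if entries were later reached through accessor functions instead of a next field, the macro could be redefined along these lines; vm_map_entry_first() and vm_map_entry_succ() do not exist in this revision and are assumed names.

    /*
     * Hypothetical alternative definition; the accessor names are assumed,
     * not part of this commit.  The call sites converted above would not
     * need to change.
     */
    #define VM_MAP_ENTRY_FOREACH(it, map)           \
            for ((it) = vm_map_entry_first(map);    \
                (it) != &(map)->header;             \
                (it) = vm_map_entry_succ(it))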
Modified: head/sys/vm/vm_swapout.c
==============================================================================
--- head/sys/vm/vm_swapout.c	Tue Oct 8 02:36:53 2019	(r353297)
+++ head/sys/vm/vm_swapout.c	Tue Oct 8 07:14:21 2019	(r353298)
@@ -284,8 +284,7 @@ vm_swapout_map_deactivate_pages(vm_map_t map, long des
 	 * first, search out the biggest object, and try to free pages from
 	 * that.
 	 */
-	tmpe = map->header.next;
-	while (tmpe != &map->header) {
+	VM_MAP_ENTRY_FOREACH(tmpe, map) {
 		if ((tmpe->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
 			obj = tmpe->object.vm_object;
 			if (obj != NULL && VM_OBJECT_TRYRLOCK(obj)) {
@@ -302,7 +301,6 @@ vm_swapout_map_deactivate_pages(vm_map_t map, long des
 		}
 		if (tmpe->wired_count > 0)
 			nothingwired = FALSE;
-		tmpe = tmpe->next;
 	}

 	if (bigobj != NULL) {
@@ -313,8 +311,7 @@ vm_swapout_map_deactivate_pages(vm_map_t map, long des
 	 * Next, hunt around for other pages to deactivate.  We actually
 	 * do this search sort of wrong -- .text first is not the best idea.
 	 */
-	tmpe = map->header.next;
-	while (tmpe != &map->header) {
+	VM_MAP_ENTRY_FOREACH(tmpe, map) {
 		if (pmap_resident_count(vm_map_pmap(map)) <= desired)
 			break;
 		if ((tmpe->eflags & MAP_ENTRY_IS_SUB_MAP) == 0) {
@@ -326,7 +323,6 @@ vm_swapout_map_deactivate_pages(vm_map_t map, long des
 				VM_OBJECT_RUNLOCK(obj);
 			}
 		}
-		tmpe = tmpe->next;
 	}

 	/*