Date: Wed, 16 Nov 2016 18:53:43 +0200
From: Konstantin Belousov <kostikbel@gmail.com>
To: Ruslan Bukin <ruslan.bukin@cl.cam.ac.uk>
Cc: Alan Cox <alc@FreeBSD.org>, src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r308691 - in head/sys: cddl/compat/opensolaris/sys cddl/contrib/opensolaris/uts/common/fs/zfs fs/tmpfs kern vm
Message-ID: <20161116165343.GX54029@kib.kiev.ua>
In-Reply-To: <20161116133718.GA10251@bsdpad.com>
References: <201611151822.uAFIMoj2092581@repo.freebsd.org> <20161116133718.GA10251@bsdpad.com>
On Wed, Nov 16, 2016 at 01:37:18PM +0000, Ruslan Bukin wrote:
> I have a panic with this on RISC-V. Any ideas ?
How did you check that the revision you replied to causes the problem?
Note that the backtrace below is not reasonable.

> [RISC-V ASCII-art boot logo elided]
>
> INSTRUCTION SETS WANT TO BE FREE
> KDB: debugger backends: ddb
> KDB: current backend: ddb
> Found 2 CPUs in the device tree
> Copyright (c) 1992-2016 The FreeBSD Project.
> Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
>         The Regents of the University of California. All rights reserved.
> FreeBSD is a registered trademark of The FreeBSD Foundation.
> FreeBSD 12.0-CURRENT #4 0a3288b(br-riscv-isa-update)-dirty: Wed Nov 16 13:28:11 UTC 2016
>     rb743@vica.cl.cam.ac.uk:/home/rb743/obj/riscv.riscv64/home/rb743/dev/freebsd-riscv/sys/SPIKE riscv
> gcc version 6.1.0 (GCC)
> Preloaded elf64 kernel "kernel" at 0xffffffc0026be360.
> CPU(0): Unknown Implementer Unknown Processor
> Starting CPU 1 (0)
> FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
> ULE: setup cpu 0
> ULE: setup cpu 1
> random: entropy device external interface
> crypto: <crypto core>
> mem: <memory>
> openfirm: <Open Firmware control device>
> null: <full device, null device, zero device>
> nfslock: pseudo-device
> random: harvesting attach, 8 bytes (4 bits) from nexus0
> ofwbus0: <Open Firmware Device Tree>
> simplebus0: <Flattened device tree simple bus> on ofwbus0
> random: harvesting attach, 8 bytes (4 bits) from simplebus0
> random: harvesting attach, 8 bytes (4 bits) from ofwbus0
> timer0: <RISC-V Timer> mem 0x40000000-0x40000007,0x40000008-0x40001007 irq 5 on simplebus0
> Timecounter "RISC-V Timecounter" frequency 1000000 Hz quality 1000
> Event timer "RISC-V Eventtimer" frequency 1000000 Hz quality 1000
> random: harvesting attach, 8 bytes (4 bits) from timer0
> cpulist0: <Open Firmware CPU Group> on ofwbus0
> cpu0: <Open Firmware CPU> on cpulist0
> cpu0: missing 'clock-frequency' property
> riscv64_cpu0: register <0>
> random: harvesting attach, 8 bytes (4 bits) from riscv64_cpu0
> random: harvesting attach, 8 bytes (4 bits) from cpu0
> cpu1: <Open Firmware CPU> on cpulist0
> cpu1: missing 'clock-frequency' property
> riscv64_cpu1: register <0>
> random: harvesting attach, 8 bytes (4 bits) from riscv64_cpu1
> random: harvesting attach, 8 bytes (4 bits) from cpu1
> random: harvesting attach, 8 bytes (4 bits) from cpulist0
> simplebus0: <pic@0> compat riscv,pic (no driver attached)
> rcons0: <RISC-V console> irq 1 on simplebus0
> random: harvesting attach, 8 bytes (4 bits) from rcons0
> cryptosoft0: <software crypto>
> crypto: assign cryptosoft0 driver id 0, flags 100663296
> crypto: cryptosoft0 registers alg 1 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 2 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 3 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 4 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 5 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 16 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 6 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 7 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 18 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 19 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 20 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 8 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 15 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 9 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 10 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 13 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 14 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 11 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 22 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 23 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 25 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 24 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 26 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 27 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 28 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 21 flags 0 maxoplen 0
> crypto: cryptosoft0 registers alg 17 flags 0 maxoplen 0
> random: harvesting attach, 8 bytes (4 bits) from cryptosoft0
> Device configuration finished.
> procfs registered
> Timecounters tick every 1.000 msec
> lo0: bpf attached
> vlan: initialized, using hash tables with chaining
> tcp_init: net.inet.tcp.tcbhashsize auto tuned to 8192
> IPsec: Initialized Security Association Processing.
> t[0] == 0xffffffc00265bf50
> t[1] == 0xffffffc00016494c
> t[2] == 0x0000000000000050
> t[3] == 0x0000000000000000
> t[4] == 0x0000000000000000
> t[5] == 0x0000000000000000
> t[6] == 0x0000000000000000
> s[0] == 0xffffffc000003db0
> s[1] == 0x0000000000000000
> s[2] == 0xffffffc002c4c510
> s[3] == 0xffffffc002cea9c0
> s[4] == 0x000000000098967f
> s[5] == 0x00000000039386ff
> s[6] == 0xffffffc000574778
> s[7] == 0x0000000000000000
> s[8] == 0xffffffc00267c580
> s[9] == 0xffffffc000531218
> s[10] == 0x0000000000000412
> s[11] == 0x0000000000000000
> a[0] == 0x0000000000000000
> a[1] == 0x0000000000000000
> a[2] == 0xffffffc000531218
> a[3] == 0x0000000000000412
> a[4] == 0xffffffc00267c580
> a[5] == 0x0000000000000002
> a[6] == 0x0000000000000000
> a[7] == 0x0000000000000003
> sepc == 0xffffffc00013deac
> sstatus == 0x8000000000006100
> panic: vm_fault failed: ffffffc00013deac, va 0x0000000000000018
> cpuid = 0
> KDB: stack backtrace:
> db_trace_self() at db_read_token+0x704
>  pc = 0xffffffc000455b44    ra = 0xffffffc0000244f8
>  sp = 0xffffffc000003858    fp = 0xffffffc000003a78
>
> db_read_token() at kdb_backtrace+0x3c
>  pc = 0xffffffc0000244f8    ra = 0xffffffc0001a5588
>  sp = 0xffffffc000003a78    fp = 0xffffffc000003a88
>
> kdb_backtrace() at vpanic+0x158
>  pc = 0xffffffc0001a5588    ra = 0xffffffc00015bd74
>  sp = 0xffffffc000003a88    fp = 0xffffffc000003ac8
>
> vpanic() at panic+0x34
>  pc = 0xffffffc00015bd74    ra = 0xffffffc00015c58c
>  sp = 0xffffffc000003ac8    fp = 0xffffffc000003ae8
>
> panic() at sysarch+0x36c
>  pc = 0xffffffc00015c58c    ra = 0xffffffc0004622b0
>  sp = 0xffffffc000003ae8    fp = 0xffffffc000003bf8
>
> sysarch() at do_trap_supervisor+0xa0
>  pc = 0xffffffc0004622b0    ra = 0xffffffc000462498
>  sp = 0xffffffc000003bf8    fp = 0xffffffc000003c18
>
> do_trap_supervisor() at cpu_exception_handler_supervisor+0xb0
>  pc = 0xffffffc000462498    ra = 0xffffffc000456440
>  sp = 0xffffffc000003c18    fp = 0xffffffc000003db0
>
> cpu_exception_handler_supervisor() at ruxagg+0x34
>  pc = 0xffffffc000456440    ra = 0xffffffc000154458
>  sp = 0xffffffc000003db0    fp = 0xffffffc000003dd0
>
> ruxagg() at rufetch+0x9c
>  pc = 0xffffffc000154458    ra = 0xffffffc000154784
>  sp = 0xffffffc000003dd0    fp = 0xffffffc000003e00
>
> rufetch() at exec_shell_imgact+0x1204
>  pc = 0xffffffc000154784    ra = 0xffffffc0000f9d4c
>  sp = 0xffffffc000003e00    fp = 0xffffffc000003ee0
>
> exec_shell_imgact() at mi_startup+0x190
>  pc = 0xffffffc0000f9d4c    ra = 0xffffffc0000fa768
>  sp = 0xffffffc000003ee0    fp = 0xffffffc000003f20
>
> mi_startup() at kernbase+0x248
>  pc = 0xffffffc0000fa768    ra = 0xffffffc000000248
>  sp = 0xffffffc000003f20    fp = 0x0000000085002ff8
>
> KDB: enter: panic
> [ thread pid 0 tid 100000 ]
> Stopped at      kdb_enter+0x4c:
> db>
>
> Ruslan
>
> On Tue, Nov 15, 2016 at 06:22:50PM +0000, Alan Cox wrote:
> > Author: alc
> > Date: Tue Nov 15 18:22:50 2016
> > New Revision: 308691
> > URL: https://svnweb.freebsd.org/changeset/base/308691
> >
> > Log:
> >   Remove most of the code for implementing PG_CACHED pages.  (This change does
> >   not remove user-space visible fields from vm_cnt or all of the references to
> >   cached pages from comments.  Those changes will come later.)
> >
> >   Reviewed by:	kib, markj
> >   Tested by:	pho
> >   Sponsored by:	Dell EMC Isilon
> >   Differential Revision:	https://reviews.freebsd.org/D8497
> >
> > Modified:
> >   head/sys/cddl/compat/opensolaris/sys/vnode.h
> >   head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> >   head/sys/fs/tmpfs/tmpfs_subr.c
> >   head/sys/kern/kern_exec.c
> >   head/sys/kern/uipc_shm.c
> >   head/sys/vm/swap_pager.c
> >   head/sys/vm/vm_fault.c
> >   head/sys/vm/vm_mmap.c
> >   head/sys/vm/vm_object.c
> >   head/sys/vm/vm_object.h
> >   head/sys/vm/vm_page.c
> >   head/sys/vm/vm_page.h
> >   head/sys/vm/vm_reserv.c
> >   head/sys/vm/vm_reserv.h
> >   head/sys/vm/vnode_pager.c
> >
> > Modified: head/sys/cddl/compat/opensolaris/sys/vnode.h
> > ==============================================================================
> > --- head/sys/cddl/compat/opensolaris/sys/vnode.h	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/cddl/compat/opensolaris/sys/vnode.h	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -75,8 +75,7 @@ vn_is_readonly(vnode_t *vp)
> >  #define	vn_mountedvfs(vp)	((vp)->v_mountedhere)
> >  #define	vn_has_cached_data(vp)	\
> >  	((vp)->v_object != NULL && \
> > -	    ((vp)->v_object->resident_page_count > 0 || \
> > -	    !vm_object_cache_is_empty((vp)->v_object)))
> > +	    (vp)->v_object->resident_page_count > 0)
> >  #define	vn_exists(vp)		do { } while (0)
> >  #define	vn_invalid(vp)		do { } while (0)
> >  #define	vn_renamepath(tdvp, svp, tnm, lentnm)	do { } while (0)
> >
> > Modified: head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> > ==============================================================================
> > --- head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -426,10 +426,6 @@ page_busy(vnode_t *vp, int64_t start, in
> >  				continue;
> >  			}
> >  			vm_page_sbusy(pp);
> > -		} else if (pp == NULL) {
> > -			pp = vm_page_alloc(obj, OFF_TO_IDX(start),
> > -			    VM_ALLOC_SYSTEM | VM_ALLOC_IFCACHED |
> > -			    VM_ALLOC_SBUSY);
> >  		} else {
> >  			ASSERT(pp != NULL && !pp->valid);
> >  			pp = NULL;
> >
> > Modified: head/sys/fs/tmpfs/tmpfs_subr.c
> > ==============================================================================
> > --- head/sys/fs/tmpfs/tmpfs_subr.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/fs/tmpfs/tmpfs_subr.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -1372,12 +1372,9 @@ retry:
> >  				VM_WAIT;
> >  				VM_OBJECT_WLOCK(uobj);
> >  				goto retry;
> > -			} else if (m->valid != VM_PAGE_BITS_ALL)
> > -				rv = vm_pager_get_pages(uobj, &m, 1,
> > -				    NULL, NULL);
> > -			else
> > -				/* A cached page was reactivated. */
> > -				rv = VM_PAGER_OK;
> > +			}
> > +			rv = vm_pager_get_pages(uobj, &m, 1, NULL,
> > +			    NULL);
> >  			vm_page_lock(m);
> >  			if (rv == VM_PAGER_OK) {
> >  				vm_page_deactivate(m);
> >
> > Modified: head/sys/kern/kern_exec.c
> > ==============================================================================
> > --- head/sys/kern/kern_exec.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/kern/kern_exec.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -1006,7 +1006,7 @@ exec_map_first_page(imgp)
> >  				break;
> >  			} else {
> >  				ma[i] = vm_page_alloc(object, i,
> > -				    VM_ALLOC_NORMAL | VM_ALLOC_IFNOTCACHED);
> > +				    VM_ALLOC_NORMAL);
> >  				if (ma[i] == NULL)
> >  					break;
> >  			}
> >
> > Modified: head/sys/kern/uipc_shm.c
> > ==============================================================================
> > --- head/sys/kern/uipc_shm.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/kern/uipc_shm.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -455,12 +455,9 @@ retry:
> >  				VM_WAIT;
> >  				VM_OBJECT_WLOCK(object);
> >  				goto retry;
> > -			} else if (m->valid != VM_PAGE_BITS_ALL)
> > -				rv = vm_pager_get_pages(object, &m, 1,
> > -				    NULL, NULL);
> > -			else
> > -				/* A cached page was reactivated. */
> > -				rv = VM_PAGER_OK;
> > +			}
> > +			rv = vm_pager_get_pages(object, &m, 1, NULL,
> > +			    NULL);
> >  			vm_page_lock(m);
> >  			if (rv == VM_PAGER_OK) {
> >  				vm_page_deactivate(m);
> >
> > Modified: head/sys/vm/swap_pager.c
> > ==============================================================================
> > --- head/sys/vm/swap_pager.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/swap_pager.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -1126,7 +1126,7 @@ swap_pager_getpages(vm_object_t object,
> >  	if (shift != 0) {
> >  		for (i = 1; i <= shift; i++) {
> >  			p = vm_page_alloc(object, m[0]->pindex - i,
> > -			    VM_ALLOC_NORMAL | VM_ALLOC_IFNOTCACHED);
> > +			    VM_ALLOC_NORMAL);
> >  			if (p == NULL) {
> >  				/* Shift allocated pages to the left. */
> >  				for (j = 0; j < i - 1; j++)
> > @@ -1144,8 +1144,7 @@ swap_pager_getpages(vm_object_t object,
> >  	if (rahead != NULL) {
> >  		for (i = 0; i < *rahead; i++) {
> >  			p = vm_page_alloc(object,
> > -			    m[reqcount - 1]->pindex + i + 1,
> > -			    VM_ALLOC_NORMAL | VM_ALLOC_IFNOTCACHED);
> > +			    m[reqcount - 1]->pindex + i + 1, VM_ALLOC_NORMAL);
> >  			if (p == NULL)
> >  				break;
> >  			bp->b_pages[shift + reqcount + i] = p;
> >
> > Modified: head/sys/vm/vm_fault.c
> > ==============================================================================
> > --- head/sys/vm/vm_fault.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/vm_fault.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -559,8 +559,7 @@ fast_failed:
> >  			unlock_and_deallocate(&fs);
> >  			VM_WAITPFAULT;
> >  			goto RetryFault;
> > -		} else if (fs.m->valid == VM_PAGE_BITS_ALL)
> > -			break;
> > +		}
> >  	}
> >
> >  readrest:
> >
> > Modified: head/sys/vm/vm_mmap.c
> > ==============================================================================
> > --- head/sys/vm/vm_mmap.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/vm_mmap.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -877,9 +877,6 @@ RestartScan:
> >  				pindex = OFF_TO_IDX(current->offset +
> >  				    (addr - current->start));
> >  				m = vm_page_lookup(object, pindex);
> > -				if (m == NULL &&
> > -				    vm_page_is_cached(object, pindex))
> > -					mincoreinfo = MINCORE_INCORE;
> >  				if (m != NULL && m->valid == 0)
> >  					m = NULL;
> >  				if (m != NULL)
> >
> > Modified: head/sys/vm/vm_object.c
> > ==============================================================================
> > --- head/sys/vm/vm_object.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/vm_object.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -178,9 +178,6 @@ vm_object_zdtor(void *mem, int size, voi
> >  	    ("object %p has reservations",
> >  	    object));
> >  #endif
> > -	KASSERT(vm_object_cache_is_empty(object),
> > -	    ("object %p has cached pages",
> > -	    object));
> >  	KASSERT(object->paging_in_progress == 0,
> >  	    ("object %p paging_in_progress = %d",
> >  	    object, object->paging_in_progress));
> > @@ -212,8 +209,6 @@ vm_object_zinit(void *mem, int size, int
> >  	object->paging_in_progress = 0;
> >  	object->resident_page_count = 0;
> >  	object->shadow_count = 0;
> > -	object->cache.rt_root = 0;
> > -	object->cache.rt_flags = 0;
> >
> >  	mtx_lock(&vm_object_list_mtx);
> >  	TAILQ_INSERT_TAIL(&vm_object_list, object, object_list);
> > @@ -792,8 +787,6 @@ vm_object_terminate(vm_object_t object)
> >  	if (__predict_false(!LIST_EMPTY(&object->rvq)))
> >  		vm_reserv_break_all(object);
> >  #endif
> > -	if (__predict_false(!vm_object_cache_is_empty(object)))
> > -		vm_page_cache_free(object, 0, 0);
> >
> >  	KASSERT(object->cred == NULL || object->type == OBJT_DEFAULT ||
> >  	    object->type == OBJT_SWAP,
> > @@ -1135,13 +1128,6 @@ shadowlookup:
> >  		} else if ((tobject->flags & OBJ_UNMANAGED) != 0)
> >  			goto unlock_tobject;
> >  		m = vm_page_lookup(tobject, tpindex);
> > -		if (m == NULL && advise == MADV_WILLNEED) {
> > -			/*
> > -			 * If the page is cached, reactivate it.
> > -			 */
> > -			m = vm_page_alloc(tobject, tpindex, VM_ALLOC_IFCACHED |
> > -			    VM_ALLOC_NOBUSY);
> > -		}
> >  		if (m == NULL) {
> >  			/*
> >  			 * There may be swap even if there is no backing page
> > @@ -1406,19 +1392,6 @@ retry:
> >  		swap_pager_copy(orig_object, new_object, offidxstart, 0);
> >  		TAILQ_FOREACH(m, &new_object->memq, listq)
> >  			vm_page_xunbusy(m);
> > -
> > -		/*
> > -		 * Transfer any cached pages from orig_object to new_object.
> > -		 * If swap_pager_copy() found swapped out pages within the
> > -		 * specified range of orig_object, then it changed
> > -		 * new_object's type to OBJT_SWAP when it transferred those
> > -		 * pages to new_object.  Otherwise, new_object's type
> > -		 * should still be OBJT_DEFAULT and orig_object should not
> > -		 * contain any cached pages within the specified range.
> > -		 */
> > -		if (__predict_false(!vm_object_cache_is_empty(orig_object)))
> > -			vm_page_cache_transfer(orig_object, offidxstart,
> > -			    new_object);
> >  	}
> >  	VM_OBJECT_WUNLOCK(orig_object);
> >  	VM_OBJECT_WUNLOCK(new_object);
> > @@ -1754,13 +1727,6 @@ vm_object_collapse(vm_object_t object)
> >  			    backing_object,
> >  			    object,
> >  			    OFF_TO_IDX(object->backing_object_offset), TRUE);
> > -
> > -			/*
> > -			 * Free any cached pages from backing_object.
> > -			 */
> > -			if (__predict_false(
> > -			    !vm_object_cache_is_empty(backing_object)))
> > -				vm_page_cache_free(backing_object, 0, 0);
> >  		}
> >  		/*
> >  		 * Object now shadows whatever backing_object did.
> > @@ -1889,7 +1855,7 @@ vm_object_page_remove(vm_object_t object > > (options & (OBJPR_CLEANONLY | OBJPR_NOTMAPPED)) == OBJPR_NOTMAPPED, > > ("vm_object_page_remove: illegal options for object %p", object)); > > if (object->resident_page_count == 0) > > - goto skipmemq; > > + return; > > vm_object_pip_add(object, 1); > > again: > > p = vm_page_find_least(object, start); > > @@ -1946,9 +1912,6 @@ next: > > vm_page_unlock(p); > > } > > vm_object_pip_wakeup(object); > > -skipmemq: > > - if (__predict_false(!vm_object_cache_is_empty(object))) > > - vm_page_cache_free(object, start, end); > > } > > > > /* > > > > Modified: head/sys/vm/vm_object.h > > ============================================================================== > > --- head/sys/vm/vm_object.h Tue Nov 15 17:01:48 2016 (r308690) > > +++ head/sys/vm/vm_object.h Tue Nov 15 18:22:50 2016 (r308691) > > @@ -118,7 +118,6 @@ struct vm_object { > > vm_ooffset_t backing_object_offset;/* Offset in backing object */ > > TAILQ_ENTRY(vm_object) pager_object_list; /* list of all objects of this pager type */ > > LIST_HEAD(, vm_reserv) rvq; /* list of reservations */ > > - struct vm_radix cache; /* (o + f) root of the cache page radix trie */ > > void *handle; > > union { > > /* > > @@ -290,13 +289,6 @@ void vm_object_pip_wakeup(vm_object_t ob > > void vm_object_pip_wakeupn(vm_object_t object, short i); > > void vm_object_pip_wait(vm_object_t object, char *waitid); > > > > -static __inline boolean_t > > -vm_object_cache_is_empty(vm_object_t object) > > -{ > > - > > - return (vm_radix_is_empty(&object->cache)); > > -} > > - > > void umtx_shm_object_init(vm_object_t object); > > void umtx_shm_object_terminated(vm_object_t object); > > extern int umtx_shm_vnobj_persistent; > > > > Modified: head/sys/vm/vm_page.c > > ============================================================================== > > --- head/sys/vm/vm_page.c Tue Nov 15 17:01:48 2016 (r308690) > > +++ head/sys/vm/vm_page.c Tue Nov 15 18:22:50 2016 (r308691) 
> > @@ -154,8 +154,7 @@ static int vm_pageout_pages_needed; > > > > static uma_zone_t fakepg_zone; > > > > -static struct vnode *vm_page_alloc_init(vm_page_t m); > > -static void vm_page_cache_turn_free(vm_page_t m); > > +static void vm_page_alloc_check(vm_page_t m); > > static void vm_page_clear_dirty_mask(vm_page_t m, vm_page_bits_t pagebits); > > static void vm_page_enqueue(uint8_t queue, vm_page_t m); > > static void vm_page_free_wakeup(void); > > @@ -1118,9 +1117,7 @@ void > > vm_page_dirty_KBI(vm_page_t m) > > { > > > > - /* These assertions refer to this operation by its public name. */ > > - KASSERT((m->flags & PG_CACHED) == 0, > > - ("vm_page_dirty: page in cache!")); > > + /* Refer to this operation by its public name. */ > > KASSERT(m->valid == VM_PAGE_BITS_ALL, > > ("vm_page_dirty: page is invalid!")); > > m->dirty = VM_PAGE_BITS_ALL; > > @@ -1459,142 +1456,6 @@ vm_page_rename(vm_page_t m, vm_object_t > > } > > > > /* > > - * Convert all of the given object's cached pages that have a > > - * pindex within the given range into free pages. If the value > > - * zero is given for "end", then the range's upper bound is > > - * infinity. If the given object is backed by a vnode and it > > - * transitions from having one or more cached pages to none, the > > - * vnode's hold count is reduced. 
> > - */ > > -void > > -vm_page_cache_free(vm_object_t object, vm_pindex_t start, vm_pindex_t end) > > -{ > > - vm_page_t m; > > - boolean_t empty; > > - > > - mtx_lock(&vm_page_queue_free_mtx); > > - if (__predict_false(vm_radix_is_empty(&object->cache))) { > > - mtx_unlock(&vm_page_queue_free_mtx); > > - return; > > - } > > - while ((m = vm_radix_lookup_ge(&object->cache, start)) != NULL) { > > - if (end != 0 && m->pindex >= end) > > - break; > > - vm_radix_remove(&object->cache, m->pindex); > > - vm_page_cache_turn_free(m); > > - } > > - empty = vm_radix_is_empty(&object->cache); > > - mtx_unlock(&vm_page_queue_free_mtx); > > - if (object->type == OBJT_VNODE && empty) > > - vdrop(object->handle); > > -} > > - > > -/* > > - * Returns the cached page that is associated with the given > > - * object and offset. If, however, none exists, returns NULL. > > - * > > - * The free page queue must be locked. > > - */ > > -static inline vm_page_t > > -vm_page_cache_lookup(vm_object_t object, vm_pindex_t pindex) > > -{ > > - > > - mtx_assert(&vm_page_queue_free_mtx, MA_OWNED); > > - return (vm_radix_lookup(&object->cache, pindex)); > > -} > > - > > -/* > > - * Remove the given cached page from its containing object's > > - * collection of cached pages. > > - * > > - * The free page queue must be locked. > > - */ > > -static void > > -vm_page_cache_remove(vm_page_t m) > > -{ > > - > > - mtx_assert(&vm_page_queue_free_mtx, MA_OWNED); > > - KASSERT((m->flags & PG_CACHED) != 0, > > - ("vm_page_cache_remove: page %p is not cached", m)); > > - vm_radix_remove(&m->object->cache, m->pindex); > > - m->object = NULL; > > - vm_cnt.v_cache_count--; > > -} > > - > > -/* > > - * Transfer all of the cached pages with offset greater than or > > - * equal to 'offidxstart' from the original object's cache to the > > - * new object's cache. However, any cached pages with offset > > - * greater than or equal to the new object's size are kept in the > > - * original object. 
Initially, the new object's cache must be > > - * empty. Offset 'offidxstart' in the original object must > > - * correspond to offset zero in the new object. > > - * > > - * The new object must be locked. > > - */ > > -void > > -vm_page_cache_transfer(vm_object_t orig_object, vm_pindex_t offidxstart, > > - vm_object_t new_object) > > -{ > > - vm_page_t m; > > - > > - /* > > - * Insertion into an object's collection of cached pages > > - * requires the object to be locked. In contrast, removal does > > - * not. > > - */ > > - VM_OBJECT_ASSERT_WLOCKED(new_object); > > - KASSERT(vm_radix_is_empty(&new_object->cache), > > - ("vm_page_cache_transfer: object %p has cached pages", > > - new_object)); > > - mtx_lock(&vm_page_queue_free_mtx); > > - while ((m = vm_radix_lookup_ge(&orig_object->cache, > > - offidxstart)) != NULL) { > > - /* > > - * Transfer all of the pages with offset greater than or > > - * equal to 'offidxstart' from the original object's > > - * cache to the new object's cache. > > - */ > > - if ((m->pindex - offidxstart) >= new_object->size) > > - break; > > - vm_radix_remove(&orig_object->cache, m->pindex); > > - /* Update the page's object and offset. */ > > - m->object = new_object; > > - m->pindex -= offidxstart; > > - if (vm_radix_insert(&new_object->cache, m)) > > - vm_page_cache_turn_free(m); > > - } > > - mtx_unlock(&vm_page_queue_free_mtx); > > -} > > - > > -/* > > - * Returns TRUE if a cached page is associated with the given object and > > - * offset, and FALSE otherwise. > > - * > > - * The object must be locked. > > - */ > > -boolean_t > > -vm_page_is_cached(vm_object_t object, vm_pindex_t pindex) > > -{ > > - vm_page_t m; > > - > > - /* > > - * Insertion into an object's collection of cached pages requires the > > - * object to be locked. 
Therefore, if the object is locked and the > > - * object's collection is empty, there is no need to acquire the free > > - * page queues lock in order to prove that the specified page doesn't > > - * exist. > > - */ > > - VM_OBJECT_ASSERT_WLOCKED(object); > > - if (__predict_true(vm_object_cache_is_empty(object))) > > - return (FALSE); > > - mtx_lock(&vm_page_queue_free_mtx); > > - m = vm_page_cache_lookup(object, pindex); > > - mtx_unlock(&vm_page_queue_free_mtx); > > - return (m != NULL); > > -} > > - > > -/* > > * vm_page_alloc: > > * > > * Allocate and return a page that is associated with the specified > > @@ -1610,9 +1471,6 @@ vm_page_is_cached(vm_object_t object, vm > > * optional allocation flags: > > * VM_ALLOC_COUNT(number) the number of additional pages that the caller > > * intends to allocate > > - * VM_ALLOC_IFCACHED return page only if it is cached > > - * VM_ALLOC_IFNOTCACHED return NULL, do not reactivate if the page > > - * is cached > > * VM_ALLOC_NOBUSY do not exclusive busy the page > > * VM_ALLOC_NODUMP do not include the page in a kernel core dump > > * VM_ALLOC_NOOBJ page is not associated with an object and > > @@ -1626,8 +1484,6 @@ vm_page_is_cached(vm_object_t object, vm > > vm_page_t > > vm_page_alloc(vm_object_t object, vm_pindex_t pindex, int req) > > { > > - struct vnode *vp = NULL; > > - vm_object_t m_object; > > vm_page_t m, mpred; > > int flags, req_class; > > > > @@ -1670,31 +1526,12 @@ vm_page_alloc(vm_object_t object, vm_pin > > * Allocate from the free queue if the number of free pages > > * exceeds the minimum for the request class. 
> > */ > > - if (object != NULL && > > - (m = vm_page_cache_lookup(object, pindex)) != NULL) { > > - if ((req & VM_ALLOC_IFNOTCACHED) != 0) { > > - mtx_unlock(&vm_page_queue_free_mtx); > > - return (NULL); > > - } > > - if (vm_phys_unfree_page(m)) > > - vm_phys_set_pool(VM_FREEPOOL_DEFAULT, m, 0); > > -#if VM_NRESERVLEVEL > 0 > > - else if (!vm_reserv_reactivate_page(m)) > > -#else > > - else > > -#endif > > - panic("vm_page_alloc: cache page %p is missing" > > - " from the free queue", m); > > - } else if ((req & VM_ALLOC_IFCACHED) != 0) { > > - mtx_unlock(&vm_page_queue_free_mtx); > > - return (NULL); > > #if VM_NRESERVLEVEL > 0 > > - } else if (object == NULL || (object->flags & (OBJ_COLORED | > > + if (object == NULL || (object->flags & (OBJ_COLORED | > > OBJ_FICTITIOUS)) != OBJ_COLORED || (m = > > - vm_reserv_alloc_page(object, pindex, mpred)) == NULL) { > > -#else > > - } else { > > + vm_reserv_alloc_page(object, pindex, mpred)) == NULL) > > #endif > > + { > > m = vm_phys_alloc_pages(object != NULL ? > > VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT, 0); > > #if VM_NRESERVLEVEL > 0 > > @@ -1720,33 +1557,9 @@ vm_page_alloc(vm_object_t object, vm_pin > > * At this point we had better have found a good page. 
> > */ > > KASSERT(m != NULL, ("vm_page_alloc: missing page")); > > - KASSERT(m->queue == PQ_NONE, > > - ("vm_page_alloc: page %p has unexpected queue %d", m, m->queue)); > > - KASSERT(m->wire_count == 0, ("vm_page_alloc: page %p is wired", m)); > > - KASSERT(m->hold_count == 0, ("vm_page_alloc: page %p is held", m)); > > - KASSERT(!vm_page_busied(m), ("vm_page_alloc: page %p is busy", m)); > > - KASSERT(m->dirty == 0, ("vm_page_alloc: page %p is dirty", m)); > > - KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT, > > - ("vm_page_alloc: page %p has unexpected memattr %d", m, > > - pmap_page_get_memattr(m))); > > - if ((m->flags & PG_CACHED) != 0) { > > - KASSERT((m->flags & PG_ZERO) == 0, > > - ("vm_page_alloc: cached page %p is PG_ZERO", m)); > > - KASSERT(m->valid != 0, > > - ("vm_page_alloc: cached page %p is invalid", m)); > > - if (m->object != object || m->pindex != pindex) > > - m->valid = 0; > > - m_object = m->object; > > - vm_page_cache_remove(m); > > - if (m_object->type == OBJT_VNODE && > > - vm_object_cache_is_empty(m_object)) > > - vp = m_object->handle; > > - } else { > > - KASSERT(m->valid == 0, > > - ("vm_page_alloc: free page %p is valid", m)); > > - vm_phys_freecnt_adj(m, -1); > > - } > > + vm_phys_freecnt_adj(m, -1); > > mtx_unlock(&vm_page_queue_free_mtx); > > + vm_page_alloc_check(m); > > > > /* > > * Initialize the page. Only the PG_ZERO flag is inherited. > > @@ -1778,9 +1591,6 @@ vm_page_alloc(vm_object_t object, vm_pin > > > > if (object != NULL) { > > if (vm_page_insert_after(m, object, pindex, mpred)) { > > - /* See the comment below about hold count. 
*/ > > - if (vp != NULL) > > - vdrop(vp); > > pagedaemon_wakeup(); > > if (req & VM_ALLOC_WIRED) { > > atomic_subtract_int(&vm_cnt.v_wire_count, 1); > > @@ -1801,15 +1611,6 @@ vm_page_alloc(vm_object_t object, vm_pin > > m->pindex = pindex; > > > > /* > > - * The following call to vdrop() must come after the above call > > - * to vm_page_insert() in case both affect the same object and > > - * vnode. Otherwise, the affected vnode's hold count could > > - * temporarily become zero. > > - */ > > - if (vp != NULL) > > - vdrop(vp); > > - > > - /* > > * Don't wakeup too often - wakeup the pageout daemon when > > * we would be nearly out of memory. > > */ > > @@ -1819,16 +1620,6 @@ vm_page_alloc(vm_object_t object, vm_pin > > return (m); > > } > > > > -static void > > -vm_page_alloc_contig_vdrop(struct spglist *lst) > > -{ > > - > > - while (!SLIST_EMPTY(lst)) { > > - vdrop((struct vnode *)SLIST_FIRST(lst)-> plinks.s.pv); > > - SLIST_REMOVE_HEAD(lst, plinks.s.ss); > > - } > > -} > > - > > /* > > * vm_page_alloc_contig: > > * > > @@ -1873,8 +1664,6 @@ vm_page_alloc_contig(vm_object_t object, > > u_long npages, vm_paddr_t low, vm_paddr_t high, u_long alignment, > > vm_paddr_t boundary, vm_memattr_t memattr) > > { > > - struct vnode *drop; > > - struct spglist deferred_vdrop_list; > > vm_page_t m, m_tmp, m_ret; > > u_int flags; > > int req_class; > > @@ -1900,7 +1689,6 @@ vm_page_alloc_contig(vm_object_t object, > > if (curproc == pageproc && req_class != VM_ALLOC_INTERRUPT) > > req_class = VM_ALLOC_SYSTEM; > > > > - SLIST_INIT(&deferred_vdrop_list); > > mtx_lock(&vm_page_queue_free_mtx); > > if (vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages + > > vm_cnt.v_free_reserved || (req_class == VM_ALLOC_SYSTEM && > > @@ -1922,17 +1710,7 @@ retry: > > return (NULL); > > } > > if (m_ret != NULL) > > - for (m = m_ret; m < &m_ret[npages]; m++) { > > - drop = vm_page_alloc_init(m); > > - if (drop != NULL) { > > - /* > > - * Enqueue the vnode for deferred vdrop(). 
> > - */ > > - m->plinks.s.pv = drop; > > - SLIST_INSERT_HEAD(&deferred_vdrop_list, m, > > - plinks.s.ss); > > - } > > - } > > + vm_phys_freecnt_adj(m_ret, -npages); > > else { > > #if VM_NRESERVLEVEL > 0 > > if (vm_reserv_reclaim_contig(npages, low, high, alignment, > > @@ -1943,6 +1721,8 @@ retry: > > mtx_unlock(&vm_page_queue_free_mtx); > > if (m_ret == NULL) > > return (NULL); > > + for (m = m_ret; m < &m_ret[npages]; m++) > > + vm_page_alloc_check(m); > > > > /* > > * Initialize the pages. Only the PG_ZERO flag is inherited. > > @@ -1975,8 +1755,6 @@ retry: > > m->oflags = VPO_UNMANAGED; > > if (object != NULL) { > > if (vm_page_insert(m, object, pindex)) { > > - vm_page_alloc_contig_vdrop( > > - &deferred_vdrop_list); > > if (vm_paging_needed()) > > pagedaemon_wakeup(); > > if ((req & VM_ALLOC_WIRED) != 0) > > @@ -2001,57 +1779,28 @@ retry: > > pmap_page_set_memattr(m, memattr); > > pindex++; > > } > > - vm_page_alloc_contig_vdrop(&deferred_vdrop_list); > > if (vm_paging_needed()) > > pagedaemon_wakeup(); > > return (m_ret); > > } > > > > /* > > - * Initialize a page that has been freshly dequeued from a freelist. > > - * The caller has to drop the vnode returned, if it is not NULL. > > - * > > - * This function may only be used to initialize unmanaged pages. > > - * > > - * To be called with vm_page_queue_free_mtx held. > > + * Check a page that has been freshly dequeued from a freelist. 
> >   */
> > -static struct vnode *
> > -vm_page_alloc_init(vm_page_t m)
> > +static void
> > +vm_page_alloc_check(vm_page_t m)
> >  {
> > -	struct vnode *drop;
> > -	vm_object_t m_object;
> > 
> >  	KASSERT(m->queue == PQ_NONE,
> > -	    ("vm_page_alloc_init: page %p has unexpected queue %d",
> > -	    m, m->queue));
> > -	KASSERT(m->wire_count == 0,
> > -	    ("vm_page_alloc_init: page %p is wired", m));
> > -	KASSERT(m->hold_count == 0,
> > -	    ("vm_page_alloc_init: page %p is held", m));
> > -	KASSERT(!vm_page_busied(m),
> > -	    ("vm_page_alloc_init: page %p is busy", m));
> > -	KASSERT(m->dirty == 0,
> > -	    ("vm_page_alloc_init: page %p is dirty", m));
> > +	    ("page %p has unexpected queue %d", m, m->queue));
> > +	KASSERT(m->wire_count == 0, ("page %p is wired", m));
> > +	KASSERT(m->hold_count == 0, ("page %p is held", m));
> > +	KASSERT(!vm_page_busied(m), ("page %p is busy", m));
> > +	KASSERT(m->dirty == 0, ("page %p is dirty", m));
> >  	KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT,
> > -	    ("vm_page_alloc_init: page %p has unexpected memattr %d",
> > +	    ("page %p has unexpected memattr %d",
> >  	    m, pmap_page_get_memattr(m)));
> > -	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
> > -	drop = NULL;
> > -	if ((m->flags & PG_CACHED) != 0) {
> > -		KASSERT((m->flags & PG_ZERO) == 0,
> > -		    ("vm_page_alloc_init: cached page %p is PG_ZERO", m));
> > -		m->valid = 0;
> > -		m_object = m->object;
> > -		vm_page_cache_remove(m);
> > -		if (m_object->type == OBJT_VNODE &&
> > -		    vm_object_cache_is_empty(m_object))
> > -			drop = m_object->handle;
> > -	} else {
> > -		KASSERT(m->valid == 0,
> > -		    ("vm_page_alloc_init: free page %p is valid", m));
> > -		vm_phys_freecnt_adj(m, -1);
> > -	}
> > -	return (drop);
> > +	KASSERT(m->valid == 0, ("free page %p is valid", m));
> >  }
> > 
> >  /*
> > @@ -2077,7 +1826,6 @@ vm_page_alloc_init(vm_page_t m)
> >  vm_page_t
> >  vm_page_alloc_freelist(int flind, int req)
> >  {
> > -	struct vnode *drop;
> >  	vm_page_t m;
> >  	u_int flags;
> >  	int req_class;
> > @@ -2111,8 +1859,9 @@ vm_page_alloc_freelist(int flind, int re
> >  		mtx_unlock(&vm_page_queue_free_mtx);
> >  		return (NULL);
> >  	}
> > -	drop = vm_page_alloc_init(m);
> > +	vm_phys_freecnt_adj(m, -1);
> >  	mtx_unlock(&vm_page_queue_free_mtx);
> > +	vm_page_alloc_check(m);
> > 
> >  	/*
> >  	 * Initialize the page.  Only the PG_ZERO flag is inherited.
> > @@ -2132,8 +1881,6 @@ vm_page_alloc_freelist(int flind, int re
> >  	}
> >  	/* Unmanaged pages don't use "act_count". */
> >  	m->oflags = VPO_UNMANAGED;
> > -	if (drop != NULL)
> > -		vdrop(drop);
> >  	if (vm_paging_needed())
> >  		pagedaemon_wakeup();
> >  	return (m);
> > @@ -2259,38 +2006,8 @@ retry:
> >  			/* Don't care: PG_NODUMP, PG_ZERO. */
> >  			if (object->type != OBJT_DEFAULT &&
> >  			    object->type != OBJT_SWAP &&
> > -			    object->type != OBJT_VNODE)
> > +			    object->type != OBJT_VNODE) {
> >  				run_ext = 0;
> > -			else if ((m->flags & PG_CACHED) != 0 ||
> > -			    m != vm_page_lookup(object, m->pindex)) {
> > -				/*
> > -				 * The page is cached or recently converted
> > -				 * from cached to free.
> > -				 */
> > -#if VM_NRESERVLEVEL > 0
> > -				if (level >= 0) {
> > -					/*
> > -					 * The page is reserved.  Extend the
> > -					 * current run by one page.
> > -					 */
> > -					run_ext = 1;
> > -				} else
> > -#endif
> > -				if ((order = m->order) < VM_NFREEORDER) {
> > -					/*
> > -					 * The page is enqueued in the
> > -					 * physical memory allocator's cache/
> > -					 * free page queues.  Moreover, it is
> > -					 * the first page in a power-of-two-
> > -					 * sized run of contiguous cache/free
> > -					 * pages.  Add these pages to the end
> > -					 * of the current run, and jump
> > -					 * ahead.
> > -					 */
> > -					run_ext = 1 << order;
> > -					m_inc = 1 << order;
> > -				} else
> > -					run_ext = 0;
> >  #if VM_NRESERVLEVEL > 0
> >  			} else if ((options & VPSC_NOSUPER) != 0 &&
> >  			    (level = vm_reserv_level_iffullpop(m)) >= 0) {
> > @@ -2457,15 +2174,7 @@ retry:
> >  			    object->type != OBJT_SWAP &&
> >  			    object->type != OBJT_VNODE)
> >  				error = EINVAL;
> > -			else if ((m->flags & PG_CACHED) != 0 ||
> > -			    m != vm_page_lookup(object, m->pindex)) {
> > -				/*
> > -				 * The page is cached or recently converted
> > -				 * from cached to free.
> > -				 */
> > -				VM_OBJECT_WUNLOCK(object);
> > -				goto cached;
> > -			} else if (object->memattr != VM_MEMATTR_DEFAULT)
> > +			else if (object->memattr != VM_MEMATTR_DEFAULT)
> >  				error = EINVAL;
> >  			else if (m->queue != PQ_NONE && !vm_page_busied(m)) {
> >  				KASSERT(pmap_page_get_memattr(m) ==
> > @@ -2566,7 +2275,6 @@ retry:
> >  unlock:
> >  			VM_OBJECT_WUNLOCK(object);
> >  		} else {
> > -cached:
> >  			mtx_lock(&vm_page_queue_free_mtx);
> >  			order = m->order;
> >  			if (order < VM_NFREEORDER) {
> > @@ -2964,27 +2672,6 @@ vm_page_free_wakeup(void)
> >  }
> > 
> >  /*
> > - * Turn a cached page into a free page, by changing its attributes.
> > - * Keep the statistics up-to-date.
> > - *
> > - * The free page queue must be locked.
> > - */
> > -static void
> > -vm_page_cache_turn_free(vm_page_t m)
> > -{
> > -
> > -	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
> > -
> > -	m->object = NULL;
> > -	m->valid = 0;
> > -	KASSERT((m->flags & PG_CACHED) != 0,
> > -	    ("vm_page_cache_turn_free: page %p is not cached", m));
> > -	m->flags &= ~PG_CACHED;
> > -	vm_cnt.v_cache_count--;
> > -	vm_phys_freecnt_adj(m, 1);
> > -}
> > -
> > -/*
> >   *	vm_page_free_toq:
> >   *
> >   *	Returns the given page to the free list,
> > @@ -3383,8 +3070,7 @@ retrylookup:
> >  		VM_WAIT;
> >  		VM_OBJECT_WLOCK(object);
> >  		goto retrylookup;
> > -	} else if (m->valid != 0)
> > -		return (m);
> > +	}
> >  	if (allocflags & VM_ALLOC_ZERO && (m->flags & PG_ZERO) == 0)
> >  		pmap_zero_page(m);
> >  	return (m);
> > 
> > Modified: head/sys/vm/vm_page.h
> > ==============================================================================
> > --- head/sys/vm/vm_page.h	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/vm_page.h	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -326,7 +326,6 @@ extern struct mtx_padalign pa_lock[];
> >   * Page flags.  If changed at any other time than page allocation or
> >   * freeing, the modification must be protected by the vm_page lock.
> >   */
> > -#define	PG_CACHED	0x0001		/* page is cached */
> >  #define	PG_FICTITIOUS	0x0004		/* physical page doesn't exist */
> >  #define	PG_ZERO		0x0008		/* page is zeroed */
> >  #define	PG_MARKER	0x0010		/* special queue marker page */
> > @@ -409,8 +408,6 @@ vm_page_t PHYS_TO_VM_PAGE(vm_paddr_t pa)
> >  #define	VM_ALLOC_ZERO		0x0040	/* (acfg) Try to obtain a zeroed page */
> >  #define	VM_ALLOC_NOOBJ		0x0100	/* (acg) No associated object */
> >  #define	VM_ALLOC_NOBUSY		0x0200	/* (acg) Do not busy the page */
> > -#define	VM_ALLOC_IFCACHED	0x0400	/* (ag) Fail if page is not cached */
> > -#define	VM_ALLOC_IFNOTCACHED	0x0800	/* (ag) Fail if page is cached */
> >  #define	VM_ALLOC_IGN_SBUSY	0x1000	/* (g) Ignore shared busy flag */
> >  #define	VM_ALLOC_NODUMP		0x2000	/* (ag) don't include in dump */
> >  #define	VM_ALLOC_SBUSY		0x4000	/* (acg) Shared busy the page */
> > @@ -453,8 +450,6 @@ vm_page_t vm_page_alloc_contig(vm_object
> >      vm_paddr_t boundary, vm_memattr_t memattr);
> >  vm_page_t vm_page_alloc_freelist(int, int);
> >  vm_page_t vm_page_grab (vm_object_t, vm_pindex_t, int);
> > -void vm_page_cache_free(vm_object_t, vm_pindex_t, vm_pindex_t);
> > -void vm_page_cache_transfer(vm_object_t, vm_pindex_t, vm_object_t);
> >  int vm_page_try_to_free (vm_page_t);
> >  void vm_page_deactivate (vm_page_t);
> >  void vm_page_deactivate_noreuse(vm_page_t);
> > @@ -464,7 +459,6 @@ vm_page_t vm_page_find_least(vm_object_t
> >  vm_page_t vm_page_getfake(vm_paddr_t paddr, vm_memattr_t memattr);
> >  void vm_page_initfake(vm_page_t m, vm_paddr_t paddr, vm_memattr_t memattr);
> >  int vm_page_insert (vm_page_t, vm_object_t, vm_pindex_t);
> > -boolean_t vm_page_is_cached(vm_object_t object, vm_pindex_t pindex);
> >  void vm_page_launder(vm_page_t m);
> >  vm_page_t vm_page_lookup (vm_object_t, vm_pindex_t);
> >  vm_page_t vm_page_next(vm_page_t m);
> > 
> > Modified: head/sys/vm/vm_reserv.c
> > ==============================================================================
> > --- head/sys/vm/vm_reserv.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/vm_reserv.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -908,45 +908,6 @@ vm_reserv_level_iffullpop(vm_page_t m)
> >  }
> > 
> >  /*
> > - * Prepare for the reactivation of a cached page.
> > - *
> > - * First, suppose that the given page "m" was allocated individually, i.e., not
> > - * as part of a reservation, and cached.  Then, suppose a reservation
> > - * containing "m" is allocated by the same object.  Although "m" and the
> > - * reservation belong to the same object, "m"'s pindex may not match the
> > - * reservation's.
> > - *
> > - * The free page queue must be locked.
> > - */
> > -boolean_t
> > -vm_reserv_reactivate_page(vm_page_t m)
> > -{
> > -	vm_reserv_t rv;
> > -	int index;
> > -
> > -	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
> > -	rv = vm_reserv_from_page(m);
> > -	if (rv->object == NULL)
> > -		return (FALSE);
> > -	KASSERT((m->flags & PG_CACHED) != 0,
> > -	    ("vm_reserv_reactivate_page: page %p is not cached", m));
> > -	if (m->object == rv->object &&
> > -	    m->pindex - rv->pindex == (index = VM_RESERV_INDEX(m->object,
> > -	    m->pindex)))
> > -		vm_reserv_populate(rv, index);
> > -	else {
> > -		KASSERT(rv->inpartpopq,
> > -		    ("vm_reserv_reactivate_page: reserv %p's inpartpopq is FALSE",
> > -		    rv));
> > -		TAILQ_REMOVE(&vm_rvq_partpop, rv, partpopq);
> > -		rv->inpartpopq = FALSE;
> > -		/* Don't release "m" to the physical memory allocator. */
> > -		vm_reserv_break(rv, m);
> > -	}
> > -	return (TRUE);
> > -}
> > -
> > -/*
> >   * Breaks the given partially-populated reservation, releasing its cached and
> >   * free pages to the physical memory allocator.
> >   *
> > 
> > Modified: head/sys/vm/vm_reserv.h
> > ==============================================================================
> > --- head/sys/vm/vm_reserv.h	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/vm_reserv.h	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -56,7 +56,6 @@ void vm_reserv_init(void);
> >  bool vm_reserv_is_page_free(vm_page_t m);
> >  int vm_reserv_level(vm_page_t m);
> >  int vm_reserv_level_iffullpop(vm_page_t m);
> > -boolean_t vm_reserv_reactivate_page(vm_page_t m);
> >  boolean_t vm_reserv_reclaim_contig(u_long npages, vm_paddr_t low,
> >  	    vm_paddr_t high, u_long alignment, vm_paddr_t boundary);
> >  boolean_t vm_reserv_reclaim_inactive(void);
> > 
> > Modified: head/sys/vm/vnode_pager.c
> > ==============================================================================
> > --- head/sys/vm/vnode_pager.c	Tue Nov 15 17:01:48 2016	(r308690)
> > +++ head/sys/vm/vnode_pager.c	Tue Nov 15 18:22:50 2016	(r308691)
> > @@ -466,10 +466,6 @@ vnode_pager_setsize(struct vnode *vp, vm
> >  			 * replacement from working properly.
> >  			 */
> >  			vm_page_clear_dirty(m, base, PAGE_SIZE - base);
> > -		} else if ((nsize & PAGE_MASK) &&
> > -		    vm_page_is_cached(object, OFF_TO_IDX(nsize))) {
> > -			vm_page_cache_free(object, OFF_TO_IDX(nsize),
> > -			    nobjsize);
> >  		}
> >  	}
> >  	object->un_pager.vnp.vnp_size = nsize;
> > @@ -894,8 +890,7 @@ vnode_pager_generic_getpages(struct vnod
> >  		for (tpindex = m[0]->pindex - 1;
> >  		    tpindex >= startpindex && tpindex < m[0]->pindex;
> >  		    tpindex--, i++) {
> > -			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL |
> > -			    VM_ALLOC_IFNOTCACHED);
> > +			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL);
> >  			if (p == NULL) {
> >  				/* Shift the array. */
> >  				for (int j = 0; j < i; j++)
> > @@ -932,8 +927,7 @@ vnode_pager_generic_getpages(struct vnod
> > 
> >  		for (tpindex = m[count - 1]->pindex + 1;
> >  		    tpindex < endpindex; i++, tpindex++) {
> > -			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL |
> > -			    VM_ALLOC_IFNOTCACHED);
> > +			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL);
> >  			if (p == NULL)
> >  				break;
> >  			bp->b_pages[i] = p;
> > 