From: Attilio Rao
Date: Sun, 8 Jun 2014 18:09:42 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r267237 - in user/attilio/rm_vmobj_cache/sys: dev/agp dev/cxgbe/tom dev/drm dev/drm2/i915 kern net vm
Message-Id: <201406081809.s58I9gWE030636@svn.freebsd.org>
List-Id: SVN commit messages for the experimental "user" src tree

Author: attilio
Date: Sun Jun  8 18:09:42 2014
New Revision: 267237
URL: http://svnweb.freebsd.org/changeset/base/267237

Log:
  - Fix up the documentation for vm_page_unwire().
  - Add stronger checks when enqueueing and dequeueing pages on the
    pagequeues.
  - Modify the vm_page_unwire() KPI to have it accept directly the queue
    on which to enqueue the page once its wire count reaches 0.  This
    makes the interface more easily extensible if new page queues are
    added.
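
To make the KPI change concrete, here is a minimal before/after sketch of a
typical caller (it mirrors the callers updated below; PQ_INACTIVE and
PQ_ACTIVE are the existing page queue indices from vm/vm_page.h):

	/* Before: the second argument was a boolean-like flag. */
	vm_page_lock(m);
	vm_page_unwire(m, 0);		/* 0 meant "inactive queue" */
	vm_page_unlock(m);

	/*
	 * After: the caller names the target queue directly, so adding a
	 * new page queue only requires a new PQ_* index, not a KPI change.
	 */
	vm_page_lock(m);
	vm_page_unwire(m, PQ_INACTIVE);	/* or PQ_ACTIVE, etc. */
	vm_page_unlock(m);
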
Modified:
  user/attilio/rm_vmobj_cache/sys/dev/agp/agp.c
  user/attilio/rm_vmobj_cache/sys/dev/agp/agp_i810.c
  user/attilio/rm_vmobj_cache/sys/dev/cxgbe/tom/t4_ddp.c
  user/attilio/rm_vmobj_cache/sys/dev/drm/via_dmablit.c
  user/attilio/rm_vmobj_cache/sys/dev/drm2/i915/i915_gem.c
  user/attilio/rm_vmobj_cache/sys/kern/uipc_syscalls.c
  user/attilio/rm_vmobj_cache/sys/kern/vfs_bio.c
  user/attilio/rm_vmobj_cache/sys/net/bpf_zerocopy.c
  user/attilio/rm_vmobj_cache/sys/vm/vm_fault.c
  user/attilio/rm_vmobj_cache/sys/vm/vm_glue.c
  user/attilio/rm_vmobj_cache/sys/vm/vm_page.c

Modified: user/attilio/rm_vmobj_cache/sys/dev/agp/agp.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/dev/agp/agp.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/dev/agp/agp.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -629,7 +629,7 @@ bad:
 		if (k >= i)
 			vm_page_xunbusy(m);
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);
 		vm_page_unlock(m);
 	}
 	VM_OBJECT_WUNLOCK(mem->am_obj);

@@ -663,7 +663,7 @@ agp_generic_unbind_memory(device_t dev,
 	for (i = 0; i < mem->am_size; i += PAGE_SIZE) {
 		m = vm_page_lookup(mem->am_obj, atop(i));
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);
 		vm_page_unlock(m);
 	}
 	VM_OBJECT_WUNLOCK(mem->am_obj);

Modified: user/attilio/rm_vmobj_cache/sys/dev/agp/agp_i810.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/dev/agp/agp_i810.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/dev/agp/agp_i810.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -2009,7 +2009,7 @@ agp_i810_free_memory(device_t dev, struc
 		VM_OBJECT_WLOCK(mem->am_obj);
 		m = vm_page_lookup(mem->am_obj, 0);
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);
 		vm_page_unlock(m);
 		VM_OBJECT_WUNLOCK(mem->am_obj);
 	} else {

Modified: user/attilio/rm_vmobj_cache/sys/dev/cxgbe/tom/t4_ddp.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/dev/cxgbe/tom/t4_ddp.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/dev/cxgbe/tom/t4_ddp.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -869,7 +869,7 @@ unwire_ddp_buffer(struct ddp_buffer *db)
 	for (i = 0; i < db->npages; i++) {
 		p = db->pages[i];
 		vm_page_lock(p);
-		vm_page_unwire(p, 0);
+		vm_page_unwire(p, PQ_INACTIVE);
 		vm_page_unlock(p);
 	}
 }

Modified: user/attilio/rm_vmobj_cache/sys/dev/drm/via_dmablit.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/dev/drm/via_dmablit.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/dev/drm/via_dmablit.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -179,7 +179,7 @@ via_free_sg_info(drm_via_sg_info_t *vsg)
 		for (i=0; i < vsg->num_pages; ++i) {
 			page = vsg->pages[i];
 			vm_page_lock(page);
-			vm_page_unwire(page, 0);
+			vm_page_unwire(page, PQ_INACTIVE);
 			vm_page_unlock(page);
 		}
 	case dr_via_pages_alloc:

Modified: user/attilio/rm_vmobj_cache/sys/dev/drm2/i915/i915_gem.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/dev/drm2/i915/i915_gem.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/dev/drm2/i915/i915_gem.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -1039,7 +1039,7 @@ i915_gem_swap_io(struct drm_device *dev,
 		vm_page_dirty(m);
 		vm_page_reference(m);
 		vm_page_lock(m);
-		vm_page_unwire(m, 1);
+		vm_page_unwire(m, PQ_ACTIVE);
 		vm_page_unlock(m);
 		atomic_add_long(&i915_gem_wired_pages_cnt, -1);

@@ -2247,7 +2247,7 @@ failed:
 	for (j = 0; j < i; j++) {
 		m = obj->pages[j];
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);
 		vm_page_unlock(m);
 		atomic_add_long(&i915_gem_wired_pages_cnt, -1);
 	}

@@ -2308,7 +2308,7 @@ i915_gem_object_put_pages_gtt(struct drm
 		if (obj->madv == I915_MADV_WILLNEED)
 			vm_page_reference(m);
 		vm_page_lock(m);
-		vm_page_unwire(obj->pages[i], 1);
+		vm_page_unwire(obj->pages[i], PQ_ACTIVE);
 		vm_page_unlock(m);
 		atomic_add_long(&i915_gem_wired_pages_cnt, -1);
 	}

@@ -3611,7 +3611,7 @@ i915_gem_detach_phys_object(struct drm_d
 	vm_page_reference(m);
 	vm_page_lock(m);
 	vm_page_dirty(m);
-	vm_page_unwire(m, 0);
+	vm_page_unwire(m, PQ_INACTIVE);
 	vm_page_unlock(m);
 	atomic_add_long(&i915_gem_wired_pages_cnt, -1);
 }

@@ -3676,7 +3676,7 @@ i915_gem_attach_phys_object(struct drm_d

 	vm_page_reference(m);
 	vm_page_lock(m);
-	vm_page_unwire(m, 0);
+	vm_page_unwire(m, PQ_INACTIVE);
 	vm_page_unlock(m);
 	atomic_add_long(&i915_gem_wired_pages_cnt, -1);
 }

Modified: user/attilio/rm_vmobj_cache/sys/kern/uipc_syscalls.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/kern/uipc_syscalls.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/kern/uipc_syscalls.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -1996,7 +1996,7 @@ sf_buf_mext(struct mbuf *mb, void *addr,
 	m = sf_buf_page(args);
 	sf_buf_free(args);
 	vm_page_lock(m);
-	vm_page_unwire(m, 0);
+	vm_page_unwire(m, PQ_INACTIVE);
 	/*
 	 * Check for the object going away on us. This can
 	 * happen since we don't hold a reference to it.

@@ -2692,7 +2692,7 @@ sendfile_readpage(vm_object_t obj, struc
 	} else if (m != NULL) {
free_page:
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);

 		/*
 		 * See if anyone else might know about this page. If

@@ -3050,7 +3050,7 @@ retry_space:
 			if (sf == NULL) {
 				SFSTAT_INC(sf_allocfail);
 				vm_page_lock(pg);
-				vm_page_unwire(pg, 0);
+				vm_page_unwire(pg, PQ_INACTIVE);
 				KASSERT(pg->object != NULL,
 				    ("%s: object disappeared", __func__));
 				vm_page_unlock(pg);

Modified: user/attilio/rm_vmobj_cache/sys/kern/vfs_bio.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/kern/vfs_bio.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/kern/vfs_bio.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -1879,7 +1879,7 @@ vfs_vmio_release(struct buf *bp)
 		 * everything on the inactive queue.
 		 */
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);

 		/*
 		 * Might as well free the page if we can and it has

@@ -3468,7 +3468,7 @@ allocbuf(struct buf *bp, int size)

 			bp->b_pages[i] = NULL;
 			vm_page_lock(m);
-			vm_page_unwire(m, 0);
+			vm_page_unwire(m, PQ_INACTIVE);
 			vm_page_unlock(m);
 		}
 		VM_OBJECT_WUNLOCK(bp->b_bufobj->bo_object);

Modified: user/attilio/rm_vmobj_cache/sys/net/bpf_zerocopy.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/net/bpf_zerocopy.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/net/bpf_zerocopy.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -114,7 +114,7 @@ zbuf_page_free(vm_page_t pp)
 {

 	vm_page_lock(pp);
-	vm_page_unwire(pp, 0);
+	vm_page_unwire(pp, PQ_INACTIVE);
 	if (pp->wire_count == 0 && pp->object == NULL)
 		vm_page_free(pp);
 	vm_page_unlock(pp);

Modified: user/attilio/rm_vmobj_cache/sys/vm/vm_fault.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/vm/vm_fault.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/vm/vm_fault.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -757,7 +757,7 @@ vnode_locked:
 				vm_page_unlock(fs.first_m);

 				vm_page_lock(fs.m);
-				vm_page_unwire(fs.m, FALSE);
+				vm_page_unwire(fs.m, PQ_INACTIVE);
 				vm_page_unlock(fs.m);
 			}
 			/*

@@ -919,7 +919,7 @@ vnode_locked:
 		if (wired)
 			vm_page_wire(fs.m);
 		else
-			vm_page_unwire(fs.m, 1);
+			vm_page_unwire(fs.m, PQ_ACTIVE);
 	} else
 		vm_page_activate(fs.m);
 	if (m_hold != NULL) {

@@ -1210,7 +1210,7 @@ vm_fault_unwire(vm_map_t map, vm_offset_
 			if (!fictitious) {
 				m = PHYS_TO_VM_PAGE(pa);
 				vm_page_lock(m);
-				vm_page_unwire(m, TRUE);
+				vm_page_unwire(m, PQ_ACTIVE);
 				vm_page_unlock(m);
 			}
 		}

@@ -1392,7 +1392,7 @@ again:
 		if (upgrade) {
 			if (src_m != dst_m) {
 				vm_page_lock(src_m);
-				vm_page_unwire(src_m, 0);
+				vm_page_unwire(src_m, PQ_INACTIVE);
 				vm_page_unlock(src_m);
 				vm_page_lock(dst_m);
 				vm_page_wire(dst_m);

Modified: user/attilio/rm_vmobj_cache/sys/vm/vm_glue.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/vm/vm_glue.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/vm/vm_glue.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -418,7 +418,7 @@ vm_thread_stack_dispose(vm_object_t ksob
 		if (m == NULL)
 			panic("vm_thread_dispose: kstack already missing?");
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);
 		vm_page_free(m);
 		vm_page_unlock(m);
 	}

@@ -507,7 +507,7 @@ vm_thread_swapout(struct thread *td)
 			panic("vm_thread_swapout: kstack already missing?");
 		vm_page_dirty(m);
 		vm_page_lock(m);
-		vm_page_unwire(m, 0);
+		vm_page_unwire(m, PQ_INACTIVE);
 		vm_page_unlock(m);
 	}
 	VM_OBJECT_WUNLOCK(ksobj);

Modified: user/attilio/rm_vmobj_cache/sys/vm/vm_page.c
==============================================================================
--- user/attilio/rm_vmobj_cache/sys/vm/vm_page.c	Sun Jun  8 17:50:07 2014	(r267236)
+++ user/attilio/rm_vmobj_cache/sys/vm/vm_page.c	Sun Jun  8 18:09:42 2014	(r267237)
@@ -147,7 +147,7 @@ static uma_zone_t fakepg_zone;
 static struct vnode *vm_page_alloc_init(vm_page_t m);
 static void vm_page_cache_turn_free(vm_page_t m);
 static void vm_page_clear_dirty_mask(vm_page_t m, vm_page_bits_t pagebits);
-static void vm_page_enqueue(int queue, vm_page_t m);
+static void vm_page_enqueue(uint8_t queue, vm_page_t m);
 static void vm_page_init_fakepg(void *dummy);
 static int vm_page_insert_after(vm_page_t m, vm_object_t object,
     vm_pindex_t pindex, vm_page_t mpred);

@@ -2036,8 +2036,8 @@ vm_page_dequeue(vm_page_t m)
 	struct vm_pagequeue *pq;

 	vm_page_assert_locked(m);
-	KASSERT(m->queue == PQ_ACTIVE || m->queue == PQ_INACTIVE,
-	    ("vm_page_dequeue: page %p is not queued", m));
+	KASSERT(m->queue < PQ_COUNT, ("vm_page_dequeue: page %p is not queued",
+	    m));
 	pq = vm_page_pagequeue(m);
 	vm_pagequeue_lock(pq);
 	m->queue = PQ_NONE;

@@ -2074,11 +2074,15 @@ vm_page_dequeue_locked(vm_page_t m)
  *	The page must be locked.
  */
 static void
-vm_page_enqueue(int queue, vm_page_t m)
+vm_page_enqueue(uint8_t queue, vm_page_t m)
 {
 	struct vm_pagequeue *pq;

 	vm_page_lock_assert(m, MA_OWNED);
+	KASSERT(queue < PQ_COUNT,
+	    ("vm_page_enqueue: invalid queue %u request for page %p",
+	    queue, m));
+
 	pq = &vm_phys_domain(m)->vmd_pagequeues[queue];
 	vm_pagequeue_lock(pq);
 	m->queue = queue;

@@ -2343,21 +2347,19 @@ vm_page_wire(vm_page_t m)
  *
  *	Release one wiring of the specified page, potentially enabling it to be
  *	paged again.  If paging is enabled, then the value of the parameter
- *	"activate" determines to which queue the page is added.  If "activate" is
- *	non-zero, then the page is added to the active queue.  Otherwise, it is
- *	added to the inactive queue.
- *
- *	However, unless the page belongs to an object, it is not enqueued because
- *	it cannot be paged out.
+ *	"queue" determines to which queue the page is added.
- *
- *	If a page is fictitious, then its wire count must always be one.
+ *	If a page is fictitious or managed, then its wire count must always be
+ *	one.
  *
  *	A managed page must be locked.
  */
 void
-vm_page_unwire(vm_page_t m, int activate)
+vm_page_unwire(vm_page_t m, uint8_t queue)
 {

+	KASSERT(queue < PQ_COUNT,
+	    ("vm_page_unwire: invalid queue %u request for page %p",
+	    queue, m));
 	if ((m->oflags & VPO_UNMANAGED) == 0)
 		vm_page_lock_assert(m, MA_OWNED);
 	if ((m->flags & PG_FICTITIOUS) != 0) {

@@ -2373,9 +2375,9 @@ vm_page_unwire(vm_page_t m, int activate
 			panic("vm_page_unwire: unmanaged page %p's wire count is one",
 			    m);
 		atomic_subtract_int(&vm_cnt.v_wire_count, 1);
-		if (!activate)
+		if (queue == PQ_INACTIVE)
 			m->flags &= ~PG_WINATCFLS;
-		vm_page_enqueue(activate ? PQ_ACTIVE : PQ_INACTIVE, m);
+		vm_page_enqueue(queue, m);
 		}
 	} else
 		panic("vm_page_unwire: page %p's wire count is zero", m);
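
Note that, with the queue argument now a queue index, the new assertions
reject out-of-range values at the KPI boundary.  On a kernel built with
INVARIANTS, a hypothetical miscall such as:

	vm_page_lock(m);
	vm_page_unwire(m, PQ_COUNT);	/* invalid: one past the last queue */
	vm_page_unlock(m);

is caught by the new KASSERT() in vm_page_unwire() rather than silently
indexing past the end of the vmd_pagequeues[] array in vm_page_enqueue().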