From owner-svn-src-all@FreeBSD.ORG Thu Nov  1 16:20:03 2012
Message-Id: <201211011620.qA1GK3qX029232@svn.freebsd.org>
From: Alan Cox <alc@FreeBSD.org>
Date: Thu, 1 Nov 2012 16:20:03 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r242434 - head/sys/vm
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Author: alc
Date: Thu Nov  1 16:20:02 2012
New Revision: 242434
URL: http://svn.freebsd.org/changeset/base/242434

Log:
  In general, we call pmap_remove_all() before calling vm_page_cache().
  So, the call to pmap_remove_all() within vm_page_cache() is usually
  redundant.  This change eliminates that call to pmap_remove_all() and
  introduces a call to pmap_remove_all() before vm_page_cache() in the
  one place where it didn't already exist.
  When iterating over a paging queue, if the object containing the
  current page has a zero reference count, then the page can't have
  any managed mappings.  So, a call to pmap_remove_all() is pointless.

  Change a panic() call in vm_page_cache() to a KASSERT().

  MFC after:	6 weeks

Modified:
  head/sys/vm/vm_page.c
  head/sys/vm/vm_pageout.c

Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c	Thu Nov  1 15:17:43 2012	(r242433)
+++ head/sys/vm/vm_page.c	Thu Nov  1 16:20:02 2012	(r242434)
@@ -2277,9 +2277,9 @@ vm_page_cache(vm_page_t m)
 	if ((m->oflags & (VPO_UNMANAGED | VPO_BUSY)) || m->busy ||
 	    m->hold_count || m->wire_count)
 		panic("vm_page_cache: attempting to cache busy page");
-	pmap_remove_all(m);
-	if (m->dirty != 0)
-		panic("vm_page_cache: page %p is dirty", m);
+	KASSERT(!pmap_page_is_mapped(m),
+	    ("vm_page_cache: page %p is mapped", m));
+	KASSERT(m->dirty == 0, ("vm_page_cache: page %p is dirty", m));
 	if (m->valid == 0 || object->type == OBJT_DEFAULT ||
 	    (object->type == OBJT_SWAP &&
 	    !vm_pager_has_page(object, m->pindex, NULL, NULL))) {

Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Thu Nov  1 15:17:43 2012	(r242433)
+++ head/sys/vm/vm_pageout.c	Thu Nov  1 16:20:02 2012	(r242434)
@@ -594,7 +594,7 @@ vm_pageout_launder(int queue, int tries,
 			continue;
 		}
 		vm_page_test_dirty(m);
-		if (m->dirty == 0)
+		if (m->dirty == 0 && object->ref_count != 0)
 			pmap_remove_all(m);
 		if (m->dirty != 0) {
 			vm_page_unlock(m);
@@ -1059,31 +1059,16 @@ vm_pageout_scan(int pass)
 		}

 		/*
-		 * If the upper level VM system does not believe that the page
-		 * is fully dirty, but it is mapped for write access, then we
-		 * consult the pmap to see if the page's dirty status should
-		 * be updated.
+		 * If the page appears to be clean at the machine-independent
+		 * layer, then remove all of its mappings from the pmap in
+		 * anticipation of placing it onto the cache queue.  If,
+		 * however, any of the page's mappings allow write access,
+		 * then the page may still be modified until the last of those
+		 * mappings are removed.
 		 */
-		if (m->dirty != VM_PAGE_BITS_ALL &&
-		    pmap_page_is_write_mapped(m)) {
-			/*
-			 * Avoid a race condition: Unless write access is
-			 * removed from the page, another processor could
-			 * modify it before all access is removed by the call
-			 * to vm_page_cache() below.  If vm_page_cache() finds
-			 * that the page has been modified when it removes all
-			 * access, it panics because it cannot cache dirty
-			 * pages.  In principle, we could eliminate just write
-			 * access here rather than all access.  In the expected
-			 * case, when there are no last instant modifications
-			 * to the page, removing all access will be cheaper
-			 * overall.
-			 */
-			if (pmap_is_modified(m))
-				vm_page_dirty(m);
-			else if (m->dirty == 0)
-				pmap_remove_all(m);
-		}
+		vm_page_test_dirty(m);
+		if (m->dirty == 0 && object->ref_count != 0)
+			pmap_remove_all(m);

 		if (m->valid == 0) {
 			/*