From owner-svn-src-head@FreeBSD.ORG  Tue Jun  2 08:02:27 2009
Delivered-To: svn-src-head@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34])
	by hub.freebsd.org (Postfix) with ESMTP id CF09F106567C;
	Tue, 2 Jun 2009 08:02:27 +0000 (UTC) (envelope-from alc@FreeBSD.org)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:4f8:fff6::2c])
	by mx1.freebsd.org (Postfix) with ESMTP id BD3658FC1D;
	Tue, 2 Jun 2009 08:02:27 +0000 (UTC) (envelope-from alc@FreeBSD.org)
Received: from svn.freebsd.org (localhost [127.0.0.1])
	by svn.freebsd.org (8.14.3/8.14.3) with ESMTP id n5282ROp006796;
	Tue, 2 Jun 2009 08:02:27 GMT (envelope-from alc@svn.freebsd.org)
Received: (from alc@localhost)
	by svn.freebsd.org (8.14.3/8.14.3/Submit) id n5282RM0006794;
	Tue, 2 Jun 2009 08:02:27 GMT (envelope-from alc@svn.freebsd.org)
Message-Id: <200906020802.n5282RM0006794@svn.freebsd.org>
From: Alan Cox
Date: Tue, 2 Jun 2009 08:02:27 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
	svn-src-head@freebsd.org
X-SVN-Group: head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: svn commit: r193303 - in head/sys: kern vm
X-BeenThere: svn-src-head@freebsd.org
X-Mailman-Version: 2.1.5
Precedence: list
List-Id: SVN commit messages for the src tree for head/-current
X-List-Received-Date: Tue, 02 Jun 2009 08:02:28 -0000

Author: alc
Date: Tue Jun  2 08:02:27 2009
New Revision: 193303
URL: http://svn.freebsd.org/changeset/base/193303

Log:
  Correct a boundary case error in the management of a page's dirty bits
  by shm_dotruncate() and vnode_pager_setsize().  Specifically, if the
  length of a shared memory object or a file is truncated such that the
  length modulo the page size is between 1 and 511, then all of the
  page's dirty bits were cleared.
  Now, a dirty bit is cleared only if the corresponding block is
  truncated in its entirety.

Modified:
  head/sys/kern/uipc_shm.c
  head/sys/vm/vnode_pager.c

Modified: head/sys/kern/uipc_shm.c
==============================================================================
--- head/sys/kern/uipc_shm.c	Tue Jun  2 07:35:51 2009	(r193302)
+++ head/sys/kern/uipc_shm.c	Tue Jun  2 08:02:27 2009	(r193303)
@@ -274,7 +274,7 @@ shm_dotruncate(struct shmfd *shmfd, off_
 		/*
 		 * If the last page is partially mapped, then zero out
 		 * the garbage at the end of the page.  See comments
-		 * in vnode_page_setsize() for more details.
+		 * in vnode_pager_setsize() for more details.
 		 *
 		 * XXXJHB: This handles in memory pages, but what about
 		 * a page swapped out to disk?
@@ -286,10 +286,23 @@ shm_dotruncate(struct shmfd *shmfd, off_
 			int size = PAGE_SIZE - base;
 
 			pmap_zero_page_area(m, base, size);
+
+			/*
+			 * Update the valid bits to reflect the blocks that
+			 * have been zeroed.  Some of these valid bits may
+			 * have already been set.
+			 */
+			vm_page_set_valid(m, base, size);
+
+			/*
+			 * Round "base" to the next block boundary so that the
+			 * dirty bit for a partially zeroed block is not
+			 * cleared.
+			 */
+			base = roundup2(base, DEV_BSIZE);
+
 			vm_page_lock_queues();
-			vm_page_set_validclean(m, base, size);
-			if (m->dirty != 0)
-				m->dirty = VM_PAGE_BITS_ALL;
+			vm_page_clear_dirty(m, base, PAGE_SIZE - base);
 			vm_page_unlock_queues();
 		} else if ((length & PAGE_MASK) &&
 		    __predict_false(object->cache != NULL)) {

Modified: head/sys/vm/vnode_pager.c
==============================================================================
--- head/sys/vm/vnode_pager.c	Tue Jun  2 07:35:51 2009	(r193302)
+++ head/sys/vm/vnode_pager.c	Tue Jun  2 08:02:27 2009	(r193303)
@@ -403,22 +403,28 @@ vnode_pager_setsize(vp, nsize)
 			pmap_zero_page_area(m, base, size);
 
 			/*
-			 * Clear out partial-page dirty bits.  This
-			 * has the side effect of setting the valid
-			 * bits, but that is ok.  There are a bunch
-			 * of places in the VM system where we expected
-			 * m->dirty == VM_PAGE_BITS_ALL.  The file EOF
-			 * case is one of them.  If the page is still
-			 * partially dirty, make it fully dirty.
+			 * Update the valid bits to reflect the blocks that
+			 * have been zeroed.  Some of these valid bits may
+			 * have already been set.
+			 */
+			vm_page_set_valid(m, base, size);
+
+			/*
+			 * Round "base" to the next block boundary so that the
+			 * dirty bit for a partially zeroed block is not
+			 * cleared.
+			 */
+			base = roundup2(base, DEV_BSIZE);
+
+			/*
+			 * Clear out partial-page dirty bits.
 			 *
 			 * note that we do not clear out the valid
 			 * bits.  This would prevent bogus_page
 			 * replacement from working properly.
 			 */
 			vm_page_lock_queues();
-			vm_page_set_validclean(m, base, size);
-			if (m->dirty != 0)
-				m->dirty = VM_PAGE_BITS_ALL;
+			vm_page_clear_dirty(m, base, PAGE_SIZE - base);
 			vm_page_unlock_queues();
 		} else if ((nsize & PAGE_MASK) &&
 		    __predict_false(object->cache != NULL)) {