From owner-svn-src-user@FreeBSD.ORG Mon Feb 22 15:35:31 2010
Message-Id: <201002221535.o1MFZV2x084663@svn.freebsd.org>
From: Konstantin Belousov <kib@FreeBSD.org>
Date: Mon, 22 Feb 2010 15:35:31 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r204200 - user/kib/vm6/sys/vm
List-Id: SVN commit messages for the experimental "user" src tree

Author: kib
Date: Mon Feb 22 15:35:31 2010
New Revision: 204200
URL: http://svn.freebsd.org/changeset/base/204200

Log:
  Detect sequential writes in vnode_pager_write() and initiate immediate
  page cleanup for contiguous regions written sequentially.  This allows
  the buffer clustering and UFS reallocation code to defragment the file.
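The detector described in the log keeps two new per-object fields: wpos, where the current sequential run started, and off, how far the run has advanced; a write continues the run exactly when wpos + off equals the write's starting offset (uio_offset minus the bytes just written, since uio_offset has already been advanced). A minimal userspace sketch of that bookkeeping, assuming a standalone model with the same field names as the diff rather than actual kernel code:

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/types.h>		/* off_t, ssize_t */

/* Models the vnp.wpos/vnp.off pair added to struct vm_object. */
struct seq_detect {
	off_t	wpos;		/* start of the current sequential run */
	ssize_t	off;		/* bytes written since wpos */
};

/*
 * Record one completed write of `written` bytes that ended at
 * uio_offset.  Returns true when the write continued the previous
 * run; the kernel uses that case to fire vm_pageout_flush() early.
 */
static bool
seq_detect_note(struct seq_detect *sd, off_t uio_offset, ssize_t written)
{
	if (sd->wpos + sd->off == uio_offset - written) {
		sd->off += written;	/* extend the run */
		return (true);
	}
	/* Run broken: restart the detector at the end of this write. */
	sd->wpos = uio_offset;
	sd->off = 0;
	return (false);
}
```

Because the restart case stores the end of the non-sequential write as the new wpos with off = 0, the very next write qualifies as sequential if it begins exactly where this one ended.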
  Tested by:	pho

Modified:
  user/kib/vm6/sys/vm/vm_object.h
  user/kib/vm6/sys/vm/vm_readwrite.c
  user/kib/vm6/sys/vm/vnode_pager.c

Modified: user/kib/vm6/sys/vm/vm_object.h
==============================================================================
--- user/kib/vm6/sys/vm/vm_object.h	Mon Feb 22 15:03:16 2010	(r204199)
+++ user/kib/vm6/sys/vm/vm_object.h	Mon Feb 22 15:35:31 2010	(r204200)
@@ -109,9 +109,13 @@ struct vm_object {
 		 * VNode pager
 		 *
 		 *	vnp_size - current size of file
+		 *	wpos - start write position for seq write detector
+		 *	off - offset from wpos for current write
 		 */
 		struct {
 			off_t vnp_size;
+			off_t wpos;
+			ssize_t off;
 		} vnp;
 
 		/*

Modified: user/kib/vm6/sys/vm/vm_readwrite.c
==============================================================================
--- user/kib/vm6/sys/vm/vm_readwrite.c	Mon Feb 22 15:03:16 2010	(r204199)
+++ user/kib/vm6/sys/vm/vm_readwrite.c	Mon Feb 22 15:35:31 2010	(r204200)
@@ -715,10 +715,10 @@ vnode_pager_write(struct vnode *vp, stru
 	vm_pindex_t idx, clean_start, clean_end;
 	vm_page_t reserv;
 	struct vattr vattr;
-	ssize_t size, size1, osize, osize1, resid, sresid;
-	int error, vn_locked, wpmax, wp, i;
+	ssize_t size, size1, osize, osize1, resid, sresid, written;
+	int error, vn_locked, wpmax, wp, i, pflags;
 	u_int bits;
-	boolean_t vnode_locked;
+	boolean_t vnode_locked, freed, freed1;
 	struct thread *td;
 
 	if (ioflags & (IO_EXT|IO_INVAL|IO_DIRECT))
@@ -735,6 +735,16 @@ vnode_pager_write(struct vnode *vp, stru
 	vnode_locked = TRUE;
 	error = 0;
 
+	/*
+	 * Reversed logic from vnode_generic_putpages().
+	 */
+	if (ioflags & IO_SYNC)
+		pflags = VM_PAGER_PUT_SYNC;
+	else if (ioflags & IO_ASYNC)
+		pflags = 0;
+	else
+		pflags = VM_PAGER_CLUSTER_OK;
+
 	wpmax = atomic_load_acq_int(&vmio_write_pack);
 	vm_page_t ma[wpmax + 1];
 
@@ -1002,6 +1012,7 @@ vnode_pager_write(struct vnode *vp, stru
 	error = uiomove_fromphys(ma, off, size, uio);
 	td->td_pflags &= ~TDP_VMIO;
 
+	freed = FALSE;
 	VM_OBJECT_LOCK(obj);
 	vm_page_lock_queues();
 	for (i = 0; i < wp; i++) {
@@ -1019,12 +1030,50 @@ vnode_pager_write(struct vnode *vp, stru
 			ma[i]->flags |= PG_WRITEDIRTY;
 			vmio_writedirty++;
 		}
+		freed1 = FALSE;
+		if (VM_PAGE_GETQUEUE(ma[i]) == PQ_HOLD)
+			freed = freed1 = TRUE;
 		vm_page_unhold(ma[i]);
-		vm_page_activate(ma[i]);
+		if (!freed1)
+			vm_page_activate(ma[i]);
 	}
-	vm_page_unlock_queues();
 	/* See the comment above about page dirtiness. */
 	vm_object_set_writeable_dirty(obj);
+
+	/*
+	 * Try to cluster writes.
+	 */
+	written = sresid - uio->uio_resid;
+	if (obj->un_pager.vnp.wpos + obj->un_pager.vnp.off ==
+	    uio->uio_offset - written) {
+		/*
+		 * Sequential writes detected, make a note and
+		 * try to take immediate advantage of it.
+		 */
+		if (!freed && OFF_TO_IDX(uio->uio_offset) >
+		    OFF_TO_IDX(uio->uio_offset - written) &&
+		    vn_lock(vp, vn_locked | LK_NOWAIT) == 0) {
+			vm_pageout_flush(ma, wp, pflags);
+			VOP_UNLOCK(vp, 0);
+		}
+/*		printf("seq write, wpos %jd off %jd written %d\n", (intmax_t)obj->un_pager.vnp.wpos, (intmax_t)obj->un_pager.vnp.off, written); */
+		obj->un_pager.vnp.off += written;
+	} else {
+		/*
+		 * Not a sequential write situation, still
+		 * might be good to not split large write in
+		 * the daemons struggling under pressure.
+		 */
+		if (!freed && wp >= vm_pageout_page_count &&
+		    vn_lock(vp, vn_locked | LK_NOWAIT) == 0) {
+			vm_pageout_flush(ma, wp, pflags);
+			VOP_UNLOCK(vp, 0);
+		}
+/*		printf("nonseq write, wpos %jd off %jd wp %d\n", (intmax_t)obj->un_pager.vnp.wpos, (intmax_t)obj->un_pager.vnp.off, wp); */
+		obj->un_pager.vnp.wpos = uio->uio_offset;
+		obj->un_pager.vnp.off = 0;
+	}
+	vm_page_unlock_queues();
 	vm_object_pip_wakeup(obj);
 	VM_OBJECT_UNLOCK(obj);
 	if (error != 0)

Modified: user/kib/vm6/sys/vm/vnode_pager.c
==============================================================================
--- user/kib/vm6/sys/vm/vnode_pager.c	Mon Feb 22 15:03:16 2010	(r204199)
+++ user/kib/vm6/sys/vm/vnode_pager.c	Mon Feb 22 15:35:31 2010	(r204200)
@@ -1017,7 +1017,6 @@ vnode_pager_putpages(object, m, count, s
 {
 	int rtval;
 	struct vnode *vp;
-	struct mount *mp;
 	int bytes = count * PAGE_SIZE;
 
 	/*
@@ -1040,8 +1039,6 @@ vnode_pager_putpages(object, m, count, s
 	 */
 	vp = object->handle;
 	VM_OBJECT_UNLOCK(object);
-	if (vp->v_type != VREG)
-		mp = NULL;
 	rtval = VOP_PUTPAGES(vp, m, bytes, sync, rtvals, 0);
 	KASSERT(rtval != EOPNOTSUPP,
 	    ("vnode_pager: stale FS putpages\n"));
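The pflags setup near the top of the vm_readwrite.c change ("Reversed logic from vnode_generic_putpages()") maps the write's ioflags to the pageout flags later passed to vm_pageout_flush(): IO_SYNC demands a synchronous pageout, IO_ASYNC forbids clustering, and the default case permits it. A hedged standalone sketch of that three-way selection; the flag values below are illustrative constants, not the kernel's actual bit definitions:

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's flag bits (values assumed). */
#define IO_SYNC			0x0001
#define IO_ASYNC		0x0002
#define VM_PAGER_PUT_SYNC	0x0004
#define VM_PAGER_CLUSTER_OK	0x0008

/*
 * Derive the flags for vm_pageout_flush() from a write's ioflags,
 * mirroring the if/else chain in the diff.
 */
static int
pageout_flags(int ioflags)
{
	if (ioflags & IO_SYNC)		/* caller wants the data on disk */
		return (VM_PAGER_PUT_SYNC);
	if (ioflags & IO_ASYNC)		/* pure async: no clustering */
		return (0);
	return (VM_PAGER_CLUSTER_OK);	/* default: let the pager cluster */
}
```

Note that IO_SYNC wins when both bits are set, matching the order of the tests in the committed code.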