Date: Wed, 4 Mar 2020 16:36:28 -0800
From: Gleb Smirnoff <glebius@freebsd.org>
To: Jeff Roberson
Cc: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: Re: svn commit: r358451 - in head/sys: kern vm
Message-ID: <20200305003628.GS5741@FreeBSD.org>
In-Reply-To: <202002282142.01SLgmnN089938@repo.freebsd.org>

  Hi Jeff,

Maybe I'm missing something, but from a quick review this change does not
seem to be correct for sendfile with INVARIANTS on.  sendfile_swapin()
does an immediate vm_page_xunbusy() for all valid pages that are
substituted with the bogus page.  With this change the KASSERT in
vm_page_relookup() would fire:

	KASSERT(m != NULL && vm_page_busied(m) &&
	    m->object == object && m->pindex == pindex,
	    ("vm_page_relookup: Invalid page %p", m));

On Fri, Feb 28, 2020 at 09:42:48PM +0000, Jeff Roberson wrote:
J> Author: jeff
J> Date: Fri Feb 28 21:42:48 2020
J> New Revision: 358451
J> URL: https://svnweb.freebsd.org/changeset/base/358451
J> 
J> Log:
J>   Provide a lock free alternative to resolve bogus pages.  This is not likely
J>   to be much of a perf win, just a nice code simplification.
J> 
J>   Reviewed by:	markj, kib
J>   Differential Revision:	https://reviews.freebsd.org/D23866
J> 
J> Modified:
J>   head/sys/kern/kern_sendfile.c
J>   head/sys/kern/vfs_bio.c
J>   head/sys/vm/vm_page.c
J>   head/sys/vm/vm_page.h
J> 
J> Modified: head/sys/kern/kern_sendfile.c
J> ==============================================================================
J> --- head/sys/kern/kern_sendfile.c	Fri Feb 28 21:31:40 2020	(r358450)
J> +++ head/sys/kern/kern_sendfile.c	Fri Feb 28 21:42:48 2020	(r358451)
J> @@ -350,7 +350,6 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  {
J>  	vm_page_t *pa = sfio->pa;
J>  	int grabbed;
J> -	bool locked;
J>  
J>  	*nios = 0;
J>  	flags = (flags & SF_NODISKIO) ? VM_ALLOC_NOWAIT : 0;
J> @@ -359,8 +358,6 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  	 * First grab all the pages and wire them.  Note that we grab
J>  	 * only required pages.  Readahead pages are dealt with later.
J>  	 */
J> -	locked = false;
J> -
J>  	grabbed = vm_page_grab_pages_unlocked(obj, OFF_TO_IDX(off),
J>  	    VM_ALLOC_NORMAL | VM_ALLOC_WIRED | flags, pa, npages);
J>  	if (grabbed < npages) {
J> @@ -381,10 +378,6 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  			i++;
J>  			continue;
J>  		}
J> -		if (!locked) {
J> -			VM_OBJECT_WLOCK(obj);
J> -			locked = true;
J> -		}
J>  
J>  		/*
J>  		 * Next page is invalid.  Check if it belongs to pager.  It
J>  		 * stored in 'a', about how many pages we can pagein after
J>  		 * this page in a single I/O.
J>  		 */
J> +		VM_OBJECT_RLOCK(obj);
J>  		if (!vm_pager_has_page(obj, OFF_TO_IDX(vmoff(i, off)), NULL,
J>  		    &a)) {
J> +			VM_OBJECT_RUNLOCK(obj);
J>  			pmap_zero_page(pa[i]);
J>  			vm_page_valid(pa[i]);
J>  			MPASS(pa[i]->dirty == 0);
J> @@ -405,6 +400,7 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  			i++;
J>  			continue;
J>  		}
J> +		VM_OBJECT_RUNLOCK(obj);
J>  
J>  		/*
J>  		 * We want to pagein as many pages as possible, limited only
J> @@ -435,11 +431,9 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  		}
J>  
J>  		refcount_acquire(&sfio->nios);
J> -		VM_OBJECT_WUNLOCK(obj);
J>  		rv = vm_pager_get_pages_async(obj, pa + i, count, NULL,
J>  		    i + count == npages ? &rhpages : NULL,
J>  		    &sendfile_iodone, sfio);
J> -		VM_OBJECT_WLOCK(obj);
J>  		if (__predict_false(rv != VM_PAGER_OK)) {
J>  			/*
J>  			 * Perform full pages recovery before returning EIO.
J> @@ -451,7 +445,7 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  			for (j = 0; j < npages; j++) {
J>  				if (j > i && j < i + count - 1 &&
J>  				    pa[j] == bogus_page)
J> -					pa[j] = vm_page_lookup(obj,
J> +					pa[j] = vm_page_relookup(obj,
J>  					    OFF_TO_IDX(vmoff(j, off)));
J>  				else if (j >= i)
J>  					vm_page_xunbusy(pa[j]);
J> @@ -460,7 +454,6 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  				    __func__, pa, j));
J>  				vm_page_unwire(pa[j], PQ_INACTIVE);
J>  			}
J> -			VM_OBJECT_WUNLOCK(obj);
J>  			return (EIO);
J>  		}
J>  
J> @@ -475,7 +468,7 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  		 */
J>  		for (j = i + 1; j < i + count - 1; j++)
J>  			if (pa[j] == bogus_page) {
J> -				pa[j] = vm_page_lookup(obj,
J> +				pa[j] = vm_page_relookup(obj,
J>  				    OFF_TO_IDX(vmoff(j, off)));
J>  				KASSERT(pa[j], ("%s: page %p[%d] disappeared",
J>  				    __func__, pa, j));
J> @@ -484,9 +477,6 @@ sendfile_swapin(vm_object_t obj, struct sf_io *sfio, i
J>  		i += count;
J>  		(*nios)++;
J>  	}
J> -
J> -	if (locked)
J> -		VM_OBJECT_WUNLOCK(obj);
J>  
J>  	if (*nios == 0 && npages != 0)
J>  		SFSTAT_INC(sf_noiocnt);
J>  
J> Modified: head/sys/kern/vfs_bio.c
J> ==============================================================================
J> --- head/sys/kern/vfs_bio.c	Fri Feb 28 21:31:40 2020	(r358450)
J> +++ head/sys/kern/vfs_bio.c	Fri Feb 28 21:42:48 2020	(r358451)
J> @@ -2878,11 +2878,8 @@ vfs_vmio_iodone(struct buf *bp)
J>  		 */
J>  		m = bp->b_pages[i];
J>  		if (m == bogus_page) {
J> -			if (bogus == false) {
J> -				bogus = true;
J> -				VM_OBJECT_RLOCK(obj);
J> -			}
J> -			m = vm_page_lookup(obj, OFF_TO_IDX(foff));
J> +			bogus = true;
J> +			m = vm_page_relookup(obj, OFF_TO_IDX(foff));
J>  			if (m == NULL)
J>  				panic("biodone: page disappeared!");
J>  			bp->b_pages[i] = m;
J> @@ -2905,8 +2902,6 @@ vfs_vmio_iodone(struct buf *bp)
J>  		foff = (foff + PAGE_SIZE) & ~(off_t)PAGE_MASK;
J>  		iosize -= resid;
J>  	}
J> -	if (bogus)
J> -		VM_OBJECT_RUNLOCK(obj);
J>  	vm_object_pip_wakeupn(obj, bp->b_npages);
J>  	if (bogus && buf_mapped(bp)) {
J>  		BUF_CHECK_MAPPED(bp);
J> @@ -4470,22 +4465,16 @@ vfs_unbusy_pages(struct buf *bp)
J>  	int i;
J>  	vm_object_t obj;
J>  	vm_page_t m;
J> -	bool bogus;
J>  
J>  	runningbufwakeup(bp);
J>  	if (!(bp->b_flags & B_VMIO))
J>  		return;
J>  
J>  	obj = bp->b_bufobj->bo_object;
J> -	bogus = false;
J>  	for (i = 0; i < bp->b_npages; i++) {
J>  		m = bp->b_pages[i];
J>  		if (m == bogus_page) {
J> -			if (bogus == false) {
J> -				bogus = true;
J> -				VM_OBJECT_RLOCK(obj);
J> -			}
J> -			m = vm_page_lookup(obj, OFF_TO_IDX(bp->b_offset) + i);
J> +			m = vm_page_relookup(obj, OFF_TO_IDX(bp->b_offset) + i);
J>  			if (!m)
J>  				panic("vfs_unbusy_pages: page missing\n");
J>  			bp->b_pages[i] = m;
J> @@ -4498,8 +4487,6 @@ vfs_unbusy_pages(struct buf *bp)
J>  		}
J>  		vm_page_sunbusy(m);
J>  	}
J> -	if (bogus)
J> -		VM_OBJECT_RUNLOCK(obj);
J>  	vm_object_pip_wakeupn(obj, bp->b_npages);
J>  }
J>  
J>  
J> Modified: head/sys/vm/vm_page.c
J> ==============================================================================
J> --- head/sys/vm/vm_page.c	Fri Feb 28 21:31:40 2020	(r358450)
J> +++ head/sys/vm/vm_page.c	Fri Feb 28 21:42:48 2020	(r358451)
J> @@ -1671,6 +1671,24 @@ vm_page_lookup(vm_object_t object, vm_pindex_t pindex)
J>  }
J>  
J>  /*
J> + * vm_page_relookup:
J> + *
J> + *	Returns a page that must already have been busied by
J> + *	the caller.  Used for bogus page replacement.
J> + */
J> +vm_page_t
J> +vm_page_relookup(vm_object_t object, vm_pindex_t pindex)
J> +{
J> +	vm_page_t m;
J> +
J> +	m = vm_radix_lookup_unlocked(&object->rtree, pindex);
J> +	KASSERT(m != NULL && vm_page_busied(m) &&
J> +	    m->object == object && m->pindex == pindex,
J> +	    ("vm_page_relookup: Invalid page %p", m));
J> +	return (m);
J> +}
J> +
J> +/*
J>   * This should only be used by lockless functions for releasing transient
J>   * incorrect acquires.  The page may have been freed after we acquired a
J>   * busy lock.  In this case busy_lock == VPB_FREED and we have nothing
J> 
J> Modified: head/sys/vm/vm_page.h
J> ==============================================================================
J> --- head/sys/vm/vm_page.h	Fri Feb 28 21:31:40 2020	(r358450)
J> +++ head/sys/vm/vm_page.h	Fri Feb 28 21:42:48 2020	(r358451)
J> @@ -653,6 +653,7 @@ void vm_page_reference(vm_page_t m);
J>  #define	VPR_NOREUSE	0x02
J>  void vm_page_release(vm_page_t m, int flags);
J>  void vm_page_release_locked(vm_page_t m, int flags);
J> +vm_page_t vm_page_relookup(vm_object_t, vm_pindex_t);
J>  bool vm_page_remove(vm_page_t);
J>  bool vm_page_remove_xbusy(vm_page_t);
J>  int vm_page_rename(vm_page_t, vm_object_t, vm_pindex_t);
J> _______________________________________________
J> svn-src-all@freebsd.org mailing list
J> https://lists.freebsd.org/mailman/listinfo/svn-src-all
J> To unsubscribe, send any mail to "svn-src-all-unsubscribe@freebsd.org"

-- 
Gleb Smirnoff