From owner-svn-src-all@freebsd.org Thu Jan 23 05:14:41 2020
From: Jeff Roberson <jeff@FreeBSD.org>
Date: Thu, 23 Jan 2020 05:14:41 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r357024 - head/sys/vm
Message-Id: <202001230514.00N5Efks072370@repo.freebsd.org>

Author: jeff
Date: Thu Jan 23 05:14:41 2020
New Revision: 357024
URL: https://svnweb.freebsd.org/changeset/base/357024

Log:
  (fault 5/9) Move the backing_object traversal into a dedicated function.

  Reviewed by:	dougm, kib, markj
  Differential Revision:	https://reviews.freebsd.org/D23310

Modified:
  head/sys/vm/vm_fault.c

Modified: head/sys/vm/vm_fault.c
==============================================================================
--- head/sys/vm/vm_fault.c	Thu Jan 23 05:11:01 2020	(r357023)
+++ head/sys/vm/vm_fault.c	Thu Jan 23 05:14:41 2020	(r357024)
@@ -932,6 +932,75 @@ vm_fault_cow(struct faultstate *fs)
 	curthread->td_cow++;
 }
 
+static bool
+vm_fault_next(struct faultstate *fs)
+{
+	vm_object_t next_object;
+
+	/*
+	 * The requested page does not exist at this object/
+	 * offset.  Remove the invalid page from the object,
+	 * waking up anyone waiting for it, and continue on to
+	 * the next object.  However, if this is the top-level
+	 * object, we must leave the busy page in place to
+	 * prevent another process from rushing past us, and
+	 * inserting the page in that object at the same time
+	 * that we are.
+	 */
+	if (fs->object == fs->first_object) {
+		fs->first_m = fs->m;
+		fs->m = NULL;
+	} else
+		fault_page_free(&fs->m);
+
+	/*
+	 * Move on to the next object.  Lock the next object before
+	 * unlocking the current one.
+	 */
+	VM_OBJECT_ASSERT_WLOCKED(fs->object);
+	next_object = fs->object->backing_object;
+	if (next_object == NULL) {
+		/*
+		 * If there's no object left, fill the page in the top
+		 * object with zeros.
+		 */
+		VM_OBJECT_WUNLOCK(fs->object);
+		if (fs->object != fs->first_object) {
+			vm_object_pip_wakeup(fs->object);
+			fs->object = fs->first_object;
+			fs->pindex = fs->first_pindex;
+		}
+		MPASS(fs->first_m != NULL);
+		MPASS(fs->m == NULL);
+		fs->m = fs->first_m;
+		fs->first_m = NULL;
+
+		/*
+		 * Zero the page if necessary and mark it valid.
+		 */
+		if ((fs->m->flags & PG_ZERO) == 0) {
+			pmap_zero_page(fs->m);
+		} else {
+			VM_CNT_INC(v_ozfod);
+		}
+		VM_CNT_INC(v_zfod);
+		vm_page_valid(fs->m);
+
+		return (false);
+	}
+	MPASS(fs->first_m != NULL);
+	KASSERT(fs->object != next_object, ("object loop %p", next_object));
+	VM_OBJECT_WLOCK(next_object);
+	vm_object_pip_add(next_object, 1);
+	if (fs->object != fs->first_object)
+		vm_object_pip_wakeup(fs->object);
+	fs->pindex += OFF_TO_IDX(fs->object->backing_object_offset);
+	VM_OBJECT_WUNLOCK(fs->object);
+	fs->object = next_object;
+
+	return (true);
+}
+
 /*
  * Wait/Retry if the page is busy.  We have to do this if the page is
  * either exclusive or shared busy because the vm_pager may be using
@@ -974,7 +1043,6 @@ vm_fault(vm_map_t map, vm_offset_t vaddr, vm_prot_t fa
 {
 	struct faultstate fs;
 	struct domainset *dset;
-	vm_object_t next_object;
 	vm_offset_t e_end, e_start;
 	int ahead, alloc_req, behind, cluster_offset, faultcount;
 	int nera, oom, result, rv;
@@ -1187,8 +1255,13 @@ readrest:
 	 * object without dropping the lock to preserve atomicity of
 	 * shadow faults.
 	 */
-	if (fs.object->type == OBJT_DEFAULT)
-		goto next;
+	if (fs.object->type == OBJT_DEFAULT) {
+		if (vm_fault_next(&fs))
+			continue;
+		/* Don't try to prefault neighboring pages. */
+		faultcount = 1;
+		break;
+	}
 
 	/*
 	 * At this point, we have either allocated a new page or found
@@ -1304,70 +1377,14 @@ readrest:
 		}
 
-next:
 		/*
-		 * The requested page does not exist at this object/
-		 * offset.  Remove the invalid page from the object,
-		 * waking up anyone waiting for it, and continue on to
-		 * the next object.  However, if this is the top-level
-		 * object, we must leave the busy page in place to
-		 * prevent another process from rushing past us, and
-		 * inserting the page in that object at the same time
-		 * that we are.
+		 * The page was not found in the current object.  Try to traverse
+		 * into a backing object or zero fill if none is found.
 		 */
-		if (fs.object == fs.first_object) {
-			fs.first_m = fs.m;
-			fs.m = NULL;
-		} else
-			fault_page_free(&fs.m);
-
-		/*
-		 * Move on to the next object.  Lock the next object before
-		 * unlocking the current one.
-		 */
-		VM_OBJECT_ASSERT_WLOCKED(fs.object);
-		next_object = fs.object->backing_object;
-		if (next_object == NULL) {
-			/*
-			 * If there's no object left, fill the page in the top
-			 * object with zeros.
-			 */
-			VM_OBJECT_WUNLOCK(fs.object);
-			if (fs.object != fs.first_object) {
-				vm_object_pip_wakeup(fs.object);
-				fs.object = fs.first_object;
-				fs.pindex = fs.first_pindex;
-			}
-			MPASS(fs.first_m != NULL);
-			MPASS(fs.m == NULL);
-			fs.m = fs.first_m;
-			fs.first_m = NULL;
-
-			/*
-			 * Zero the page if necessary and mark it valid.
-			 */
-			if ((fs.m->flags & PG_ZERO) == 0) {
-				pmap_zero_page(fs.m);
-			} else {
-				VM_CNT_INC(v_ozfod);
-			}
-			VM_CNT_INC(v_zfod);
-			vm_page_valid(fs.m);
+		if (!vm_fault_next(&fs)) {
 			/* Don't try to prefault neighboring pages. */
 			faultcount = 1;
 			break;	/* break to PAGE HAS BEEN FOUND. */
-		} else {
-			MPASS(fs.first_m != NULL);
-			KASSERT(fs.object != next_object,
-			    ("object loop %p", next_object));
-			VM_OBJECT_WLOCK(next_object);
-			vm_object_pip_add(next_object, 1);
-			if (fs.object != fs.first_object)
-				vm_object_pip_wakeup(fs.object);
-			fs.pindex +=
-			    OFF_TO_IDX(fs.object->backing_object_offset);
-			VM_OBJECT_WUNLOCK(fs.object);
-			fs.object = next_object;
		}
	}