From owner-svn-src-all@freebsd.org Thu Jan 23 05:18:01 2020
From: Jeff Roberson <jeff@FreeBSD.org>
Date: Thu, 23 Jan 2020 05:18:01 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r357025 - head/sys/vm
Message-Id: <202001230518.00N5I1l9072568@repo.freebsd.org>
X-SVN-Group: head
X-SVN-Commit-Author: jeff
X-SVN-Commit-Paths: head/sys/vm
X-SVN-Commit-Revision: 357025
X-SVN-Commit-Repository: base

Author: jeff
Date: Thu Jan 23 05:18:00 2020
New Revision: 357025
URL: https://svnweb.freebsd.org/changeset/base/357025

Log:
  (fault 6/9) Move getpages and associated logic into a dedicated function.

  Reviewed by:	kib
  Differential Revision:	https://reviews.freebsd.org/D23311

Modified:
  head/sys/vm/vm_fault.c

Modified: head/sys/vm/vm_fault.c
==============================================================================
--- head/sys/vm/vm_fault.c	Thu Jan 23 05:14:41 2020	(r357024)
+++ head/sys/vm/vm_fault.c	Thu Jan 23 05:18:00 2020	(r357025)
@@ -1001,7 +1001,96 @@ vm_fault_next(struct faultstate *fs)
 	return (true);
 }
 
 /*
+ * Call the pager to retrieve the page if there is a chance
+ * that the pager has it, and potentially retrieve additional
+ * pages at the same time.
+ */
+static int
+vm_fault_getpages(struct faultstate *fs, int nera, int *behindp, int *aheadp)
+{
+	vm_offset_t e_end, e_start;
+	int ahead, behind, cluster_offset, rv;
+	u_char behavior;
+
+	/*
+	 * Prepare for unlocking the map.  Save the map
+	 * entry's start and end addresses, which are used to
+	 * optimize the size of the pager operation below.
+	 * Even if the map entry's addresses change after
+	 * unlocking the map, using the saved addresses is
+	 * safe.
+	 */
+	e_start = fs->entry->start;
+	e_end = fs->entry->end;
+	behavior = vm_map_entry_behavior(fs->entry);
+
+	/*
+	 * Release the map lock before locking the vnode or
+	 * sleeping in the pager.  (If the current object has
+	 * a shadow, then an earlier iteration of this loop
+	 * may have already unlocked the map.)
+	 */
+	unlock_map(fs);
+
+	rv = vm_fault_lock_vnode(fs, false);
+	MPASS(rv == KERN_SUCCESS || rv == KERN_RESOURCE_SHORTAGE);
+	if (rv == KERN_RESOURCE_SHORTAGE)
+		return (rv);
+	KASSERT(fs->vp == NULL || !fs->map->system_map,
+	    ("vm_fault: vnode-backed object mapped by system map"));
+
+	/*
+	 * Page in the requested page and hint the pager,
+	 * that it may bring up surrounding pages.
+	 */
+	if (nera == -1 || behavior == MAP_ENTRY_BEHAV_RANDOM ||
+	    P_KILLED(curproc)) {
+		behind = 0;
+		ahead = 0;
+	} else {
+		/* Is this a sequential fault? */
+		if (nera > 0) {
+			behind = 0;
+			ahead = nera;
+		} else {
+			/*
+			 * Request a cluster of pages that is
+			 * aligned to a VM_FAULT_READ_DEFAULT
+			 * page offset boundary within the
+			 * object.  Alignment to a page offset
+			 * boundary is more likely to coincide
+			 * with the underlying file system
+			 * block than alignment to a virtual
+			 * address boundary.
+			 */
+			cluster_offset = fs->pindex % VM_FAULT_READ_DEFAULT;
+			behind = ulmin(cluster_offset,
+			    atop(fs->vaddr - e_start));
+			ahead = VM_FAULT_READ_DEFAULT - 1 - cluster_offset;
+		}
+		ahead = ulmin(ahead, atop(e_end - fs->vaddr) - 1);
+	}
+	*behindp = behind;
+	*aheadp = ahead;
+	rv = vm_pager_get_pages(fs->object, &fs->m, 1, behindp, aheadp);
+	if (rv == VM_PAGER_OK)
+		return (KERN_SUCCESS);
+	if (rv == VM_PAGER_ERROR)
+		printf("vm_fault: pager read error, pid %d (%s)\n",
+		    curproc->p_pid, curproc->p_comm);
+
+	/*
+	 * If an I/O error occurred or the requested page was
+	 * outside the range of the pager, clean up and return
+	 * an error.
+	 */
+	if (rv == VM_PAGER_ERROR || rv == VM_PAGER_BAD)
+		return (KERN_OUT_OF_BOUNDS);
+	return (KERN_NOT_RECEIVER);
+}
+
+/*
  * Wait/Retry if the page is busy.  We have to do this if the page is
  * either exclusive or shared busy because the vm_pager may be using
  * read busy for pageouts (and even pageins if it is the vnode pager),
@@ -1043,10 +1132,8 @@ vm_fault(vm_map_t map, vm_offset_t vaddr, vm_prot_t fa
 {
 	struct faultstate fs;
 	struct domainset *dset;
-	vm_offset_t e_end, e_start;
-	int ahead, alloc_req, behind, cluster_offset, faultcount;
+	int ahead, alloc_req, behind, faultcount;
 	int nera, oom, result, rv;
-	u_char behavior;
 	bool dead, hardfault;
 
 	VM_CNT_INC(v_vm_faults);
@@ -1282,104 +1369,28 @@ readrest:
 	 * have the page, the number of additional pages to read will
 	 * apply to subsequent objects in the shadow chain.
 	 */
-	if (nera == -1 && !P_KILLED(curproc)) {
+	if (nera == -1 && !P_KILLED(curproc))
 		nera = vm_fault_readahead(&fs);
-		/*
-		 * Prepare for unlocking the map.  Save the map
-		 * entry's start and end addresses, which are used to
-		 * optimize the size of the pager operation below.
-		 * Even if the map entry's addresses change after
-		 * unlocking the map, using the saved addresses is
-		 * safe.
-		 */
-		e_start = fs.entry->start;
-		e_end = fs.entry->end;
-		behavior = vm_map_entry_behavior(fs.entry);
-	}
 
-	/*
-	 * Call the pager to retrieve the page if there is a chance
-	 * that the pager has it, and potentially retrieve additional
-	 * pages at the same time.
-	 */
-	if (fs.object->type != OBJT_DEFAULT) {
-		/*
-		 * Release the map lock before locking the vnode or
-		 * sleeping in the pager.  (If the current object has
-		 * a shadow, then an earlier iteration of this loop
-		 * may have already unlocked the map.)
-		 */
-		unlock_map(&fs);
-
-		rv = vm_fault_lock_vnode(&fs, false);
-		MPASS(rv == KERN_SUCCESS ||
-		    rv == KERN_RESOURCE_SHORTAGE);
-		if (rv == KERN_RESOURCE_SHORTAGE)
-			goto RetryFault;
-		KASSERT(fs.vp == NULL || !fs.map->system_map,
-		    ("vm_fault: vnode-backed object mapped by system map"));
-
-		/*
-		 * Page in the requested page and hint the pager,
-		 * that it may bring up surrounding pages.
-		 */
-		if (nera == -1 || behavior == MAP_ENTRY_BEHAV_RANDOM ||
-		    P_KILLED(curproc)) {
-			behind = 0;
-			ahead = 0;
-		} else {
-			/* Is this a sequential fault? */
-			if (nera > 0) {
-				behind = 0;
-				ahead = nera;
-			} else {
-				/*
-				 * Request a cluster of pages that is
-				 * aligned to a VM_FAULT_READ_DEFAULT
-				 * page offset boundary within the
-				 * object.  Alignment to a page offset
-				 * boundary is more likely to coincide
-				 * with the underlying file system
-				 * block than alignment to a virtual
-				 * address boundary.
-				 */
-				cluster_offset = fs.pindex %
-				    VM_FAULT_READ_DEFAULT;
-				behind = ulmin(cluster_offset,
-				    atop(vaddr - e_start));
-				ahead = VM_FAULT_READ_DEFAULT - 1 -
-				    cluster_offset;
-			}
-			ahead = ulmin(ahead, atop(e_end - vaddr) - 1);
-		}
-		rv = vm_pager_get_pages(fs.object, &fs.m, 1,
-		    &behind, &ahead);
-		if (rv == VM_PAGER_OK) {
-			faultcount = behind + 1 + ahead;
-			hardfault = true;
-			break; /* break to PAGE HAS BEEN FOUND. */
-		}
-		VM_OBJECT_WLOCK(fs.object);
-		if (rv == VM_PAGER_ERROR)
-			printf("vm_fault: pager read error, pid %d (%s)\n",
-			    curproc->p_pid, curproc->p_comm);
-
-		/*
-		 * If an I/O error occurred or the requested page was
-		 * outside the range of the pager, clean up and return
-		 * an error.
-		 */
-		if (rv == VM_PAGER_ERROR || rv == VM_PAGER_BAD) {
-			fault_page_free(&fs.m);
-			unlock_and_deallocate(&fs);
-			return (KERN_OUT_OF_BOUNDS);
-		}
-
+	rv = vm_fault_getpages(&fs, nera, &behind, &ahead);
+	if (rv == KERN_SUCCESS) {
+		faultcount = behind + 1 + ahead;
+		hardfault = true;
+		break; /* break to PAGE HAS BEEN FOUND. */
 	}
+	if (rv == KERN_RESOURCE_SHORTAGE)
+		goto RetryFault;
+	VM_OBJECT_WLOCK(fs.object);
+	if (rv == KERN_OUT_OF_BOUNDS) {
+		fault_page_free(&fs.m);
+		unlock_and_deallocate(&fs);
+		return (rv);
+	}
 
 	/*
-	 * The page was not found in the current object. Try to traverse
-	 * into a backing object or zero fill if none is found.
+	 * The page was not found in the current object.  Try to
+	 * traverse into a backing object or zero fill if none is
+	 * found.
 	 */
 	if (!vm_fault_next(&fs)) {
 		/* Don't try to prefault neighboring pages. */
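
The net effect of the refactor is visible in the last hunk: vm_fault() now branches on a single kernel-style return code from vm_fault_getpages() instead of on raw VM_PAGER_* pager codes. Below is a minimal, self-contained sketch of that dispatch contract. It is plain userland C, not kernel code: the SKETCH_* constants and the sketch_getpages() stub are illustrative stand-ins for the kernel's KERN_* values and for the real helper.

#include <stdio.h>

/*
 * Stand-ins for the four outcomes vm_fault_getpages() can report.
 * Each comment names the kernel code the value models.
 */
enum fault_rv {
	SKETCH_SUCCESS,			/* KERN_SUCCESS: pager returned the page */
	SKETCH_RESOURCE_SHORTAGE,	/* KERN_RESOURCE_SHORTAGE: retry the fault */
	SKETCH_OUT_OF_BOUNDS,		/* KERN_OUT_OF_BOUNDS: I/O error or bad range */
	SKETCH_NOT_RECEIVER		/* KERN_NOT_RECEIVER: pager lacks the page */
};

/* Stub standing in for vm_fault_getpages(); it always "misses" here. */
static enum fault_rv
sketch_getpages(int *behindp, int *aheadp)
{
	*behindp = 0;	/* pretend no read-behind was possible */
	*aheadp = 7;	/* pretend the pager clipped read-ahead to 7 pages */
	return (SKETCH_NOT_RECEIVER);
}

int
main(void)
{
	int ahead, behind, faultcount;

	/* The same four-way dispatch the new vm_fault() code performs. */
	switch (sketch_getpages(&behind, &ahead)) {
	case SKETCH_SUCCESS:
		faultcount = behind + 1 + ahead;	/* PAGE HAS BEEN FOUND */
		printf("found, faultcount=%d\n", faultcount);
		break;
	case SKETCH_RESOURCE_SHORTAGE:
		printf("retry the fault\n");		/* goto RetryFault */
		break;
	case SKETCH_OUT_OF_BOUNDS:
		printf("unrecoverable: unlock and return\n");
		break;
	case SKETCH_NOT_RECEIVER:
		printf("miss: traverse to a backing object\n");
		break;
	}
	return (0);
}

Collapsing the pager's VM_PAGER_* vocabulary into one code per caller-visible outcome is what lets the call site shrink from roughly eighty lines to the short four-way branch in the final hunk, while the retry and error-unwinding policy stays in vm_fault() itself.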