From owner-svn-src-all@freebsd.org  Mon Aug  1 21:21:27 2016
Return-Path: <owner-svn-src-all@freebsd.org>
Delivered-To: svn-src-all@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id A66FEBABFE4;
 Mon, 1 Aug 2016 21:21:27 +0000 (UTC)
 (envelope-from alc@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 1DD7A139E;
 Mon, 1 Aug 2016 21:21:27 +0000 (UTC)
 (envelope-from alc@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id u71LLQu0095809;
 Mon, 1 Aug 2016 21:21:26 GMT
 (envelope-from alc@FreeBSD.org)
Received: (from alc@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id u71LLQYf095808;
 Mon, 1 Aug 2016 21:21:26 GMT
 (envelope-from alc@FreeBSD.org)
Message-Id: <201608012121.u71LLQYf095808@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: alc set sender to alc@FreeBSD.org using -f
From: Alan Cox <alc@FreeBSD.org>
Date: Mon, 1 Aug 2016 21:21:26 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org,
 svn-src-stable-11@freebsd.org
Subject: svn commit: r303641 - stable/11/sys/vm
X-SVN-Group: stable-11
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-all@freebsd.org
X-Mailman-Version: 2.1.22
Precedence: list
List-Id: "SVN commit messages for the entire src tree \(except for "user" and "projects"\)"
X-List-Received-Date: Mon, 01 Aug 2016 21:21:27 -0000

Author: alc
Date: Mon Aug  1 21:21:26 2016
New Revision: 303641
URL: https://svnweb.freebsd.org/changeset/base/303641

Log:
  MFC r303356 and r303465
    Remove any mention of cache (PG_CACHE) pages from the comments in
    vm_pageout_scan().  That function has not cached pages since r284376.

  Approved by:	re (kib)

Modified:
  stable/11/sys/vm/vm_pageout.c
Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/vm/vm_pageout.c
==============================================================================
--- stable/11/sys/vm/vm_pageout.c	Mon Aug  1 21:21:21 2016	(r303640)
+++ stable/11/sys/vm/vm_pageout.c	Mon Aug  1 21:21:26 2016	(r303641)
@@ -872,7 +872,7 @@ unlock_mp:
  *	vm_pageout_scan does the dirty work for the pageout daemon.
  *
  *	pass 0 - Update active LRU/deactivate pages
- *	pass 1 - Move inactive to cache or free
+ *	pass 1 - Free inactive pages
  *	pass 2 - Launder dirty pages
  */
 static void
@@ -915,8 +915,7 @@ vm_pageout_scan(struct vm_domain *vmd, i
 	addl_page_shortage = 0;
 
 	/*
-	 * Calculate the number of pages we want to either free or move
-	 * to the cache.
+	 * Calculate the number of pages that we want to free.
 	 */
 	if (pass > 0) {
 		deficit = atomic_readandclear_int(&vm_pageout_deficit);
@@ -943,11 +942,10 @@ vm_pageout_scan(struct vm_domain *vmd, i
 	vnodes_skipped = 0;
 
 	/*
-	 * Start scanning the inactive queue for pages we can move to the
-	 * cache or free.  The scan will stop when the target is reached or
-	 * we have scanned the entire inactive queue.  Note that m->act_count
-	 * is not used to form decisions for the inactive queue, only for the
-	 * active queue.
+	 * Start scanning the inactive queue for pages that we can free.  The
+	 * scan will stop when we reach the target or we have scanned the
+	 * entire queue.  (Note that m->act_count is not used to make
+	 * decisions for the inactive queue, only for the active queue.)
 	 */
 	pq = &vmd->vmd_pagequeues[PQ_INACTIVE];
 	maxscan = pq->pq_cnt;
@@ -1072,10 +1070,9 @@ unlock_page:
 		/*
 		 * If the page appears to be clean at the machine-independent
 		 * layer, then remove all of its mappings from the pmap in
-		 * anticipation of placing it onto the cache queue.  If,
-		 * however, any of the page's mappings allow write access,
-		 * then the page may still be modified until the last of those
-		 * mappings are removed.
+		 * anticipation of freeing it.  If, however, any of the page's
+		 * mappings allow write access, then the page may still be
+		 * modified until the last of those mappings are removed.
 		 */
 		if (object->ref_count != 0) {
 			vm_page_test_dirty(m);
@@ -1171,8 +1168,8 @@ relock_queues:
 
 #if !defined(NO_SWAPPING)
 	/*
-	 * Wakeup the swapout daemon if we didn't cache or free the targeted
-	 * number of pages.
+	 * Wakeup the swapout daemon if we didn't free the targeted number of
+	 * pages.
 	 */
 	if (vm_swap_enabled && page_shortage > 0)
 		vm_req_vmdaemon(VM_SWAP_NORMAL);
@@ -1180,7 +1177,7 @@ relock_queues:
 
 	/*
 	 * Wakeup the sync daemon if we skipped a vnode in a writeable object
-	 * and we didn't cache or free enough pages.
+	 * and we didn't free enough pages.
 	 */
 	if (vnodes_skipped > 0 &&
 	    page_shortage > vm_cnt.v_free_target - vm_cnt.v_free_min)
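[Editor's note, not part of the original commit mail]

For readers unfamiliar with the pageout code, the updated comments describe the
overall shape of the inactive-queue scan: walk the queue until the free target
is met or the whole queue has been visited, freeing clean pages, leaving dirty
pages for the laundering pass, and waking the swapout daemon if a shortage
remains.  Below is a minimal, self-contained sketch of that structure.  It is
NOT the actual vm_pageout.c code; every type and helper in it (fake_page,
scan_inactive, ...) is a hypothetical stand-in used only for illustration.

/*
 * Simplified sketch of the scan loop described by the patched comments.
 * All names here are hypothetical; none of this is FreeBSD kernel code.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct fake_page {
	struct fake_page *next;	/* next page in the inactive queue */
	bool dirty;		/* page still holds modified data */
};

/*
 * Scan the inactive queue, "freeing" clean pages until either the target
 * (page_shortage) is met or the end of the queue is reached; return the
 * number of pages freed.  Dirty pages are skipped here; a real pageout
 * daemon launders them in a later pass and removes pmap mappings before
 * freeing, as the updated comments explain.
 */
static int
scan_inactive(struct fake_page *queue, int page_shortage)
{
	struct fake_page *m;
	int freed;

	freed = 0;
	for (m = queue; m != NULL && freed < page_shortage; m = m->next) {
		if (m->dirty)
			continue;
		freed++;	/* stand-in for actually freeing the page */
	}
	return (freed);
}

int
main(void)
{
	struct fake_page p2 = { NULL, false };
	struct fake_page p1 = { &p2, true };
	struct fake_page p0 = { &p1, false };

	/* Ask for three pages; only the two clean ones can be freed. */
	printf("freed %d pages\n", scan_inactive(&p0, 3));
	return (0);
}

If the scan ends with the target unmet (page_shortage still positive in the
real function), the last two hunks of the diff apply: the swapout daemon is
woken, and the sync daemon is woken if vnodes were skipped and not enough
pages were freed.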