From owner-svn-src-head@freebsd.org Thu Apr 19 14:09:45 2018
Delivered-To: svn-src-head@mailman.ysv.freebsd.org
Message-Id: <201804191409.w3JE9jvW017026@repo.freebsd.org>
From: Mark Johnston
Date: Thu, 19 Apr 2018 14:09:45 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r332771 - head/sys/vm
X-SVN-Group: head
X-SVN-Commit-Author: markj
X-SVN-Commit-Paths: head/sys/vm
X-SVN-Commit-Revision: 332771
X-SVN-Commit-Repository: base
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-head@freebsd.org
X-Mailman-Version: 2.1.25
Precedence: list
List-Id: SVN commit messages for the src tree for head/-current
X-List-Received-Date: Thu, 19 Apr 2018 14:09:46 -0000

Author: markj
Date: Thu Apr 19 14:09:44 2018
New Revision: 332771

URL: https://svnweb.freebsd.org/changeset/base/332771

Log:
  Initialize marker pages in vm_page_domain_init().  They were previously
  initialized by the corresponding page daemon threads, but for
  vmd_inacthead this may be too late if vm_page_deactivate_noreuse() is
  called during boot.

  Reported and tested by:	cperciva
  Reviewed by:	alc, kib
  MFC after:	1 week

Modified:
  head/sys/vm/vm_page.c
  head/sys/vm/vm_page.h
  head/sys/vm/vm_pageout.c
  head/sys/vm/vm_pagequeue.h

Modified: head/sys/vm/vm_page.c
==============================================================================
--- head/sys/vm/vm_page.c	Thu Apr 19 13:37:59 2018	(r332770)
+++ head/sys/vm/vm_page.c	Thu Apr 19 14:09:44 2018	(r332771)
@@ -437,6 +437,23 @@ sysctl_vm_page_blacklist(SYSCTL_HANDLER_ARGS)
 	return (error);
 }
 
+/*
+ * Initialize a dummy page for use in scans of the specified paging queue.
+ * In principle, this function only needs to set the flag PG_MARKER.
+ * Nonetheless, it write busies and initializes the hold count to one as
+ * safety precautions.
+ */
+void
+vm_page_init_marker(vm_page_t marker, int queue)
+{
+
+	bzero(marker, sizeof(*marker));
+	marker->flags = PG_MARKER;
+	marker->busy_lock = VPB_SINGLE_EXCLUSIVER;
+	marker->queue = queue;
+	marker->hold_count = 1;
+}
+
 static void
 vm_page_domain_init(int domain)
 {
@@ -464,9 +481,13 @@ vm_page_domain_init(int domain)
 		TAILQ_INIT(&pq->pq_pl);
 		mtx_init(&pq->pq_mutex, pq->pq_name, "vm pagequeue",
 		    MTX_DEF | MTX_DUPOK);
+		vm_page_init_marker(&vmd->vmd_markers[i], i);
 	}
 	mtx_init(&vmd->vmd_free_mtx, "vm page free queue", NULL, MTX_DEF);
 	mtx_init(&vmd->vmd_pageout_mtx, "vm pageout lock", NULL, MTX_DEF);
+	vm_page_init_marker(&vmd->vmd_inacthead, PQ_INACTIVE);
+	TAILQ_INSERT_HEAD(&vmd->vmd_pagequeues[PQ_INACTIVE].pq_pl,
+	    &vmd->vmd_inacthead, plinks.q);
 	snprintf(vmd->vmd_name, sizeof(vmd->vmd_name), "%d", domain);
 }

Modified: head/sys/vm/vm_page.h
==============================================================================
--- head/sys/vm/vm_page.h	Thu Apr 19 13:37:59 2018	(r332770)
+++ head/sys/vm/vm_page.h	Thu Apr 19 14:09:44 2018	(r332771)
@@ -490,6 +490,7 @@ void vm_page_free_phys_pglist(struct pglist *tq);
 bool vm_page_free_prep(vm_page_t m, bool pagequeue_locked);
 vm_page_t vm_page_getfake(vm_paddr_t paddr, vm_memattr_t memattr);
 void vm_page_initfake(vm_page_t m, vm_paddr_t paddr, vm_memattr_t memattr);
+void vm_page_init_marker(vm_page_t m, int queue);
 int vm_page_insert (vm_page_t, vm_object_t, vm_pindex_t);
 void vm_page_launder(vm_page_t m);
 vm_page_t vm_page_lookup (vm_object_t, vm_pindex_t);

Modified: head/sys/vm/vm_pageout.c
==============================================================================
--- head/sys/vm/vm_pageout.c	Thu Apr 19 13:37:59 2018	(r332770)
+++ head/sys/vm/vm_pageout.c	Thu Apr 19 14:09:44 2018	(r332771)
@@ -208,23 +208,6 @@ static void vm_pageout_laundry_worker(void *arg);
 static boolean_t vm_pageout_page_lock(vm_page_t, vm_page_t *);
 
 /*
- * Initialize a dummy page for marking the caller's place in the specified
- * paging queue.  In principle, this function only needs to set the flag
- * PG_MARKER.  Nonetheless, it write busies and initializes the hold count
- * to one as safety precautions.
- */
-static void
-vm_pageout_init_marker(vm_page_t marker, u_short queue)
-{
-
-	bzero(marker, sizeof(*marker));
-	marker->flags = PG_MARKER;
-	marker->busy_lock = VPB_SINGLE_EXCLUSIVER;
-	marker->queue = queue;
-	marker->hold_count = 1;
-}
-
-/*
  * vm_pageout_fallback_object_lock:
  *
  *	Lock vm object currently associated with `m'. VM_OBJECT_TRYWLOCK is
@@ -244,11 +227,11 @@ vm_pageout_fallback_object_lock(vm_page_t m, vm_page_t
 	struct vm_page marker;
 	struct vm_pagequeue *pq;
 	boolean_t unchanged;
-	u_short queue;
 	vm_object_t object;
+	int queue;
 
 	queue = m->queue;
-	vm_pageout_init_marker(&marker, queue);
+	vm_page_init_marker(&marker, queue);
 	pq = vm_page_pagequeue(m);
 	object = m->object;
@@ -293,14 +276,14 @@ vm_pageout_page_lock(vm_page_t m, vm_page_t *next)
 	struct vm_page marker;
 	struct vm_pagequeue *pq;
 	boolean_t unchanged;
-	u_short queue;
+	int queue;
 
 	vm_page_lock_assert(m, MA_NOTOWNED);
 	if (vm_page_trylock(m))
 		return (TRUE);
 
 	queue = m->queue;
-	vm_pageout_init_marker(&marker, queue);
+	vm_page_init_marker(&marker, queue);
 	pq = vm_page_pagequeue(m);
 	TAILQ_INSERT_AFTER(&pq->pq_pl, m, &marker, plinks.q);
@@ -694,8 +677,8 @@ vm_pageout_launder(struct vm_domain *vmd, int launder,
 	struct vm_pagequeue *pq;
 	vm_object_t object;
-	vm_page_t m, next;
-	int act_delta, error, maxscan, numpagedout, starting_target;
+	vm_page_t m, marker, next;
+	int act_delta, error, maxscan, numpagedout, queue, starting_target;
 	int vnodes_skipped;
 	bool pageout_ok, queue_locked;
@@ -716,11 +699,14 @@ vm_pageout_launder(struct vm_domain *vmd, int launder,
 	 * swap devices are configured.
 	 */
 	if (atomic_load_acq_int(&swapdev_enabled))
-		pq = &vmd->vmd_pagequeues[PQ_UNSWAPPABLE];
+		queue = PQ_UNSWAPPABLE;
 	else
-		pq = &vmd->vmd_pagequeues[PQ_LAUNDRY];
+		queue = PQ_LAUNDRY;
 
 scan:
+	pq = &vmd->vmd_pagequeues[queue];
+	marker = &vmd->vmd_markers[queue];
+
 	vm_pagequeue_lock(pq);
 	maxscan = pq->pq_cnt;
 	queue_locked = true;
@@ -762,8 +748,7 @@ scan:
 		 * Unlock the laundry queue, invalidating the 'next' pointer.
 		 * Use a marker to remember our place in the laundry queue.
 		 */
-		TAILQ_INSERT_AFTER(&pq->pq_pl, m, &vmd->vmd_laundry_marker,
-		    plinks.q);
+		TAILQ_INSERT_AFTER(&pq->pq_pl, m, marker, plinks.q);
 		vm_pagequeue_unlock(pq);
 		queue_locked = false;
@@ -889,13 +874,13 @@ relock_queue:
 			vm_pagequeue_lock(pq);
 			queue_locked = true;
 		}
-		next = TAILQ_NEXT(&vmd->vmd_laundry_marker, plinks.q);
-		TAILQ_REMOVE(&pq->pq_pl, &vmd->vmd_laundry_marker, plinks.q);
+		next = TAILQ_NEXT(marker, plinks.q);
+		TAILQ_REMOVE(&pq->pq_pl, marker, plinks.q);
 	}
 	vm_pagequeue_unlock(pq);
 
-	if (launder > 0 && pq == &vmd->vmd_pagequeues[PQ_UNSWAPPABLE]) {
-		pq = &vmd->vmd_pagequeues[PQ_LAUNDRY];
+	if (launder > 0 && queue == PQ_UNSWAPPABLE) {
+		queue = PQ_LAUNDRY;
 		goto scan;
 	}
@@ -951,7 +936,6 @@ vm_pageout_laundry_worker(void *arg)
 	vmd = VM_DOMAIN(domain);
 	pq = &vmd->vmd_pagequeues[PQ_LAUNDRY];
 	KASSERT(vmd->vmd_segs != 0, ("domain without segments"));
-	vm_pageout_init_marker(&vmd->vmd_laundry_marker, PQ_LAUNDRY);
 
 	shortfall = 0;
 	in_shortfall = false;
@@ -1105,7 +1089,7 @@ dolaundry:
 static bool
 vm_pageout_scan(struct vm_domain *vmd, int pass, int shortage)
 {
-	vm_page_t m, next;
+	vm_page_t m, marker, next;
 	struct vm_pagequeue *pq;
 	vm_object_t object;
 	long min_scan;
@@ -1159,6 +1143,7 @@ vm_pageout_scan(struct vm_domain *vmd, int pass, int s
 	 * decisions for the inactive queue, only for the active queue.)
 	 */
 	pq = &vmd->vmd_pagequeues[PQ_INACTIVE];
+	marker = &vmd->vmd_markers[PQ_INACTIVE];
 	maxscan = pq->pq_cnt;
 	vm_pagequeue_lock(pq);
 	queue_locked = TRUE;
@@ -1250,7 +1235,7 @@ unlock_page:
 		 * vm_page_free(), or vm_page_launder() is called.  Use a
 		 * marker to remember our place in the inactive queue.
 		 */
-		TAILQ_INSERT_AFTER(&pq->pq_pl, m, &vmd->vmd_marker, plinks.q);
+		TAILQ_INSERT_AFTER(&pq->pq_pl, m, marker, plinks.q);
 		vm_page_dequeue_locked(m);
 		vm_pagequeue_unlock(pq);
 		queue_locked = FALSE;
@@ -1336,8 +1321,8 @@ drop_page:
 			vm_pagequeue_lock(pq);
 			queue_locked = TRUE;
 		}
-		next = TAILQ_NEXT(&vmd->vmd_marker, plinks.q);
-		TAILQ_REMOVE(&pq->pq_pl, &vmd->vmd_marker, plinks.q);
+		next = TAILQ_NEXT(marker, plinks.q);
+		TAILQ_REMOVE(&pq->pq_pl, marker, plinks.q);
 	}
 	vm_pagequeue_unlock(pq);
@@ -1781,10 +1766,6 @@ vm_pageout_worker(void *arg)
 	KASSERT(vmd->vmd_segs != 0, ("domain without segments"));
 	vmd->vmd_last_active_scan = ticks;
-	vm_pageout_init_marker(&vmd->vmd_marker, PQ_INACTIVE);
-	vm_pageout_init_marker(&vmd->vmd_inacthead, PQ_INACTIVE);
-	TAILQ_INSERT_HEAD(&vmd->vmd_pagequeues[PQ_INACTIVE].pq_pl,
-	    &vmd->vmd_inacthead, plinks.q);
 
 	/*
 	 * The pageout daemon worker is never done, so loop forever.

Modified: head/sys/vm/vm_pagequeue.h
==============================================================================
--- head/sys/vm/vm_pagequeue.h	Thu Apr 19 13:37:59 2018	(r332770)
+++ head/sys/vm/vm_pagequeue.h	Thu Apr 19 14:09:44 2018	(r332771)
@@ -107,8 +107,7 @@ struct vm_domain {
 	boolean_t vmd_oom;
 	int vmd_oom_seq;
 	int vmd_last_active_scan;
-	struct vm_page vmd_laundry_marker;
-	struct vm_page vmd_marker; /* marker for pagedaemon private use */
+	struct vm_page vmd_markers[PQ_COUNT]; /* markers for queue scans */
 	struct vm_page vmd_inacthead; /* marker for LRU-defeating insertions */
 	int vmd_pageout_wanted;	/* (a, p) pageout daemon wait channel */