From owner-svn-src-user@freebsd.org  Mon May  1 01:33:06 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 0D246D58AF4
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Mon,  1 May 2017 01:33:06 +0000 (UTC)
 (envelope-from markj@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id AE7C41527;
 Mon,  1 May 2017 01:33:05 +0000 (UTC)
 (envelope-from markj@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v411X458078540;
 Mon, 1 May 2017 01:33:04 GMT (envelope-from markj@FreeBSD.org)
Received: (from markj@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v411X4S5078539;
 Mon, 1 May 2017 01:33:04 GMT (envelope-from markj@FreeBSD.org)
Message-Id: <201705010133.v411X4S5078539@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: markj set sender to
 markj@FreeBSD.org using -f
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 01:33:04 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317616 - in user/markj: . PQ_LAUNDRY_11
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-user@freebsd.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: "SVN commit messages for the experimental " user"
 src tree" <svn-src-user.freebsd.org>
List-Unsubscribe: <https://lists.freebsd.org/mailman/options/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=unsubscribe>
List-Archive: <http://lists.freebsd.org/pipermail/svn-src-user/>
List-Post: <mailto:svn-src-user@freebsd.org>
List-Help: <mailto:svn-src-user-request@freebsd.org?subject=help>
List-Subscribe: <https://lists.freebsd.org/mailman/listinfo/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=subscribe>
X-List-Received-Date: Mon, 01 May 2017 01:33:06 -0000

Author: markj
Date: Mon May  1 01:33:04 2017
New Revision: 317616
URL: https://svnweb.freebsd.org/changeset/base/317616

Log:
  Branch stable/11@r317613 for testing an MFC of the PQ_LAUNDRY code and
  the PG_CACHE removal.

Added:
  user/markj/
  user/markj/PQ_LAUNDRY_11/
     - copied from r317615, stable/11/

From owner-svn-src-user@freebsd.org  Mon May  1 01:35:45 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 066CFD58C02
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Mon,  1 May 2017 01:35:45 +0000 (UTC)
 (envelope-from markj@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id C6C601672;
 Mon,  1 May 2017 01:35:44 +0000 (UTC)
 (envelope-from markj@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v411ZhYB078750;
 Mon, 1 May 2017 01:35:43 GMT (envelope-from markj@FreeBSD.org)
Received: (from markj@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v411Zh60078742;
 Mon, 1 May 2017 01:35:43 GMT (envelope-from markj@FreeBSD.org)
Message-Id: <201705010135.v411Zh60078742@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: markj set sender to
 markj@FreeBSD.org using -f
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 01:35:43 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317617 - in user/markj/PQ_LAUNDRY_11/sys: sys vm
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-user@freebsd.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: "SVN commit messages for the experimental " user"
 src tree" <svn-src-user.freebsd.org>
List-Unsubscribe: <https://lists.freebsd.org/mailman/options/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=unsubscribe>
List-Archive: <http://lists.freebsd.org/pipermail/svn-src-user/>
List-Post: <mailto:svn-src-user@freebsd.org>
List-Help: <mailto:svn-src-user-request@freebsd.org?subject=help>
List-Subscribe: <https://lists.freebsd.org/mailman/listinfo/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=subscribe>
X-List-Received-Date: Mon, 01 May 2017 01:35:45 -0000

Author: markj
Date: Mon May  1 01:35:43 2017
New Revision: 317617
URL: https://svnweb.freebsd.org/changeset/base/317617

Log:
  MFC r308474:
  Add PQ_LAUNDRY.

Modified:
  user/markj/PQ_LAUNDRY_11/sys/sys/vmmeter.h
  user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_meter.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_pageout.c
Directory Properties:
  user/markj/PQ_LAUNDRY_11/   (props changed)

Modified: user/markj/PQ_LAUNDRY_11/sys/sys/vmmeter.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/sys/vmmeter.h	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/sys/vmmeter.h	Mon May  1 01:35:43 2017	(r317617)
@@ -75,9 +75,10 @@ struct vmmeter {
 	u_int v_vnodepgsin;	/* (p) vnode_pager pages paged in */
 	u_int v_vnodepgsout;	/* (p) vnode pager pages paged out */
 	u_int v_intrans;	/* (p) intransit blocking page faults */
-	u_int v_reactivated;	/* (f) pages reactivated from free list */
+	u_int v_reactivated;	/* (p) pages reactivated by the pagedaemon */
 	u_int v_pdwakeups;	/* (p) times daemon has awaken from sleep */
 	u_int v_pdpages;	/* (p) pages analyzed by daemon */
+	u_int v_pdshortfalls;	/* (p) page reclamation shortfalls */
 
 	u_int v_tcached;	/* (p) total pages cached */
 	u_int v_dfree;		/* (p) pages freed by daemon */
@@ -96,6 +97,7 @@ struct vmmeter {
 	u_int v_active_count;	/* (q) pages active */
 	u_int v_inactive_target; /* (c) pages desired inactive */
 	u_int v_inactive_count;	/* (q) pages inactive */
+	u_int v_laundry_count;	/* (q) pages eligible for laundering */
 	u_int v_cache_count;	/* (f) pages on cache queue */
 	u_int v_pageout_free_min;   /* (c) min pages reserved for kernel */
 	u_int v_interrupt_free_min; /* (c) reserved pages for int code */
@@ -111,7 +113,6 @@ struct vmmeter {
 	u_int v_vforkpages;	/* (p) VM pages affected by vfork() */
 	u_int v_rforkpages;	/* (p) VM pages affected by rfork() */
 	u_int v_kthreadpages;	/* (p) VM pages affected by fork() by kernel */
-	u_int v_spare[2];
 };
 #ifdef _KERNEL
 
@@ -184,6 +185,25 @@ vm_paging_needed(void)
 	    (u_int)vm_pageout_wakeup_thresh);
 }
 
+/*
+ * Return the number of pages we need to launder.
+ * A positive number indicates that we have a shortfall of clean pages.
+ */
+static inline int
+vm_laundry_target(void)
+{
+
+	return (vm_paging_target());
+}
+
+/*
+ * Obtain the value of a per-CPU counter.
+ */
+#define	VM_METER_PCPU_CNT(member)					\
+	vm_meter_cnt(__offsetof(struct vmmeter, member))
+
+u_int	vm_meter_cnt(size_t);
+
 #endif
 
 /* systemwide totals computed every five seconds */

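[Editor's sketch] The `VM_METER_PCPU_CNT()` macro and `vm_meter_cnt()` prototype added above implement a common kernel pattern: per-CPU counters summed on demand by byte offset into a shared structure layout. A minimal userspace sketch of the same technique follows; the `meter`/`NCPU` names and the fixed initial counts are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stddef.h>

#define NCPU 4

/* Toy analogue of struct vmmeter: one global copy plus one per CPU. */
struct meter {
	unsigned int v_pdpages;
	unsigned int v_pdshortfalls;
};

static struct meter global_meter = { .v_pdpages = 10 };
static struct meter percpu_meter[NCPU] = {
	{ .v_pdpages = 1 }, { .v_pdpages = 1 },
	{ .v_pdpages = 1 }, { .v_pdpages = 1 },
};

/*
 * Analogue of vm_meter_cnt(): start from the global structure's field
 * at 'offset', then add the same field from every per-CPU structure.
 */
static unsigned int
meter_cnt(size_t offset)
{
	unsigned int count;
	int i;

	count = *(unsigned int *)((char *)&global_meter + offset);
	for (i = 0; i < NCPU; i++)
		count += *(unsigned int *)((char *)&percpu_meter[i] + offset);
	return (count);
}

/* Analogue of VM_METER_PCPU_CNT(): name a field, sum it everywhere. */
#define	METER_CNT(member)	meter_cnt(offsetof(struct meter, member))
```

Taking a byte offset rather than a member name lets one accumulator serve every counter, which is why the diff's sysctl handler can compute the offset from `arg1` and call the same `vm_meter_cnt()` function.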
Modified: user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c	Mon May  1 01:35:43 2017	(r317617)
@@ -1549,17 +1549,18 @@ swp_pager_async_iodone(struct buf *bp)
 			 * For write success, clear the dirty
 			 * status, then finish the I/O ( which decrements the
 			 * busy count and possibly wakes waiter's up ).
+			 * A page is only written to swap after a period of
+			 * inactivity.  Therefore, we do not expect it to be
+			 * reused.
 			 */
 			KASSERT(!pmap_page_is_write_mapped(m),
 			    ("swp_pager_async_iodone: page %p is not write"
 			    " protected", m));
 			vm_page_undirty(m);
+			vm_page_lock(m);
+			vm_page_deactivate_noreuse(m);
+			vm_page_unlock(m);
 			vm_page_sunbusy(m);
-			if (vm_page_count_severe()) {
-				vm_page_lock(m);
-				vm_page_try_to_cache(m);
-				vm_page_unlock(m);
-			}
 		}
 	}
 
@@ -1635,12 +1636,15 @@ swap_pager_isswapped(vm_object_t object,
 /*
  * SWP_PAGER_FORCE_PAGEIN() - force a swap block to be paged in
  *
- *	This routine dissociates the page at the given index within a
- *	swap block from its backing store, paging it in if necessary.
- *	If the page is paged in, it is placed in the inactive queue,
- *	since it had its backing store ripped out from under it.
- *	We also attempt to swap in all other pages in the swap block,
- *	we only guarantee that the one at the specified index is
+ *	This routine dissociates the page at the given index within an object
+ *	from its backing store, paging it in if it does not reside in memory.
+ *	If the page is paged in, it is marked dirty and placed in the laundry
+ *	queue.  The page is marked dirty because it no longer has backing
+ *	store.  It is placed in the laundry queue because it has not been
+ *	accessed recently.  Otherwise, it would already reside in memory.
+ *
+ *	We also attempt to swap in all other pages in the swap block.
+ *	However, we only guarantee that the one at the specified index is
  *	paged in.
  *
  *	XXX - The code to page the whole block in doesn't work, so we
@@ -1669,7 +1673,7 @@ swp_pager_force_pagein(vm_object_t objec
 	vm_object_pip_wakeup(object);
 	vm_page_dirty(m);
 	vm_page_lock(m);
-	vm_page_deactivate(m);
+	vm_page_launder(m);
 	vm_page_unlock(m);
 	vm_page_xunbusy(m);
 	vm_pager_page_unswapped(m);

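[Editor's sketch] The updated comment above gives the rationale for the `vm_page_deactivate` -> `vm_page_launder` change: once the swap block is freed, memory holds the only copy of the data (so the page must be dirtied), and the page was not recently referenced (so it belongs in the laundry queue). A toy model of that final disposition, with all names illustrative rather than the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

#define PQ_INACTIVE	0
#define PQ_LAUNDRY	2

struct toy_page {
	bool dirty;
	int queue;
};

/*
 * Model of swp_pager_force_pagein()'s final disposition: the page loses
 * its backing store, so it is dirtied (memory now holds the only copy)
 * and laundered (it was idle, or it would already be resident).
 */
static struct toy_page
force_pagein(void)
{
	struct toy_page p = { .dirty = false, .queue = PQ_INACTIVE };

	p.dirty = true;		/* no swap copy remains */
	p.queue = PQ_LAUNDRY;	/* was vm_page_deactivate() before this MFC */
	return (p);
}
```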
Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c	Mon May  1 01:35:43 2017	(r317617)
@@ -485,11 +485,12 @@ int
 vm_fault_hold(vm_map_t map, vm_offset_t vaddr, vm_prot_t fault_type,
     int fault_flags, vm_page_t *m_hold)
 {
-	vm_prot_t prot;
-	vm_object_t next_object;
 	struct faultstate fs;
 	struct vnode *vp;
+	vm_object_t next_object, retry_object;
 	vm_offset_t e_end, e_start;
+	vm_pindex_t retry_pindex;
+	vm_prot_t prot, retry_prot;
 	int ahead, alloc_req, behind, cluster_offset, error, era, faultcount;
 	int locked, nera, result, rv;
 	u_char behavior;
@@ -1143,10 +1144,6 @@ readrest:
 	 * lookup.
 	 */
 	if (!fs.lookup_still_valid) {
-		vm_object_t retry_object;
-		vm_pindex_t retry_pindex;
-		vm_prot_t retry_prot;
-
 		if (!vm_map_trylock_read(fs.map)) {
 			release_page(&fs);
 			unlock_and_deallocate(&fs);

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_meter.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_meter.c	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_meter.c	Mon May  1 01:35:43 2017	(r317617)
@@ -209,29 +209,37 @@ vmtotal(SYSCTL_HANDLER_ARGS)
 }
 
 /*
- * vcnt() -	accumulate statistics from all cpus and the global cnt
- *		structure.
+ * vm_meter_cnt() -	accumulate statistics from all cpus and the global cnt
+ *			structure.
  *
  *	The vmmeter structure is now per-cpu as well as global.  Those
  *	statistics which can be kept on a per-cpu basis (to avoid cache
  *	stalls between cpus) can be moved to the per-cpu vmmeter.  Remaining
  *	statistics, such as v_free_reserved, are left in the global
  *	structure.
- *
- * (sysctl_oid *oidp, void *arg1, int arg2, struct sysctl_req *req)
  */
-static int
-vcnt(SYSCTL_HANDLER_ARGS)
+u_int
+vm_meter_cnt(size_t offset)
 {
-	int count = *(int *)arg1;
-	int offset = (char *)arg1 - (char *)&vm_cnt;
+	struct pcpu *pcpu;
+	u_int count;
 	int i;
 
+	count = *(u_int *)((char *)&vm_cnt + offset);
 	CPU_FOREACH(i) {
-		struct pcpu *pcpu = pcpu_find(i);
-		count += *(int *)((char *)&pcpu->pc_cnt + offset);
+		pcpu = pcpu_find(i);
+		count += *(u_int *)((char *)&pcpu->pc_cnt + offset);
 	}
-	return (SYSCTL_OUT(req, &count, sizeof(int)));
+	return (count);
+}
+
+static int
+cnt_sysctl(SYSCTL_HANDLER_ARGS)
+{
+	u_int count;
+
+	count = vm_meter_cnt((char *)arg1 - (char *)&vm_cnt);
+	return (SYSCTL_OUT(req, &count, sizeof(count)));
 }
 
 SYSCTL_PROC(_vm, VM_TOTAL, vmtotal, CTLTYPE_OPAQUE|CTLFLAG_RD|CTLFLAG_MPSAFE,
@@ -246,8 +254,8 @@ SYSCTL_NODE(_vm_stats, OID_AUTO, misc, C
 
 #define	VM_STATS(parent, var, descr) \
 	SYSCTL_PROC(parent, OID_AUTO, var, \
-	    CTLTYPE_UINT | CTLFLAG_RD | CTLFLAG_MPSAFE, &vm_cnt.var, 0, vcnt, \
-	    "IU", descr)
+	    CTLTYPE_UINT | CTLFLAG_RD | CTLFLAG_MPSAFE, &vm_cnt.var, 0,	\
+	    cnt_sysctl, "IU", descr)
 #define	VM_STATS_VM(var, descr)		VM_STATS(_vm_stats_vm, var, descr)
 #define	VM_STATS_SYS(var, descr)	VM_STATS(_vm_stats_sys, var, descr)
 
@@ -271,9 +279,10 @@ VM_STATS_VM(v_vnodeout, "Vnode pager pag
 VM_STATS_VM(v_vnodepgsin, "Vnode pages paged in");
 VM_STATS_VM(v_vnodepgsout, "Vnode pages paged out");
 VM_STATS_VM(v_intrans, "In transit page faults");
-VM_STATS_VM(v_reactivated, "Pages reactivated from free list");
+VM_STATS_VM(v_reactivated, "Pages reactivated by pagedaemon");
 VM_STATS_VM(v_pdwakeups, "Pagedaemon wakeups");
 VM_STATS_VM(v_pdpages, "Pages analyzed by pagedaemon");
+VM_STATS_VM(v_pdshortfalls, "Page reclamation shortfalls");
 VM_STATS_VM(v_tcached, "Total pages cached");
 VM_STATS_VM(v_dfree, "Pages freed by pagedaemon");
 VM_STATS_VM(v_pfree, "Pages freed by exiting processes");
@@ -288,6 +297,7 @@ VM_STATS_VM(v_wire_count, "Wired pages")
 VM_STATS_VM(v_active_count, "Active pages");
 VM_STATS_VM(v_inactive_target, "Desired inactive pages");
 VM_STATS_VM(v_inactive_count, "Inactive pages");
+VM_STATS_VM(v_laundry_count, "Pages eligible for laundering");
 VM_STATS_VM(v_cache_count, "Pages on cache queue");
 VM_STATS_VM(v_pageout_free_min, "Min pages reserved for kernel");
 VM_STATS_VM(v_interrupt_free_min, "Reserved pages for interrupt code");

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:35:43 2017	(r317617)
@@ -2333,9 +2333,9 @@ sysctl_vm_object_list(SYSCTL_HANDLER_ARG
 			 * sysctl is only meant to give an
 			 * approximation of the system anyway.
 			 */
-			if (m->queue == PQ_ACTIVE)
+			if (vm_page_active(m))
 				kvo.kvo_active++;
-			else if (m->queue == PQ_INACTIVE)
+			else if (vm_page_inactive(m))
 				kvo.kvo_inactive++;
 		}
 

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:35:43 2017	(r317617)
@@ -391,6 +391,10 @@ vm_page_domain_init(struct vm_domain *vm
 	    "vm active pagequeue";
 	*__DECONST(u_int **, &vmd->vmd_pagequeues[PQ_ACTIVE].pq_vcnt) =
 	    &vm_cnt.v_active_count;
+	*__DECONST(char **, &vmd->vmd_pagequeues[PQ_LAUNDRY].pq_name) =
+	    "vm laundry pagequeue";
+	*__DECONST(int **, &vmd->vmd_pagequeues[PQ_LAUNDRY].pq_vcnt) =
+	    &vm_cnt.v_laundry_count;
 	vmd->vmd_page_count = 0;
 	vmd->vmd_free_count = 0;
 	vmd->vmd_segs = 0;
@@ -1756,9 +1760,7 @@ vm_page_alloc(vm_object_t object, vm_pin
 		    ("vm_page_alloc: cached page %p is PG_ZERO", m));
 		KASSERT(m->valid != 0,
 		    ("vm_page_alloc: cached page %p is invalid", m));
-		if (m->object == object && m->pindex == pindex)
-			vm_cnt.v_reactivated++;
-		else
+		if (m->object != object || m->pindex != pindex)
 			m->valid = 0;
 		m_object = m->object;
 		vm_page_cache_remove(m);
@@ -2284,7 +2286,7 @@ retry:
 			}
 			KASSERT((m->flags & PG_UNHOLDFREE) == 0,
 			    ("page %p is PG_UNHOLDFREE", m));
-			/* Don't care: PG_NODUMP, PG_WINATCFLS, PG_ZERO. */
+			/* Don't care: PG_NODUMP, PG_ZERO. */
 			if (object->type != OBJT_DEFAULT &&
 			    object->type != OBJT_SWAP &&
 			    object->type != OBJT_VNODE)
@@ -2480,7 +2482,7 @@ retry:
 			}
 			KASSERT((m->flags & PG_UNHOLDFREE) == 0,
 			    ("page %p is PG_UNHOLDFREE", m));
-			/* Don't care: PG_NODUMP, PG_WINATCFLS, PG_ZERO. */
+			/* Don't care: PG_NODUMP, PG_ZERO. */
 			if (object->type != OBJT_DEFAULT &&
 			    object->type != OBJT_SWAP &&
 			    object->type != OBJT_VNODE)
@@ -2809,7 +2811,10 @@ struct vm_pagequeue *
 vm_page_pagequeue(vm_page_t m)
 {
 
-	return (&vm_phys_domain(m)->vmd_pagequeues[m->queue]);
+	if (vm_page_in_laundry(m))
+		return (&vm_dom[0].vmd_pagequeues[m->queue]);
+	else
+		return (&vm_phys_domain(m)->vmd_pagequeues[m->queue]);
 }
 
 /*
@@ -2871,7 +2876,10 @@ vm_page_enqueue(uint8_t queue, vm_page_t
 	KASSERT(queue < PQ_COUNT,
 	    ("vm_page_enqueue: invalid queue %u request for page %p",
 	    queue, m));
-	pq = &vm_phys_domain(m)->vmd_pagequeues[queue];
+	if (queue == PQ_LAUNDRY)
+		pq = &vm_dom[0].vmd_pagequeues[queue];
+	else
+		pq = &vm_phys_domain(m)->vmd_pagequeues[queue];
 	vm_pagequeue_lock(pq);
 	m->queue = queue;
 	TAILQ_INSERT_TAIL(&pq->pq_pl, m, plinks.q);
@@ -3159,11 +3167,8 @@ vm_page_unwire(vm_page_t m, uint8_t queu
 		if (m->wire_count == 0) {
 			atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 			if ((m->oflags & VPO_UNMANAGED) == 0 &&
-			    m->object != NULL && queue != PQ_NONE) {
-				if (queue == PQ_INACTIVE)
-					m->flags &= ~PG_WINATCFLS;
+			    m->object != NULL && queue != PQ_NONE)
 				vm_page_enqueue(queue, m);
-			}
 			return (TRUE);
 		} else
 			return (FALSE);
@@ -3216,7 +3221,6 @@ _vm_page_deactivate(vm_page_t m, boolean
 		} else {
 			if (queue != PQ_NONE)
 				vm_page_dequeue(m);
-			m->flags &= ~PG_WINATCFLS;
 			vm_pagequeue_lock(pq);
 		}
 		m->queue = PQ_INACTIVE;
@@ -3256,24 +3260,25 @@ vm_page_deactivate_noreuse(vm_page_t m)
 }
 
 /*
- * vm_page_try_to_cache:
+ * vm_page_launder
  *
- * Returns 0 on failure, 1 on success
+ * 	Put a page in the laundry.
  */
-int
-vm_page_try_to_cache(vm_page_t m)
+void
+vm_page_launder(vm_page_t m)
 {
+	int queue;
 
-	vm_page_lock_assert(m, MA_OWNED);
-	VM_OBJECT_ASSERT_WLOCKED(m->object);
-	if (m->dirty || m->hold_count || m->wire_count ||
-	    (m->oflags & VPO_UNMANAGED) != 0 || vm_page_busied(m))
-		return (0);
-	pmap_remove_all(m);
-	if (m->dirty)
-		return (0);
-	vm_page_cache(m);
-	return (1);
+	vm_page_assert_locked(m);
+	if ((queue = m->queue) != PQ_LAUNDRY) {
+		if (m->wire_count == 0 && (m->oflags & VPO_UNMANAGED) == 0) {
+			if (queue != PQ_NONE)
+				vm_page_dequeue(m);
+			vm_page_enqueue(PQ_LAUNDRY, m);
+		} else
+			KASSERT(queue == PQ_NONE,
+			    ("wired page %p is queued", m));
+	}
 }
 
 /*
@@ -3300,112 +3305,6 @@ vm_page_try_to_free(vm_page_t m)
 }
 
 /*
- * vm_page_cache
- *
- * Put the specified page onto the page cache queue (if appropriate).
- *
- * The object and page must be locked.
- */
-void
-vm_page_cache(vm_page_t m)
-{
-	vm_object_t object;
-	boolean_t cache_was_empty;
-
-	vm_page_lock_assert(m, MA_OWNED);
-	object = m->object;
-	VM_OBJECT_ASSERT_WLOCKED(object);
-	if (vm_page_busied(m) || (m->oflags & VPO_UNMANAGED) ||
-	    m->hold_count || m->wire_count)
-		panic("vm_page_cache: attempting to cache busy page");
-	KASSERT(!pmap_page_is_mapped(m),
-	    ("vm_page_cache: page %p is mapped", m));
-	KASSERT(m->dirty == 0, ("vm_page_cache: page %p is dirty", m));
-	if (m->valid == 0 || object->type == OBJT_DEFAULT ||
-	    (object->type == OBJT_SWAP &&
-	    !vm_pager_has_page(object, m->pindex, NULL, NULL))) {
-		/*
-		 * Hypothesis: A cache-eligible page belonging to a
-		 * default object or swap object but without a backing
-		 * store must be zero filled.
-		 */
-		vm_page_free(m);
-		return;
-	}
-	KASSERT((m->flags & PG_CACHED) == 0,
-	    ("vm_page_cache: page %p is already cached", m));
-
-	/*
-	 * Remove the page from the paging queues.
-	 */
-	vm_page_remque(m);
-
-	/*
-	 * Remove the page from the object's collection of resident
-	 * pages.
-	 */
-	vm_radix_remove(&object->rtree, m->pindex);
-	TAILQ_REMOVE(&object->memq, m, listq);
-	object->resident_page_count--;
-
-	/*
-	 * Restore the default memory attribute to the page.
-	 */
-	if (pmap_page_get_memattr(m) != VM_MEMATTR_DEFAULT)
-		pmap_page_set_memattr(m, VM_MEMATTR_DEFAULT);
-
-	/*
-	 * Insert the page into the object's collection of cached pages
-	 * and the physical memory allocator's cache/free page queues.
-	 */
-	m->flags &= ~PG_ZERO;
-	mtx_lock(&vm_page_queue_free_mtx);
-	cache_was_empty = vm_radix_is_empty(&object->cache);
-	if (vm_radix_insert(&object->cache, m)) {
-		mtx_unlock(&vm_page_queue_free_mtx);
-		if (object->type == OBJT_VNODE &&
-		    object->resident_page_count == 0)
-			vdrop(object->handle);
-		m->object = NULL;
-		vm_page_free(m);
-		return;
-	}
-
-	/*
-	 * The above call to vm_radix_insert() could reclaim the one pre-
-	 * existing cached page from this object, resulting in a call to
-	 * vdrop().
-	 */
-	if (!cache_was_empty)
-		cache_was_empty = vm_radix_is_singleton(&object->cache);
-
-	m->flags |= PG_CACHED;
-	vm_cnt.v_cache_count++;
-	PCPU_INC(cnt.v_tcached);
-#if VM_NRESERVLEVEL > 0
-	if (!vm_reserv_free_page(m)) {
-#else
-	if (TRUE) {
-#endif
-		vm_phys_free_pages(m, 0);
-	}
-	vm_page_free_wakeup();
-	mtx_unlock(&vm_page_queue_free_mtx);
-
-	/*
-	 * Increment the vnode's hold count if this is the object's only
-	 * cached page.  Decrement the vnode's hold count if this was
-	 * the object's only resident page.
-	 */
-	if (object->type == OBJT_VNODE) {
-		if (cache_was_empty && object->resident_page_count != 0)
-			vhold(object->handle);
-		else if (!cache_was_empty && object->resident_page_count == 0)
-			vdrop(object->handle);
-	}
-}
-
-/*
  * vm_page_advise
  *
  * 	Deactivate or do nothing, as appropriate.
@@ -3448,11 +3347,13 @@ vm_page_advise(vm_page_t m, int advice)
 	/*
 	 * Place clean pages near the head of the inactive queue rather than
 	 * the tail, thus defeating the queue's LRU operation and ensuring that
-	 * the page will be reused quickly.  Dirty pages are given a chance to
-	 * cycle once through the inactive queue before becoming eligible for
-	 * laundering.
+	 * the page will be reused quickly.  Dirty pages not already in the
+	 * laundry are moved there.
 	 */
-	_vm_page_deactivate(m, m->dirty == 0);
+	if (m->dirty == 0)
+		vm_page_deactivate_noreuse(m);
+	else
+		vm_page_launder(m);
 }
 
 /*
@@ -3961,6 +3862,7 @@ DB_SHOW_COMMAND(page, vm_page_print_page
 	db_printf("vm_cnt.v_cache_count: %d\n", vm_cnt.v_cache_count);
 	db_printf("vm_cnt.v_inactive_count: %d\n", vm_cnt.v_inactive_count);
 	db_printf("vm_cnt.v_active_count: %d\n", vm_cnt.v_active_count);
+	db_printf("vm_cnt.v_laundry_count: %d\n", vm_cnt.v_laundry_count);
 	db_printf("vm_cnt.v_wire_count: %d\n", vm_cnt.v_wire_count);
 	db_printf("vm_cnt.v_free_reserved: %d\n", vm_cnt.v_free_reserved);
 	db_printf("vm_cnt.v_free_min: %d\n", vm_cnt.v_free_min);
@@ -3975,12 +3877,14 @@ DB_SHOW_COMMAND(pageq, vm_page_print_pag
 	db_printf("pq_free %d pq_cache %d\n",
 	    vm_cnt.v_free_count, vm_cnt.v_cache_count);
 	for (dom = 0; dom < vm_ndomains; dom++) {
-		db_printf("dom %d page_cnt %d free %d pq_act %d pq_inact %d\n",
+		db_printf(
+	    "dom %d page_cnt %d free %d pq_act %d pq_inact %d pq_laund %d\n",
 		    dom,
 		    vm_dom[dom].vmd_page_count,
 		    vm_dom[dom].vmd_free_count,
 		    vm_dom[dom].vmd_pagequeues[PQ_ACTIVE].pq_cnt,
-		    vm_dom[dom].vmd_pagequeues[PQ_INACTIVE].pq_cnt);
+		    vm_dom[dom].vmd_pagequeues[PQ_INACTIVE].pq_cnt,
+		    vm_dom[dom].vmd_pagequeues[PQ_LAUNDRY].pq_cnt);
 	}
 }
 

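[Editor's sketch] The new `vm_page_launder()` above moves a page to PQ_LAUNDRY unless it is already there, and asserts that wired pages remain unqueued. A compressed model of that state machine, minus locking and the unmanaged-page check; `toy_page` and the helper are illustrative, not the kernel's `vm_page`:

```c
#include <assert.h>

#define PQ_NONE		255
#define PQ_INACTIVE	0
#define PQ_LAUNDRY	2

struct toy_page {
	int queue;
	int wire_count;
};

/* Mirrors vm_page_launder()'s queue logic. */
static void
toy_launder(struct toy_page *p)
{
	if (p->queue == PQ_LAUNDRY)
		return;
	if (p->wire_count == 0)
		p->queue = PQ_LAUNDRY;	/* dequeue + enqueue in the kernel */
	else
		assert(p->queue == PQ_NONE);	/* wired pages are unqueued */
}

/* Helper: apply toy_launder() to a page in the given initial state. */
static int
launder_from(int queue, int wire_count)
{
	struct toy_page p = { queue, wire_count };

	toy_launder(&p);
	return (p.queue);
}
```

Note the ordering of the checks: a page already in PQ_LAUNDRY is left alone even if wired, matching the early test of `m->queue` in the real function.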
Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 01:35:43 2017	(r317617)
@@ -206,7 +206,8 @@ struct vm_page {
 #define	PQ_NONE		255
 #define	PQ_INACTIVE	0
 #define	PQ_ACTIVE	1
-#define	PQ_COUNT	2
+#define	PQ_LAUNDRY	2
+#define	PQ_COUNT	3
 
 TAILQ_HEAD(pglist, vm_page);
 SLIST_HEAD(spglist, vm_page);
@@ -228,6 +229,7 @@ struct vm_domain {
 	boolean_t vmd_oom;
 	int vmd_oom_seq;
 	int vmd_last_active_scan;
+	struct vm_page vmd_laundry_marker;
 	struct vm_page vmd_marker; /* marker for pagedaemon private use */
 	struct vm_page vmd_inacthead; /* marker for LRU-defeating insertions */
 };
@@ -236,6 +238,7 @@ extern struct vm_domain vm_dom[MAXMEMDOM
 
 #define	vm_pagequeue_assert_locked(pq)	mtx_assert(&(pq)->pq_mutex, MA_OWNED)
 #define	vm_pagequeue_lock(pq)		mtx_lock(&(pq)->pq_mutex)
+#define	vm_pagequeue_lockptr(pq)	(&(pq)->pq_mutex)
 #define	vm_pagequeue_unlock(pq)		mtx_unlock(&(pq)->pq_mutex)
 
 #ifdef _KERNEL
@@ -327,7 +330,6 @@ extern struct mtx_padalign pa_lock[];
 #define	PG_FICTITIOUS	0x0004		/* physical page doesn't exist */
 #define	PG_ZERO		0x0008		/* page is zeroed */
 #define	PG_MARKER	0x0010		/* special queue marker page */
-#define	PG_WINATCFLS	0x0040		/* flush dirty page on inactive q */
 #define	PG_NODUMP	0x0080		/* don't include this page in a dump */
 #define	PG_UNHOLDFREE	0x0100		/* delayed free of a held page */
 
@@ -451,10 +453,8 @@ vm_page_t vm_page_alloc_contig(vm_object
     vm_paddr_t boundary, vm_memattr_t memattr);
 vm_page_t vm_page_alloc_freelist(int, int);
 vm_page_t vm_page_grab (vm_object_t, vm_pindex_t, int);
-void vm_page_cache(vm_page_t);
 void vm_page_cache_free(vm_object_t, vm_pindex_t, vm_pindex_t);
 void vm_page_cache_transfer(vm_object_t, vm_pindex_t, vm_object_t);
-int vm_page_try_to_cache (vm_page_t);
 int vm_page_try_to_free (vm_page_t);
 void vm_page_deactivate (vm_page_t);
 void vm_page_deactivate_noreuse(vm_page_t);
@@ -465,6 +465,7 @@ vm_page_t vm_page_getfake(vm_paddr_t pad
 void vm_page_initfake(vm_page_t m, vm_paddr_t paddr, vm_memattr_t memattr);
 int vm_page_insert (vm_page_t, vm_object_t, vm_pindex_t);
 boolean_t vm_page_is_cached(vm_object_t object, vm_pindex_t pindex);
+void vm_page_launder(vm_page_t m);
 vm_page_t vm_page_lookup (vm_object_t, vm_pindex_t);
 vm_page_t vm_page_next(vm_page_t m);
 int vm_page_pa_tryrelock(pmap_t, vm_paddr_t, vm_paddr_t *);
@@ -698,5 +699,26 @@ vm_page_replace_checked(vm_page_t mnew, 
 	(void)mret;
 }
 
+static inline bool
+vm_page_active(vm_page_t m)
+{
+
+	return (m->queue == PQ_ACTIVE);
+}
+
+static inline bool
+vm_page_inactive(vm_page_t m)
+{
+
+	return (m->queue == PQ_INACTIVE);
+}
+
+static inline bool
+vm_page_in_laundry(vm_page_t m)
+{
+
+	return (m->queue == PQ_LAUNDRY);
+}
+
 #endif				/* _KERNEL */
 #endif				/* !_VM_PAGE_ */

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_pageout.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_pageout.c	Mon May  1 01:33:04 2017	(r317616)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_pageout.c	Mon May  1 01:35:43 2017	(r317617)
@@ -119,7 +119,7 @@ __FBSDID("$FreeBSD$");
 /* the kernel process "vm_pageout"*/
 static void vm_pageout(void);
 static void vm_pageout_init(void);
-static int vm_pageout_clean(vm_page_t m);
+static int vm_pageout_clean(vm_page_t m, int *numpagedout);
 static int vm_pageout_cluster(vm_page_t m);
 static bool vm_pageout_scan(struct vm_domain *vmd, int pass);
 static void vm_pageout_mightbe_oom(struct vm_domain *vmd, int page_shortage,
@@ -154,6 +154,9 @@ static struct kproc_desc vm_kp = {
 SYSINIT(vmdaemon, SI_SUB_KTHREAD_VM, SI_ORDER_FIRST, kproc_start, &vm_kp);
 #endif
 
+/* Pagedaemon activity rates, in subdivisions of one second. */
+#define	VM_LAUNDER_RATE		10
+#define	VM_INACT_SCAN_RATE	2
 
 int vm_pageout_deficit;		/* Estimated number of pages deficit */
 int vm_pageout_wakeup_thresh;
@@ -161,6 +164,13 @@ static int vm_pageout_oom_seq = 12;
 bool vm_pageout_wanted;		/* Event on which pageout daemon sleeps */
 bool vm_pages_needed;		/* Are threads waiting for free pages? */
 
+/* Pending request for dirty page laundering. */
+static enum {
+	VM_LAUNDRY_IDLE,
+	VM_LAUNDRY_BACKGROUND,
+	VM_LAUNDRY_SHORTFALL
+} vm_laundry_request = VM_LAUNDRY_IDLE;
+
 #if !defined(NO_SWAPPING)
 static int vm_pageout_req_swapout;	/* XXX */
 static int vm_daemon_needed;
@@ -168,9 +178,7 @@ static struct mtx vm_daemon_mtx;
 /* Allow for use by vm_pageout before vm_daemon is initialized. */
 MTX_SYSINIT(vm_daemon, &vm_daemon_mtx, "vm daemon", MTX_DEF);
 #endif
-static int vm_max_launder = 32;
 static int vm_pageout_update_period;
-static int defer_swap_pageouts;
 static int disable_swap_pageouts;
 static int lowmem_period = 10;
 static time_t lowmem_uptime;
@@ -193,9 +201,6 @@ SYSCTL_INT(_vm, OID_AUTO, pageout_wakeup
 	CTLFLAG_RW, &vm_pageout_wakeup_thresh, 0,
 	"free page threshold for waking up the pageout daemon");
 
-SYSCTL_INT(_vm, OID_AUTO, max_launder,
-	CTLFLAG_RW, &vm_max_launder, 0, "Limit dirty flushes in pageout");
-
 SYSCTL_INT(_vm, OID_AUTO, pageout_update_period,
 	CTLFLAG_RW, &vm_pageout_update_period, 0,
 	"Maximum active LRU update period");
@@ -215,9 +220,6 @@ SYSCTL_INT(_vm, OID_AUTO, swap_idle_enab
 	CTLFLAG_RW, &vm_swap_idle_enabled, 0, "Allow swapout on idle criteria");
 #endif
 
-SYSCTL_INT(_vm, OID_AUTO, defer_swapspace_pageouts,
-	CTLFLAG_RW, &defer_swap_pageouts, 0, "Give preference to dirty pages in mem");
-
 SYSCTL_INT(_vm, OID_AUTO, disable_swapspace_pageouts,
 	CTLFLAG_RW, &disable_swap_pageouts, 0, "Disallow swapout of dirty pages");
 
@@ -229,6 +231,25 @@ SYSCTL_INT(_vm, OID_AUTO, pageout_oom_se
 	CTLFLAG_RW, &vm_pageout_oom_seq, 0,
 	"back-to-back calls to oom detector to start OOM");
 
+static int act_scan_laundry_weight = 3;
+SYSCTL_INT(_vm, OID_AUTO, act_scan_laundry_weight, CTLFLAG_RW,
+    &act_scan_laundry_weight, 0,
+    "weight given to clean vs. dirty pages in active queue scans");
+
+static u_int vm_background_launder_target;
+SYSCTL_UINT(_vm, OID_AUTO, background_launder_target, CTLFLAG_RW,
+    &vm_background_launder_target, 0,
+    "background laundering target, in pages");
+
+static u_int vm_background_launder_rate = 4096;
+SYSCTL_UINT(_vm, OID_AUTO, background_launder_rate, CTLFLAG_RW,
+    &vm_background_launder_rate, 0,
+    "background laundering rate, in kilobytes per second");
+
+static u_int vm_background_launder_max = 20 * 1024;
+SYSCTL_UINT(_vm, OID_AUTO, background_launder_max, CTLFLAG_RW,
+    &vm_background_launder_max, 0, "background laundering cap, in kilobytes");
+
 #define VM_PAGEOUT_PAGE_COUNT 16
 int vm_pageout_page_count = VM_PAGEOUT_PAGE_COUNT;
 
@@ -236,7 +257,11 @@ int vm_page_max_wired;		/* XXX max # of 
 SYSCTL_INT(_vm, OID_AUTO, max_wired,
 	CTLFLAG_RW, &vm_page_max_wired, 0, "System-wide limit to wired page count");
 
+static u_int isqrt(u_int num);
 static boolean_t vm_pageout_fallback_object_lock(vm_page_t, vm_page_t *);
+static int vm_pageout_launder(struct vm_domain *vmd, int launder,
+    bool in_shortfall);
+static void vm_pageout_laundry_worker(void *arg);
 #if !defined(NO_SWAPPING)
 static void vm_pageout_map_deactivate_pages(vm_map_t, long);
 static void vm_pageout_object_deactivate_pages(pmap_t, vm_object_t, long);
@@ -387,7 +412,7 @@ vm_pageout_cluster(vm_page_t m)
 
 	/*
 	 * We can cluster only if the page is not clean, busy, or held, and
-	 * the page is inactive.
+	 * the page is in the laundry queue.
 	 *
 	 * During heavy mmap/modification loads the pageout
 	 * daemon can really fragment the underlying file
@@ -413,7 +438,7 @@ more:
 			break;
 		}
 		vm_page_lock(p);
-		if (p->queue != PQ_INACTIVE ||
+		if (!vm_page_in_laundry(p) ||
 		    p->hold_count != 0) {	/* may be undergoing I/O */
 			vm_page_unlock(p);
 			ib = 0;
@@ -439,7 +464,7 @@ more:
 		if (p->dirty == 0)
 			break;
 		vm_page_lock(p);
-		if (p->queue != PQ_INACTIVE ||
+		if (!vm_page_in_laundry(p) ||
 		    p->hold_count != 0) {	/* may be undergoing I/O */
 			vm_page_unlock(p);
 			break;
@@ -519,23 +544,33 @@ vm_pageout_flush(vm_page_t *mc, int coun
 		    ("vm_pageout_flush: page %p is not write protected", mt));
 		switch (pageout_status[i]) {
 		case VM_PAGER_OK:
+			vm_page_lock(mt);
+			if (vm_page_in_laundry(mt))
+				vm_page_deactivate_noreuse(mt);
+			vm_page_unlock(mt);
+			/* FALLTHROUGH */
 		case VM_PAGER_PEND:
 			numpagedout++;
 			break;
 		case VM_PAGER_BAD:
 			/*
-			 * Page outside of range of object. Right now we
-			 * essentially lose the changes by pretending it
-			 * worked.
+			 * The page is outside the object's range.  We pretend
+			 * that the page out worked and clean the page, so the
+			 * changes will be lost if the page is reclaimed by
+			 * the page daemon.
 			 */
 			vm_page_undirty(mt);
+			vm_page_lock(mt);
+			if (vm_page_in_laundry(mt))
+				vm_page_deactivate_noreuse(mt);
+			vm_page_unlock(mt);
 			break;
 		case VM_PAGER_ERROR:
 		case VM_PAGER_FAIL:
 			/*
-			 * If page couldn't be paged out, then reactivate the
-			 * page so it doesn't clog the inactive list.  (We
-			 * will try paging out it again later).
+			 * If the page couldn't be paged out, then reactivate
+			 * it so that it doesn't clog the laundry and inactive
+			 * queues.  (We will try paging it out again later).
 			 */
 			vm_page_lock(mt);
 			vm_page_activate(mt);
@@ -617,10 +652,10 @@ vm_pageout_object_deactivate_pages(pmap_
 					act_delta = 1;
 				vm_page_aflag_clear(p, PGA_REFERENCED);
 			}
-			if (p->queue != PQ_ACTIVE && act_delta != 0) {
+			if (!vm_page_active(p) && act_delta != 0) {
 				vm_page_activate(p);
 				p->act_count += act_delta;
-			} else if (p->queue == PQ_ACTIVE) {
+			} else if (vm_page_active(p)) {
 				if (act_delta == 0) {
 					p->act_count -= min(p->act_count,
 					    ACT_DECLINE);
@@ -636,7 +671,7 @@ vm_pageout_object_deactivate_pages(pmap_
 						p->act_count += ACT_ADVANCE;
 					vm_page_requeue(p);
 				}
-			} else if (p->queue == PQ_INACTIVE)
+			} else if (vm_page_inactive(p))
 				pmap_remove_all(p);
 			vm_page_unlock(p);
 		}
@@ -739,7 +774,7 @@ vm_pageout_map_deactivate_pages(map, des
  * Returns 0 on success and an errno otherwise.
  */
 static int
-vm_pageout_clean(vm_page_t m)
+vm_pageout_clean(vm_page_t m, int *numpagedout)
 {
 	struct vnode *vp;
 	struct mount *mp;
@@ -797,7 +832,7 @@ vm_pageout_clean(vm_page_t m)
 		 * (3) reallocated to a different offset, or
 		 * (4) cleaned.
 		 */
-		if (m->queue != PQ_INACTIVE || m->object != object ||
+		if (!vm_page_in_laundry(m) || m->object != object ||
 		    m->pindex != pindex || m->dirty == 0) {
 			vm_page_unlock(m);
 			error = ENXIO;
@@ -821,7 +856,7 @@ vm_pageout_clean(vm_page_t m)
 	 * laundry.  If it is still in the laundry, then we
 	 * start the cleaning operation. 
 	 */
-	if (vm_pageout_cluster(m) == 0)
+	if ((*numpagedout = vm_pageout_cluster(m)) == 0)
 		error = EIO;
 
 unlock_all:
@@ -840,11 +875,390 @@ unlock_mp:
 }
 
 /*
+ * Attempt to launder the specified number of pages.
+ *
+ * Returns the number of pages successfully laundered.
+ */
+static int
+vm_pageout_launder(struct vm_domain *vmd, int launder, bool in_shortfall)
+{
+	struct vm_pagequeue *pq;
+	vm_object_t object;
+	vm_page_t m, next;
+	int act_delta, error, maxscan, numpagedout, starting_target;
+	int vnodes_skipped;
+	bool pageout_ok, queue_locked;
+
+	starting_target = launder;
+	vnodes_skipped = 0;
+
+	/*
+	 * Scan the laundry queue for pages eligible to be laundered.  We stop
+	 * once the target number of dirty pages have been laundered, or once
+	 * we've reached the end of the queue.  A single iteration of this loop
+	 * may cause more than one page to be laundered because of clustering.
+	 *
+	 * maxscan ensures that we don't re-examine requeued pages.  Any
+	 * additional pages written as part of a cluster are subtracted from
+	 * maxscan since they must be taken from the laundry queue.
+	 */
+	pq = &vmd->vmd_pagequeues[PQ_LAUNDRY];
+	maxscan = pq->pq_cnt;
+
+	vm_pagequeue_lock(pq);
+	queue_locked = true;
+	for (m = TAILQ_FIRST(&pq->pq_pl);
+	    m != NULL && maxscan-- > 0 && launder > 0;
+	    m = next) {
+		vm_pagequeue_assert_locked(pq);
+		KASSERT(queue_locked, ("unlocked laundry queue"));
+		KASSERT(vm_page_in_laundry(m),
+		    ("page %p has an inconsistent queue", m));
+		next = TAILQ_NEXT(m, plinks.q);
+		if ((m->flags & PG_MARKER) != 0)
+			continue;
+		KASSERT((m->flags & PG_FICTITIOUS) == 0,
+		    ("PG_FICTITIOUS page %p cannot be in laundry queue", m));
+		KASSERT((m->oflags & VPO_UNMANAGED) == 0,
+		    ("VPO_UNMANAGED page %p cannot be in laundry queue", m));
+		if (!vm_pageout_page_lock(m, &next) || m->hold_count != 0) {
+			vm_page_unlock(m);
+			continue;
+		}
+		object = m->object;
+		if ((!VM_OBJECT_TRYWLOCK(object) &&
+		    (!vm_pageout_fallback_object_lock(m, &next) ||
+		    m->hold_count != 0)) || vm_page_busied(m)) {
+			VM_OBJECT_WUNLOCK(object);
+			vm_page_unlock(m);
+			continue;
+		}
+
+		/*
+		 * Unlock the laundry queue, invalidating the 'next' pointer.
+		 * Use a marker to remember our place in the laundry queue.
+		 */
+		TAILQ_INSERT_AFTER(&pq->pq_pl, m, &vmd->vmd_laundry_marker,
+		    plinks.q);
+		vm_pagequeue_unlock(pq);
+		queue_locked = false;
+
+		/*
+		 * Invalid pages can be easily freed.  They cannot be
+		 * mapped; vm_page_free() asserts this.
+		 */
+		if (m->valid == 0)
+			goto free_page;
+
+		/*
+		 * If the page has been referenced and the object is not dead,
+		 * reactivate or requeue the page depending on whether the
+		 * object is mapped.
+		 */
+		if ((m->aflags & PGA_REFERENCED) != 0) {
+			vm_page_aflag_clear(m, PGA_REFERENCED);
+			act_delta = 1;
+		} else
+			act_delta = 0;
+		if (object->ref_count != 0)
+			act_delta += pmap_ts_referenced(m);
+		else {
+			KASSERT(!pmap_page_is_mapped(m),
+			    ("page %p is mapped", m));
+		}
+		if (act_delta != 0) {
+			if (object->ref_count != 0) {
+				PCPU_INC(cnt.v_reactivated);
+				vm_page_activate(m);
+
+				/*
+				 * Increase the activation count if the page
+				 * was referenced while in the laundry queue.
+				 * This makes it less likely that the page will
+				 * be returned prematurely to the inactive
+				 * queue.
+ 				 */
+				m->act_count += act_delta + ACT_ADVANCE;
+
+				/*
+				 * If this was a background laundering, count
+				 * activated pages towards our target.  The
+				 * purpose of background laundering is to ensure
+				 * that pages are eventually cycled through the
+				 * laundry queue, and an activation is a valid
+				 * way out.
+				 */
+				if (!in_shortfall)
+					launder--;
+				goto drop_page;
+			} else if ((object->flags & OBJ_DEAD) == 0)
+				goto requeue_page;
+		}
+
+		/*
+		 * If the page appears to be clean at the machine-independent
+		 * layer, then remove all of its mappings from the pmap in
+		 * anticipation of freeing it.  If, however, any of the page's
+		 * mappings allow write access, then the page may still be
+		 * modified until the last of those mappings are removed.
+		 */
+		if (object->ref_count != 0) {
+			vm_page_test_dirty(m);
+			if (m->dirty == 0)
+				pmap_remove_all(m);
+		}
+
+		/*
+		 * Clean pages are freed, and dirty pages are paged out unless
+		 * they belong to a dead object.  Requeueing dirty pages from
+		 * dead objects is pointless, as they are being paged out and
+		 * freed by the thread that destroyed the object.
+		 */
+		if (m->dirty == 0) {
+free_page:
+			vm_page_free(m);
+			PCPU_INC(cnt.v_dfree);
+		} else if ((object->flags & OBJ_DEAD) == 0) {
+			if (object->type != OBJT_SWAP &&
+			    object->type != OBJT_DEFAULT)
+				pageout_ok = true;
+			else if (disable_swap_pageouts)
+				pageout_ok = false;
+			else
+				pageout_ok = true;
+			if (!pageout_ok) {
+requeue_page:
+				vm_pagequeue_lock(pq);
+				queue_locked = true;
+				vm_page_requeue_locked(m);
+				goto drop_page;
+			}
+
+			/*
+			 * Form a cluster with adjacent, dirty pages from the
+			 * same object, and page out that entire cluster.
+			 *
+			 * The adjacent, dirty pages must also be in the
+			 * laundry.  However, their mappings are not checked
+			 * for new references.  Consequently, a recently
+			 * referenced page may be paged out.  However, that
+			 * page will not be prematurely reclaimed.  After page
+			 * out, the page will be placed in the inactive queue,
+			 * where any new references will be detected and the
+			 * page reactivated.
+			 */
+			error = vm_pageout_clean(m, &numpagedout);
+			if (error == 0) {
+				launder -= numpagedout;

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

From owner-svn-src-user@freebsd.org  Mon May  1 01:50:30 2017
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 01:50:27 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317621 - in user/markj/PQ_LAUNDRY_11/sys:
 cddl/compat/opensolaris/sys cddl/contrib/opensolaris/uts/common/fs/zfs
 fs/tmpfs kern vm

Author: markj
Date: Mon May  1 01:50:27 2017
New Revision: 317621
URL: https://svnweb.freebsd.org/changeset/base/317621

Log:
  MFC r308691 (by alc):
  Remove most of the code for implementing PG_CACHED pages.

Modified:
  user/markj/PQ_LAUNDRY_11/sys/cddl/compat/opensolaris/sys/vnode.h
  user/markj/PQ_LAUNDRY_11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
  user/markj/PQ_LAUNDRY_11/sys/fs/tmpfs/tmpfs_subr.c
  user/markj/PQ_LAUNDRY_11/sys/kern/kern_exec.c
  user/markj/PQ_LAUNDRY_11/sys/kern/uipc_shm.c
  user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_mmap.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_phys.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h
  user/markj/PQ_LAUNDRY_11/sys/vm/vnode_pager.c
Directory Properties:
  user/markj/PQ_LAUNDRY_11/   (props changed)

Modified: user/markj/PQ_LAUNDRY_11/sys/cddl/compat/opensolaris/sys/vnode.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/cddl/compat/opensolaris/sys/vnode.h	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/cddl/compat/opensolaris/sys/vnode.h	Mon May  1 01:50:27 2017	(r317621)
@@ -75,8 +75,7 @@ vn_is_readonly(vnode_t *vp)
 #define	vn_mountedvfs(vp)	((vp)->v_mountedhere)
 #define	vn_has_cached_data(vp)	\
 	((vp)->v_object != NULL && \
-	 ((vp)->v_object->resident_page_count > 0 || \
-	  !vm_object_cache_is_empty((vp)->v_object)))
+	 (vp)->v_object->resident_page_count > 0)
 #define	vn_exists(vp)		do { } while (0)
 #define	vn_invalid(vp)		do { } while (0)
 #define	vn_renamepath(tdvp, svp, tnm, lentnm)	do { } while (0)

Modified: user/markj/PQ_LAUNDRY_11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c	Mon May  1 01:50:27 2017	(r317621)
@@ -426,10 +426,6 @@ page_busy(vnode_t *vp, int64_t start, in
 				continue;
 			}
 			vm_page_sbusy(pp);
-		} else if (pp == NULL) {
-			pp = vm_page_alloc(obj, OFF_TO_IDX(start),
-			    VM_ALLOC_SYSTEM | VM_ALLOC_IFCACHED |
-			    VM_ALLOC_SBUSY);
 		} else {
 			ASSERT(pp != NULL && !pp->valid);
 			pp = NULL;

Modified: user/markj/PQ_LAUNDRY_11/sys/fs/tmpfs/tmpfs_subr.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/fs/tmpfs/tmpfs_subr.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/fs/tmpfs/tmpfs_subr.c	Mon May  1 01:50:27 2017	(r317621)
@@ -1401,12 +1401,9 @@ retry:
 					VM_WAIT;
 					VM_OBJECT_WLOCK(uobj);
 					goto retry;
-				} else if (m->valid != VM_PAGE_BITS_ALL)
-					rv = vm_pager_get_pages(uobj, &m, 1,
-					    NULL, NULL);
-				else
-					/* A cached page was reactivated. */
-					rv = VM_PAGER_OK;
+				}
+				rv = vm_pager_get_pages(uobj, &m, 1, NULL,
+				    NULL);
 				vm_page_lock(m);
 				if (rv == VM_PAGER_OK) {
 					vm_page_deactivate(m);

Modified: user/markj/PQ_LAUNDRY_11/sys/kern/kern_exec.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/kern/kern_exec.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/kern/kern_exec.c	Mon May  1 01:50:27 2017	(r317621)
@@ -1006,7 +1006,7 @@ exec_map_first_page(imgp)
 					break;
 			} else {
 				ma[i] = vm_page_alloc(object, i,
-				    VM_ALLOC_NORMAL | VM_ALLOC_IFNOTCACHED);
+				    VM_ALLOC_NORMAL);
 				if (ma[i] == NULL)
 					break;
 			}

Modified: user/markj/PQ_LAUNDRY_11/sys/kern/uipc_shm.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/kern/uipc_shm.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/kern/uipc_shm.c	Mon May  1 01:50:27 2017	(r317621)
@@ -455,12 +455,9 @@ retry:
 					VM_WAIT;
 					VM_OBJECT_WLOCK(object);
 					goto retry;
-				} else if (m->valid != VM_PAGE_BITS_ALL)
-					rv = vm_pager_get_pages(object, &m, 1,
-					    NULL, NULL);
-				else
-					/* A cached page was reactivated. */
-					rv = VM_PAGER_OK;
+				}
+				rv = vm_pager_get_pages(object, &m, 1, NULL,
+				    NULL);
 				vm_page_lock(m);
 				if (rv == VM_PAGER_OK) {
 					vm_page_deactivate(m);

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/swap_pager.c	Mon May  1 01:50:27 2017	(r317621)
@@ -1126,7 +1126,7 @@ swap_pager_getpages(vm_object_t object, 
 	if (shift != 0) {
 		for (i = 1; i <= shift; i++) {
 			p = vm_page_alloc(object, m[0]->pindex - i,
-			    VM_ALLOC_NORMAL | VM_ALLOC_IFNOTCACHED);
+			    VM_ALLOC_NORMAL);
 			if (p == NULL) {
 				/* Shift allocated pages to the left. */
 				for (j = 0; j < i - 1; j++)
@@ -1144,8 +1144,7 @@ swap_pager_getpages(vm_object_t object, 
 	if (rahead != NULL) {
 		for (i = 0; i < *rahead; i++) {
 			p = vm_page_alloc(object,
-			    m[reqcount - 1]->pindex + i + 1,
-			    VM_ALLOC_NORMAL | VM_ALLOC_IFNOTCACHED);
+			    m[reqcount - 1]->pindex + i + 1, VM_ALLOC_NORMAL);
 			if (p == NULL)
 				break;
 			bp->b_pages[shift + reqcount + i] = p;

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_fault.c	Mon May  1 01:50:27 2017	(r317621)
@@ -756,8 +756,7 @@ RetryFault:;
 				unlock_and_deallocate(&fs);
 				VM_WAITPFAULT;
 				goto RetryFault;
-			} else if (fs.m->valid == VM_PAGE_BITS_ALL)
-				break;
+			}
 		}
 
 readrest:

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_mmap.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_mmap.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_mmap.c	Mon May  1 01:50:27 2017	(r317621)
@@ -849,9 +849,6 @@ RestartScan:
 					pindex = OFF_TO_IDX(current->offset +
 					    (addr - current->start));
 					m = vm_page_lookup(object, pindex);
-					if (m == NULL &&
-					    vm_page_is_cached(object, pindex))
-						mincoreinfo = MINCORE_INCORE;
 					if (m != NULL && m->valid == 0)
 						m = NULL;
 					if (m != NULL)

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:50:27 2017	(r317621)
@@ -178,9 +178,6 @@ vm_object_zdtor(void *mem, int size, voi
 	    ("object %p has reservations",
 	    object));
 #endif
-	KASSERT(vm_object_cache_is_empty(object),
-	    ("object %p has cached pages",
-	    object));
 	KASSERT(object->paging_in_progress == 0,
 	    ("object %p paging_in_progress = %d",
 	    object, object->paging_in_progress));
@@ -212,8 +209,6 @@ vm_object_zinit(void *mem, int size, int
 	object->paging_in_progress = 0;
 	object->resident_page_count = 0;
 	object->shadow_count = 0;
-	object->cache.rt_root = 0;
-	object->cache.rt_flags = 0;
 
 	mtx_lock(&vm_object_list_mtx);
 	TAILQ_INSERT_TAIL(&vm_object_list, object, object_list);
@@ -792,8 +787,6 @@ vm_object_terminate(vm_object_t object)
 	if (__predict_false(!LIST_EMPTY(&object->rvq)))
 		vm_reserv_break_all(object);
 #endif
-	if (__predict_false(!vm_object_cache_is_empty(object)))
-		vm_page_cache_free(object, 0, 0);
 
 	KASSERT(object->cred == NULL || object->type == OBJT_DEFAULT ||
 	    object->type == OBJT_SWAP,
@@ -1135,13 +1128,6 @@ shadowlookup:
 		} else if ((tobject->flags & OBJ_UNMANAGED) != 0)
 			goto unlock_tobject;
 		m = vm_page_lookup(tobject, tpindex);
-		if (m == NULL && advise == MADV_WILLNEED) {
-			/*
-			 * If the page is cached, reactivate it.
-			 */
-			m = vm_page_alloc(tobject, tpindex, VM_ALLOC_IFCACHED |
-			    VM_ALLOC_NOBUSY);
-		}
 		if (m == NULL) {
 			/*
 			 * There may be swap even if there is no backing page
@@ -1406,19 +1392,6 @@ retry:
 		swap_pager_copy(orig_object, new_object, offidxstart, 0);
 		TAILQ_FOREACH(m, &new_object->memq, listq)
 			vm_page_xunbusy(m);
-
-		/*
-		 * Transfer any cached pages from orig_object to new_object.
-		 * If swap_pager_copy() found swapped out pages within the
-		 * specified range of orig_object, then it changed
-		 * new_object's type to OBJT_SWAP when it transferred those
-		 * pages to new_object.  Otherwise, new_object's type
-		 * should still be OBJT_DEFAULT and orig_object should not
-		 * contain any cached pages within the specified range.
-		 */
-		if (__predict_false(!vm_object_cache_is_empty(orig_object)))
-			vm_page_cache_transfer(orig_object, offidxstart,
-			    new_object);
 	}
 	VM_OBJECT_WUNLOCK(orig_object);
 	VM_OBJECT_WUNLOCK(new_object);
@@ -1758,13 +1731,6 @@ vm_object_collapse(vm_object_t object)
 				    backing_object,
 				    object,
 				    OFF_TO_IDX(object->backing_object_offset), TRUE);
-
-				/*
-				 * Free any cached pages from backing_object.
-				 */
-				if (__predict_false(
-				    !vm_object_cache_is_empty(backing_object)))
-					vm_page_cache_free(backing_object, 0, 0);
 			}
 			/*
 			 * Object now shadows whatever backing_object did.
@@ -1893,7 +1859,7 @@ vm_object_page_remove(vm_object_t object
 	    (options & (OBJPR_CLEANONLY | OBJPR_NOTMAPPED)) == OBJPR_NOTMAPPED,
 	    ("vm_object_page_remove: illegal options for object %p", object));
 	if (object->resident_page_count == 0)
-		goto skipmemq;
+		return;
 	vm_object_pip_add(object, 1);
 again:
 	p = vm_page_find_least(object, start);
@@ -1950,9 +1916,6 @@ next:
 		vm_page_unlock(p);
 	}
 	vm_object_pip_wakeup(object);
-skipmemq:
-	if (__predict_false(!vm_object_cache_is_empty(object)))
-		vm_page_cache_free(object, start, end);
 }
 
 /*

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h	Mon May  1 01:50:27 2017	(r317621)
@@ -118,7 +118,6 @@ struct vm_object {
 	vm_ooffset_t backing_object_offset;/* Offset in backing object */
 	TAILQ_ENTRY(vm_object) pager_object_list; /* list of all objects of this pager type */
 	LIST_HEAD(, vm_reserv) rvq;	/* list of reservations */
-	struct vm_radix cache;		/* (o + f) root of the cache page radix trie */
 	void *handle;
 	union {
 		/*
@@ -306,13 +305,6 @@ void vm_object_pip_wakeup(vm_object_t ob
 void vm_object_pip_wakeupn(vm_object_t object, short i);
 void vm_object_pip_wait(vm_object_t object, char *waitid);
 
-static __inline boolean_t
-vm_object_cache_is_empty(vm_object_t object)
-{
-
-	return (vm_radix_is_empty(&object->cache));
-}
-
 void umtx_shm_object_init(vm_object_t object);
 void umtx_shm_object_terminated(vm_object_t object);
 extern int umtx_shm_vnobj_persistent;

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:50:27 2017	(r317621)
@@ -155,8 +155,7 @@ static int vm_pageout_pages_needed;
 
 static uma_zone_t fakepg_zone;
 
-static struct vnode *vm_page_alloc_init(vm_page_t m);
-static void vm_page_cache_turn_free(vm_page_t m);
+static void vm_page_alloc_check(vm_page_t m);
 static void vm_page_clear_dirty_mask(vm_page_t m, vm_page_bits_t pagebits);
 static void vm_page_enqueue(uint8_t queue, vm_page_t m);
 static void vm_page_free_wakeup(void);
@@ -1140,9 +1139,7 @@ void
 vm_page_dirty_KBI(vm_page_t m)
 {
 
-	/* These assertions refer to this operation by its public name. */
-	KASSERT((m->flags & PG_CACHED) == 0,
-	    ("vm_page_dirty: page in cache!"));
+	/* Refer to this operation by its public name. */
 	KASSERT(m->valid == VM_PAGE_BITS_ALL,
 	    ("vm_page_dirty: page is invalid!"));
 	m->dirty = VM_PAGE_BITS_ALL;
@@ -1485,142 +1482,6 @@ vm_page_rename(vm_page_t m, vm_object_t 
 }
 
 /*
- *	Convert all of the given object's cached pages that have a
- *	pindex within the given range into free pages.  If the value
- *	zero is given for "end", then the range's upper bound is
- *	infinity.  If the given object is backed by a vnode and it
- *	transitions from having one or more cached pages to none, the
- *	vnode's hold count is reduced.
- */
-void
-vm_page_cache_free(vm_object_t object, vm_pindex_t start, vm_pindex_t end)
-{
-	vm_page_t m;
-	boolean_t empty;
-
-	mtx_lock(&vm_page_queue_free_mtx);
-	if (__predict_false(vm_radix_is_empty(&object->cache))) {
-		mtx_unlock(&vm_page_queue_free_mtx);
-		return;
-	}
-	while ((m = vm_radix_lookup_ge(&object->cache, start)) != NULL) {
-		if (end != 0 && m->pindex >= end)
-			break;
-		vm_radix_remove(&object->cache, m->pindex);
-		vm_page_cache_turn_free(m);
-	}
-	empty = vm_radix_is_empty(&object->cache);
-	mtx_unlock(&vm_page_queue_free_mtx);
-	if (object->type == OBJT_VNODE && empty)
-		vdrop(object->handle);
-}
-
-/*
- *	Returns the cached page that is associated with the given
- *	object and offset.  If, however, none exists, returns NULL.
- *
- *	The free page queue must be locked.
- */
-static inline vm_page_t
-vm_page_cache_lookup(vm_object_t object, vm_pindex_t pindex)
-{
-
-	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
-	return (vm_radix_lookup(&object->cache, pindex));
-}
-
-/*
- *	Remove the given cached page from its containing object's
- *	collection of cached pages.
- *
- *	The free page queue must be locked.
- */
-static void
-vm_page_cache_remove(vm_page_t m)
-{
-
-	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
-	KASSERT((m->flags & PG_CACHED) != 0,
-	    ("vm_page_cache_remove: page %p is not cached", m));
-	vm_radix_remove(&m->object->cache, m->pindex);
-	m->object = NULL;
-	vm_cnt.v_cache_count--;
-}
-
-/*
- *	Transfer all of the cached pages with offset greater than or
- *	equal to 'offidxstart' from the original object's cache to the
- *	new object's cache.  However, any cached pages with offset
- *	greater than or equal to the new object's size are kept in the
- *	original object.  Initially, the new object's cache must be
- *	empty.  Offset 'offidxstart' in the original object must
- *	correspond to offset zero in the new object.
- *
- *	The new object must be locked.
- */
-void
-vm_page_cache_transfer(vm_object_t orig_object, vm_pindex_t offidxstart,
-    vm_object_t new_object)
-{
-	vm_page_t m;
-
-	/*
-	 * Insertion into an object's collection of cached pages
-	 * requires the object to be locked.  In contrast, removal does
-	 * not.
-	 */
-	VM_OBJECT_ASSERT_WLOCKED(new_object);
-	KASSERT(vm_radix_is_empty(&new_object->cache),
-	    ("vm_page_cache_transfer: object %p has cached pages",
-	    new_object));
-	mtx_lock(&vm_page_queue_free_mtx);
-	while ((m = vm_radix_lookup_ge(&orig_object->cache,
-	    offidxstart)) != NULL) {
-		/*
-		 * Transfer all of the pages with offset greater than or
-		 * equal to 'offidxstart' from the original object's
-		 * cache to the new object's cache.
-		 */
-		if ((m->pindex - offidxstart) >= new_object->size)
-			break;
-		vm_radix_remove(&orig_object->cache, m->pindex);
-		/* Update the page's object and offset. */
-		m->object = new_object;
-		m->pindex -= offidxstart;
-		if (vm_radix_insert(&new_object->cache, m))
-			vm_page_cache_turn_free(m);
-	}
-	mtx_unlock(&vm_page_queue_free_mtx);
-}
-
-/*
- *	Returns TRUE if a cached page is associated with the given object and
- *	offset, and FALSE otherwise.
- *
- *	The object must be locked.
- */
-boolean_t
-vm_page_is_cached(vm_object_t object, vm_pindex_t pindex)
-{
-	vm_page_t m;
-
-	/*
-	 * Insertion into an object's collection of cached pages requires the
-	 * object to be locked.  Therefore, if the object is locked and the
-	 * object's collection is empty, there is no need to acquire the free
-	 * page queues lock in order to prove that the specified page doesn't
-	 * exist.
-	 */
-	VM_OBJECT_ASSERT_WLOCKED(object);
-	if (__predict_true(vm_object_cache_is_empty(object)))
-		return (FALSE);
-	mtx_lock(&vm_page_queue_free_mtx);
-	m = vm_page_cache_lookup(object, pindex);
-	mtx_unlock(&vm_page_queue_free_mtx);
-	return (m != NULL);
-}
-
-/*
  *	vm_page_alloc:
  *
  *	Allocate and return a page that is associated with the specified
@@ -1636,9 +1497,6 @@ vm_page_is_cached(vm_object_t object, vm
  *	optional allocation flags:
  *	VM_ALLOC_COUNT(number)	the number of additional pages that the caller
  *				intends to allocate
- *	VM_ALLOC_IFCACHED	return page only if it is cached
- *	VM_ALLOC_IFNOTCACHED	return NULL, do not reactivate if the page
- *				is cached
  *	VM_ALLOC_NOBUSY		do not exclusive busy the page
  *	VM_ALLOC_NODUMP		do not include the page in a kernel core dump
  *	VM_ALLOC_NOOBJ		page is not associated with an object and
@@ -1652,8 +1510,6 @@ vm_page_is_cached(vm_object_t object, vm
 vm_page_t
 vm_page_alloc(vm_object_t object, vm_pindex_t pindex, int req)
 {
-	struct vnode *vp = NULL;
-	vm_object_t m_object;
 	vm_page_t m, mpred;
 	int flags, req_class;
 
@@ -1696,31 +1552,12 @@ vm_page_alloc(vm_object_t object, vm_pin
 		 * Allocate from the free queue if the number of free pages
 		 * exceeds the minimum for the request class.
 		 */
-		if (object != NULL &&
-		    (m = vm_page_cache_lookup(object, pindex)) != NULL) {
-			if ((req & VM_ALLOC_IFNOTCACHED) != 0) {
-				mtx_unlock(&vm_page_queue_free_mtx);
-				return (NULL);
-			}
-			if (vm_phys_unfree_page(m))
-				vm_phys_set_pool(VM_FREEPOOL_DEFAULT, m, 0);
-#if VM_NRESERVLEVEL > 0
-			else if (!vm_reserv_reactivate_page(m))
-#else
-			else
-#endif
-				panic("vm_page_alloc: cache page %p is missing"
-				    " from the free queue", m);
-		} else if ((req & VM_ALLOC_IFCACHED) != 0) {
-			mtx_unlock(&vm_page_queue_free_mtx);
-			return (NULL);
 #if VM_NRESERVLEVEL > 0
-		} else if (object == NULL || (object->flags & (OBJ_COLORED |
+		if (object == NULL || (object->flags & (OBJ_COLORED |
 		    OBJ_FICTITIOUS)) != OBJ_COLORED || (m =
-		    vm_reserv_alloc_page(object, pindex, mpred)) == NULL) {
-#else
-		} else {
+		    vm_reserv_alloc_page(object, pindex, mpred)) == NULL)
 #endif
+		{
 			m = vm_phys_alloc_pages(object != NULL ?
 			    VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT, 0);
 #if VM_NRESERVLEVEL > 0
@@ -1746,35 +1583,11 @@ vm_page_alloc(vm_object_t object, vm_pin
 	 *  At this point we had better have found a good page.
 	 */
 	KASSERT(m != NULL, ("vm_page_alloc: missing page"));
-	KASSERT(m->queue == PQ_NONE,
-	    ("vm_page_alloc: page %p has unexpected queue %d", m, m->queue));
-	KASSERT(m->wire_count == 0, ("vm_page_alloc: page %p is wired", m));
-	KASSERT(m->hold_count == 0, ("vm_page_alloc: page %p is held", m));
-	KASSERT(!vm_page_busied(m), ("vm_page_alloc: page %p is busy", m));
-	KASSERT(m->dirty == 0, ("vm_page_alloc: page %p is dirty", m));
-	KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT,
-	    ("vm_page_alloc: page %p has unexpected memattr %d", m,
-	    pmap_page_get_memattr(m)));
-	if ((m->flags & PG_CACHED) != 0) {
-		KASSERT((m->flags & PG_ZERO) == 0,
-		    ("vm_page_alloc: cached page %p is PG_ZERO", m));
-		KASSERT(m->valid != 0,
-		    ("vm_page_alloc: cached page %p is invalid", m));
-		if (m->object != object || m->pindex != pindex)
-			m->valid = 0;
-		m_object = m->object;
-		vm_page_cache_remove(m);
-		if (m_object->type == OBJT_VNODE &&
-		    vm_object_cache_is_empty(m_object))
-			vp = m_object->handle;
-	} else {
-		KASSERT(m->valid == 0,
-		    ("vm_page_alloc: free page %p is valid", m));
-		vm_phys_freecnt_adj(m, -1);
-		if ((m->flags & PG_ZERO) != 0)
-			vm_page_zero_count--;
-	}
+	vm_phys_freecnt_adj(m, -1);
+	if ((m->flags & PG_ZERO) != 0)
+		vm_page_zero_count--;
 	mtx_unlock(&vm_page_queue_free_mtx);
+	vm_page_alloc_check(m);
 
 	/*
 	 * Initialize the page.  Only the PG_ZERO flag is inherited.
@@ -1806,9 +1619,6 @@ vm_page_alloc(vm_object_t object, vm_pin
 
 	if (object != NULL) {
 		if (vm_page_insert_after(m, object, pindex, mpred)) {
-			/* See the comment below about hold count. */
-			if (vp != NULL)
-				vdrop(vp);
 			pagedaemon_wakeup();
 			if (req & VM_ALLOC_WIRED) {
 				atomic_subtract_int(&vm_cnt.v_wire_count, 1);
@@ -1829,15 +1639,6 @@ vm_page_alloc(vm_object_t object, vm_pin
 		m->pindex = pindex;
 
 	/*
-	 * The following call to vdrop() must come after the above call
-	 * to vm_page_insert() in case both affect the same object and
-	 * vnode.  Otherwise, the affected vnode's hold count could
-	 * temporarily become zero.
-	 */
-	if (vp != NULL)
-		vdrop(vp);
-
-	/*
 	 * Don't wakeup too often - wakeup the pageout daemon when
 	 * we would be nearly out of memory.
 	 */
@@ -1847,16 +1648,6 @@ vm_page_alloc(vm_object_t object, vm_pin
 	return (m);
 }
 
-static void
-vm_page_alloc_contig_vdrop(struct spglist *lst)
-{
-
-	while (!SLIST_EMPTY(lst)) {
-		vdrop((struct vnode *)SLIST_FIRST(lst)-> plinks.s.pv);
-		SLIST_REMOVE_HEAD(lst, plinks.s.ss);
-	}
-}
-
 /*
  *	vm_page_alloc_contig:
  *
@@ -1901,8 +1692,6 @@ vm_page_alloc_contig(vm_object_t object,
     u_long npages, vm_paddr_t low, vm_paddr_t high, u_long alignment,
     vm_paddr_t boundary, vm_memattr_t memattr)
 {
-	struct vnode *drop;
-	struct spglist deferred_vdrop_list;
 	vm_page_t m, m_tmp, m_ret;
 	u_int flags;
 	int req_class;
@@ -1928,7 +1717,6 @@ vm_page_alloc_contig(vm_object_t object,
 	if (curproc == pageproc && req_class != VM_ALLOC_INTERRUPT)
 		req_class = VM_ALLOC_SYSTEM;
 
-	SLIST_INIT(&deferred_vdrop_list);
 	mtx_lock(&vm_page_queue_free_mtx);
 	if (vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages +
 	    vm_cnt.v_free_reserved || (req_class == VM_ALLOC_SYSTEM &&
@@ -1950,17 +1738,7 @@ retry:
 		return (NULL);
 	}
 	if (m_ret != NULL)
-		for (m = m_ret; m < &m_ret[npages]; m++) {
-			drop = vm_page_alloc_init(m);
-			if (drop != NULL) {
-				/*
-				 * Enqueue the vnode for deferred vdrop().
-				 */
-				m->plinks.s.pv = drop;
-				SLIST_INSERT_HEAD(&deferred_vdrop_list, m,
-				    plinks.s.ss);
-			}
-		}
+		vm_phys_freecnt_adj(m_ret, -npages);
 	else {
 #if VM_NRESERVLEVEL > 0
 		if (vm_reserv_reclaim_contig(npages, low, high, alignment,
@@ -1968,9 +1746,14 @@ retry:
 			goto retry;
 #endif
 	}
+	for (m = m_ret; m < &m_ret[npages]; m++)
+		if ((m->flags & PG_ZERO) != 0)
+			vm_page_zero_count--;
 	mtx_unlock(&vm_page_queue_free_mtx);
 	if (m_ret == NULL)
 		return (NULL);
+	for (m = m_ret; m < &m_ret[npages]; m++)
+		vm_page_alloc_check(m);
 
 	/*
 	 * Initialize the pages.  Only the PG_ZERO flag is inherited.
@@ -2003,8 +1786,6 @@ retry:
 		m->oflags = VPO_UNMANAGED;
 		if (object != NULL) {
 			if (vm_page_insert(m, object, pindex)) {
-				vm_page_alloc_contig_vdrop(
-				    &deferred_vdrop_list);
 				if (vm_paging_needed())
 					pagedaemon_wakeup();
 				if ((req & VM_ALLOC_WIRED) != 0)
@@ -2029,59 +1810,28 @@ retry:
 			pmap_page_set_memattr(m, memattr);
 		pindex++;
 	}
-	vm_page_alloc_contig_vdrop(&deferred_vdrop_list);
 	if (vm_paging_needed())
 		pagedaemon_wakeup();
 	return (m_ret);
 }
 
 /*
- * Initialize a page that has been freshly dequeued from a freelist.
- * The caller has to drop the vnode returned, if it is not NULL.
- *
- * This function may only be used to initialize unmanaged pages.
- *
- * To be called with vm_page_queue_free_mtx held.
+ * Check a page that has been freshly dequeued from a freelist.
  */
-static struct vnode *
-vm_page_alloc_init(vm_page_t m)
+static void
+vm_page_alloc_check(vm_page_t m)
 {
-	struct vnode *drop;
-	vm_object_t m_object;
 
 	KASSERT(m->queue == PQ_NONE,
-	    ("vm_page_alloc_init: page %p has unexpected queue %d",
-	    m, m->queue));
-	KASSERT(m->wire_count == 0,
-	    ("vm_page_alloc_init: page %p is wired", m));
-	KASSERT(m->hold_count == 0,
-	    ("vm_page_alloc_init: page %p is held", m));
-	KASSERT(!vm_page_busied(m),
-	    ("vm_page_alloc_init: page %p is busy", m));
-	KASSERT(m->dirty == 0,
-	    ("vm_page_alloc_init: page %p is dirty", m));
+	    ("page %p has unexpected queue %d", m, m->queue));
+	KASSERT(m->wire_count == 0, ("page %p is wired", m));
+	KASSERT(m->hold_count == 0, ("page %p is held", m));
+	KASSERT(!vm_page_busied(m), ("page %p is busy", m));
+	KASSERT(m->dirty == 0, ("page %p is dirty", m));
 	KASSERT(pmap_page_get_memattr(m) == VM_MEMATTR_DEFAULT,
-	    ("vm_page_alloc_init: page %p has unexpected memattr %d",
+	    ("page %p has unexpected memattr %d",
 	    m, pmap_page_get_memattr(m)));
-	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
-	drop = NULL;
-	if ((m->flags & PG_CACHED) != 0) {
-		KASSERT((m->flags & PG_ZERO) == 0,
-		    ("vm_page_alloc_init: cached page %p is PG_ZERO", m));
-		m->valid = 0;
-		m_object = m->object;
-		vm_page_cache_remove(m);
-		if (m_object->type == OBJT_VNODE &&
-		    vm_object_cache_is_empty(m_object))
-			drop = m_object->handle;
-	} else {
-		KASSERT(m->valid == 0,
-		    ("vm_page_alloc_init: free page %p is valid", m));
-		vm_phys_freecnt_adj(m, -1);
-		if ((m->flags & PG_ZERO) != 0)
-			vm_page_zero_count--;
-	}
-	return (drop);
+	KASSERT(m->valid == 0, ("free page %p is valid", m));
 }
 
 /*
@@ -2107,7 +1857,6 @@ vm_page_alloc_init(vm_page_t m)
 vm_page_t
 vm_page_alloc_freelist(int flind, int req)
 {
-	struct vnode *drop;
 	vm_page_t m;
 	u_int flags;
 	int req_class;
@@ -2141,8 +1890,11 @@ vm_page_alloc_freelist(int flind, int re
 		mtx_unlock(&vm_page_queue_free_mtx);
 		return (NULL);
 	}
-	drop = vm_page_alloc_init(m);
+	vm_phys_freecnt_adj(m, -1);
+	if ((m->flags & PG_ZERO) != 0)
+		vm_page_zero_count--;
 	mtx_unlock(&vm_page_queue_free_mtx);
+	vm_page_alloc_check(m);
 
 	/*
 	 * Initialize the page.  Only the PG_ZERO flag is inherited.
@@ -2162,8 +1914,6 @@ vm_page_alloc_freelist(int flind, int re
 	}
 	/* Unmanaged pages don't use "act_count". */
 	m->oflags = VPO_UNMANAGED;
-	if (drop != NULL)
-		vdrop(drop);
 	if (vm_paging_needed())
 		pagedaemon_wakeup();
 	return (m);
@@ -2289,38 +2039,8 @@ retry:
 			/* Don't care: PG_NODUMP, PG_ZERO. */
 			if (object->type != OBJT_DEFAULT &&
 			    object->type != OBJT_SWAP &&
-			    object->type != OBJT_VNODE)
+			    object->type != OBJT_VNODE) {
 				run_ext = 0;
-			else if ((m->flags & PG_CACHED) != 0 ||
-			    m != vm_page_lookup(object, m->pindex)) {
-				/*
-				 * The page is cached or recently converted
-				 * from cached to free.
-				 */
-#if VM_NRESERVLEVEL > 0
-				if (level >= 0) {
-					/*
-					 * The page is reserved.  Extend the
-					 * current run by one page.
-					 */
-					run_ext = 1;
-				} else
-#endif
-				if ((order = m->order) < VM_NFREEORDER) {
-					/*
-					 * The page is enqueued in the
-					 * physical memory allocator's cache/
-					 * free page queues.  Moreover, it is
-					 * the first page in a power-of-two-
-					 * sized run of contiguous cache/free
-					 * pages.  Add these pages to the end
-					 * of the current run, and jump
-					 * ahead.
-					 */
-					run_ext = 1 << order;
-					m_inc = 1 << order;
-				} else
-					run_ext = 0;
 #if VM_NRESERVLEVEL > 0
 			} else if ((options & VPSC_NOSUPER) != 0 &&
 			    (level = vm_reserv_level_iffullpop(m)) >= 0) {
@@ -2487,15 +2207,7 @@ retry:
 			    object->type != OBJT_SWAP &&
 			    object->type != OBJT_VNODE)
 				error = EINVAL;
-			else if ((m->flags & PG_CACHED) != 0 ||
-			    m != vm_page_lookup(object, m->pindex)) {
-				/*
-				 * The page is cached or recently converted
-				 * from cached to free.
-				 */
-				VM_OBJECT_WUNLOCK(object);
-				goto cached;
-			} else if (object->memattr != VM_MEMATTR_DEFAULT)
+			else if (object->memattr != VM_MEMATTR_DEFAULT)
 				error = EINVAL;
 			else if (m->queue != PQ_NONE && !vm_page_busied(m)) {
 				KASSERT(pmap_page_get_memattr(m) ==
@@ -2596,7 +2308,6 @@ retry:
 unlock:
 			VM_OBJECT_WUNLOCK(object);
 		} else {
-cached:
 			mtx_lock(&vm_page_queue_free_mtx);
 			order = m->order;
 			if (order < VM_NFREEORDER) {
@@ -2995,27 +2706,6 @@ vm_page_free_wakeup(void)
 }
 
 /*
- *	Turn a cached page into a free page, by changing its attributes.
- *	Keep the statistics up-to-date.
- *
- *	The free page queue must be locked.
- */
-static void
-vm_page_cache_turn_free(vm_page_t m)
-{
-
-	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
-
-	m->object = NULL;
-	m->valid = 0;
-	KASSERT((m->flags & PG_CACHED) != 0,
-	    ("vm_page_cache_turn_free: page %p is not cached", m));
-	m->flags &= ~PG_CACHED;
-	vm_cnt.v_cache_count--;
-	vm_phys_freecnt_adj(m, 1);
-}
-
-/*
  *	vm_page_free_toq:
  *
  *	Returns the given page to the free list,
@@ -3418,8 +3108,7 @@ retrylookup:
 		VM_WAIT;
 		VM_OBJECT_WLOCK(object);
 		goto retrylookup;
-	} else if (m->valid != 0)
-		return (m);
+	}
 	if (allocflags & VM_ALLOC_ZERO && (m->flags & PG_ZERO) == 0)
 		pmap_zero_page(m);
 	return (m);

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 01:50:27 2017	(r317621)
@@ -326,7 +326,6 @@ extern struct mtx_padalign pa_lock[];
  * Page flags.  If changed at any other time than page allocation or
  * freeing, the modification must be protected by the vm_page lock.
  */
-#define	PG_CACHED	0x0001		/* page is cached */
 #define	PG_FICTITIOUS	0x0004		/* physical page doesn't exist */
 #define	PG_ZERO		0x0008		/* page is zeroed */
 #define	PG_MARKER	0x0010		/* special queue marker page */
@@ -409,8 +408,6 @@ vm_page_t PHYS_TO_VM_PAGE(vm_paddr_t pa)
 #define	VM_ALLOC_ZERO		0x0040	/* (acfg) Try to obtain a zeroed page */
 #define	VM_ALLOC_NOOBJ		0x0100	/* (acg) No associated object */
 #define	VM_ALLOC_NOBUSY		0x0200	/* (acg) Do not busy the page */
-#define	VM_ALLOC_IFCACHED	0x0400	/* (ag) Fail if page is not cached */
-#define	VM_ALLOC_IFNOTCACHED	0x0800	/* (ag) Fail if page is cached */
 #define	VM_ALLOC_IGN_SBUSY	0x1000	/* (g) Ignore shared busy flag */
 #define	VM_ALLOC_NODUMP		0x2000	/* (ag) don't include in dump */
 #define	VM_ALLOC_SBUSY		0x4000	/* (acg) Shared busy the page */
@@ -453,8 +450,6 @@ vm_page_t vm_page_alloc_contig(vm_object
     vm_paddr_t boundary, vm_memattr_t memattr);
 vm_page_t vm_page_alloc_freelist(int, int);
 vm_page_t vm_page_grab (vm_object_t, vm_pindex_t, int);
-void vm_page_cache_free(vm_object_t, vm_pindex_t, vm_pindex_t);
-void vm_page_cache_transfer(vm_object_t, vm_pindex_t, vm_object_t);
 int vm_page_try_to_free (vm_page_t);
 void vm_page_deactivate (vm_page_t);
 void vm_page_deactivate_noreuse(vm_page_t);
@@ -464,7 +459,6 @@ vm_page_t vm_page_find_least(vm_object_t
 vm_page_t vm_page_getfake(vm_paddr_t paddr, vm_memattr_t memattr);
 void vm_page_initfake(vm_page_t m, vm_paddr_t paddr, vm_memattr_t memattr);
 int vm_page_insert (vm_page_t, vm_object_t, vm_pindex_t);
-boolean_t vm_page_is_cached(vm_object_t object, vm_pindex_t pindex);
 void vm_page_launder(vm_page_t m);
 vm_page_t vm_page_lookup (vm_object_t, vm_pindex_t);
 vm_page_t vm_page_next(vm_page_t m);

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_phys.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_phys.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_phys.c	Mon May  1 01:50:27 2017	(r317621)
@@ -1314,7 +1314,7 @@ vm_phys_zero_pages_idle(void)
 	for (;;) {
 		TAILQ_FOREACH_REVERSE(m, &fl[oind].pl, pglist, plinks.q) {
 			for (m_tmp = m; m_tmp < &m[1 << oind]; m_tmp++) {
-				if ((m_tmp->flags & (PG_CACHED | PG_ZERO)) == 0) {
+				if ((m_tmp->flags & PG_ZERO) == 0) {
 					vm_phys_unfree_page(m_tmp);
 					vm_phys_freecnt_adj(m, -1);
 					mtx_unlock(&vm_page_queue_free_mtx);

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c	Mon May  1 01:50:27 2017	(r317621)
@@ -908,45 +908,6 @@ vm_reserv_level_iffullpop(vm_page_t m)
 }
 
 /*
- * Prepare for the reactivation of a cached page.
- *
- * First, suppose that the given page "m" was allocated individually, i.e., not
- * as part of a reservation, and cached.  Then, suppose a reservation
- * containing "m" is allocated by the same object.  Although "m" and the
- * reservation belong to the same object, "m"'s pindex may not match the
- * reservation's.
- *
- * The free page queue must be locked.
- */
-boolean_t
-vm_reserv_reactivate_page(vm_page_t m)
-{
-	vm_reserv_t rv;
-	int index;
-
-	mtx_assert(&vm_page_queue_free_mtx, MA_OWNED);
-	rv = vm_reserv_from_page(m);
-	if (rv->object == NULL)
-		return (FALSE);
-	KASSERT((m->flags & PG_CACHED) != 0,
-	    ("vm_reserv_reactivate_page: page %p is not cached", m));
-	if (m->object == rv->object &&
-	    m->pindex - rv->pindex == (index = VM_RESERV_INDEX(m->object,
-	    m->pindex)))
-		vm_reserv_populate(rv, index);
-	else {
-		KASSERT(rv->inpartpopq,
-	    ("vm_reserv_reactivate_page: reserv %p's inpartpopq is FALSE",
-		    rv));
-		TAILQ_REMOVE(&vm_rvq_partpop, rv, partpopq);
-		rv->inpartpopq = FALSE;
-		/* Don't release "m" to the physical memory allocator. */
-		vm_reserv_break(rv, m);
-	}
-	return (TRUE);
-}
-
-/*
  * Breaks the given partially-populated reservation, releasing its cached and
  * free pages to the physical memory allocator.
  *

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h	Mon May  1 01:50:27 2017	(r317621)
@@ -56,7 +56,6 @@ void		vm_reserv_init(void);
 bool		vm_reserv_is_page_free(vm_page_t m);
 int		vm_reserv_level(vm_page_t m);
 int		vm_reserv_level_iffullpop(vm_page_t m);
-boolean_t	vm_reserv_reactivate_page(vm_page_t m);
 boolean_t	vm_reserv_reclaim_contig(u_long npages, vm_paddr_t low,
 		    vm_paddr_t high, u_long alignment, vm_paddr_t boundary);
 boolean_t	vm_reserv_reclaim_inactive(void);

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vnode_pager.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vnode_pager.c	Mon May  1 01:42:26 2017	(r317620)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vnode_pager.c	Mon May  1 01:50:27 2017	(r317621)
@@ -466,10 +466,6 @@ vnode_pager_setsize(struct vnode *vp, vm
 			 * replacement from working properly.
 			 */
 			vm_page_clear_dirty(m, base, PAGE_SIZE - base);
-		} else if ((nsize & PAGE_MASK) &&
-		    vm_page_is_cached(object, OFF_TO_IDX(nsize))) {
-			vm_page_cache_free(object, OFF_TO_IDX(nsize),
-			    nobjsize);
 		}
 	}
 	object->un_pager.vnp.vnp_size = nsize;
@@ -894,8 +890,7 @@ vnode_pager_generic_getpages(struct vnod
 		for (tpindex = m[0]->pindex - 1;
 		    tpindex >= startpindex && tpindex < m[0]->pindex;
 		    tpindex--, i++) {
-			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL |
-			    VM_ALLOC_IFNOTCACHED);
+			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL);
 			if (p == NULL) {
 				/* Shift the array. */
 				for (int j = 0; j < i; j++)
@@ -932,8 +927,7 @@ vnode_pager_generic_getpages(struct vnod
 
 		for (tpindex = m[count - 1]->pindex + 1;
 		    tpindex < endpindex; i++, tpindex++) {
-			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL |
-			    VM_ALLOC_IFNOTCACHED);
+			p = vm_page_alloc(object, tpindex, VM_ALLOC_NORMAL);
 			if (p == NULL)
 				break;
 			bp->b_pages[i] = p;

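The machinery deleted above (vm_page_alloc_contig_vdrop() and the
deferred_vdrop_list) followed a common locking pattern: when a resource
cannot be released while a lock is held, queue it and release it only
after the lock is dropped.  A minimal userland sketch of that pattern,
with made-up names (this is illustrative, not FreeBSD kernel code):

```c
#include <assert.h>
#include <stdlib.h>

/* An item whose release must be deferred until a lock is dropped. */
struct deferred {
	struct deferred *next;
	int id;
};

/* Queue an item on the deferred-release list (called with the lock held). */
static void
defer_release(struct deferred **head, struct deferred *d)
{
	d->next = *head;
	*head = d;
}

/* Drain the list after the lock has been dropped; returns items released. */
static int
drain_deferred(struct deferred **head)
{
	int n = 0;

	while (*head != NULL) {
		struct deferred *d = *head;
		*head = d->next;
		free(d);	/* stands in for the deferred vdrop() */
		n++;
	}
	return (n);
}
```

In the removed code the queued objects were vnodes whose hold count had
to be dropped via vdrop() outside the free queue lock; with PG_CACHED
gone, no vnode bookkeeping happens under that lock, so the pattern is no
longer needed here.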
From owner-svn-src-user@freebsd.org  Mon May  1 01:51:51 2017
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 01:51:50 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317622 - user/markj/PQ_LAUNDRY_11/sys/vm

Author: markj
Date: Mon May  1 01:51:50 2017
New Revision: 317622
URL: https://svnweb.freebsd.org/changeset/base/317622

Log:
  MFC r309203 (by alc):
  Disallow recursion on the free page queue mutex.

Modified:
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
Directory Properties:
  user/markj/PQ_LAUNDRY_11/   (props changed)

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:50:27 2017	(r317621)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:51:50 2017	(r317622)
@@ -1538,19 +1538,17 @@ vm_page_alloc(vm_object_t object, vm_pin
 	}
 
 	/*
-	 * The page allocation request can came from consumers which already
-	 * hold the free page queue mutex, like vm_page_insert() in
-	 * vm_page_cache().
+	 * Allocate a page if the number of free pages exceeds the minimum
+	 * for the request class.
 	 */
-	mtx_lock_flags(&vm_page_queue_free_mtx, MTX_RECURSE);
+	mtx_lock(&vm_page_queue_free_mtx);
 	if (vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_free_reserved ||
 	    (req_class == VM_ALLOC_SYSTEM &&
 	    vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_interrupt_free_min) ||
 	    (req_class == VM_ALLOC_INTERRUPT &&
 	    vm_cnt.v_free_count + vm_cnt.v_cache_count > 0)) {
 		/*
-		 * Allocate from the free queue if the number of free pages
-		 * exceeds the minimum for the request class.
+		 * Can we allocate the page from a reservation?
 		 */
 #if VM_NRESERVLEVEL > 0
 		if (object == NULL || (object->flags & (OBJ_COLORED |
@@ -1558,6 +1556,9 @@ vm_page_alloc(vm_object_t object, vm_pin
 		    vm_reserv_alloc_page(object, pindex, mpred)) == NULL)
 #endif
 		{
+			/*
+			 * If not, allocate it from the free page queues.
+			 */
 			m = vm_phys_alloc_pages(object != NULL ?
 			    VM_FREEPOOL_DEFAULT : VM_FREEPOOL_DIRECT, 0);
 #if VM_NRESERVLEVEL > 0
@@ -1872,7 +1873,7 @@ vm_page_alloc_freelist(int flind, int re
 	/*
 	 * Do not allocate reserved pages unless the req has asked for it.
 	 */
-	mtx_lock_flags(&vm_page_queue_free_mtx, MTX_RECURSE);
+	mtx_lock(&vm_page_queue_free_mtx);
 	if (vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_free_reserved ||
 	    (req_class == VM_ALLOC_SYSTEM &&
 	    vm_cnt.v_free_count + vm_cnt.v_cache_count > vm_cnt.v_interrupt_free_min) ||

From owner-svn-src-user@freebsd.org  Mon May  1 01:53:07 2017
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 01:53:06 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317624 - user/markj/PQ_LAUNDRY_11/sys/vm

Author: markj
Date: Mon May  1 01:53:05 2017
New Revision: 317624
URL: https://svnweb.freebsd.org/changeset/base/317624

Log:
  MFC r309365 (by alc):
  Simplify vm_radix_insert() and vm_radix_remove().

Modified:
  user/markj/PQ_LAUNDRY_11/sys/vm/_vm_radix.h
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c
Directory Properties:
  user/markj/PQ_LAUNDRY_11/   (props changed)

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/_vm_radix.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/_vm_radix.h	Mon May  1 01:52:03 2017	(r317623)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/_vm_radix.h	Mon May  1 01:53:05 2017	(r317624)
@@ -36,12 +36,8 @@
  */
 struct vm_radix {
 	uintptr_t	rt_root;
-	uint8_t		rt_flags;
 };
 
-#define	RT_INSERT_INPROG	0x01
-#define	RT_TRIE_MODIFIED	0x02
-
 #ifdef _KERNEL
 
 static __inline boolean_t

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:52:03 2017	(r317623)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:53:05 2017	(r317624)
@@ -205,7 +205,6 @@ vm_object_zinit(void *mem, int size, int
 	object->type = OBJT_DEAD;
 	object->ref_count = 0;
 	object->rtree.rt_root = 0;
-	object->rtree.rt_flags = 0;
 	object->paging_in_progress = 0;
 	object->resident_page_count = 0;
 	object->shadow_count = 0;

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c	Mon May  1 01:52:03 2017	(r317623)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c	Mon May  1 01:53:05 2017	(r317624)
@@ -339,8 +339,6 @@ vm_radix_insert(struct vm_radix *rtree, 
 
 	index = page->pindex;
 
-restart:
-
 	/*
 	 * The owner of record for root is not really important because it
 	 * will never be used.
@@ -358,32 +356,10 @@ restart:
 				panic("%s: key %jx is already present",
 				    __func__, (uintmax_t)index);
 			clev = vm_radix_keydiff(m->pindex, index);
-
-			/*
-			 * During node allocation the trie that is being
-			 * walked can be modified because of recursing radix
-			 * trie operations.
-			 * If this is the case, the recursing functions signal
-			 * such situation and the insert operation must
-			 * start from scratch again.
-			 * The freed radix node will then be in the UMA
-			 * caches very likely to avoid the same situation
-			 * to happen.
-			 */
-			rtree->rt_flags |= RT_INSERT_INPROG;
 			tmp = vm_radix_node_get(vm_radix_trimkey(index,
 			    clev + 1), 2, clev);
-			rtree->rt_flags &= ~RT_INSERT_INPROG;
-			if (tmp == NULL) {
-				rtree->rt_flags &= ~RT_TRIE_MODIFIED;
+			if (tmp == NULL)
 				return (ENOMEM);
-			}
-			if ((rtree->rt_flags & RT_TRIE_MODIFIED) != 0) {
-				rtree->rt_flags &= ~RT_TRIE_MODIFIED;
-				tmp->rn_count = 0;
-				vm_radix_node_put(tmp);
-				goto restart;
-			}
 			*parentp = tmp;
 			vm_radix_addpage(tmp, index, clev, page);
 			vm_radix_addpage(tmp, m->pindex, clev, m);
@@ -407,21 +383,9 @@ restart:
 	 */
 	newind = rnode->rn_owner;
 	clev = vm_radix_keydiff(newind, index);
-
-	/* See the comments above. */
-	rtree->rt_flags |= RT_INSERT_INPROG;
 	tmp = vm_radix_node_get(vm_radix_trimkey(index, clev + 1), 2, clev);
-	rtree->rt_flags &= ~RT_INSERT_INPROG;
-	if (tmp == NULL) {
-		rtree->rt_flags &= ~RT_TRIE_MODIFIED;
+	if (tmp == NULL)
 		return (ENOMEM);
-	}
-	if ((rtree->rt_flags & RT_TRIE_MODIFIED) != 0) {
-		rtree->rt_flags &= ~RT_TRIE_MODIFIED;
-		tmp->rn_count = 0;
-		vm_radix_node_put(tmp);
-		goto restart;
-	}
 	*parentp = tmp;
 	vm_radix_addpage(tmp, index, clev, page);
 	slot = vm_radix_slot(newind, clev);
@@ -706,20 +670,6 @@ vm_radix_remove(struct vm_radix *rtree, 
 	vm_page_t m;
 	int i, slot;
 
-	/*
-	 * Detect if a page is going to be removed from a trie which is
-	 * already undergoing another trie operation.
-	 * Right now this is only possible for vm_radix_remove() recursing
-	 * into vm_radix_insert().
-	 * If this is the case, the caller must be notified about this
-	 * situation.  It will also takecare to update the RT_TRIE_MODIFIED
-	 * accordingly.
-	 * The RT_TRIE_MODIFIED bit is set here because the remove operation
-	 * will always succeed.
-	 */
-	if ((rtree->rt_flags & RT_INSERT_INPROG) != 0)
-		rtree->rt_flags |= RT_TRIE_MODIFIED;
-
 	rnode = vm_radix_getroot(rtree);
 	if (vm_radix_isleaf(rnode)) {
 		m = vm_radix_topage(rnode);
@@ -774,9 +724,6 @@ vm_radix_reclaim_allnodes(struct vm_radi
 {
 	struct vm_radix_node *root;
 
-	KASSERT((rtree->rt_flags & RT_INSERT_INPROG) == 0,
-	    ("vm_radix_reclaim_allnodes: unexpected trie recursion"));
-
 	root = vm_radix_getroot(rtree);
 	if (root == NULL)
 		return;

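After r317624, a failed node allocation in vm_radix_insert() simply
returns ENOMEM; the RT_INSERT_INPROG/RT_TRIE_MODIFIED restart dance is
gone because a remove can no longer recurse into an in-progress insert.
The simplified error contract looks roughly like this (an illustrative
sketch with hypothetical names, not the vm_radix code):

```c
#include <errno.h>
#include <stdlib.h>

struct node {
	struct node *child[2];
	int key;
	int has_key;
};

/* Allocator hook so a caller can simulate allocation failure. */
static int fail_alloc;

static struct node *
node_get(void)
{
	return (fail_alloc ? NULL : calloc(1, sizeof(struct node)));
}

/*
 * Insert "key" into a toy binary trie.  On allocation failure, return
 * ENOMEM and leave the trie untouched -- no restart flags to clear,
 * mirroring the simplified insert path.
 */
static int
trie_insert(struct node **rootp, int key, int depth)
{
	struct node *n = *rootp;

	if (n == NULL) {
		if ((n = node_get()) == NULL)
			return (ENOMEM);
		n->key = key;
		n->has_key = 1;
		*rootp = n;
		return (0);
	}
	if (n->has_key && n->key == key)
		return (EEXIST);
	return (trie_insert(&n->child[(key >> depth) & 1], key, depth + 1));
}
```

Because no other trie operation can run in the middle of the insert, an
allocation failure needs no compensation beyond returning the error,
which is exactly what the commit's deletions rely on.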
From owner-svn-src-user@freebsd.org  Mon May  1 01:56:15 2017
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 01:56:13 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317627 - in user/markj/PQ_LAUNDRY_11/sys: amd64/amd64
 arm64/arm64 i386/i386 vm

Author: markj
Date: Mon May  1 01:56:13 2017
New Revision: 317627
URL: https://svnweb.freebsd.org/changeset/base/317627

Log:
  MFC r309703 (by alc):
  Have vm_radix_remove() return NULL instead of panicking if the specified
  page doesn't exist.

Modified:
  user/markj/PQ_LAUNDRY_11/sys/amd64/amd64/pmap.c
  user/markj/PQ_LAUNDRY_11/sys/arm64/arm64/pmap.c
  user/markj/PQ_LAUNDRY_11/sys/i386/i386/pmap.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.h
Directory Properties:
  user/markj/PQ_LAUNDRY_11/   (props changed)

Modified: user/markj/PQ_LAUNDRY_11/sys/amd64/amd64/pmap.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/amd64/amd64/pmap.c	Mon May  1 01:56:11 2017	(r317626)
+++ user/markj/PQ_LAUNDRY_11/sys/amd64/amd64/pmap.c	Mon May  1 01:56:13 2017	(r317627)
@@ -614,7 +614,6 @@ static vm_page_t pmap_enter_quick_locked
 static void pmap_fill_ptp(pt_entry_t *firstpte, pt_entry_t newpte);
 static int pmap_insert_pt_page(pmap_t pmap, vm_page_t mpte);
 static void pmap_kenter_attr(vm_offset_t va, vm_paddr_t pa, int mode);
-static vm_page_t pmap_lookup_pt_page(pmap_t pmap, vm_offset_t va);
 static void pmap_pde_attr(pd_entry_t *pde, int cache_bits, int mask);
 static void pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va,
     struct rwlock **lockp);
@@ -625,7 +624,7 @@ static int pmap_remove_pde(pmap_t pmap, 
     struct spglist *free, struct rwlock **lockp);
 static int pmap_remove_pte(pmap_t pmap, pt_entry_t *ptq, vm_offset_t sva,
     pd_entry_t ptepde, struct spglist *free, struct rwlock **lockp);
-static void pmap_remove_pt_page(pmap_t pmap, vm_page_t mpte);
+static vm_page_t pmap_remove_pt_page(pmap_t pmap, vm_offset_t va);
 static void pmap_remove_page(pmap_t pmap, vm_offset_t va, pd_entry_t *pde,
     struct spglist *free);
 static boolean_t pmap_try_insert_pv_entry(pmap_t pmap, vm_offset_t va,
@@ -2218,29 +2217,17 @@ pmap_insert_pt_page(pmap_t pmap, vm_page
 }
 
 /*
- * Looks for a page table page mapping the specified virtual address in the
- * specified pmap's collection of idle page table pages.  Returns NULL if there
- * is no page table page corresponding to the specified virtual address.
+ * Removes the page table page mapping the specified virtual address from the
+ * specified pmap's collection of idle page table pages, and returns it.
+ * Otherwise, returns NULL if there is no page table page corresponding to the
+ * specified virtual address.
  */
 static __inline vm_page_t
-pmap_lookup_pt_page(pmap_t pmap, vm_offset_t va)
+pmap_remove_pt_page(pmap_t pmap, vm_offset_t va)
 {
 
 	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	return (vm_radix_lookup(&pmap->pm_root, pmap_pde_pindex(va)));
-}
-
-/*
- * Removes the specified page table page from the specified pmap's collection
- * of idle page table pages.  The specified page table page must be a member of
- * the pmap's collection.
- */
-static __inline void
-pmap_remove_pt_page(pmap_t pmap, vm_page_t mpte)
-{
-
-	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	vm_radix_remove(&pmap->pm_root, mpte->pindex);
+	return (vm_radix_remove(&pmap->pm_root, pmap_pde_pindex(va)));
 }
 
 /*
@@ -3460,10 +3447,8 @@ pmap_demote_pde_locked(pmap_t pmap, pd_e
 	oldpde = *pde;
 	KASSERT((oldpde & (PG_PS | PG_V)) == (PG_PS | PG_V),
 	    ("pmap_demote_pde: oldpde is missing PG_PS and/or PG_V"));
-	if ((oldpde & PG_A) != 0 && (mpte = pmap_lookup_pt_page(pmap, va)) !=
-	    NULL)
-		pmap_remove_pt_page(pmap, mpte);
-	else {
+	if ((oldpde & PG_A) == 0 || (mpte = pmap_remove_pt_page(pmap, va)) ==
+	    NULL) {
 		KASSERT((oldpde & PG_W) == 0,
 		    ("pmap_demote_pde: page table page for a wired mapping"
 		    " is missing"));
@@ -3577,11 +3562,10 @@ pmap_remove_kernel_pde(pmap_t pmap, pd_e
 
 	KASSERT(pmap == kernel_pmap, ("pmap %p is not kernel_pmap", pmap));
 	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	mpte = pmap_lookup_pt_page(pmap, va);
+	mpte = pmap_remove_pt_page(pmap, va);
 	if (mpte == NULL)
 		panic("pmap_remove_kernel_pde: Missing pt page.");
 
-	pmap_remove_pt_page(pmap, mpte);
 	mptepa = VM_PAGE_TO_PHYS(mpte);
 	newpde = mptepa | X86_PG_M | X86_PG_A | X86_PG_RW | X86_PG_V;
 
@@ -3668,9 +3652,8 @@ pmap_remove_pde(pmap_t pmap, pd_entry_t 
 	if (pmap == kernel_pmap) {
 		pmap_remove_kernel_pde(pmap, pdq, sva);
 	} else {
-		mpte = pmap_lookup_pt_page(pmap, sva);
+		mpte = pmap_remove_pt_page(pmap, sva);
 		if (mpte != NULL) {
-			pmap_remove_pt_page(pmap, mpte);
 			pmap_resident_count_dec(pmap, 1);
 			KASSERT(mpte->wire_count == NPTEPG,
 			    ("pmap_remove_pde: pte page wire count error"));
@@ -5533,9 +5516,8 @@ pmap_remove_pages(pmap_t pmap)
 							    TAILQ_EMPTY(&mt->md.pv_list))
 								vm_page_aflag_clear(mt, PGA_WRITEABLE);
 					}
-					mpte = pmap_lookup_pt_page(pmap, pv->pv_va);
+					mpte = pmap_remove_pt_page(pmap, pv->pv_va);
 					if (mpte != NULL) {
-						pmap_remove_pt_page(pmap, mpte);
 						pmap_resident_count_dec(pmap, 1);
 						KASSERT(mpte->wire_count == NPTEPG,
 						    ("pmap_remove_pages: pte page wire count error"));

Modified: user/markj/PQ_LAUNDRY_11/sys/arm64/arm64/pmap.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/arm64/arm64/pmap.c	Mon May  1 01:56:11 2017	(r317626)
+++ user/markj/PQ_LAUNDRY_11/sys/arm64/arm64/pmap.c	Mon May  1 01:56:13 2017	(r317627)
@@ -2514,29 +2514,17 @@ pmap_insert_pt_page(pmap_t pmap, vm_page
 }
 
 /*
- * Looks for a page table page mapping the specified virtual address in the
- * specified pmap's collection of idle page table pages.  Returns NULL if there
- * is no page table page corresponding to the specified virtual address.
+ * Removes the page table page that maps the specified virtual address from
+ * the specified pmap's collection of idle page table pages, and returns it.
+ * Returns NULL if there is no page table page corresponding to the specified
+ * virtual address.
  */
 static __inline vm_page_t
-pmap_lookup_pt_page(pmap_t pmap, vm_offset_t va)
+pmap_remove_pt_page(pmap_t pmap, vm_offset_t va)
 {
 
 	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	return (vm_radix_lookup(&pmap->pm_root, pmap_l2_pindex(va)));
-}
-
-/*
- * Removes the specified page table page from the specified pmap's collection
- * of idle page table pages.  The specified page table page must be a member of
- * the pmap's collection.
- */
-static __inline void
-pmap_remove_pt_page(pmap_t pmap, vm_page_t mpte)
-{
-
-	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	vm_radix_remove(&pmap->pm_root, mpte->pindex);
+	return (vm_radix_remove(&pmap->pm_root, pmap_l2_pindex(va)));
 }
 
 /*
@@ -3605,10 +3593,9 @@ pmap_remove_pages(pmap_t pmap)
 							    TAILQ_EMPTY(&mt->md.pv_list))
 								vm_page_aflag_clear(mt, PGA_WRITEABLE);
 					}
-					ml3 = pmap_lookup_pt_page(pmap,
+					ml3 = pmap_remove_pt_page(pmap,
 					    pv->pv_va);
 					if (ml3 != NULL) {
-						pmap_remove_pt_page(pmap, ml3);
 						pmap_resident_count_dec(pmap,1);
 						KASSERT(ml3->wire_count == NL3PG,
 						    ("pmap_remove_pages: l3 page wire count error"));
@@ -4381,9 +4368,7 @@ pmap_demote_l2_locked(pmap_t pmap, pt_en
 			return (NULL);
 	}
 
-	if ((ml3 = pmap_lookup_pt_page(pmap, va)) != NULL) {
-		pmap_remove_pt_page(pmap, ml3);
-	} else {
+	if ((ml3 = pmap_remove_pt_page(pmap, va)) == NULL) {
 		ml3 = vm_page_alloc(NULL, pmap_l2_pindex(va),
 		    (VIRT_IN_DMAP(va) ? VM_ALLOC_INTERRUPT : VM_ALLOC_NORMAL) |
 		    VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);

Modified: user/markj/PQ_LAUNDRY_11/sys/i386/i386/pmap.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/i386/i386/pmap.c	Mon May  1 01:56:11 2017	(r317626)
+++ user/markj/PQ_LAUNDRY_11/sys/i386/i386/pmap.c	Mon May  1 01:56:13 2017	(r317627)
@@ -306,7 +306,6 @@ static boolean_t pmap_is_modified_pvh(st
 static boolean_t pmap_is_referenced_pvh(struct md_page *pvh);
 static void pmap_kenter_attr(vm_offset_t va, vm_paddr_t pa, int mode);
 static void pmap_kenter_pde(vm_offset_t va, pd_entry_t newpde);
-static vm_page_t pmap_lookup_pt_page(pmap_t pmap, vm_offset_t va);
 static void pmap_pde_attr(pd_entry_t *pde, int cache_bits);
 static void pmap_promote_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t va);
 static boolean_t pmap_protect_pde(pmap_t pmap, pd_entry_t *pde, vm_offset_t sva,
@@ -316,7 +315,7 @@ static void pmap_remove_pde(pmap_t pmap,
     struct spglist *free);
 static int pmap_remove_pte(pmap_t pmap, pt_entry_t *ptq, vm_offset_t sva,
     struct spglist *free);
-static void pmap_remove_pt_page(pmap_t pmap, vm_page_t mpte);
+static vm_page_t pmap_remove_pt_page(pmap_t pmap, vm_offset_t va);
 static void pmap_remove_page(struct pmap *pmap, vm_offset_t va,
     struct spglist *free);
 static void pmap_remove_entry(struct pmap *pmap, vm_page_t m,
@@ -1727,29 +1726,17 @@ pmap_insert_pt_page(pmap_t pmap, vm_page
 }
 
 /*
- * Looks for a page table page mapping the specified virtual address in the
- * specified pmap's collection of idle page table pages.  Returns NULL if there
- * is no page table page corresponding to the specified virtual address.
+ * Removes the page table page that maps the specified virtual address from
+ * the specified pmap's collection of idle page table pages, and returns it.
+ * Returns NULL if there is no page table page corresponding to the specified
+ * virtual address.
  */
 static __inline vm_page_t
-pmap_lookup_pt_page(pmap_t pmap, vm_offset_t va)
+pmap_remove_pt_page(pmap_t pmap, vm_offset_t va)
 {
 
 	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	return (vm_radix_lookup(&pmap->pm_root, va >> PDRSHIFT));
-}
-
-/*
- * Removes the specified page table page from the specified pmap's collection
- * of idle page table pages.  The specified page table page must be a member of
- * the pmap's collection.
- */
-static __inline void
-pmap_remove_pt_page(pmap_t pmap, vm_page_t mpte)
-{
-
-	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	vm_radix_remove(&pmap->pm_root, mpte->pindex);
+	return (vm_radix_remove(&pmap->pm_root, va >> PDRSHIFT));
 }
 
 /*
@@ -2645,10 +2632,8 @@ pmap_demote_pde(pmap_t pmap, pd_entry_t 
 	oldpde = *pde;
 	KASSERT((oldpde & (PG_PS | PG_V)) == (PG_PS | PG_V),
 	    ("pmap_demote_pde: oldpde is missing PG_PS and/or PG_V"));
-	if ((oldpde & PG_A) != 0 && (mpte = pmap_lookup_pt_page(pmap, va)) !=
-	    NULL)
-		pmap_remove_pt_page(pmap, mpte);
-	else {
+	if ((oldpde & PG_A) == 0 || (mpte = pmap_remove_pt_page(pmap, va)) ==
+	    NULL) {
 		KASSERT((oldpde & PG_W) == 0,
 		    ("pmap_demote_pde: page table page for a wired mapping"
 		    " is missing"));
@@ -2786,11 +2771,10 @@ pmap_remove_kernel_pde(pmap_t pmap, pd_e
 	vm_page_t mpte;
 
 	PMAP_LOCK_ASSERT(pmap, MA_OWNED);
-	mpte = pmap_lookup_pt_page(pmap, va);
+	mpte = pmap_remove_pt_page(pmap, va);
 	if (mpte == NULL)
 		panic("pmap_remove_kernel_pde: Missing pt page.");
 
-	pmap_remove_pt_page(pmap, mpte);
 	mptepa = VM_PAGE_TO_PHYS(mpte);
 	newpde = mptepa | PG_M | PG_A | PG_RW | PG_V;
 
@@ -2872,9 +2856,8 @@ pmap_remove_pde(pmap_t pmap, pd_entry_t 
 	if (pmap == kernel_pmap) {
 		pmap_remove_kernel_pde(pmap, pdq, sva);
 	} else {
-		mpte = pmap_lookup_pt_page(pmap, sva);
+		mpte = pmap_remove_pt_page(pmap, sva);
 		if (mpte != NULL) {
-			pmap_remove_pt_page(pmap, mpte);
 			pmap->pm_stats.resident_count--;
 			KASSERT(mpte->wire_count == NPTEPG,
 			    ("pmap_remove_pde: pte page wire count error"));
@@ -4616,9 +4599,8 @@ pmap_remove_pages(pmap_t pmap)
 							if (TAILQ_EMPTY(&mt->md.pv_list))
 								vm_page_aflag_clear(mt, PGA_WRITEABLE);
 					}
-					mpte = pmap_lookup_pt_page(pmap, pv->pv_va);
+					mpte = pmap_remove_pt_page(pmap, pv->pv_va);
 					if (mpte != NULL) {
-						pmap_remove_pt_page(pmap, mpte);
 						pmap->pm_stats.resident_count--;
 						KASSERT(mpte->wire_count == NPTEPG,
 						    ("pmap_remove_pages: pte page wire count error"));

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:56:11 2017	(r317626)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:56:13 2017	(r317627)
@@ -1263,9 +1263,8 @@ vm_page_insert_radixdone(vm_page_t m, vm
 /*
  *	vm_page_remove:
  *
- *	Removes the given mem entry from the object/offset-page
- *	table and the object page list, but do not invalidate/terminate
- *	the backing store.
+ *	Removes the specified page from its containing object, but does not
+ *	invalidate any backing storage.
  *
  *	The object must be locked.  The page must be locked if it is managed.
  */
@@ -1273,6 +1272,7 @@ void
 vm_page_remove(vm_page_t m)
 {
 	vm_object_t object;
+	vm_page_t mrem;
 
 	if ((m->oflags & VPO_UNMANAGED) == 0)
 		vm_page_assert_locked(m);
@@ -1281,11 +1281,12 @@ vm_page_remove(vm_page_t m)
 	VM_OBJECT_ASSERT_WLOCKED(object);
 	if (vm_page_xbusied(m))
 		vm_page_xunbusy_maybelocked(m);
+	mrem = vm_radix_remove(&object->rtree, m->pindex);
+	KASSERT(mrem == m, ("removed page %p, expected page %p", mrem, m));
 
 	/*
 	 * Now remove from the object's list of backed pages.
 	 */
-	vm_radix_remove(&object->rtree, m->pindex);
 	TAILQ_REMOVE(&object->memq, m, listq);
 
 	/*

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c	Mon May  1 01:56:11 2017	(r317626)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.c	Mon May  1 01:56:13 2017	(r317627)
@@ -660,10 +660,10 @@ descend:
 }
 
 /*
- * Remove the specified index from the tree.
- * Panics if the key is not present.
+ * Remove the specified index from the trie, and return the value stored at
+ * that index.  If the index is not present, return NULL.
  */
-void
+vm_page_t
 vm_radix_remove(struct vm_radix *rtree, vm_pindex_t index)
 {
 	struct vm_radix_node *rnode, *parent;
@@ -674,23 +674,23 @@ vm_radix_remove(struct vm_radix *rtree, 
 	if (vm_radix_isleaf(rnode)) {
 		m = vm_radix_topage(rnode);
 		if (m->pindex != index)
-			panic("%s: invalid key found", __func__);
+			return (NULL);
 		vm_radix_setroot(rtree, NULL);
-		return;
+		return (m);
 	}
 	parent = NULL;
 	for (;;) {
 		if (rnode == NULL)
-			panic("vm_radix_remove: impossible to locate the key");
+			return (NULL);
 		slot = vm_radix_slot(index, rnode->rn_clev);
 		if (vm_radix_isleaf(rnode->rn_child[slot])) {
 			m = vm_radix_topage(rnode->rn_child[slot]);
 			if (m->pindex != index)
-				panic("%s: invalid key found", __func__);
+				return (NULL);
 			rnode->rn_child[slot] = NULL;
 			rnode->rn_count--;
 			if (rnode->rn_count > 1)
-				break;
+				return (m);
 			for (i = 0; i < VM_RADIX_COUNT; i++)
 				if (rnode->rn_child[i] != NULL)
 					break;
@@ -707,7 +707,7 @@ vm_radix_remove(struct vm_radix *rtree, 
 			rnode->rn_count--;
 			rnode->rn_child[i] = NULL;
 			vm_radix_node_put(rnode);
-			break;
+			return (m);
 		}
 		parent = rnode;
 		rnode = rnode->rn_child[slot];

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.h	Mon May  1 01:56:11 2017	(r317626)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_radix.h	Mon May  1 01:56:13 2017	(r317627)
@@ -42,7 +42,7 @@ vm_page_t	vm_radix_lookup(struct vm_radi
 vm_page_t	vm_radix_lookup_ge(struct vm_radix *rtree, vm_pindex_t index);
 vm_page_t	vm_radix_lookup_le(struct vm_radix *rtree, vm_pindex_t index);
 void		vm_radix_reclaim_allnodes(struct vm_radix *rtree);
-void		vm_radix_remove(struct vm_radix *rtree, vm_pindex_t index);
+vm_page_t	vm_radix_remove(struct vm_radix *rtree, vm_pindex_t index);
 vm_page_t	vm_radix_replace(struct vm_radix *rtree, vm_page_t newpage);
 
 #endif /* _KERNEL */

From owner-svn-src-user@freebsd.org  Mon May  1 01:58:21 2017
Message-Id: <201705010158.v411wJml087798@repo.freebsd.org>
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 01:58:19 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317628 - user/markj/PQ_LAUNDRY_11/sys/vm

Author: markj
Date: Mon May  1 01:58:19 2017
New Revision: 317628
URL: https://svnweb.freebsd.org/changeset/base/317628

Log:
  MFC r309898 (by alc):
  Purge mentions of PG_CACHED pages from sys/vm.

Modified:
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_map.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c
Directory Properties:
  user/markj/PQ_LAUNDRY_11/   (props changed)

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_map.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_map.c	Mon May  1 01:56:13 2017	(r317627)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_map.c	Mon May  1 01:58:19 2017	(r317628)
@@ -1858,9 +1858,7 @@ vm_map_submap(
  *	limited number of page mappings are created at the low-end of the
  *	specified address range.  (For this purpose, a superpage mapping
  *	counts as one page mapping.)  Otherwise, all resident pages within
- *	the specified address range are mapped.  Because these mappings are
- *	being created speculatively, cached pages are not reactivated and
- *	mapped.
+ *	the specified address range are mapped.
  */
 static void
 vm_map_pmap_enter(vm_map_t map, vm_offset_t addr, vm_prot_t prot,

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:56:13 2017	(r317627)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.c	Mon May  1 01:58:19 2017	(r317628)
@@ -1356,7 +1356,7 @@ retry:
 			goto retry;
 		}
 
-		/* vm_page_rename() will handle dirty and cache. */
+		/* vm_page_rename() will dirty the page. */
 		if (vm_page_rename(m, new_object, idx)) {
 			VM_OBJECT_WUNLOCK(new_object);
 			VM_OBJECT_WUNLOCK(orig_object);
@@ -1443,6 +1443,13 @@ vm_object_scan_all_shadowed(vm_object_t 
 
 	backing_object = object->backing_object;
 
+	/*
+	 * Initial conditions:
+	 *
+	 * We do not want to have to test for the existence of swap
+	 * pages in the backing object.  XXX but with the new swapper this
+	 * would be pretty easy to do.
+	 */
 	if (backing_object->type != OBJT_DEFAULT &&
 	    backing_object->type != OBJT_SWAP)
 		return (false);
@@ -1594,8 +1601,7 @@ vm_object_collapse_scan(vm_object_t obje
 		 * backing object to the main object.
 		 *
 		 * If the page was mapped to a process, it can remain mapped
-		 * through the rename.  vm_page_rename() will handle dirty and
-		 * cache.
+		 * through the rename.  vm_page_rename() will dirty the page.
 		 */
 		if (vm_page_rename(p, object, new_pindex)) {
 			next = vm_object_collapse_scan_wait(object, NULL, next,

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h	Mon May  1 01:56:13 2017	(r317627)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_object.h	Mon May  1 01:58:19 2017	(r317628)
@@ -79,17 +79,6 @@
  *
  *	vm_object_t		Virtual memory object.
  *
- *	The root of cached pages pool is protected by both the per-object lock
- *	and the free pages queue mutex.
- *	On insert in the cache radix trie, the per-object lock is expected
- *	to be already held and the free pages queue mutex will be
- *	acquired during the operation too.
- *	On remove and lookup from the cache radix trie, only the free
- *	pages queue mutex is expected to be locked.
- *	These rules allow for reliably checking for the presence of cached
- *	pages with only the per-object lock held, thereby reducing contention
- *	for the free pages queue mutex.
- *
  * List of locks
  *	(c)	const until freed
  *	(o)	per-object lock 

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:56:13 2017	(r317627)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:58:19 2017	(r317628)
@@ -1435,9 +1435,7 @@ vm_page_replace(vm_page_t mnew, vm_objec
  *
  *	Note: we *always* dirty the page.  It is necessary both for the
  *	      fact that we moved it, and because we may be invalidating
- *	      swap.  If the page is on the cache, we have to deactivate it
- *	      or vm_page_dirty() will panic.  Dirty pages are not allowed
- *	      on the cache.
+ *	      swap.
  *
  *	The objects must be locked.
  */
@@ -2075,18 +2073,18 @@ unlock:
 		} else if (level >= 0) {
 			/*
 			 * The page is reserved but not yet allocated.  In
-			 * other words, it is still cached or free.  Extend
-			 * the current run by one page.
+			 * other words, it is still free.  Extend the current
+			 * run by one page.
 			 */
 			run_ext = 1;
 #endif
 		} else if ((order = m->order) < VM_NFREEORDER) {
 			/*
 			 * The page is enqueued in the physical memory
-			 * allocator's cache/free page queues.  Moreover, it
-			 * is the first page in a power-of-two-sized run of
-			 * contiguous cache/free pages.  Add these pages to
-			 * the end of the current run, and jump ahead.
+			 * allocator's free page queues.  Moreover, it is the
+			 * first page in a power-of-two-sized run of
+			 * contiguous free pages.  Add these pages to the end
+			 * of the current run, and jump ahead.
 			 */
 			run_ext = 1 << order;
 			m_inc = 1 << order;
@@ -2094,16 +2092,15 @@ unlock:
 			/*
 			 * Skip the page for one of the following reasons: (1)
 			 * It is enqueued in the physical memory allocator's
-			 * cache/free page queues.  However, it is not the
-			 * first page in a run of contiguous cache/free pages.
-			 * (This case rarely occurs because the scan is
-			 * performed in ascending order.) (2) It is not
-			 * reserved, and it is transitioning from free to
-			 * allocated.  (Conversely, the transition from
-			 * allocated to free for managed pages is blocked by
-			 * the page lock.) (3) It is allocated but not
-			 * contained by an object and not wired, e.g.,
-			 * allocated by Xen's balloon driver.
+			 * free page queues.  However, it is not the first
+			 * page in a run of contiguous free pages.  (This case
+			 * rarely occurs because the scan is performed in
+			 * ascending order.) (2) It is not reserved, and it is
+			 * transitioning from free to allocated.  (Conversely,
+			 * the transition from allocated to free for managed
+			 * pages is blocked by the page lock.) (3) It is
+			 * allocated but not contained by an object and not
+			 * wired, e.g., allocated by Xen's balloon driver.
 			 */
 			run_ext = 0;
 		}
@@ -2315,11 +2312,11 @@ unlock:
 			if (order < VM_NFREEORDER) {
 				/*
 				 * The page is enqueued in the physical memory
-				 * allocator's cache/free page queues.
-				 * Moreover, it is the first page in a power-
-				 * of-two-sized run of contiguous cache/free
-				 * pages.  Jump ahead to the last page within
-				 * that run, and continue from there.
+				 * allocator's free page queues.  Moreover, it
+				 * is the first page in a power-of-two-sized
+				 * run of contiguous free pages.  Jump ahead
+				 * to the last page within that run, and
+				 * continue from there.
 				 */
 				m += (1 << order) - 1;
 			}
@@ -2368,9 +2365,9 @@ CTASSERT(powerof2(NRUNS));
  *	conditions by relocating the virtual pages using that physical memory.
  *	Returns true if reclamation is successful and false otherwise.  Since
  *	relocation requires the allocation of physical pages, reclamation may
- *	fail due to a shortage of cache/free pages.  When reclamation fails,
- *	callers are expected to perform VM_WAIT before retrying a failed
- *	allocation operation, e.g., vm_page_alloc_contig().
+ *	fail due to a shortage of free pages.  When reclamation fails, callers
+ *	are expected to perform VM_WAIT before retrying a failed allocation
+ *	operation, e.g., vm_page_alloc_contig().
  *
  *	The caller must always specify an allocation class through "req".
  *
@@ -2405,8 +2402,8 @@ vm_page_reclaim_contig(int req, u_long n
 		req_class = VM_ALLOC_SYSTEM;
 
 	/*
-	 * Return if the number of cached and free pages cannot satisfy the
-	 * requested allocation.
+	 * Return if the number of free pages cannot satisfy the requested
+	 * allocation.
 	 */
 	count = vm_cnt.v_free_count + vm_cnt.v_cache_count;
 	if (count < npages + vm_cnt.v_free_reserved || (count < npages +
@@ -2676,9 +2673,8 @@ vm_page_activate(vm_page_t m)
 /*
  *	vm_page_free_wakeup:
  *
- *	Helper routine for vm_page_free_toq() and vm_page_cache().  This
- *	routine is called when a page has been added to the cache or free
- *	queues.
+ *	Helper routine for vm_page_free_toq().  This routine is called
+ *	when a page is added to the free queues.
  *
  *	The page queues must be locked.
  */
@@ -2766,8 +2762,8 @@ vm_page_free_toq(vm_page_t m)
 			pmap_page_set_memattr(m, VM_MEMATTR_DEFAULT);
 
 		/*
-		 * Insert the page into the physical memory allocator's
-		 * cache/free page queues.
+		 * Insert the page into the physical memory allocator's free
+		 * page queues.
 		 */
 		mtx_lock(&vm_page_queue_free_mtx);
 		vm_phys_freecnt_adj(m, 1);
@@ -2871,21 +2867,10 @@ vm_page_unwire(vm_page_t m, uint8_t queu
 /*
  * Move the specified page to the inactive queue.
  *
- * Many pages placed on the inactive queue should actually go
- * into the cache, but it is difficult to figure out which.  What
- * we do instead, if the inactive target is well met, is to put
- * clean pages at the head of the inactive queue instead of the tail.
- * This will cause them to be moved to the cache more quickly and
- * if not actively re-referenced, reclaimed more quickly.  If we just
- * stick these pages at the end of the inactive queue, heavy filesystem
- * meta-data accesses can cause an unnecessary paging load on memory bound
- * processes.  This optimization causes one-time-use metadata to be
- * reused more quickly.
- *
- * Normally noreuse is FALSE, resulting in LRU operation.  noreuse is set
- * to TRUE if we want this page to be 'as if it were placed in the cache',
- * except without unmapping it from the process address space.  In
- * practice this is implemented by inserting the page at the head of the
+ * Normally, "noreuse" is FALSE, resulting in LRU ordering of the inactive
+ * queue.  However, setting "noreuse" to TRUE will accelerate the specified
+ * page's reclamation, but it will not unmap the page from any address space.
+ * This is implemented by inserting the page near the head of the inactive
  * queue, using a marker page to guide FIFO insertion ordering.
  *
  * The page must be locked.
@@ -3012,16 +2997,9 @@ vm_page_advise(vm_page_t m, int advice)
 	if (advice == MADV_FREE)
 		/*
 		 * Mark the page clean.  This will allow the page to be freed
-		 * up by the system.  However, such pages are often reused
-		 * quickly by malloc() so we do not do anything that would
-		 * cause a page fault if we can help it.
-		 *
-		 * Specifically, we do not try to actually free the page now
-		 * nor do we try to put it in the cache (which would cause a
-		 * page fault on reuse).
-		 *
-		 * But we do make the page as freeable as we can without
-		 * actually taking the step of unmapping it.
+		 * without first paging it out.  MADV_FREE pages are often
+		 * quickly reused by malloc(3), so we do not do anything that
+		 * would result in a page fault on a later access.
 		 */
 		vm_page_undirty(m);
 	else if (advice != MADV_DONTNEED)

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 01:56:13 2017	(r317627)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 01:58:19 2017	(r317628)
@@ -352,19 +352,16 @@ extern struct mtx_padalign pa_lock[];
  *	free
  *		Available for allocation now.
  *
- *	cache
- *		Almost available for allocation. Still associated with
- *		an object, but clean and immediately freeable.
- *
- * The following lists are LRU sorted:
- *
  *	inactive
  *		Low activity, candidates for reclamation.
+ *		This list is approximately LRU ordered.
+ *
+ *	laundry
  *		This is the list of pages that should be
  *		paged out next.
  *
  *	active
- *		Pages that are "active" i.e. they have been
+ *		Pages that are "active", i.e., they have been
  *		recently referenced.
  *
  */

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c	Mon May  1 01:56:13 2017	(r317627)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c	Mon May  1 01:58:19 2017	(r317628)
@@ -62,7 +62,7 @@ __FBSDID("$FreeBSD$");
 
 /*
  * The reservation system supports the speculative allocation of large physical
- * pages ("superpages").  Speculative allocation enables the fully-automatic
+ * pages ("superpages").  Speculative allocation enables the fully automatic
  * utilization of superpages by the virtual memory system.  In other words, no
  * programmatic directives are required to use superpages.
  */
@@ -155,11 +155,11 @@ popmap_is_set(popmap_t popmap[], int i)
  * physical pages for the range [pindex, pindex + VM_LEVEL_0_NPAGES) of offsets
  * within that object.  The reservation's "popcnt" tracks the number of these
  * small physical pages that are in use at any given time.  When and if the
- * reservation is not fully utilized, it appears in the queue of partially-
+ * reservation is not fully utilized, it appears in the queue of partially
  * populated reservations.  The reservation always appears on the containing
  * object's list of reservations.
  *
- * A partially-populated reservation can be broken and reclaimed at any time.
+ * A partially populated reservation can be broken and reclaimed at any time.
  */
 struct vm_reserv {
 	TAILQ_ENTRY(vm_reserv) partpopq;
@@ -196,11 +196,11 @@ struct vm_reserv {
 static vm_reserv_t vm_reserv_array;
 
 /*
- * The partially-populated reservation queue
+ * The partially populated reservation queue
  *
- * This queue enables the fast recovery of an unused cached or free small page
- * from a partially-populated reservation.  The reservation at the head of
- * this queue is the least-recently-changed, partially-populated reservation.
+ * This queue enables the fast recovery of an unused free small page from a
+ * partially populated reservation.  The reservation at the head of this queue
+ * is the least recently changed, partially populated reservation.
  *
  * Access to this queue is synchronized by the free page queue lock.
  */
@@ -225,7 +225,7 @@ SYSCTL_PROC(_vm_reserv, OID_AUTO, fullpo
 static int sysctl_vm_reserv_partpopq(SYSCTL_HANDLER_ARGS);
 
 SYSCTL_OID(_vm_reserv, OID_AUTO, partpopq, CTLTYPE_STRING | CTLFLAG_RD, NULL, 0,
-    sysctl_vm_reserv_partpopq, "A", "Partially-populated reservation queues");
+    sysctl_vm_reserv_partpopq, "A", "Partially populated reservation queues");
 
 static long vm_reserv_reclaimed;
 SYSCTL_LONG(_vm_reserv, OID_AUTO, reclaimed, CTLFLAG_RD,
@@ -267,7 +267,7 @@ sysctl_vm_reserv_fullpop(SYSCTL_HANDLER_
 }
 
 /*
- * Describes the current state of the partially-populated reservation queue.
+ * Describes the current state of the partially populated reservation queue.
  */
 static int
 sysctl_vm_reserv_partpopq(SYSCTL_HANDLER_ARGS)
@@ -301,7 +301,7 @@ sysctl_vm_reserv_partpopq(SYSCTL_HANDLER
 /*
  * Reduces the given reservation's population count.  If the population count
  * becomes zero, the reservation is destroyed.  Additionally, moves the
- * reservation to the tail of the partially-populated reservation queue if the
+ * reservation to the tail of the partially populated reservation queue if the
  * population count is non-zero.
  *
  * The free page queue lock must be held.
@@ -363,7 +363,7 @@ vm_reserv_has_pindex(vm_reserv_t rv, vm_
 
 /*
  * Increases the given reservation's population count.  Moves the reservation
- * to the tail of the partially-populated reservation queue.
+ * to the tail of the partially populated reservation queue.
  *
  * The free page queue must be locked.
  */
@@ -597,7 +597,7 @@ found:
 }
 
 /*
- * Allocates a page from an existing or newly-created reservation.
+ * Allocates a page from an existing or newly created reservation.
  *
  * The page "mpred" must immediately precede the offset "pindex" within the
  * specified object.
@@ -721,12 +721,12 @@ found:
 }
 
 /*
- * Breaks the given reservation.  Except for the specified cached or free
- * page, all cached and free pages in the reservation are returned to the
- * physical memory allocator.  The reservation's population count and map are
- * reset to their initial state.
+ * Breaks the given reservation.  Except for the specified free page, all free
+ * pages in the reservation are returned to the physical memory allocator.
+ * The reservation's population count and map are reset to their initial
+ * state.
  *
- * The given reservation must not be in the partially-populated reservation
+ * The given reservation must not be in the partially populated reservation
  * queue.  The free page queue lock must be held.
  */
 static void
@@ -895,7 +895,7 @@ vm_reserv_level(vm_page_t m)
 }
 
 /*
- * Returns a reservation level if the given page belongs to a fully-populated
+ * Returns a reservation level if the given page belongs to a fully populated
  * reservation and -1 otherwise.
  */
 int
@@ -908,8 +908,8 @@ vm_reserv_level_iffullpop(vm_page_t m)
 }
 
 /*
- * Breaks the given partially-populated reservation, releasing its cached and
- * free pages to the physical memory allocator.
+ * Breaks the given partially populated reservation, releasing its free pages
+ * to the physical memory allocator.
  *
  * The free page queue lock must be held.
  */
@@ -927,9 +927,9 @@ vm_reserv_reclaim(vm_reserv_t rv)
 }
 
 /*
- * Breaks the reservation at the head of the partially-populated reservation
- * queue, releasing its cached and free pages to the physical memory
- * allocator.  Returns TRUE if a reservation is broken and FALSE otherwise.
+ * Breaks the reservation at the head of the partially populated reservation
+ * queue, releasing its free pages to the physical memory allocator.  Returns
+ * TRUE if a reservation is broken and FALSE otherwise.
  *
  * The free page queue lock must be held.
  */
@@ -947,11 +947,10 @@ vm_reserv_reclaim_inactive(void)
 }
 
 /*
- * Searches the partially-populated reservation queue for the least recently
- * active reservation with unused pages, i.e., cached or free, that satisfy the
- * given request for contiguous physical memory.  If a satisfactory reservation
- * is found, it is broken.  Returns TRUE if a reservation is broken and FALSE
- * otherwise.
+ * Searches the partially populated reservation queue for the least recently
+ * changed reservation with free pages that satisfy the given request for
+ * contiguous physical memory.  If a satisfactory reservation is found, it is
+ * broken.  Returns TRUE if a reservation is broken and FALSE otherwise.
  *
  * The free page queue lock must be held.
  */

From owner-svn-src-user@freebsd.org  Mon May  1 02:01:13 2017
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 02:01:12 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317629 - user/markj/PQ_LAUNDRY_11/sys/vm
Message-Id: <201705010201.v4121Csa091526@repo.freebsd.org>

Author: markj
Date: Mon May  1 02:01:12 2017
New Revision: 317629
URL: https://svnweb.freebsd.org/changeset/base/317629

Log:
  MFC r310720 (by alc):
  Relax the object type restrictions on vm_page_alloc_contig().

Modified:
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h
Directory Properties:
  user/markj/PQ_LAUNDRY_11/   (props changed)

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 01:58:19 2017	(r317628)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 02:01:12 2017	(r317629)
@@ -1512,13 +1512,12 @@ vm_page_alloc(vm_object_t object, vm_pin
 	vm_page_t m, mpred;
 	int flags, req_class;
 
-	mpred = 0;	/* XXX: pacify gcc */
+	mpred = NULL;	/* XXX: pacify gcc */
 	KASSERT((object != NULL) == ((req & VM_ALLOC_NOOBJ) == 0) &&
 	    (object != NULL || (req & VM_ALLOC_SBUSY) == 0) &&
 	    ((req & (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)) !=
 	    (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)),
-	    ("vm_page_alloc: inconsistent object(%p)/req(%x)", (void *)object,
-	    req));
+	    ("vm_page_alloc: inconsistent object(%p)/req(%x)", object, req));
 	if (object != NULL)
 		VM_OBJECT_ASSERT_WLOCKED(object);
 
@@ -1624,10 +1623,11 @@ vm_page_alloc(vm_object_t object, vm_pin
 				atomic_subtract_int(&vm_cnt.v_wire_count, 1);
 				m->wire_count = 0;
 			}
-			m->object = NULL;
+			KASSERT(m->object == NULL, ("page %p has object", m));
 			m->oflags = VPO_UNMANAGED;
 			m->busy_lock = VPB_UNBUSIED;
-			vm_page_free(m);
+			/* Don't change PG_ZERO. */
+			vm_page_free_toq(m);
 			return (NULL);
 		}
 
@@ -1669,6 +1669,8 @@ vm_page_alloc(vm_object_t object, vm_pin
  *	memory attribute setting for the physical pages cannot be configured
  *	to VM_MEMATTR_DEFAULT.
  *
+ *	The specified object may not contain fictitious pages.
+ *
  *	The caller must always specify an allocation class.
  *
  *	allocation classes:
@@ -1692,20 +1694,21 @@ vm_page_alloc_contig(vm_object_t object,
     u_long npages, vm_paddr_t low, vm_paddr_t high, u_long alignment,
     vm_paddr_t boundary, vm_memattr_t memattr)
 {
-	vm_page_t m, m_tmp, m_ret;
-	u_int flags;
+	vm_page_t m, m_ret, mpred;
+	u_int busy_lock, flags, oflags;
 	int req_class;
 
+	mpred = NULL;	/* XXX: pacify gcc */
 	KASSERT((object != NULL) == ((req & VM_ALLOC_NOOBJ) == 0) &&
 	    (object != NULL || (req & VM_ALLOC_SBUSY) == 0) &&
 	    ((req & (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)) !=
 	    (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)),
-	    ("vm_page_alloc: inconsistent object(%p)/req(%x)", (void *)object,
+	    ("vm_page_alloc_contig: inconsistent object(%p)/req(%x)", object,
 	    req));
 	if (object != NULL) {
 		VM_OBJECT_ASSERT_WLOCKED(object);
-		KASSERT(object->type == OBJT_PHYS,
-		    ("vm_page_alloc_contig: object %p isn't OBJT_PHYS",
+		KASSERT((object->flags & OBJ_FICTITIOUS) == 0,
+		    ("vm_page_alloc_contig: object %p has fictitious pages",
 		    object));
 	}
 	KASSERT(npages > 0, ("vm_page_alloc_contig: npages is zero"));
@@ -1717,18 +1720,34 @@ vm_page_alloc_contig(vm_object_t object,
 	if (curproc == pageproc && req_class != VM_ALLOC_INTERRUPT)
 		req_class = VM_ALLOC_SYSTEM;
 
+	if (object != NULL) {
+		mpred = vm_radix_lookup_le(&object->rtree, pindex);
+		KASSERT(mpred == NULL || mpred->pindex != pindex,
+		    ("vm_page_alloc_contig: pindex already allocated"));
+	}
+
+	/*
+	 * Can we allocate the pages without the number of free pages falling
+	 * below the lower bound for the allocation class?
+	 */
 	mtx_lock(&vm_page_queue_free_mtx);
 	if (vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages +
 	    vm_cnt.v_free_reserved || (req_class == VM_ALLOC_SYSTEM &&
 	    vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages +
 	    vm_cnt.v_interrupt_free_min) || (req_class == VM_ALLOC_INTERRUPT &&
 	    vm_cnt.v_free_count + vm_cnt.v_cache_count >= npages)) {
+		/*
+		 * Can we allocate the pages from a reservation?
+		 */
 #if VM_NRESERVLEVEL > 0
 retry:
 		if (object == NULL || (object->flags & OBJ_COLORED) == 0 ||
 		    (m_ret = vm_reserv_alloc_contig(object, pindex, npages,
-		    low, high, alignment, boundary)) == NULL)
+		    low, high, alignment, boundary, mpred)) == NULL)
 #endif
+			/*
+			 * If not, allocate them from the free page queues.
+			 */
 			m_ret = vm_phys_alloc_contig(npages, low, high,
 			    alignment, boundary);
 	} else {
@@ -1763,6 +1782,13 @@ retry:
 		flags = PG_ZERO;
 	if ((req & VM_ALLOC_NODUMP) != 0)
 		flags |= PG_NODUMP;
+	oflags = object == NULL || (object->flags & OBJ_UNMANAGED) != 0 ?
+	    VPO_UNMANAGED : 0;
+	busy_lock = VPB_UNBUSIED;
+	if ((req & (VM_ALLOC_NOBUSY | VM_ALLOC_NOOBJ | VM_ALLOC_SBUSY)) == 0)
+		busy_lock = VPB_SINGLE_EXCLUSIVER;
+	if ((req & VM_ALLOC_SBUSY) != 0)
+		busy_lock = VPB_SHARERS_WORD(1);
 	if ((req & VM_ALLOC_WIRED) != 0)
 		atomic_add_int(&vm_cnt.v_wire_count, npages);
 	if (object != NULL) {
@@ -1773,37 +1799,32 @@ retry:
 	for (m = m_ret; m < &m_ret[npages]; m++) {
 		m->aflags = 0;
 		m->flags = (m->flags | PG_NODUMP) & flags;
-		m->busy_lock = VPB_UNBUSIED;
-		if (object != NULL) {
-			if ((req & (VM_ALLOC_NOBUSY | VM_ALLOC_SBUSY)) == 0)
-				m->busy_lock = VPB_SINGLE_EXCLUSIVER;
-			if ((req & VM_ALLOC_SBUSY) != 0)
-				m->busy_lock = VPB_SHARERS_WORD(1);
-		}
+		m->busy_lock = busy_lock;
 		if ((req & VM_ALLOC_WIRED) != 0)
 			m->wire_count = 1;
-		/* Unmanaged pages don't use "act_count". */
-		m->oflags = VPO_UNMANAGED;
+		m->act_count = 0;
+		m->oflags = oflags;
 		if (object != NULL) {
-			if (vm_page_insert(m, object, pindex)) {
-				if (vm_paging_needed())
-					pagedaemon_wakeup();
+			if (vm_page_insert_after(m, object, pindex, mpred)) {
+				pagedaemon_wakeup();
 				if ((req & VM_ALLOC_WIRED) != 0)
-					atomic_subtract_int(&vm_cnt.v_wire_count,
-					    npages);
-				for (m_tmp = m, m = m_ret;
-				    m < &m_ret[npages]; m++) {
-					if ((req & VM_ALLOC_WIRED) != 0)
+					atomic_subtract_int(
+					    &vm_cnt.v_wire_count, npages);
+				KASSERT(m->object == NULL,
+				    ("page %p has object", m));
+				mpred = m;
+				for (m = m_ret; m < &m_ret[npages]; m++) {
+					if (m <= mpred &&
+					    (req & VM_ALLOC_WIRED) != 0)
 						m->wire_count = 0;
-					if (m >= m_tmp) {
-						m->object = NULL;
-						m->oflags |= VPO_UNMANAGED;
-					}
+					m->oflags = VPO_UNMANAGED;
 					m->busy_lock = VPB_UNBUSIED;
-					vm_page_free(m);
+					/* Don't change PG_ZERO. */
+					vm_page_free_toq(m);
 				}
 				return (NULL);
 			}
+			mpred = m;
 		} else
 			m->pindex = pindex;
 		if (memattr != VM_MEMATTR_DEFAULT)
@@ -1822,6 +1843,7 @@ static void
 vm_page_alloc_check(vm_page_t m)
 {
 
+	KASSERT(m->object == NULL, ("page %p has object", m));
 	KASSERT(m->queue == PQ_NONE,
 	    ("page %p has unexpected queue %d", m, m->queue));
 	KASSERT(m->wire_count == 0, ("page %p is wired", m));

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c	Mon May  1 01:58:19 2017	(r317628)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.c	Mon May  1 02:01:12 2017	(r317629)
@@ -404,14 +404,18 @@ vm_reserv_populate(vm_reserv_t rv, int i
  * physical address boundary that is a multiple of that value.  Both
  * "alignment" and "boundary" must be a power of two.
  *
+ * The page "mpred" must immediately precede the offset "pindex" within the
+ * specified object.
+ *
  * The object and free page queue must be locked.
  */
 vm_page_t
 vm_reserv_alloc_contig(vm_object_t object, vm_pindex_t pindex, u_long npages,
-    vm_paddr_t low, vm_paddr_t high, u_long alignment, vm_paddr_t boundary)
+    vm_paddr_t low, vm_paddr_t high, u_long alignment, vm_paddr_t boundary,
+    vm_page_t mpred)
 {
 	vm_paddr_t pa, size;
-	vm_page_t m, m_ret, mpred, msucc;
+	vm_page_t m, m_ret, msucc;
 	vm_pindex_t first, leftcap, rightcap;
 	vm_reserv_t rv;
 	u_long allocpages, maxpages, minpages;
@@ -448,10 +452,11 @@ vm_reserv_alloc_contig(vm_object_t objec
 	/*
 	 * Look for an existing reservation.
 	 */
-	mpred = vm_radix_lookup_le(&object->rtree, pindex);
 	if (mpred != NULL) {
+		KASSERT(mpred->object == object,
+		    ("vm_reserv_alloc_contig: object doesn't contain mpred"));
 		KASSERT(mpred->pindex < pindex,
-		    ("vm_reserv_alloc_contig: pindex already allocated"));
+		    ("vm_reserv_alloc_contig: mpred doesn't precede pindex"));
 		rv = vm_reserv_from_page(mpred);
 		if (rv->object == object && vm_reserv_has_pindex(rv, pindex))
 			goto found;
@@ -460,7 +465,7 @@ vm_reserv_alloc_contig(vm_object_t objec
 		msucc = TAILQ_FIRST(&object->memq);
 	if (msucc != NULL) {
 		KASSERT(msucc->pindex > pindex,
-		    ("vm_reserv_alloc_contig: pindex already allocated"));
+		    ("vm_reserv_alloc_contig: msucc doesn't succeed pindex"));
 		rv = vm_reserv_from_page(msucc);
 		if (rv->object == object && vm_reserv_has_pindex(rv, pindex))
 			goto found;

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h	Mon May  1 01:58:19 2017	(r317628)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_reserv.h	Mon May  1 02:01:12 2017	(r317629)
@@ -47,7 +47,7 @@
  */
 vm_page_t	vm_reserv_alloc_contig(vm_object_t object, vm_pindex_t pindex,
 		    u_long npages, vm_paddr_t low, vm_paddr_t high,
-		    u_long alignment, vm_paddr_t boundary);
+		    u_long alignment, vm_paddr_t boundary, vm_page_t mpred);
 vm_page_t	vm_reserv_alloc_page(vm_object_t object, vm_pindex_t pindex,
 		    vm_page_t mpred);
 void		vm_reserv_break_all(vm_object_t object);

From owner-svn-src-user@freebsd.org  Mon May  1 02:08:45 2017
From: Mark Johnston <markj@FreeBSD.org>
Date: Mon, 1 May 2017 02:08:44 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317630 - user/markj/PQ_LAUNDRY_11/sys/vm
Message-Id: <201705010208.v4128i3T091849@repo.freebsd.org>

Author: markj
Date: Mon May  1 02:08:44 2017
New Revision: 317630
URL: https://svnweb.freebsd.org/changeset/base/317630

Log:
  Restore VM_ALLOC_IF{,NOT}CACHED.
  
  VM_ALLOC_IFCACHED requests always fail, and VM_ALLOC_IFNOTCACHED has no
  effect.

Modified:
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
  user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 02:01:12 2017	(r317629)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.c	Mon May  1 02:08:44 2017	(r317630)
@@ -1521,6 +1521,9 @@ vm_page_alloc(vm_object_t object, vm_pin
 	if (object != NULL)
 		VM_OBJECT_ASSERT_WLOCKED(object);
 
+	if (__predict_false((req & VM_ALLOC_IFCACHED) != 0))
+		return (NULL);
+
 	req_class = req & VM_ALLOC_CLASS_MASK;
 
 	/*

Modified: user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h
==============================================================================
--- user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 02:01:12 2017	(r317629)
+++ user/markj/PQ_LAUNDRY_11/sys/vm/vm_page.h	Mon May  1 02:08:44 2017	(r317630)
@@ -405,6 +405,8 @@ vm_page_t PHYS_TO_VM_PAGE(vm_paddr_t pa)
 #define	VM_ALLOC_ZERO		0x0040	/* (acfg) Try to obtain a zeroed page */
 #define	VM_ALLOC_NOOBJ		0x0100	/* (acg) No associated object */
 #define	VM_ALLOC_NOBUSY		0x0200	/* (acg) Do not busy the page */
+#define	VM_ALLOC_IFCACHED	0x0400
+#define	VM_ALLOC_IFNOTCACHED	0x0800
 #define	VM_ALLOC_IGN_SBUSY	0x1000	/* (g) Ignore shared busy flag */
 #define	VM_ALLOC_NODUMP		0x2000	/* (ag) don't include in dump */
 #define	VM_ALLOC_SBUSY		0x4000	/* (acg) Shared busy the page */

From owner-svn-src-user@freebsd.org  Mon May  1 07:44:41 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Mon, 1 May 2017 07:44:34 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317638 - user/pho/stress2/misc
Message-Id: <201705010744.v417iY0b029875@repo.freebsd.org>

Author: pho
Date: Mon May  1 07:44:34 2017
New Revision: 317638
URL: https://svnweb.freebsd.org/changeset/base/317638

Log:
  Style.
  Only use braces for shell variables when needed.
  
  Sponsored by:	Dell EMC Isilon
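
  The brace rule in the log can be illustrated with a short sketch (the
  variable names here are hypothetical, chosen only to mirror the
  md${m}$part pattern appearing in the diffs below):

```shell
m=2
part=a
# No braces needed: "$" terminates the first name, so $m$part is unambiguous.
dev="md$m$part"
# Braces needed: without them the shell would expand a variable named "mpart".
labeled="md${m}part"
echo "$dev $labeled"
```

  Running the sketch prints "md2a md2part", showing why braces are only
  required when a name character immediately follows the expansion.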

Modified:
  user/pho/stress2/misc/alternativeFlushPath.sh
  user/pho/stress2/misc/backingstore.sh
  user/pho/stress2/misc/backingstore2.sh
  user/pho/stress2/misc/backingstore3.sh
  user/pho/stress2/misc/core4.sh
  user/pho/stress2/misc/core5.sh
  user/pho/stress2/misc/crossmp3.sh
  user/pho/stress2/misc/crossmp4.sh
  user/pho/stress2/misc/crossmp5.sh
  user/pho/stress2/misc/crossmp8.sh
  user/pho/stress2/misc/crossmp9.sh
  user/pho/stress2/misc/dfull.sh
  user/pho/stress2/misc/ext2fs2.sh
  user/pho/stress2/misc/extattr.sh
  user/pho/stress2/misc/extattr_set_fd.sh
  user/pho/stress2/misc/fdgrowtable.sh
  user/pho/stress2/misc/fragments.sh
  user/pho/stress2/misc/fs.sh
  user/pho/stress2/misc/ftruncate2.sh
  user/pho/stress2/misc/fuse.sh
  user/pho/stress2/misc/fuzz.sh
  user/pho/stress2/misc/linger2.sh
  user/pho/stress2/misc/linger3.sh
  user/pho/stress2/misc/linger4.sh
  user/pho/stress2/misc/md.sh
  user/pho/stress2/misc/md3.sh
  user/pho/stress2/misc/mmap4.sh
  user/pho/stress2/misc/mount2.sh
  user/pho/stress2/misc/mountro.sh
  user/pho/stress2/misc/mountro2.sh
  user/pho/stress2/misc/mountro3.sh
  user/pho/stress2/misc/msdos.sh
  user/pho/stress2/misc/msdos2.sh
  user/pho/stress2/misc/msdos3.sh
  user/pho/stress2/misc/msdos6.sh
  user/pho/stress2/misc/msdos7.sh
  user/pho/stress2/misc/newfs3.sh
  user/pho/stress2/misc/nfs2.sh
  user/pho/stress2/misc/nullfs11.sh
  user/pho/stress2/misc/pfl.sh
  user/pho/stress2/misc/quota1.sh
  user/pho/stress2/misc/quota10.sh
  user/pho/stress2/misc/quota2.sh
  user/pho/stress2/misc/quota3.sh
  user/pho/stress2/misc/quota4.sh
  user/pho/stress2/misc/quota7.sh
  user/pho/stress2/misc/quota8.sh
  user/pho/stress2/misc/quota9.sh
  user/pho/stress2/misc/rename11.sh
  user/pho/stress2/misc/rename3.sh
  user/pho/stress2/misc/rename5.sh
  user/pho/stress2/misc/rename6.sh
  user/pho/stress2/misc/rename7.sh
  user/pho/stress2/misc/rename8.sh
  user/pho/stress2/misc/rename9.sh
  user/pho/stress2/misc/snap2.sh
  user/pho/stress2/misc/snap8.sh
  user/pho/stress2/misc/softupdate.sh
  user/pho/stress2/misc/suj10.sh
  user/pho/stress2/misc/suj18.sh
  user/pho/stress2/misc/suj19.sh
  user/pho/stress2/misc/suj20.sh
  user/pho/stress2/misc/suj21.sh
  user/pho/stress2/misc/suj22.sh
  user/pho/stress2/misc/suj24.sh
  user/pho/stress2/misc/suj30.sh
  user/pho/stress2/misc/suj32.sh
  user/pho/stress2/misc/suj34.sh
  user/pho/stress2/misc/symlink.sh
  user/pho/stress2/misc/symlink2.sh
  user/pho/stress2/misc/tmpfs4.sh
  user/pho/stress2/misc/tmpfs8.sh
  user/pho/stress2/misc/tmpfs9.sh
  user/pho/stress2/misc/truncate3.sh
  user/pho/stress2/misc/truncate5.sh
  user/pho/stress2/misc/umount.sh
  user/pho/stress2/misc/umountf.sh
  user/pho/stress2/misc/umountf2.sh
  user/pho/stress2/misc/umountf4.sh
  user/pho/stress2/misc/umountf5.sh
  user/pho/stress2/misc/union.sh
  user/pho/stress2/misc/vunref2.sh
  user/pho/stress2/misc/zfs.sh
  user/pho/stress2/misc/zfs2.sh
  user/pho/stress2/misc/zfs3.sh
  user/pho/stress2/misc/zfs4.sh
  user/pho/stress2/misc/zfs6.sh

Modified: user/pho/stress2/misc/alternativeFlushPath.sh
==============================================================================
--- user/pho/stress2/misc/alternativeFlushPath.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/alternativeFlushPath.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -41,7 +41,7 @@
 . ../default.cfg
 
 odir=`pwd`
-dir=${RUNDIR}/alternativeFlushPath
+dir=$RUNDIR/alternativeFlushPath
 
 [ -d $dir ] && find $dir -type f | xargs rm
 rm -rf $dir
@@ -58,6 +58,7 @@ done
 wait
 sysctl vfs.altbufferflushes
 
+cd $odir
 rm -rf /tmp/alternativeFlushPath $dir
 
 exit

Modified: user/pho/stress2/misc/backingstore.sh
==============================================================================
--- user/pho/stress2/misc/backingstore.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/backingstore.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -39,7 +39,7 @@ export here=`pwd`
 
 m=$mdstart
 
-mount | grep "${mntpoint}" | grep -q md$m && umount ${mntpoint}$m
+mount | grep "$mntpoint" | grep -q md$m && umount ${mntpoint}$m
 mdconfig -l | grep -q md$m &&  mdconfig -d -u $m
 
 dede $D$m 100m 1 || exit 1
@@ -47,23 +47,23 @@ dede $D$m 100m 1 || exit 1
 mdconfig -a -t vnode -f $D$m -u $m
 
 bsdlabel -w md$m auto
-newfs md${m}${part} > /dev/null 2>&1
+newfs md${m}$part > /dev/null 2>&1
 [ -d ${mntpoint}$m ] || mkdir -p ${mntpoint}$m
-mount $opt /dev/md${m}${part} ${mntpoint}$m
+mount $opt /dev/md${m}$part ${mntpoint}$m
 
 n=$m
 m=$((m + 1))
 
-mount | grep "${mntpoint}" | grep -q md$m && umount ${mntpoint}$m
+mount | grep "$mntpoint" | grep -q md$m && umount ${mntpoint}$m
 mdconfig -l | grep -q md$m &&  mdconfig -d -u $m
 
 truncate -s 500M ${mntpoint}$n/diskimage
 mdconfig -a -t vnode -f ${mntpoint}$n/diskimage -u $m
 
 bsdlabel -w md$m auto
-newfs md${m}${part} > /dev/null 2>&1
+newfs md${m}$part > /dev/null 2>&1
 [ -d ${mntpoint}$m ] || mkdir -p ${mntpoint}$m
-mount $opt /dev/md${m}${part} ${mntpoint}$m
+mount $opt /dev/md${m}$part ${mntpoint}$m
 
 export RUNDIR=${mntpoint}$m/stressX
 ../testcases/rw/rw -t 5m -i 200 -h -n

Modified: user/pho/stress2/misc/backingstore2.sh
==============================================================================
--- user/pho/stress2/misc/backingstore2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/backingstore2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -42,9 +42,9 @@ export here=`pwd`
 
 m1=$mdstart
 m2=$((m1 + 1))
-mount | grep "${mntpoint}" | grep -q md$m2 && umount ${mntpoint}$m2
+mount | grep "$mntpoint" | grep -q md$m2 && umount ${mntpoint}$m2
 mdconfig -l | grep -q md$m2 &&  mdconfig -d -u $m2
-mount | grep "${mntpoint}" | grep -q md$m1 && umount ${mntpoint}$m1
+mount | grep "$mntpoint" | grep -q md$m1 && umount ${mntpoint}$m1
 mdconfig -l | grep -q md$m1 &&  mdconfig -d -u $m1
 [ -d ${mntpoint}$m1 ] || mkdir -p ${mntpoint}$m1
 [ -d ${mntpoint}$m2 ] || mkdir -p ${mntpoint}$m2
@@ -54,22 +54,22 @@ dede $D$m1 100m 1 || exit 1
 mdconfig -a -t vnode -f $D$m1 -u $m1
 
 bsdlabel -w md$m1 auto
-newfs md${m1}${part} > /dev/null 2>&1
-mount /dev/md${m1}${part} ${mntpoint}$m1
+newfs md${m1}$part > /dev/null 2>&1
+mount /dev/md${m1}$part ${mntpoint}$m1
 
 
 truncate -s 500M ${mntpoint}$m1/diskimage
 mdconfig -a -t vnode -f ${mntpoint}$m1/diskimage -u $m2
 
 bsdlabel -w md$m2 auto
-newfs md${m2}${part} > /dev/null 2>&1
-mount /dev/md${m2}${part} ${mntpoint}$m2
+newfs md${m2}$part > /dev/null 2>&1
+mount /dev/md${m2}$part ${mntpoint}$m2
 
 # Reversed umount sequence:
-umount -f /dev/md${m1}${part}
-umount -f /dev/md${m2}${part}
+umount -f /dev/md${m1}$part
+umount -f /dev/md${m2}$part
 
-mount | grep "${mntpoint}" | grep -q md$m2 && umount ${mntpoint}$m2
+mount | grep "$mntpoint" | grep -q md$m2 && umount ${mntpoint}$m2
 mdconfig -l | grep -q md$m2 &&  mdconfig -d -u $m2
-mount | grep "${mntpoint}" | grep -q md$m1 && umount ${mntpoint}$m1
+mount | grep "$mntpoint" | grep -q md$m1 && umount ${mntpoint}$m1
 mdconfig -l | grep -q md$m1 &&  mdconfig -d -u $m1

Modified: user/pho/stress2/misc/backingstore3.sh
==============================================================================
--- user/pho/stress2/misc/backingstore3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/backingstore3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -41,9 +41,9 @@ export here=`pwd`
 
 m1=$mdstart
 m2=$((m1 + 1))
-mount | grep "${mntpoint}" | grep -q md$m2 && umount ${mntpoint}$m2
+mount | grep "$mntpoint" | grep -q md$m2 && umount ${mntpoint}$m2
 mdconfig -l | grep -q md$m2 &&  mdconfig -d -u $m2
-mount | grep "${mntpoint}" | grep -q md$m1 && umount ${mntpoint}$m1
+mount | grep "$mntpoint" | grep -q md$m1 && umount ${mntpoint}$m1
 mdconfig -l | grep -q md$m1 &&  mdconfig -d -u $m1
 [ -d ${mntpoint}$m1 ] || mkdir -p ${mntpoint}$m1
 [ -d ${mntpoint}$m2 ] || mkdir -p ${mntpoint}$m2
@@ -53,24 +53,24 @@ dede $D$m1 25m 1 || exit 1
 mdconfig -a -t vnode -f $D$m1 -u $m1
 
 bsdlabel -w md$m1 auto
-newfs md${m1}${part} > /dev/null 2>&1
-mount /dev/md${m1}${part} ${mntpoint}$m1
+newfs md${m1}$part > /dev/null 2>&1
+mount /dev/md${m1}$part ${mntpoint}$m1
 
 
 truncate -s 500M ${mntpoint}$m1/diskimage
 mdconfig -a -t vnode -f ${mntpoint}$m1/diskimage -u $m2
 
 bsdlabel -w md$m2 auto
-newfs md${m2}${part} > /dev/null 2>&1
-mount /dev/md${m2}${part} ${mntpoint}$m2
+newfs md${m2}$part > /dev/null 2>&1
+mount /dev/md${m2}$part ${mntpoint}$m2
 
 dd if=/dev/zero of=${mntpoint}$m2/file bs=1m > /dev/null 2>&1
 
 # Reversed umount sequence:
-umount -f /dev/md${m1}${part}
-umount -f /dev/md${m2}${part}
+umount -f /dev/md${m1}$part
+umount -f /dev/md${m2}$part
 
-mount | grep "${mntpoint}" | grep -q md$m2 && umount ${mntpoint}$m2
+mount | grep "$mntpoint" | grep -q md$m2 && umount ${mntpoint}$m2
 mdconfig -l | grep -q md$m2 &&  mdconfig -d -u $m2
-mount | grep "${mntpoint}" | grep -q md$m1 && umount ${mntpoint}$m1
+mount | grep "$mntpoint" | grep -q md$m1 && umount ${mntpoint}$m1
 mdconfig -l | grep -q md$m1 &&  mdconfig -d -u $m1

Modified: user/pho/stress2/misc/core4.sh
==============================================================================
--- user/pho/stress2/misc/core4.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/core4.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -53,7 +53,7 @@ mount | grep -q "$mntpoint" && umount $m
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 2g -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 
 newfs $newfs_flags md${mdstart}$part > /dev/null
 for i in `jot 20`; do

Modified: user/pho/stress2/misc/core5.sh
==============================================================================
--- user/pho/stress2/misc/core5.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/core5.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -94,7 +94,7 @@ mount | grep -q "on $mntpoint " && umoun
 [ -c /dev/md$mdstart ] && mdconfig -d -u $mdstart
 
 mdconfig -a -t malloc -s 1g -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 
 newfs -b 4096 -f 512 -i 2048 md${mdstart}$part > /dev/null
 mount -o async /dev/md${mdstart}$part $mntpoint || exit 1

Modified: user/pho/stress2/misc/crossmp3.sh
==============================================================================
--- user/pho/stress2/misc/crossmp3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/crossmp3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -86,7 +86,7 @@ else
 		# The test: Parallel mount and unmounts
 		for i in `jot 3`; do
 			m=$1
-			mount /dev/md${m}${part} ${mntpoint}$m &&
+			mount /dev/md${m}$part ${mntpoint}$m &&
 			   chmod 777 ${mntpoint}$m
 			export RUNDIR=${mntpoint}$m/stressX
 			export CTRLDIR=${mntpoint}$m/stressX.control

Modified: user/pho/stress2/misc/crossmp4.sh
==============================================================================
--- user/pho/stress2/misc/crossmp4.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/crossmp4.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -49,7 +49,7 @@ size=$((usermem / 1024 / 1024))
 mounts=$N		# Number of parallel scripts
 
 if [ $# -eq 0 ]; then
-	mount | grep "$mntpoint" | grep -q md && umount ${mntpoint}
+	mount | grep "$mntpoint" | grep -q md && umount $mntpoint
 	mdconfig -l | grep -q md$mdstart &&  mdconfig -d -u $mdstart
 
 	mdconfig -a -t swap -s ${size}m -u $mdstart

Modified: user/pho/stress2/misc/crossmp5.sh
==============================================================================
--- user/pho/stress2/misc/crossmp5.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/crossmp5.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -79,7 +79,7 @@ else
 		# The test: Parallel mount and unmount
 		m=$1
 		for i in `jot 200`; do
-			mount /dev/md${m}${part} ${mntpoint}$m
+			mount /dev/md${m}$part ${mntpoint}$m
 			chmod 777 ${mntpoint}$m
 			l=`jot -r 1 65535`
 			dd if=/dev/zero of=$mntpoint/$i bs=$l count=100 \

Modified: user/pho/stress2/misc/crossmp8.sh
==============================================================================
--- user/pho/stress2/misc/crossmp8.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/crossmp8.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -97,7 +97,7 @@ else
 		start=`date '+%s'`
 		while [ $((`date '+%s'` - start)) -lt 300 ]; do
 			m=$1
-			mount /dev/md${m}${part} ${mntpoint}$m &&
+			mount /dev/md${m}$part ${mntpoint}$m &&
 			   chmod 777 ${mntpoint}$m
 			export RUNDIR=${mntpoint}$m/stressX
 			export CTRLDIR=${mntpoint}$m/stressX.control

Modified: user/pho/stress2/misc/crossmp9.sh
==============================================================================
--- user/pho/stress2/misc/crossmp9.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/crossmp9.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -80,7 +80,7 @@ else
 		start=`date '+%s'`
 		while [ $((`date '+%s'` - start)) -lt 300 ] ; do
 			m=$1
-			mount /dev/md${m}${part} ${mntpoint}$m
+			mount /dev/md${m}$part ${mntpoint}$m
 			while mount | grep -qw ${mntpoint}$m; do
 				opt=$([ $((`date '+%s'` % 2)) -eq 0 ] && echo "-f")
 				umount $opt ${mntpoint}$m > /dev/null 2>&1

Modified: user/pho/stress2/misc/dfull.sh
==============================================================================
--- user/pho/stress2/misc/dfull.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/dfull.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -43,7 +43,7 @@ bsdlabel -w md$mdstart auto
 newfs $newfs_flags md${mdstart}$part > /dev/null
 mount /dev/md${mdstart}$part $mntpoint
 
-export RUNDIR=${mntpoint}/stressX
+export RUNDIR=$mntpoint/stressX
 set `df -ik $mntpoint | tail -1 | awk '{print $4,$7}'`
 export KBLOCKS=$(($1 * 10))
 export INODES=$(($2 * 10))

Modified: user/pho/stress2/misc/ext2fs2.sh
==============================================================================
--- user/pho/stress2/misc/ext2fs2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/ext2fs2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -37,11 +37,11 @@
 
 # Uses mke2fs from sysutils/e2fsprogs
 [ -x /usr/local/sbin/mke2fs ] || exit 0
-mount | grep "$mntpoint" | grep -q md$mdstart && umount -f ${mntpoint}
-mdconfig -l | grep -q ${mdstart} &&  mdconfig -d -u $mdstart
+mount | grep "$mntpoint" | grep -q md$mdstart && umount -f $mntpoint
+mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1g -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 mke2fs /dev/md${mdstart}a
 # No panic seen when disabling hashed b-tree lookup for large directories
 # tune2fs -O ^dir_index /dev/md${mdstart}$part

Modified: user/pho/stress2/misc/extattr.sh
==============================================================================
--- user/pho/stress2/misc/extattr.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/extattr.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -43,25 +43,25 @@ mycc -o extattr -Wall extattr.c
 rm -f extattr.c
 cd $odir
 
-mount | grep "${mntpoint}" | grep -q md${mdstart}${part} && umount $mntpoint
+mount | grep "$mntpoint" | grep -q md${mdstart}$part && umount $mntpoint
 mdconfig -l | grep -q md$mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 20m -u $mdstart
 bsdlabel -w md$mdstart auto
 
-newfs -O 2 md${mdstart}${part} > /dev/null
-mount /dev/md${mdstart}${part} $mntpoint
+newfs -O 2 md${mdstart}$part > /dev/null
+mount /dev/md${mdstart}$part $mntpoint
 
-mkdir -p ${mntpoint}/.attribute/system
-cd ${mntpoint}/.attribute/system
+mkdir -p $mntpoint/.attribute/system
+cd $mntpoint/.attribute/system
 
 extattrctl initattr -p . 388 posix1e.acl_access
 extattrctl initattr -p . 388 posix1e.acl_default
 cd /
 umount /mnt
-tunefs -a enable /dev/md${mdstart}${part}
-mount /dev/md${mdstart}${part} $mntpoint
-mount | grep md${mdstart}${part}
+tunefs -a enable /dev/md${mdstart}$part
+mount /dev/md${mdstart}$part $mntpoint
+mount | grep md${mdstart}$part
 
 touch $mntpoint/acl-test
 setfacl -b $mntpoint/acl-test

Modified: user/pho/stress2/misc/extattr_set_fd.sh
==============================================================================
--- user/pho/stress2/misc/extattr_set_fd.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/extattr_set_fd.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -44,9 +44,9 @@ rm -f extattr_set_fd.c
 mount | grep -q "$mntpoint" && umount $mntpoint
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 mdconfig -a -t swap -s 1g -u $mdstart
-bsdlabel -w md${mdstart} auto
-newfs $newfs_flags md${mdstart}${part} > /dev/null
-mount /dev/md${mdstart}${part} $mntpoint
+bsdlabel -w md$mdstart auto
+newfs $newfs_flags md${mdstart}$part > /dev/null
+mount /dev/md${mdstart}$part $mntpoint
 chmod 777 $mntpoint
 
 (cd $mntpoint; /tmp/extattr_set_fd)

Modified: user/pho/stress2/misc/fdgrowtable.sh
==============================================================================
--- user/pho/stress2/misc/fdgrowtable.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/fdgrowtable.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -43,7 +43,7 @@ mycc -o fdgrowtable -Wall -Wextra -O2 fd
 rm -f fdgrowtable.c
 cd $here
 
-su ${testuser} -c "/tmp/fdgrowtable $max" &
+su $testuser -c "/tmp/fdgrowtable $max" &
 while kill -0 $! 2>/dev/null; do
 	../testcases/swap/swap -t 2m -i 40 -h
 done

Modified: user/pho/stress2/misc/fragments.sh
==============================================================================
--- user/pho/stress2/misc/fragments.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/fragments.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -47,7 +47,7 @@ mycc -o fragments -Wall -Wextra -O2 -g f
 rm -f fragments.c
 cd $here
 
-mount | grep "$mntpoint" | grep -q md$mdstart && umount -f ${mntpoint}
+mount | grep "$mntpoint" | grep -q md$mdstart && umount -f $mntpoint
 mdconfig -l | grep -q md$mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1g -u $mdstart

Modified: user/pho/stress2/misc/fs.sh
==============================================================================
--- user/pho/stress2/misc/fs.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/fs.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -41,7 +41,7 @@ ftest () {	# option, disk full
 	mount /dev/md${mdstart}$part $mntpoint
 	chmod 777 $mntpoint
 
-	export RUNDIR=${mntpoint}/stressX
+	export RUNDIR=$mntpoint/stressX
 	export runRUNTIME=2m
 	disk=$(($2 + 1))	# 1 or 2
 	set `df -ik $mntpoint | tail -1 | awk '{print $4,$7}'`
@@ -59,11 +59,11 @@ ftest () {	# option, disk full
 }
 
 
-mount | grep "${mntpoint}" | grep md${mdstart}${part} > /dev/null && umount ${mntpoint}
-mdconfig -l | grep md${mdstart} > /dev/null &&  mdconfig -d -u ${mdstart}
+mount | grep "$mntpoint" | grep md${mdstart}$part > /dev/null && umount $mntpoint
+mdconfig -l | grep md$mdstart > /dev/null &&  mdconfig -d -u $mdstart
 
-mdconfig -a -t swap -s 20m -u ${mdstart}
-bsdlabel -w md${mdstart} auto
+mdconfig -a -t swap -s 20m -u $mdstart
+bsdlabel -w md$mdstart auto
 
 ftest "-O 1"  0	# ufs1
 ftest "-O 1"  1	# ufs1, disk full
@@ -74,4 +74,4 @@ ftest "-U"    1	# ufs2 + soft update, di
 ftest "-j"    0	# ufs2 + SU+J
 ftest "-j"    1	# ufs2 + SU+J, disk full
 
-mdconfig -d -u ${mdstart}
+mdconfig -d -u $mdstart

Modified: user/pho/stress2/misc/ftruncate2.sh
==============================================================================
--- user/pho/stress2/misc/ftruncate2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/ftruncate2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -28,7 +28,7 @@
 # $FreeBSD$
 #
 
-# A fuzz test. Most likely a disk full / FFS issue.
+# A fuzz test triggered a failed block allocation unwinding problem.
 
 # "panic: ffs_blkfree_cg: freeing free block" seen:
 # https://people.freebsd.org/~pho/stress/log/kostik923.txt
@@ -49,7 +49,7 @@ echo "Expect: \"/mnt: write failed, file
 mount | grep $mntpoint | grep -q "on $mntpoint " && umount -f $mntpoint
 mdconfig -l | grep -q md$mdstart &&  mdconfig -d -u $mdstart
 mdconfig -a -t swap -s 1g -u $mdstart || exit 1
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 newfs md${mdstart}$part > /dev/null		# Non SU panics
 mount /dev/md${mdstart}$part $mntpoint
 

Modified: user/pho/stress2/misc/fuse.sh
==============================================================================
--- user/pho/stress2/misc/fuse.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/fuse.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -43,9 +43,9 @@ mount | grep -q "$mntpoint" && umount $m
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1g -u $mdstart
-mkntfs -Ff /dev/md${mdstart} > /dev/null 2>&1 || exit 1
+mkntfs -Ff /dev/md$mdstart > /dev/null 2>&1 || exit 1
 
-$MOUNT /dev/md${mdstart} $mntpoint || exit 1
+$MOUNT /dev/md$mdstart $mntpoint || exit 1
 
 export RUNDIR=$mntpoint/stressX
 export runRUNTIME=20m

Modified: user/pho/stress2/misc/fuzz.sh
==============================================================================
--- user/pho/stress2/misc/fuzz.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/fuzz.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -84,7 +84,7 @@ tst() {
          break
       fi
    done
-   mdconfig -d -u ${mdstart}
+   mdconfig -d -u $mdstart
    rm -f $D
 }
 

Modified: user/pho/stress2/misc/linger2.sh
==============================================================================
--- user/pho/stress2/misc/linger2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/linger2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -54,7 +54,7 @@ set `df -i $mntpoint | tail -1 | awk '{p
 
 min=24
 [ -r $mntpoint/.sujournal ] && { size=88; min=8232; }
-if ! su ${testuser} -c "cd $mntpoint; /tmp/linger2 $size 2>/dev/null"; then
+if ! su $testuser -c "cd $mntpoint; /tmp/linger2 $size 2>/dev/null"; then
 	r=`df -i $mntpoint | head -1`
 	echo "         $r"
 	for i in `jot 12`; do

Modified: user/pho/stress2/misc/linger3.sh
==============================================================================
--- user/pho/stress2/misc/linger3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/linger3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -59,7 +59,7 @@ su $testuser -c "/tmp/linger3"
 cd $here
 
 while mount | grep -q $mntpoint; do
-	umount ${mntpoint} 2> /dev/null || sleep 1
+	umount $mntpoint 2> /dev/null || sleep 1
 done
 mdconfig -d -u $mdstart
 rm -f /tmp/linger3

Modified: user/pho/stress2/misc/linger4.sh
==============================================================================
--- user/pho/stress2/misc/linger4.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/linger4.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -60,7 +60,7 @@ su $testuser -c "/tmp/linger4" ||
 cd $here
 
 while mount | grep -q $mntpoint; do
-	umount ${mntpoint} 2> /dev/null || sleep 1
+	umount $mntpoint 2> /dev/null || sleep 1
 done
 mdconfig -d -u $mdstart
 rm -f /tmp/linger4

Modified: user/pho/stress2/misc/md.sh
==============================================================================
--- user/pho/stress2/misc/md.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/md.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -35,15 +35,15 @@
 
 . ../default.cfg
 
-mount | grep "$mntpoint" | grep md${mdstart}${part} > /dev/null && umount $mntpoint
+mount | grep "$mntpoint" | grep md${mdstart}$part > /dev/null && umount $mntpoint
 [ -c /dev/md$mdstart ] &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 2m -u $mdstart
-bsdlabel -w md${mdstart} auto
-newfs md${mdstart}${part} > /dev/null
+bsdlabel -w md$mdstart auto
+newfs md${mdstart}$part > /dev/null
 mount /dev/md${mdstart}$part $mntpoint
 
-export RUNDIR=${mntpoint}/stressX
+export RUNDIR=$mntpoint/stressX
 export KBLOCKS=30000		# Exaggerate disk capacity
 export INODES=8000
 

Modified: user/pho/stress2/misc/md3.sh
==============================================================================
--- user/pho/stress2/misc/md3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/md3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -38,10 +38,10 @@ mount | grep -q "$mntpoint" && umount $m
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1400m -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 
-newfs $newfs_flags md5${part} > /dev/null
-mount /dev/md5${part} $mntpoint
+newfs $newfs_flags md5$part > /dev/null
+mount /dev/md5$part $mntpoint
 
 # Stop FS "out of inodes" problem by only using 70%
 set `df -ik /mnt | tail -1 | awk '{print $4,$7}'`

Modified: user/pho/stress2/misc/mmap4.sh
==============================================================================
--- user/pho/stress2/misc/mmap4.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/mmap4.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -48,7 +48,7 @@ mount | grep -q "$mntpoint" && umount $m
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 40m -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 
 newfs $newfs_flags md${mdstart}$part > /dev/null
 mount /dev/md${mdstart}$part $mntpoint

Modified: user/pho/stress2/misc/mount2.sh
==============================================================================
--- user/pho/stress2/misc/mount2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/mount2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -42,14 +42,14 @@ if [ $# -eq 0 ]; then
 	for i in `jot $mounts`; do
 		m=$(( i + mdstart - 1 ))
 		[ ! -d ${mntpoint}$m ] && mkdir ${mntpoint}$m
-		mount | grep "${mntpoint}" | grep -q md$m &&
+		mount | grep "$mntpoint" | grep -q md$m &&
 		    umount ${mntpoint}$m
 		mdconfig -l | grep -q md$m &&  mdconfig -d -u $m
 
 		dd if=/dev/zero of=$D$m bs=1m count=1 > /dev/null 2>&1
 		mdconfig -a -t vnode -f $D$m -u $m || { rm -f $D$m; exit 1; }
 		bsdlabel -w md$m auto
-		newfs md${m}${part} > /dev/null 2>&1
+		newfs md${m}$part > /dev/null 2>&1
 	done
 
 	# start the parallel tests
@@ -74,7 +74,7 @@ else
 	for i in `jot 1024`; do
 		m=$1
 		opt=`[ $(( m % 2 )) -eq 0 ] && echo -f`
-		mount /dev/md${m}${part} ${mntpoint}$m
+		mount /dev/md${m}$part ${mntpoint}$m
 		while mount | grep -q ${mntpoint}$m; do
 			umount $opt ${mntpoint}$m > /dev/null 2>&1
 		done

Modified: user/pho/stress2/misc/mountro.sh
==============================================================================
--- user/pho/stress2/misc/mountro.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/mountro.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -36,19 +36,19 @@
 D=$diskimage
 dede $D 1m 128 || exit
 
-mount | grep "$mntpoint"    | grep -q /md  && umount -f ${mntpoint}
-mdconfig -l | grep -q ${mdstart}  &&  mdconfig -d -u $mdstart
+mount | grep "$mntpoint"    | grep -q /md  && umount -f $mntpoint
+mdconfig -l | grep -q $mdstart  &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t vnode -f $D -u $mdstart || { rm -f $D; exit 1; }
 
-bsdlabel -w md${mdstart} auto
-newfs $newfs_flags md${mdstart}${part} > /dev/null 2>&1
-mount /dev/md${mdstart}${part} $mntpoint
+bsdlabel -w md$mdstart auto
+newfs $newfs_flags md${mdstart}$part > /dev/null 2>&1
+mount /dev/md${mdstart}$part $mntpoint
 
-mkdir ${mntpoint}/stressX
-chmod 777 ${mntpoint}/stressX
+mkdir $mntpoint/stressX
+chmod 777 $mntpoint/stressX
 
-export RUNDIR=${mntpoint}/stressX
+export RUNDIR=$mntpoint/stressX
 export runRUNTIME=4m
 (cd ..; ./run.sh disk.cfg > /dev/null 2>&1) &
 sleep 30

Modified: user/pho/stress2/misc/mountro2.sh
==============================================================================
--- user/pho/stress2/misc/mountro2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/mountro2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -38,14 +38,14 @@
 D=$diskimage
 dede $D 1m 20 || exit
 
-mount | grep "$mntpoint"    | grep -q /md  && umount -f ${mntpoint}
-mdconfig -l | grep -q ${mdstart}  &&  mdconfig -d -u $mdstart
+mount | grep "$mntpoint"    | grep -q /md  && umount -f $mntpoint
+mdconfig -l | grep -q $mdstart  &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t vnode -f $D -u $mdstart || { rm -f $D; exit 1; }
 
-bsdlabel -w md${mdstart} auto
-newfs $newfs_flags md${mdstart}${part} > /dev/null 2>&1
-mount /dev/md${mdstart}${part} $mntpoint
+bsdlabel -w md$mdstart auto
+newfs $newfs_flags md${mdstart}$part > /dev/null 2>&1
+mount /dev/md${mdstart}$part $mntpoint
 
 mtree -deU -f /etc/mtree/BSD.usr.dist -p $mntpoint/ >> /dev/null
 sync ; sync ; sync

Modified: user/pho/stress2/misc/mountro3.sh
==============================================================================
--- user/pho/stress2/misc/mountro3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/mountro3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -38,21 +38,21 @@
 D=$diskimage
 dede $D 1m 64 || exit 1
 
-mount | grep "$mntpoint" | grep md${mdstart}${part} > /dev/null && umount $mntpoint
-mdconfig -l | grep md${mdstart} > /dev/null &&  mdconfig -d -u ${mdstart}
+mount | grep "$mntpoint" | grep md${mdstart}$part > /dev/null && umount $mntpoint
+mdconfig -l | grep md$mdstart > /dev/null &&  mdconfig -d -u $mdstart
 
-mdconfig -a -t vnode -f $D -u ${mdstart} || { rm -f $D; exit 1; }
-bsdlabel -w md${mdstart} auto
-newfs $newfs_flags md${mdstart}${part} > /dev/null 2>&1
+mdconfig -a -t vnode -f $D -u $mdstart || { rm -f $D; exit 1; }
+bsdlabel -w md$mdstart auto
+newfs $newfs_flags md${mdstart}$part > /dev/null 2>&1
 
-mount /dev/md${mdstart}${part} $mntpoint
+mount /dev/md${mdstart}$part $mntpoint
 touch $mntpoint/file
 umount $mntpoint
 
-mount /dev/md${mdstart}${part} $mntpoint
+mount /dev/md${mdstart}$part $mntpoint
 rm $mntpoint/file
 mount -u -o ro $mntpoint	# Should fail with "Device busy"
 
 umount $mntpoint
-mdconfig -d -u ${mdstart}
+mdconfig -d -u $mdstart
 rm -f $D

Modified: user/pho/stress2/misc/msdos.sh
==============================================================================
--- user/pho/stress2/misc/msdos.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/msdos.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -35,11 +35,11 @@
 . ../default.cfg
 
 [ -x /sbin/mount_msdosfs ] || exit
-mount | grep "$mntpoint" | grep -q md$mdstart && umount -f ${mntpoint}
-mdconfig -l | grep -q ${mdstart} &&  mdconfig -d -u $mdstart
+mount | grep "$mntpoint" | grep -q md$mdstart && umount -f $mntpoint
+mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1g -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 newfs_msdos /dev/md${mdstart}$part > /dev/null
 mount -t msdosfs /dev/md${mdstart}$part $mntpoint
 

Modified: user/pho/stress2/misc/msdos2.sh
==============================================================================
--- user/pho/stress2/misc/msdos2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/msdos2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -41,14 +41,14 @@ mount | grep "$mntpoint" | grep -q md$md
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1g -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 newfs_msdos /dev/md${mdstart}$part > /dev/null
 mount -t msdosfs /dev/md${mdstart}$part $mntpoint
 
 u=$((mdstart + 1))
 mdconfig -l | grep -q $u &&  mdconfig -d -u $u
 mdconfig -a -t swap -s 1g -u $u
-bsdlabel -w md${u} auto
+bsdlabel -w md$u auto
 newfs_msdos /dev/md${u}$part > /dev/null
 mount -u /dev/md${u}$part $mntpoint > /dev/null 2>&1 # panic
 

Modified: user/pho/stress2/misc/msdos3.sh
==============================================================================
--- user/pho/stress2/misc/msdos3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/msdos3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -41,7 +41,7 @@ mount | grep "$mntpoint" | grep -q md$md
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1g -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 newfs_msdos /dev/md${mdstart}$part > /dev/null
 
 mount -t msdosfs /dev/md${mdstart}$part $mntpoint

Modified: user/pho/stress2/misc/msdos6.sh
==============================================================================
--- user/pho/stress2/misc/msdos6.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/msdos6.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -50,7 +50,7 @@ if [ $# -eq 0 ]; then
 		mdconfig -a -t swap -s 1g -u $m
 		bsdlabel -w md$m auto
 		newfs_msdos -F 32 -b 8192 /dev/md${m}$part > /dev/null 2>&1
-		mount -t msdosfs /dev/md${m}${part} ${mntpoint}$m
+		mount -t msdosfs /dev/md${m}$part ${mntpoint}$m
 		(mkdir ${mntpoint}$m/test$i; cd ${mntpoint}$m/test$i; /tmp/fstool -l -f 100 -n 100 -s ${i}k)
 		umount ${mntpoint}$m > /dev/null 2>&1
 	done

Modified: user/pho/stress2/misc/msdos7.sh
==============================================================================
--- user/pho/stress2/misc/msdos7.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/msdos7.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -43,7 +43,7 @@ mount | grep -q "on $mntpoint " && umoun
 mdconfig -a -t swap -s 1g -u $mdstart
 bsdlabel -w md$mdstart auto
 newfs_msdos -F 32 -b 8192 /dev/md${mdstart}$part > /dev/null || exit 1
-mount -t msdosfs /dev/md${mdstart}${part} $mntpoint
+mount -t msdosfs /dev/md${mdstart}$part $mntpoint
 
 here=`pwd`
 cd /tmp

Modified: user/pho/stress2/misc/newfs3.sh
==============================================================================
--- user/pho/stress2/misc/newfs3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/newfs3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -73,7 +73,7 @@ while [ $size -le $((128 * 1024 * 1024))
 		done
 		blocksize=$((blocksize * 2))
 	done
-	mdconfig -d -u ${mdstart}
+	mdconfig -d -u $mdstart
 	size=$((size + 32 * 1024 * 1024))
 done
 rm -f $diskimage

Modified: user/pho/stress2/misc/nfs2.sh
==============================================================================
--- user/pho/stress2/misc/nfs2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/nfs2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -42,22 +42,22 @@ D=$diskimage
 dede $D 1m 128 || exit
 
 mount | grep "${mntpoint}2" | grep nfs > /dev/null && umount -f ${mntpoint}2
-mount | grep "$mntpoint"    | grep /md > /dev/null && umount -f ${mntpoint}
-mdconfig -l | grep -q ${mdstart}  &&  mdconfig -d -u $mdstart
+mount | grep "$mntpoint"    | grep /md > /dev/null && umount -f $mntpoint
+mdconfig -l | grep -q $mdstart  &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t vnode -f $D -u $mdstart
 
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 newfs_msdos -F 16 -b 8192 /dev/md${mdstart}$part > /dev/null
 mount -t msdosfs -o rw /dev/md${mdstart}$part $mntpoint
 
-mkdir ${mntpoint}/stressX
-chmod 777 ${mntpoint}/stressX
+mkdir $mntpoint/stressX
+chmod 777 $mntpoint/stressX
 
 [ ! -d ${mntpoint}2 ] &&  mkdir ${mntpoint}2
 chmod 777 ${mntpoint}2
 
-mount -t nfs -o tcp -o retrycnt=3 -o intr -o soft -o rw \
+mount -t nfs -o tcp -o retrycnt=3 -o intr,soft -o rw \
     127.0.0.1:$mntpoint ${mntpoint}2
 
 export INODES=9999		# No inodes on a msdos fs

Modified: user/pho/stress2/misc/nullfs11.sh
==============================================================================
--- user/pho/stress2/misc/nullfs11.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/nullfs11.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -57,7 +57,7 @@ test() {
 	mdconfig -l | grep -q md$mdstart &&  mdconfig -d -u $mdstart
 	mdconfig -a -t swap -s 2g -u $mdstart || exit 1
 	bsdlabel -w md$mdstart auto
-	newfs $newfs_flags md${mdstart}${part} > /dev/null
+	newfs $newfs_flags md${mdstart}$part > /dev/null
 	mount /dev/md${mdstart}$part $mp1
 
 	mount -t nullfs $opt $mp1 $mp2

Modified: user/pho/stress2/misc/pfl.sh
==============================================================================
--- user/pho/stress2/misc/pfl.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/pfl.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -71,8 +71,8 @@ newfs $opt md${md2}$part > /dev/null
 mount /dev/md${md2}$part $mp2
 chmod 777 $mp2
 
-su ${testuser} -c "cd $mp1; /tmp/pfl" &
-su ${testuser} -c "cd $mp2; /tmp/pfl" &
+su $testuser -c "cd $mp1; /tmp/pfl" &
+su $testuser -c "cd $mp2; /tmp/pfl" &
 sleep .5
 start=`date '+%s'`
 while pgrep -q pfl; do

Modified: user/pho/stress2/misc/quota1.sh
==============================================================================
--- user/pho/stress2/misc/quota1.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota1.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -48,7 +48,7 @@ mdconfig -a -t vnode -f $D -u $mdstart
 bsdlabel -w md$mdstart auto
 newfs $newfs_flags  md${mdstart}$part > /dev/null
 mount /dev/md${mdstart}$part $mntpoint
-export RUNDIR=${mntpoint}/stressX
+export RUNDIR=$mntpoint/stressX
 export runRUNTIME=10m            # Run tests for 10 minutes
 (cd ..; ./run.sh disk.cfg)
 while mount | grep -q $mntpoint; do

Modified: user/pho/stress2/misc/quota10.sh
==============================================================================
--- user/pho/stress2/misc/quota10.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota10.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -54,7 +54,7 @@ if [ $# -eq 0 ]; then
 		dede $D$m 1m 1
 		mdconfig -a -t vnode -f $D$m -u $m
 		bsdlabel -w md$m auto
-		newfs md${m}${part} > /dev/null 2>&1
+		newfs md${m}$part > /dev/null 2>&1
 		echo "/dev/md${m}$part ${mntpoint}$m ufs rw,userquota 2 2" \
 		    >> $PATH_FSTAB
 		mount ${mntpoint}$m
@@ -93,7 +93,7 @@ else
 		for i in `jot 200`; do
 			m=$1
 			opt=`[ $(( m % 2 )) -eq 0 ] && echo -f`
-			mount $opt /dev/md${m}${part} ${mntpoint}$m
+			mount $opt /dev/md${m}$part ${mntpoint}$m
 			while mount | grep -qw $mntpoint$m; do
 				opt=$([ $((`date '+%s'` % 2)) -eq 0 ] &&
 				    echo "-f")

Modified: user/pho/stress2/misc/quota2.sh
==============================================================================
--- user/pho/stress2/misc/quota2.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota2.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -45,11 +45,11 @@ mdconfig -a -t vnode -f $D -u $mdstart
 bsdlabel -w md$mdstart auto
 newfs $newfs_flags  md${mdstart}$part > /dev/null
 echo "/dev/md${mdstart}$part $mntpoint ufs rw,userquota 2 2" > $PATH_FSTAB
-mount ${mntpoint}
+mount $mntpoint
 edquota -u -f $mntpoint -e $mntpoint:100000:110000:15000:16000 root
 quotacheck $mntpoint
 quotaon $mntpoint
-export RUNDIR=${mntpoint}/stressX
+export RUNDIR=$mntpoint/stressX
 export runRUNTIME=10m            # Run tests for 10 minutes
 (cd ..; ./run.sh disk.cfg) 2>/dev/null
 while mount | grep $mntpoint | grep -q /dev/md; do

Modified: user/pho/stress2/misc/quota3.sh
==============================================================================
--- user/pho/stress2/misc/quota3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -38,14 +38,14 @@ export PATH_FSTAB=/tmp/fstab
 trap "rm -f $D $PATH_FSTAB" 0
 dede $D 1m 1k || exit 1
 
-mount | grep "${mntpoint}" | grep -q md${mdstart}$part && umount ${mntpoint}
+mount | grep "$mntpoint" | grep -q md${mdstart}$part && umount $mntpoint
 mdconfig -l | grep md$mdstart > /dev/null &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t vnode -f $D -u $mdstart
 bsdlabel -w md$mdstart auto
 newfs $newfs_flags  md${mdstart}$part > /dev/null
 echo "/dev/md${mdstart}$part $mntpoint ufs rw,userquota 2 2" > $PATH_FSTAB
-mount ${mntpoint}
+mount $mntpoint
 edquota -u -f $mntpoint -e $mntpoint:850000:900000:130000:140000 root
 quotacheck $mntpoint
 quotaon $mntpoint

Modified: user/pho/stress2/misc/quota4.sh
==============================================================================
--- user/pho/stress2/misc/quota4.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota4.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -42,10 +42,10 @@ dede $D 1m 1k || exit 1
 
 mount | grep "$mntpoint" | grep md${mdstart}$part > /dev/null && umount \
     $mntpoint
-mdconfig -l | grep md${mdstart} > /dev/null &&  mdconfig -d -u ${mdstart}
+mdconfig -l | grep md$mdstart > /dev/null &&  mdconfig -d -u $mdstart
 
-mdconfig -a -t vnode -f $D -u ${mdstart}
-bsdlabel -w md${mdstart} auto
+mdconfig -a -t vnode -f $D -u $mdstart
+bsdlabel -w md$mdstart auto
 newfs $newfs_flags  md${mdstart}$part > /dev/null
 echo "/dev/md${mdstart}$part $mntpoint ufs rw,userquota 2 2" >> \
     /etc/fstab
@@ -53,8 +53,8 @@ mount $mntpoint
 edquota -u -f $mntpoint -e ${mntpoint}:850000:900000:130000:140000 root \
     > /dev/null 2>&1
 quotaon $mntpoint
-sed -i -e "/md${mdstart}${part}/d" /etc/fstab	# clean up before any panics
-export RUNDIR=${mntpoint}/stressX
+sed -i -e "/md${mdstart}$part/d" /etc/fstab	# clean up before any panics
+export RUNDIR=$mntpoint/stressX
 ../testcases/rw/rw -t 2m -i 200 -h -n 2>/dev/null &
 sleep 60
 false
@@ -62,6 +62,6 @@ while mount | grep -q $mntpoint; do
 	umount $([ $((`date '+%s'` % 2)) -eq 0 ] && echo "-f" || echo "") \
 	    $mntpoint > /dev/null 2>&1
 done
-mdconfig -d -u ${mdstart}
+mdconfig -d -u $mdstart
 rm -f $D
 exit 0

Modified: user/pho/stress2/misc/quota7.sh
==============================================================================
--- user/pho/stress2/misc/quota7.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota7.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -47,7 +47,7 @@ mdconfig -a -t vnode -f $D -u $mdstart
 bsdlabel -w md$mdstart auto
 newfs $newfs_flags  md${mdstart}$part > /dev/null
 export PATH_FSTAB=/tmp/fstab
-echo "/dev/md${mdstart}${part} $mntpoint ufs rw,userquota 2 2" > $PATH_FSTAB
+echo "/dev/md${mdstart}$part $mntpoint ufs rw,userquota 2 2" > $PATH_FSTAB
 mount $mntpoint
 set `df -ik $mntpoint | tail -1 | awk '{print $4,$7}'`
 export KBLOCKS=$(($1 / 21))

Modified: user/pho/stress2/misc/quota8.sh
==============================================================================
--- user/pho/stress2/misc/quota8.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota8.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -66,20 +66,20 @@ edquota -u -f $mntpoint -e ${mntpoint}:$
 $testuser
 quotaon $mntpoint
 sed -i -e "/md${mdstart}$part/d" /etc/fstab
-export RUNDIR=${mntpoint}/stressX
-mkdir ${mntpoint}/stressX
-chmod 777 ${mntpoint}/stressX
+export RUNDIR=$mntpoint/stressX
+mkdir $mntpoint/stressX
+chmod 777 $mntpoint/stressX
 su $testuser -c 'sh -c "(cd ..;runRUNTIME=20m ./run.sh disk.cfg > \
     /dev/null 2>&1)"&'
 for i in `jot 20`; do
-	echo "`date '+%T'` mksnap_ffs $mntpoint ${mntpoint}/.snap/snap$i"
-	mksnap_ffs $mntpoint ${mntpoint}/.snap/snap$i
+	echo "`date '+%T'` mksnap_ffs $mntpoint $mntpoint/.snap/snap$i"
+	mksnap_ffs $mntpoint $mntpoint/.snap/snap$i
 	sleep 1
 done
 # Remove random snapshot file
 i=$((`date +%S` % 20 + 1))
-echo "rm -f ${mntpoint}/.snap/snap$i"
-rm -f ${mntpoint}/.snap/snap$i
+echo "rm -f $mntpoint/.snap/snap$i"
+rm -f $mntpoint/.snap/snap$i
 wait
 
 su $testuser -c 'sh -c "../tools/killall.sh"'

Modified: user/pho/stress2/misc/quota9.sh
==============================================================================
--- user/pho/stress2/misc/quota9.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/quota9.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -57,8 +57,8 @@ if [ $# -eq 0 ]; then
 	echo "/dev/md${mdstart}$part $mntpoint ufs rw,userquota 2 2" \
 	    >> /etc/fstab
 	mount $mntpoint
-	mkdir ${mntpoint}/stressX
-	chown $testuser ${mntpoint}/stressX
+	mkdir $mntpoint/stressX
+	chown $testuser $mntpoint/stressX
 	set `df -ik $mntpoint | tail -1 | awk '{print $4,$7}'`
 	export KBLOCKS=$1
 	export INODES=$2
@@ -72,12 +72,12 @@ if [ $# -eq 0 ]; then
 
 	qc $mntpoint
 
-	su ${testuser} $0 xxx
+	su $testuser $0 xxx
 	du -k /mnt/stressX
 
 	qc $mntpoint
 
-	sed -i -e "/md${mdstart}${part}/d" /etc/fstab
+	sed -i -e "/md${mdstart}$part/d" /etc/fstab
 	while mount | grep -q $mntpoint; do
 		umount $([ $((`date '+%s'` % 2)) -eq 0 ] &&
 		    echo "-f" || echo "") $mntpoint > /dev/null 2>&1
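The cleanup loop above alternates between a forced and a normal unmount based on the parity of the current epoch second. A sketch of just that flag selection, with a hypothetical helper name (the real script inlines the expression); the `umount` call itself is only echoed here:

```shell
# Hypothetical helper isolating quota9.sh's flag choice: even epoch
# second -> "-f" (forced), odd -> "" (normal unmount).
pick_umount_flag() {
	[ $(($(date '+%s') % 2)) -eq 0 ] && echo "-f" || echo ""
}

# Dry run only; the real loop passes the result straight to umount.
echo "would run: umount $(pick_umount_flag) \$mntpoint"
```

Retrying with alternating flags gives a busy filesystem a chance to quiesce before falling back to force.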

Modified: user/pho/stress2/misc/rename11.sh
==============================================================================
--- user/pho/stress2/misc/rename11.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/rename11.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -49,9 +49,9 @@ bsdlabel -w md$mdstart auto
 newfs $newfs_flags md${mdstart}$part > /dev/null
 mount /dev/md${mdstart}$part $mntpoint
 
-mkdir ${mntpoint}/dir
+mkdir $mntpoint/dir
 (
-	cd ${mntpoint}/dir
+	cd $mntpoint/dir
 	/tmp/rename11 || echo FAIL
 )
 

Modified: user/pho/stress2/misc/rename3.sh
==============================================================================
--- user/pho/stress2/misc/rename3.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/rename3.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -39,15 +39,15 @@
 
 root=/tmp
 for i in `jot 10000`; do
-	rm -rf ${root}/a
-	mkdir -p ${root}/a/b/c/d/e/f/g
-	mkdir -p ${root}/a/b/c/d/e/f/z
-	cd ${root}/a/b/c/d/e/f
-	( mv ${root}/a/b/c ${root}/a/c ) &
+	rm -rf $root/a
+	mkdir -p $root/a/b/c/d/e/f/g
+	mkdir -p $root/a/b/c/d/e/f/z
+	cd $root/a/b/c/d/e/f
+	( mv $root/a/b/c $root/a/c ) &
 	if ! mv z g/z; then
 		echo "FAILURE at loop $i"
 		break
 	fi
 	wait
 done
-rm -rf ${root}/a
+rm -rf $root/a

Modified: user/pho/stress2/misc/rename5.sh
==============================================================================
--- user/pho/stress2/misc/rename5.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/rename5.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -43,7 +43,7 @@ mount | grep -q "$mntpoint" && umount $m
 mdconfig -l | grep -q $mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 1g -u $mdstart
-bsdlabel -w md${mdstart} auto
+bsdlabel -w md$mdstart auto
 
 newfs $newfs_flags md${mdstart}$part > /dev/null
 mount /dev/md${mdstart}$part $mntpoint

Modified: user/pho/stress2/misc/rename6.sh
==============================================================================
--- user/pho/stress2/misc/rename6.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/rename6.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -51,7 +51,7 @@ newfs $newfs_flags md${mdstart}$part > /
 mount /dev/md${mdstart}$part $mntpoint
 chmod 777 $mntpoint
 
-su ${testuser} -c "cd $mntpoint; /tmp/rename6"
+su $testuser -c "cd $mntpoint; /tmp/rename6"
 
 while mount | grep -q md${mdstart}$part; do
 	umount $mntpoint || sleep 1

Modified: user/pho/stress2/misc/rename7.sh
==============================================================================
--- user/pho/stress2/misc/rename7.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/rename7.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -53,7 +53,7 @@ newfs $newfs_flags md${mdstart}$part > /
 mount /dev/md${mdstart}$part $mntpoint
 chmod 777 $mntpoint
 
-su ${testuser} -c "cd $mntpoint; /tmp/rename7 || echo FAIL"
+su $testuser -c "cd $mntpoint; /tmp/rename7 || echo FAIL"
 
 for i in `jot 10`; do
 	mount | grep -q md${mdstart}$part  && \

Modified: user/pho/stress2/misc/rename8.sh
==============================================================================
--- user/pho/stress2/misc/rename8.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/rename8.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -51,7 +51,7 @@ newfs $newfs_flags md${mdstart}$part > /
 mount /dev/md${mdstart}$part $mntpoint
 chmod 777 $mntpoint
 
-su ${testuser} -c "cd $mntpoint; mkdir r; /tmp/rename8 r"
+su $testuser -c "cd $mntpoint; mkdir r; /tmp/rename8 r"
 ls -li $mntpoint/r | egrep -v "^total"
 
 for i in `jot 10`; do

Modified: user/pho/stress2/misc/rename9.sh
==============================================================================
--- user/pho/stress2/misc/rename9.sh	Mon May  1 06:42:39 2017	(r317637)
+++ user/pho/stress2/misc/rename9.sh	Mon May  1 07:44:34 2017	(r317638)
@@ -51,7 +51,7 @@ rm -rf $mntpoint/.snap
 chmod 777 $mntpoint
 
 (while true; do ls -lRi $mntpoint > /dev/null 2>&1; done) &
-su ${testuser} -c "cd $mntpoint; /tmp/rename9"
+su $testuser -c "cd $mntpoint; /tmp/rename9"
 kill $! > /dev/null 2>&1
 wait
 ls -ilR $mntpoint | egrep -v "^total "

Modified: user/pho/stress2/misc/snap2.sh
==============================================================================
--- user/pho/stress2/misc/snap2.sh	Mon May  1 06:42:39 2017	(r317637)

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***

From owner-svn-src-user@freebsd.org  Mon May  1 07:49:43 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Mon, 1 May 2017 07:49:42 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317639 - user/pho/stress2/misc

Author: pho
Date: Mon May  1 07:49:41 2017
New Revision: 317639
URL: https://svnweb.freebsd.org/changeset/base/317639

Log:
  Style and prepare for ino64.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/namecache.sh
  user/pho/stress2/misc/trim5.sh

Modified: user/pho/stress2/misc/namecache.sh
==============================================================================
--- user/pho/stress2/misc/namecache.sh	Mon May  1 07:44:34 2017	(r317638)
+++ user/pho/stress2/misc/namecache.sh	Mon May  1 07:49:41 2017	(r317639)
@@ -86,13 +86,13 @@ for i in `jot 30`; do
 	[ $((`date '+%s'` - start)) -gt 1800 ] && break
 done
 
-if ls -l ${dir}/file.0* 2>&1 | egrep "file.0[0-9]" | grep -q "No such file"; then
+if ls -l $dir/file.0* 2>&1 | egrep "file.0[0-9]" | grep -q "No such file"; then
 	echo FAIL
-	echo "ls -l ${dir}/file.0*"
-	ls -l ${dir}/file.0*
+	echo "ls -l $dir/file.0*"
+	ls -l $dir/file.0*
 fi
 
-rm -f /tmp/namecache # /${dir}/file.0*
+rm -f /tmp/namecache # /$dir/file.0*
 exit
 EOF
 /* Test scenario for possible name cache problem */
@@ -144,9 +144,10 @@ pm(void)
 
 			if (stat(dp->d_name, &statb) == -1) {
 				warn("stat(%s)", dp->d_name);
-				printf("name: %-10s, inode %7d, type %2d, namelen %d, d_reclen %d\n",
-					dp->d_name, dp->d_fileno, dp->d_type, dp->d_namlen,
-					dp->d_reclen);
+				printf("name: %-10s, inode %7lu, "
+				    "type %2d, namelen %d, d_reclen %d\n",
+				    dp->d_name, (unsigned long)dp->d_fileno, dp->d_type,
+				    dp->d_namlen, dp->d_reclen);
 				fflush(stdout);
 			} else {
 				printf("stat(%s) succeeded!\n", path);

Modified: user/pho/stress2/misc/trim5.sh
==============================================================================
--- user/pho/stress2/misc/trim5.sh	Mon May  1 07:44:34 2017	(r317638)
+++ user/pho/stress2/misc/trim5.sh	Mon May  1 07:49:41 2017	(r317639)
@@ -43,7 +43,7 @@ bsdlabel -w md$mdstart auto
 newfs -U -t md${mdstart}$part > /dev/null
 mount /dev/md${mdstart}$part $mntpoint
 
-mksnap_ffs $mntpoint ${mntpoint}/.snap/snap
+mksnap_ffs $mntpoint $mntpoint/.snap/snap
 
 while mount | grep -q $mntpoint; do
 	umount $mntpoint || sleep 1

From owner-svn-src-user@freebsd.org  Mon May  1 08:21:51 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Mon, 1 May 2017 08:21:50 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317643 - user/pho/stress2/misc

Author: pho
Date: Mon May  1 08:21:50 2017
New Revision: 317643
URL: https://svnweb.freebsd.org/changeset/base/317643

Log:
  Wait for swap to terminate.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/pthread6.sh

Modified: user/pho/stress2/misc/pthread6.sh
==============================================================================
--- user/pho/stress2/misc/pthread6.sh	Mon May  1 08:07:59 2017	(r317642)
+++ user/pho/stress2/misc/pthread6.sh	Mon May  1 08:21:50 2017	(r317643)
@@ -45,7 +45,10 @@ echo "Expect SIGABRT"
 for i in `jot 50`; do
 	/tmp/pthread6
 done
-killall -q swap
+while pgrep -q swap; do
+	pkill swap
+	sleep 1
+done
 
 rm -f /tmp/pthread6 /tmp/pthread6.core
 exit 0
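The replacement loop above keeps signalling until `pgrep` no longer sees a match, rather than firing `killall -q` once and hoping. A standalone sketch of the same pattern, aimed at a dummy sleeper instead of the stress2 `swap` load (the odd duration is just a unique match marker, built at runtime):

```shell
# Dummy long-running workload standing in for the stress2 "swap" tool.
d=31557
sleep $d &
sleep 1                      # give the child time to start
# Keep signalling until no matching process remains, then reap it.
while pgrep -f "sleep $d" > /dev/null; do
	pkill -f "sleep $d"
	sleep 1
done
wait 2> /dev/null
echo "swap-like workload terminated"
```

The `sleep 1` between attempts avoids a tight signal loop while the target shuts down.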

From owner-svn-src-user@freebsd.org  Mon May  1 07:52:23 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Mon, 1 May 2017 07:52:22 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317640 - user/pho/stress2/misc

Author: pho
Date: Mon May  1 07:52:22 2017
New Revision: 317640
URL: https://svnweb.freebsd.org/changeset/base/317640

Log:
  Style and fix test termination.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/nfs5.sh
  user/pho/stress2/misc/nfs6.sh

Modified: user/pho/stress2/misc/nfs5.sh
==============================================================================
--- user/pho/stress2/misc/nfs5.sh	Mon May  1 07:49:41 2017	(r317639)
+++ user/pho/stress2/misc/nfs5.sh	Mon May  1 07:52:22 2017	(r317640)
@@ -36,22 +36,23 @@ D=$diskimage
 dede $D 1m 128 || exit
 
 mount | grep "${mntpoint}2" | grep nfs > /dev/null && umount -f ${mntpoint}2
-mount | grep "$mntpoint"    | grep /md > /dev/null && umount -f ${mntpoint}
-mdconfig -l | grep -q ${mdstart}  &&  mdconfig -d -u $mdstart
+mount | grep "$mntpoint"    | grep /md > /dev/null && umount -f $mntpoint
+mdconfig -l | grep -q $mdstart  &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t vnode -f $D -u $mdstart
 
-bsdlabel -w md${mdstart} auto
-newfs $newfs_flags md${mdstart}${part} > /dev/null
-mount /dev/md${mdstart}${part} $mntpoint
+bsdlabel -w md$mdstart auto
+newfs $newfs_flags md${mdstart}$part > /dev/null
+mount /dev/md${mdstart}$part $mntpoint
 
-mkdir ${mntpoint}/stressX
-chmod 777 ${mntpoint}/stressX
+mkdir $mntpoint/stressX
+chmod 777 $mntpoint/stressX
 
 [ ! -d ${mntpoint}2 ] &&  mkdir ${mntpoint}2
 chmod 777 ${mntpoint}2
 
-mount -t nfs -o tcp -o retrycnt=3 -o intr -o soft -o rw 127.0.0.1:/$mntpoint ${mntpoint}2
+mount -t nfs -o tcp -o retrycnt=3 -o intr,soft -o rw 127.0.0.1:$mntpoint \
+    ${mntpoint}2
 
 export RUNDIR=${mntpoint}2/stressX
 export runRUNTIME=4m
@@ -63,5 +64,7 @@ umount -f ${mntpoint}2 > /dev/null 2>&1
 
 mdconfig -d -u $mdstart
 rm -f $D
-kill `ps | grep run.sh | grep -v grep | awk '{print $1}'`
+kill $!
+../tools/killall.sh
 wait
+exit 0
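The `kill $!` change above replaces a racy `ps | grep run.sh | grep -v grep | awk ...` pipeline, which can match unrelated commands, with the shell's record of the most recent background job's PID. A minimal sketch with a dummy job:

```shell
# $! holds the PID of the most recently started background job.
sleep 60 &
job=$!
kill "$job" 2> /dev/null
wait 2> /dev/null            # reap it so the PID is fully gone
echo "reaped background job $job"
```

This only works when `$!` is captured before any other background job is launched.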

Modified: user/pho/stress2/misc/nfs6.sh
==============================================================================
--- user/pho/stress2/misc/nfs6.sh	Mon May  1 07:49:41 2017	(r317639)
+++ user/pho/stress2/misc/nfs6.sh	Mon May  1 07:52:22 2017	(r317640)
@@ -42,22 +42,23 @@ D=$diskimage
 dede $D 1m 128 || exit
 
 mount | grep "${mntpoint}2" | grep nfs > /dev/null && umount -f ${mntpoint}2
-mount | grep "$mntpoint"    | grep /md > /dev/null && umount -f ${mntpoint}
-mdconfig -l | grep -q ${mdstart}  &&  mdconfig -d -u $mdstart
+mount | grep "$mntpoint"    | grep /md > /dev/null && umount -f $mntpoint
+mdconfig -l | grep -q $mdstart  &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t vnode -f $D -u $mdstart
 
-bsdlabel -w md${mdstart} auto
-newfs $newfs_flags md${mdstart}${part} > /dev/null
-mount /dev/md${mdstart}${part} $mntpoint
+bsdlabel -w md$mdstart auto
+newfs $newfs_flags md${mdstart}$part > /dev/null
+mount /dev/md${mdstart}$part $mntpoint
 
-mkdir ${mntpoint}/stressX
-chmod 777 ${mntpoint}/stressX
+mkdir $mntpoint/stressX
+chmod 777 $mntpoint/stressX
 
 [ ! -d ${mntpoint}2 ] &&  mkdir ${mntpoint}2
 chmod 777 ${mntpoint}2
 
-mount -t nfs -o tcp -o retrycnt=3 -o intr -o soft -o rw 127.0.0.1:$mntpoint ${mntpoint}2
+mount -t nfs -o tcp -o retrycnt=3 -o intr,soft -o rw 127.0.0.1:$mntpoint \
+    ${mntpoint}2
 
 export RUNDIR=${mntpoint}2/stressX
 export runRUNTIME=4m
@@ -67,7 +68,7 @@ sleep 60
 for i in `jot 10`; do
 	umount -f $mntpoint    > /dev/null 2>&1
 	sleep 1
-	mount /dev/md${mdstart}${part} $mntpoint
+	mount /dev/md${mdstart}$part $mntpoint
 	sleep 1
 done
 
@@ -76,5 +77,7 @@ umount -f ${mntpoint}2 > /dev/null 2>&1
 
 mdconfig -d -u $mdstart
 rm -f $D
-kill `ps | grep run.sh | grep -v grep | awk '{print $1}'`
+kill $!
+../tools/killall.sh
 wait
+exit 0

From owner-svn-src-user@freebsd.org  Mon May  1 08:01:56 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Mon, 1 May 2017 08:01:54 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317641 - user/pho/stress2/misc

Author: pho
Date: Mon May  1 08:01:54 2017
New Revision: 317641
URL: https://svnweb.freebsd.org/changeset/base/317641

Log:
  Test for ufs_extattr support and for the presence of setfacl(1).
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/extattr.sh
  user/pho/stress2/misc/extattr_set_fd.sh
  user/pho/stress2/misc/extattrctl.sh

Modified: user/pho/stress2/misc/extattr.sh
==============================================================================
--- user/pho/stress2/misc/extattr.sh	Mon May  1 07:52:22 2017	(r317640)
+++ user/pho/stress2/misc/extattr.sh	Mon May  1 08:01:54 2017	(r317641)
@@ -34,6 +34,8 @@
 [ `id -u ` -ne 0 ] && echo "Must be root!" && exit 1
 
 . ../default.cfg
+[ "`sysctl -in kern.features.ufs_extattr`" != "1" ] && exit 0
+[ -z "`which setfacl`" ] && exit 0
 
 odir=`pwd`
 

Modified: user/pho/stress2/misc/extattr_set_fd.sh
==============================================================================
--- user/pho/stress2/misc/extattr_set_fd.sh	Mon May  1 07:52:22 2017	(r317640)
+++ user/pho/stress2/misc/extattr_set_fd.sh	Mon May  1 08:01:54 2017	(r317641)
@@ -34,6 +34,8 @@
 [ `id -u ` -ne 0 ] && echo "Must be root!" && exit 1
 
 . ../default.cfg
+[ "`sysctl -in kern.features.ufs_extattr`" != "1" ] && exit 0
+[ -z "`which setfacl`" ] && exit 0
 
 here=`pwd`
 cd /tmp

Modified: user/pho/stress2/misc/extattrctl.sh
==============================================================================
--- user/pho/stress2/misc/extattrctl.sh	Mon May  1 07:52:22 2017	(r317640)
+++ user/pho/stress2/misc/extattrctl.sh	Mon May  1 08:01:54 2017	(r317641)
@@ -42,27 +42,28 @@
 
 . ../default.cfg
 
-sysctl -a | ! grep -q ufs_extattr && echo "Missing options UFS_EXTATTR" && exit 1
+[ "`sysctl -in kern.features.ufs_extattr`" != "1" ] && exit 0
+[ -z "`which setfacl`" ] && exit 0
 
-mount | grep "${mntpoint}" | grep -q md${mdstart}${part} && umount $mntpoint
+mount | grep "$mntpoint" | grep -q md${mdstart}$part && umount $mntpoint
 mdconfig -l | grep -q md$mdstart &&  mdconfig -d -u $mdstart
 
 mdconfig -a -t swap -s 20m -u $mdstart
 bsdlabel -w md$mdstart auto
 
-newfs -O 1 md${mdstart}${part} > /dev/null
-mount /dev/md${mdstart}${part} $mntpoint
+newfs -O 1 md${mdstart}$part > /dev/null
+mount /dev/md${mdstart}$part $mntpoint
 
-mkdir -p ${mntpoint}/.attribute/system
-cd ${mntpoint}/.attribute/system
+mkdir -p $mntpoint/.attribute/system
+cd $mntpoint/.attribute/system
 
 extattrctl initattr -p . 388 posix1e.acl_access
 extattrctl initattr -p . 388 posix1e.acl_default
 cd /
 umount /mnt
-tunefs -a enable /dev/md${mdstart}${part}
-mount /dev/md${mdstart}${part} $mntpoint
-mount | grep md${mdstart}${part}
+tunefs -a enable /dev/md${mdstart}$part
+mount /dev/md${mdstart}$part $mntpoint
+mount | grep md${mdstart}$part
 
 touch $mntpoint/acl-test
 setfacl -b $mntpoint/acl-test
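The new precondition checks in these three scripts follow one pattern: probe a FreeBSD kernel feature OID with `sysctl -in` (where `-i` suppresses the error for an unknown OID and `-n` prints only the value) and check for the required userland tool, exiting 0 (skip) if either is missing. A sketch with hypothetical helper names:

```shell
# Hypothetical wrappers around the guard pattern used in extattr*.sh.
have_ufs_extattr() {
	[ "$(sysctl -in kern.features.ufs_extattr 2> /dev/null)" = "1" ]
}
have_setfacl() {
	command -v setfacl > /dev/null 2>&1
}

if have_ufs_extattr && have_setfacl; then
	echo "ufs_extattr and setfacl available"
else
	echo "skipping (precondition not met)"
fi
```

On a non-FreeBSD system (or a kernel without UFS_EXTATTR) the probe simply fails and the test skips, which is exactly the behavior the commit wants.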

From owner-svn-src-user@freebsd.org  Mon May  1 08:08:01 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Mon, 1 May 2017 08:08:00 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317642 - user/pho/stress2/misc

Author: pho
Date: Mon May  1 08:07:59 2017
New Revision: 317642
URL: https://svnweb.freebsd.org/changeset/base/317642

Log:
  Make sure sendmail is installed.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/posix_fadvise2.sh

Modified: user/pho/stress2/misc/posix_fadvise2.sh
==============================================================================
--- user/pho/stress2/misc/posix_fadvise2.sh	Mon May  1 08:01:54 2017	(r317641)
+++ user/pho/stress2/misc/posix_fadvise2.sh	Mon May  1 08:07:59 2017	(r317642)
@@ -33,6 +33,7 @@
 # Fixed by r292326.
 
 . ../default.cfg
+[ -f /usr/libexec/sendmail/sendmail ] || exit 0
 
 here=`pwd`
 cd /tmp

From owner-svn-src-user@freebsd.org  Mon May  1 10:13:01 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Mon, 1 May 2017 10:12:59 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317644 - user/pho/stress2/misc

Author: pho
Date: Mon May  1 10:12:59 2017
New Revision: 317644
URL: https://svnweb.freebsd.org/changeset/base/317644

Log:
  Fix casting and format.
  
  Reported by:	 kib
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/kinfo2.sh
  user/pho/stress2/misc/namecache.sh

Modified: user/pho/stress2/misc/kinfo2.sh
==============================================================================
--- user/pho/stress2/misc/kinfo2.sh	Mon May  1 08:21:50 2017	(r317643)
+++ user/pho/stress2/misc/kinfo2.sh	Mon May  1 10:12:59 2017	(r317644)
@@ -124,9 +124,9 @@ list(void)
 	dp = (struct dirent *)bp;
 	for (;;) {
 #if defined(DEBUG)
-		printf("name: %-10s, inode %7lu, type %2d, namelen %d, "
+		printf("name: %-10s, inode %7ju, type %2d, namelen %d, "
 		    "d_reclen %d\n",
-		    dp->d_name, (unsigned long)dp->d_fileno, dp->d_type,
+		    dp->d_name, (uintmax_t)dp->d_fileno, dp->d_type,
 		    dp->d_namlen, dp->d_reclen); fflush(stdout);
 #endif
 

Modified: user/pho/stress2/misc/namecache.sh
==============================================================================
--- user/pho/stress2/misc/namecache.sh	Mon May  1 08:21:50 2017	(r317643)
+++ user/pho/stress2/misc/namecache.sh	Mon May  1 10:12:59 2017	(r317644)
@@ -144,9 +144,9 @@ pm(void)
 
 			if (stat(dp->d_name, &statb) == -1) {
 				warn("stat(%s)", dp->d_name);
-				printf("name: %-10s, inode %7lu, "
+				printf("name: %-10s, inode %7ju, "
 				    "type %2d, namelen %d, d_reclen %d\n",
-				    dp->d_name, (unsigned long)dp->d_fileno, dp->d_type,
+				    dp->d_name, (uintmax_t)dp->d_fileno, dp->d_type,
 				    dp->d_namlen, dp->d_reclen);
 				fflush(stdout);
 			} else {

From owner-svn-src-user@freebsd.org  Tue May  2 06:01:02 2017
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 06:01:01 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317668 - user/pho/stress2/misc

Author: pho
Date: Tue May  2 06:01:01 2017
New Revision: 317668
URL: https://svnweb.freebsd.org/changeset/base/317668

Log:
  Simplify test termination.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/mmap7.sh

Modified: user/pho/stress2/misc/mmap7.sh
==============================================================================
--- user/pho/stress2/misc/mmap7.sh	Tue May  2 05:20:54 2017	(r317667)
+++ user/pho/stress2/misc/mmap7.sh	Tue May  2 06:01:01 2017	(r317668)
@@ -45,12 +45,10 @@ rm -f wire_no_page.c
 cd $odir
 
 (cd ../testcases/swap; ./swap -t 1m -i 2) &
+sleep 1
 cp /tmp/mmap7 /tmp/mmap7.inputfile
 /tmp/mmap7 /tmp/mmap7.inputfile
-while ps | grep -v grep | grep -qw swap; do
-	killall -9 swap 2>/dev/null
-	sleep .1
-done
+while pkill -9 swap; do :; done
 wait
 rm -f /tmp/mmap7 /tmp/mmap7.inputfile
 exit
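[The replacement loop above relies on pkill(1)'s exit status: it returns 0 only
while at least one process matched and was signalled, so the loop spins until
every swap worker is gone, with no need for the old ps | grep | killall dance.
A minimal sketch of the idiom; the `sleep 100` jobs are stand-ins for the
background swap workers:

```shell
#!/bin/sh
# pkill exits 0 while it signalled at least one matching process,
# so looping on it reaps stragglers until none remain.
sleep 100 & sleep 100 &			# stand-ins for the swap workers
while pkill -9 -f 'sleep 100'; do :; done
wait 2>/dev/null			# reap the killed background jobs
pgrep -f 'sleep 100' > /dev/null || echo "all workers terminated"
```

The `sleep 1` added before the test run gives the background workers time to
start before the main program begins competing with them.]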

From owner-svn-src-user@freebsd.org  Tue May  2 06:01:58 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 64B68D5A11D
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 06:01:58 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 35BB7E1C;
 Tue,  2 May 2017 06:01:58 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v4261vuI084704;
 Tue, 2 May 2017 06:01:57 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v4261v0n084703;
 Tue, 2 May 2017 06:01:57 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705020601.v4261v0n084703@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 06:01:57 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317669 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 06:01:58 -0000

Author: pho
Date: Tue May  2 06:01:57 2017
New Revision: 317669
URL: https://svnweb.freebsd.org/changeset/base/317669

Log:
  Give the test program the name corresponding to the script.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/mmap5.sh

Modified: user/pho/stress2/misc/mmap5.sh
==============================================================================
--- user/pho/stress2/misc/mmap5.sh	Tue May  2 06:01:01 2017	(r317668)
+++ user/pho/stress2/misc/mmap5.sh	Tue May  2 06:01:57 2017	(r317669)
@@ -37,9 +37,9 @@
 dir=/tmp
 odir=`pwd`
 cd $dir
-sed '1,/^EOF/d' < $odir/$0 > $dir/wire_no_page.c
-mycc -o mmap5  -Wall -Wextra wire_no_page.c || exit 1
-rm -f wire_no_page.c
+sed '1,/^EOF/d' < $odir/$0 > $dir/mmap5.c
+mycc -o mmap5  -Wall -Wextra mmap5.c || exit 1
+rm -f mmap5.c
 cd $odir
 
 cp /tmp/mmap5  /tmp/mmap5.inputfile
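[The renamed file matters because of the stress2 self-extraction idiom visible
in the diff: the C program is embedded after an `EOF` marker in the script
itself, and sed carves it out for compilation. A hedged sketch of the
mechanism; the /tmp/demo.* paths are made up for this example:

```shell
#!/bin/sh
# Everything after the first line matching ^EOF in the script is C
# source; sed deletes the shell prologue so it can be compiled.
cat > /tmp/demo.sh <<'SCRIPT'
#!/bin/sh
sed '1,/^EOF/d' < "$0" > /tmp/demo.c
exit
EOF
int main(void) { return 0; }
SCRIPT
sh /tmp/demo.sh
cat /tmp/demo.c		# -> int main(void) { return 0; }
```

Naming the extracted source after the script keeps core files and process
listings attributable to the right test.]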

From owner-svn-src-user@freebsd.org  Tue May  2 06:03:00 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id B9046D5A13A
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 06:03:00 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 89D47F66;
 Tue,  2 May 2017 06:03:00 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v4262xN6087065;
 Tue, 2 May 2017 06:02:59 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v4262xMG087064;
 Tue, 2 May 2017 06:02:59 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705020602.v4262xMG087064@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 06:02:59 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317670 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 06:03:00 -0000

Author: pho
Date: Tue May  2 06:02:59 2017
New Revision: 317670
URL: https://svnweb.freebsd.org/changeset/base/317670

Log:
  Add a comment about when the problem was fixed.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/mmap28.sh

Modified: user/pho/stress2/misc/mmap28.sh
==============================================================================
--- user/pho/stress2/misc/mmap28.sh	Tue May  2 06:01:57 2017	(r317669)
+++ user/pho/stress2/misc/mmap28.sh	Tue May  2 06:02:59 2017	(r317670)
@@ -37,6 +37,7 @@
 # whereas this test runs as expected on r292372.
 # https://people.freebsd.org/~pho/stress/log/mmap28-2.txt
 # https://people.freebsd.org/~pho/stress/log/mmap28-3.txt
+# Fixed by r307626
 
 # Test scenario refinement by kib@
 

From owner-svn-src-user@freebsd.org  Tue May  2 06:06:13 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id D4525D5A15F
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 06:06:13 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id A489110B5;
 Tue,  2 May 2017 06:06:13 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v4266CVk087213;
 Tue, 2 May 2017 06:06:12 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v4266CTa087212;
 Tue, 2 May 2017 06:06:12 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705020606.v4266CTa087212@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 06:06:12 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317671 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 06:06:13 -0000

Author: pho
Date: Tue May  2 06:06:12 2017
New Revision: 317671
URL: https://svnweb.freebsd.org/changeset/base/317671

Log:
  In case of errors, limit the output.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/mmap13.sh

Modified: user/pho/stress2/misc/mmap13.sh
==============================================================================
--- user/pho/stress2/misc/mmap13.sh	Tue May  2 06:02:59 2017	(r317670)
+++ user/pho/stress2/misc/mmap13.sh	Tue May  2 06:06:12 2017	(r317671)
@@ -48,7 +48,7 @@ cd $odir
 v1=`sysctl -n vm.stats.vm.v_wire_count`
 for i in `jot 5000`; do
 	/tmp/mmap13
-done
+done 2>&1 | tail -5
 v2=`sysctl -n vm.stats.vm.v_wire_count`
 s=0
 [ $v2 -gt $((v1 + 500)) ] &&
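[A redirection placed on the `done` keyword applies to the loop as a whole, so
a single tail(1) caps the combined output of all 5000 iterations instead of
filtering each one. The shape of the construct in isolation:

```shell
#!/bin/sh
# Redirections on "done" cover the entire loop, so one tail limits
# the combined stdout+stderr of every iteration.
for i in 1 2 3 4 5; do
	echo "error $i" >&2
done 2>&1 | tail -2
```

This prints only "error 4" and "error 5", which is why a long-running test can
now fail noisily without flooding the log.]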

From owner-svn-src-user@freebsd.org  Tue May  2 06:16:36 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id C5337D5A2EC
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 06:16:36 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 8B753162C;
 Tue,  2 May 2017 06:16:36 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v426GZHk091135;
 Tue, 2 May 2017 06:16:35 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v426GZfS091134;
 Tue, 2 May 2017 06:16:35 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705020616.v426GZfS091134@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 06:16:35 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317672 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 06:16:36 -0000

Author: pho
Date: Tue May  2 06:16:35 2017
New Revision: 317672
URL: https://svnweb.freebsd.org/changeset/base/317672

Log:
  Simplify test termination, return exit status 0, and add casts for round_page().
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/mmap26.sh

Modified: user/pho/stress2/misc/mmap26.sh
==============================================================================
--- user/pho/stress2/misc/mmap26.sh	Tue May  2 06:06:12 2017	(r317671)
+++ user/pho/stress2/misc/mmap26.sh	Tue May  2 06:16:35 2017	(r317672)
@@ -53,11 +53,9 @@ sleep 1
 
 (cd /tmp; /tmp/mmap26 /tmp/mmap26.inputfile)
 
-while ps auxww | grep -v grep | grep -qw swap; do
-	killall -9 swap 2>/dev/null
-done
+while pkill -9 swap; do :; done
 rm -f /tmp/mmap26 /tmp/mmap26.inputfile /tmp/mmap26.core
-exit
+exit 0
 
 EOF
 #include <sys/fcntl.h>
@@ -103,7 +101,8 @@ test(void)
 	p[len - 1] = 1;
 
 	/* one byte past EOF */
-	if (round_page(p + len) == round_page(p + len - 1)) {
+	if (round_page((unsigned long)&p[len]) ==
+	    round_page((unsigned long)&p[len - 1])) {
 		fprintf(stderr, "Expect: Segmentation fault (core dumped)\n");
 		c = p[len];
 	}

From owner-svn-src-user@freebsd.org  Tue May  2 06:54:49 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id E4737D5AD8C
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 06:54:49 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 98F05171;
 Tue,  2 May 2017 06:54:49 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v426smU1014978;
 Tue, 2 May 2017 06:54:48 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v426smgE014977;
 Tue, 2 May 2017 06:54:48 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705020654.v426smgE014977@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 06:54:48 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317674 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 06:54:50 -0000

Author: pho
Date: Tue May  2 06:54:48 2017
New Revision: 317674
URL: https://svnweb.freebsd.org/changeset/base/317674

Log:
  Fix an issue with missing waits. Clean up the test case while here.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/mmap21.sh

Modified: user/pho/stress2/misc/mmap21.sh
==============================================================================
--- user/pho/stress2/misc/mmap21.sh	Tue May  2 06:27:46 2017	(r317673)
+++ user/pho/stress2/misc/mmap21.sh	Tue May  2 06:54:48 2017	(r317674)
@@ -42,9 +42,7 @@ sed '1,/^EOF/d' < $here/$0 > mmap21.c
 mycc -o mmap21 -Wall -Wextra -O2 -g mmap21.c -lpthread || exit 1
 rm -f mmap21.c
 
-for i in `jot 2`; do
-	su $testuser -c /tmp/mmap21
-done
+su $testuser -c /tmp/mmap21
 
 rm -f /tmp/mmap21 /tmp/mmap21.core
 exit 0
@@ -64,12 +62,13 @@ EOF
 #include <stdlib.h>
 #include <unistd.h>
 
-#define LOOPS 2
-#define PARALLEL 50
+#define LOOPS 1
+#define NMAPS 50
+#define PARALLEL 2
 
 void *p;
 
-void *
+static void *
 tmmap(void *arg __unused)
 {
 	size_t len;
@@ -78,13 +77,13 @@ tmmap(void *arg __unused)
 	pthread_set_name_np(pthread_self(), __func__);
 	len = 1LL * 128 * 1024 * 1024;
 
-	for (i = 0; i < 100; i++)
+	for (i = 0; i < NMAPS; i++)
 		p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON, -1, 0);
 
 	return (NULL);
 }
 
-void *
+static void *
 tmlock(void *arg __unused)
 {
 	size_t len;
@@ -108,9 +107,10 @@ tmlock(void *arg __unused)
 	return (NULL);
 }
 
-void
+static void
 test(void)
 {
+	pid_t pid;
 	pthread_t tid[2];
 	int i, rc;
 
@@ -120,11 +120,12 @@ test(void)
 		errc(1, rc, "tmlock()");
 
 	for (i = 0; i < 100; i++) {
-		if (fork() == 0) {
+		if ((pid = fork()) == 0) {
 			usleep(10000);
 			_exit(0);
 		}
-		wait(NULL);
+		if (waitpid(pid, NULL, 0) != pid)
+			err(1, "waitpid(%d)", pid);
 	}
 
 	raise(SIGSEGV);
@@ -138,18 +139,23 @@ test(void)
 int
 main(void)
 {
-	int i, j;
 
-	alarm(120);
+	pid_t pids[PARALLEL];
+	int e, i, j, status;
+
 	for (i = 0; i < LOOPS; i++) {
 		for (j = 0; j < PARALLEL; j++) {
-			if (fork() == 0)
+			if ((pids[j] = fork()) == 0)
 				test();
 		}
 
-		for (j = 0; j < PARALLEL; j++)
-			wait(NULL);
+		e = 0;
+		for (j = 0; j < PARALLEL; j++) {
+			if (waitpid(pids[j], &status, 0) == -1)
+				err(1, "waitpid(%d)", pids[j]);
+			e += status == 0 ? 0 : 1;
+		}
 	}
 
-	return (0);
+	return (e);
 }

From owner-svn-src-user@freebsd.org  Tue May  2 07:28:03 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 60C38D5A308
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 07:28:03 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 23D561279;
 Tue,  2 May 2017 07:28:03 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v427S2UD027156;
 Tue, 2 May 2017 07:28:02 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v427S2Zi027154;
 Tue, 2 May 2017 07:28:02 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705020728.v427S2Zi027154@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 07:28:02 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317675 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 07:28:03 -0000

Author: pho
Date: Tue May  2 07:28:01 2017
New Revision: 317675
URL: https://svnweb.freebsd.org/changeset/base/317675

Log:
  Adjust runtime.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/msync.sh
  user/pho/stress2/misc/msync2.sh

Modified: user/pho/stress2/misc/msync.sh
==============================================================================
--- user/pho/stress2/misc/msync.sh	Tue May  2 06:54:48 2017	(r317674)
+++ user/pho/stress2/misc/msync.sh	Tue May  2 07:28:01 2017	(r317675)
@@ -45,9 +45,10 @@ mycc -o msync -Wall -Wextra msync.c -lpt
 rm -f msync.c
 cd $odir
 
-/tmp/msync
-
-killall msync > /dev/null 2>&1
+/tmp/msync &
+sleep 180
+while pkill -9 msync; do :; done
+wait
 rm -f /tmp/msync
 exit
 

Modified: user/pho/stress2/misc/msync2.sh
==============================================================================
--- user/pho/stress2/misc/msync2.sh	Tue May  2 06:54:48 2017	(r317674)
+++ user/pho/stress2/misc/msync2.sh	Tue May  2 07:28:01 2017	(r317675)
@@ -60,7 +60,7 @@ EOF
 #include <time.h>
 #include <unistd.h>
 
-#define RUNTIME 300
+#define RUNTIME 400
 
 const char *file;
 char c;

From owner-svn-src-user@freebsd.org  Tue May  2 10:06:47 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id E8B72D573F6
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 10:06:47 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id AA71468F;
 Tue,  2 May 2017 10:06:47 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v42A6kB5091245;
 Tue, 2 May 2017 10:06:46 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v42A6kPm091242;
 Tue, 2 May 2017 10:06:46 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705021006.v42A6kPm091242@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 10:06:46 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317676 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 10:06:48 -0000

Author: pho
Date: Tue May  2 10:06:46 2017
New Revision: 317676
URL: https://svnweb.freebsd.org/changeset/base/317676

Log:
  Improve error handling.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/graid0.sh
  user/pho/stress2/misc/graid1.sh
  user/pho/stress2/misc/graid3.sh

Modified: user/pho/stress2/misc/graid0.sh
==============================================================================
--- user/pho/stress2/misc/graid0.sh	Tue May  2 07:28:01 2017	(r317675)
+++ user/pho/stress2/misc/graid0.sh	Tue May  2 10:06:46 2017	(r317676)
@@ -47,7 +47,7 @@ for u in $md1 $md2 $md3; do
 	mdconfig -a -t swap -s $size -u $u
 done
 
-gstripe load > /dev/null 2>&1
+gstripe load > /dev/null 2>&1 && unload=1
 gstripe label -v -s 131072 data /dev/md$md1 /dev/md$md2 /dev/md$md3  > \
     /dev/null || exit 1
 [ -c /dev/stripe/data ] || exit 1
@@ -62,9 +62,10 @@ su $testuser -c 'cd ..; ./run.sh marcus.
 while mount | grep $mntpoint | grep -q /stripe/; do
 	umount $mntpoint || sleep 1
 done
-gstripe stop data
-gstripe unload
+gstripe stop data && s=0 || s=1
+[ $unload ] && gstripe unload
 
 for u in $md3 $md2 $md1; do
 	mdconfig -d -u $u
 done
+exit $s

Modified: user/pho/stress2/misc/graid1.sh
==============================================================================
--- user/pho/stress2/misc/graid1.sh	Tue May  2 07:28:01 2017	(r317675)
+++ user/pho/stress2/misc/graid1.sh	Tue May  2 10:06:46 2017	(r317676)
@@ -38,6 +38,7 @@ md1=$mdstart
 md2=$((mdstart + 1))
 md3=$((mdstart + 2))
 
+s=0
 size=1g
 [ $((`sysctl -n hw.usermem` / 1024 / 1024 / 1024)) -le 4 ] &&
     size=512m
@@ -47,7 +48,7 @@ for u in $md1 $md2 $md3; do
 	mdconfig -a -t swap -s $size -u $u
 done
 
-gmirror load > /dev/null 2>&1
+gmirror load > /dev/null 2>&1 && unload=1
 gmirror label -v -b split -s 2048 data /dev/md$md1 /dev/md$md2 \
     /dev/md$md3 > /dev/null || exit 1
 [ -c /dev/mirror/data ] || exit 1
@@ -62,9 +63,11 @@ su $testuser -c 'cd ..; ./run.sh marcus.
 while mount | grep $mntpoint | grep -q /mirror/; do
 	umount $mntpoint || sleep 1
 done
-gmirror stop data
-gmirror unload
+gmirror stop data || s=1
+gmirror destroy data 2>/dev/null
+[ $unload ] && gmirror unload
 
 for u in $md3 $md2 $md1; do
-	mdconfig -d -u $u
+	mdconfig -d -u $u || s=3
 done
+exit $s

Modified: user/pho/stress2/misc/graid3.sh
==============================================================================
--- user/pho/stress2/misc/graid3.sh	Tue May  2 07:28:01 2017	(r317675)
+++ user/pho/stress2/misc/graid3.sh	Tue May  2 10:06:46 2017	(r317676)
@@ -48,7 +48,7 @@ for u in $md1 $md2 $md3; do
 	mdconfig -a -t swap -s $size -u $u
 done
 
-graid3 load > /dev/null 2>&1
+graid3 load > /dev/null 2>&1 && unload=1
 graid3 label -v -r data md$md1 md$md2 md$md3 > /dev/null || exit 1
 [ -c /dev/raid3/data ] || exit 1
 newfs $newfs_flags /dev/raid3/data  > /dev/null
@@ -64,9 +64,10 @@ while mount | grep $mntpoint | grep -q r
 	umount $mntpoint || sleep 1
 done
 
-graid3 stop data
-graid3 unload
+graid3 stop data && s=0 || s=1
+[ $unload ] && graid3 unload
 
 for u in $md3 $md2 $md1; do
 	mdconfig -d -u $u
 done
+exit $s
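[All three scripts now record whether they loaded the GEOM module themselves
(`load > /dev/null 2>&1 && unload=1`) and only unload it in that case, so a
module that was already resident before the test is left alone. The shape of
the pattern, with hypothetical `mod_load`/`mod_unload` functions standing in
for gstripe/gmirror/graid3 and a flag file simulating module residency:

```shell
#!/bin/sh
# "Only unload what you loaded": mod_load succeeds only when the
# module was not already resident, mirroring a kldload-style command.
flag=/tmp/graid_demo.loaded
rm -f $flag
mod_load()   { [ ! -f $flag ] && touch $flag; }
mod_unload() { rm -f $flag; }

mod_load > /dev/null 2>&1 && unload=1

# ... the actual test would run here ...

[ $unload ] && mod_unload	# skipped entirely if mod_load failed
[ -f $flag ] || echo "module unloaded by us"
```

If the module had been loaded beforehand, `mod_load` fails, `unload` stays
unset, and the cleanup leaves the module in place.]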

From owner-svn-src-user@freebsd.org  Tue May  2 10:13:52 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 58FFFD576B9
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Tue,  2 May 2017 10:13:52 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 347A9B9A;
 Tue,  2 May 2017 10:13:52 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v42ADpoH095225;
 Tue, 2 May 2017 10:13:51 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v42ADpW4095222;
 Tue, 2 May 2017 10:13:51 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705021013.v42ADpW4095222@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Tue, 2 May 2017 10:13:51 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317677 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-List-Received-Date: Tue, 02 May 2017 10:13:52 -0000

Author: pho
Date: Tue May  2 10:13:50 2017
New Revision: 317677
URL: https://svnweb.freebsd.org/changeset/base/317677

Log:
  Add a new isofs test.
  If mkisofs is not installed, just ignore the test.
  
  Sponsored by:	Dell EMC Isilon

Added:
  user/pho/stress2/misc/isofs3.sh   (contents, props changed)
Modified:
  user/pho/stress2/misc/isofs.sh
  user/pho/stress2/misc/isofs2.sh

Modified: user/pho/stress2/misc/isofs.sh
==============================================================================
--- user/pho/stress2/misc/isofs.sh	Tue May  2 10:06:46 2017	(r317676)
+++ user/pho/stress2/misc/isofs.sh	Tue May  2 10:13:50 2017	(r317677)
@@ -30,7 +30,7 @@
 
 [ `id -u ` -ne 0 ] && echo "Must not be root!" && exit 1
 
-[ -z "`type mkisofs 2>/dev/null`" ] && echo "mkisofs not found" && exit 1
+[ -z "`type mkisofs 2>/dev/null`" ] && echo "mkisofs not found" && exit 0
 
 . ../default.cfg
 

Modified: user/pho/stress2/misc/isofs2.sh
==============================================================================
--- user/pho/stress2/misc/isofs2.sh	Tue May  2 10:06:46 2017	(r317676)
+++ user/pho/stress2/misc/isofs2.sh	Tue May  2 10:13:50 2017	(r317677)
@@ -35,7 +35,7 @@
 
 [ `id -u ` -ne 0 ] && echo "Must not be root!" && exit 1
 
-[ -z "`type mkisofs 2>/dev/null`" ] && echo "mkisofs not found" && exit 1
+[ -z "`type mkisofs 2>/dev/null`" ] && echo "mkisofs not found" && exit 0
 
 . ../default.cfg
 

Added: user/pho/stress2/misc/isofs3.sh
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ user/pho/stress2/misc/isofs3.sh	Tue May  2 10:13:50 2017	(r317677)
@@ -0,0 +1,68 @@
+#!/bin/sh
+
+#
+# Copyright (c) 2016 Dell EMC
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+# ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+# SUCH DAMAGE.
+#
+# $FreeBSD$
+#
+
+# Simple isofs / union test scenario
+
+[ `id -u ` -ne 0 ] && echo "Must be root!" && exit 1
+[ -z "`which mkisofs`" ] && echo "mkisofs not found" && exit 0
+
+. ../default.cfg
+
+D=`dirname $diskimage`/dir
+I=`dirname $diskimage`/dir.iso
+
+rm -rf $D $I
+mkdir -p $D
+cp -r ../../stress2 $D 2>/dev/null
+
+mkisofs -o $I -r $D > /dev/null 2>&1
+
+mount | grep -q /dev/md${mdstart}$part && umount -f /dev/md${mdstart}$part
+[ -c /dev/md$mdstart ] && mdconfig -d -u $mdstart
+mdconfig -a -t vnode -f $I -u $mdstart || exit 1
+mount -t cd9660 /dev/md$mdstart $mntpoint || exit 1
+
+m2=$((mdstart + 1))
+mdconfig -s 1g -u $m2
+bsdlabel -w md$m2 auto
+newfs $newfs_flags md${m2}$part > /dev/null
+
+mount -o union /dev/md${m2}$part $mntpoint || exit 1
+
+export RUNDIR=$mntpoint/stressX
+export runRUNTIME=5m
+(cd $mntpoint/stress2; ./run.sh marcus.cfg) > /dev/null
+
+umount $mntpoint
+mdconfig -d -u $m2
+umount $mntpoint
+mdconfig -d -u $mdstart
+rm -rf $D $I
+exit 0

From owner-svn-src-user@freebsd.org  Thu May  4 06:41:18 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 63A19D5D794
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Thu,  4 May 2017 06:41:18 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 348111706;
 Thu,  4 May 2017 06:41:18 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v446fHdE006418;
 Thu, 4 May 2017 06:41:17 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v446fHxT006417;
 Thu, 4 May 2017 06:41:17 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705040641.v446fHxT006417@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Thu, 4 May 2017 06:41:17 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317787 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-user@freebsd.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: "SVN commit messages for the experimental &quot; user&quot;
 src tree" <svn-src-user.freebsd.org>
List-Unsubscribe: <https://lists.freebsd.org/mailman/options/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=unsubscribe>
List-Archive: <http://lists.freebsd.org/pipermail/svn-src-user/>
List-Post: <mailto:svn-src-user@freebsd.org>
List-Help: <mailto:svn-src-user-request@freebsd.org?subject=help>
List-Subscribe: <https://lists.freebsd.org/mailman/listinfo/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=subscribe>
X-List-Received-Date: Thu, 04 May 2017 06:41:18 -0000

Author: pho
Date: Thu May  4 06:41:17 2017
New Revision: 317787
URL: https://svnweb.freebsd.org/changeset/base/317787

Log:
  Note when the problem was fixed.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/maxmemdom.sh

Modified: user/pho/stress2/misc/maxmemdom.sh
==============================================================================
--- user/pho/stress2/misc/maxmemdom.sh	Thu May  4 05:28:46 2017	(r317786)
+++ user/pho/stress2/misc/maxmemdom.sh	Thu May  4 06:41:17 2017	(r317787)
@@ -28,9 +28,10 @@
 # $FreeBSD$
 #
 
-# Demonstrate that "options MAXMEMDOM" is broken.
+# Demonstrate that "options MAXMEMDOM" is broken. (NUMA test)
 # panic: vm_page_alloc: missing page
 # https://people.freebsd.org/~pho/stress/log/maxmemdom.txt
+# Fixed in r293640.
 
 . ../default.cfg
 

From owner-svn-src-user@freebsd.org  Thu May  4 06:45:45 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id 138F5D5D8DA
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Thu,  4 May 2017 06:45:45 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id C0CB11A0F;
 Thu,  4 May 2017 06:45:44 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v446jhol009481;
 Thu, 4 May 2017 06:45:43 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v446jhtf009479;
 Thu, 4 May 2017 06:45:43 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705040645.v446jhtf009479@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Thu, 4 May 2017 06:45:43 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317788 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-user@freebsd.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: "SVN commit messages for the experimental &quot; user&quot;
 src tree" <svn-src-user.freebsd.org>
List-Unsubscribe: <https://lists.freebsd.org/mailman/options/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=unsubscribe>
List-Archive: <http://lists.freebsd.org/pipermail/svn-src-user/>
List-Post: <mailto:svn-src-user@freebsd.org>
List-Help: <mailto:svn-src-user-request@freebsd.org?subject=help>
List-Subscribe: <https://lists.freebsd.org/mailman/listinfo/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=subscribe>
X-List-Received-Date: Thu, 04 May 2017 06:45:45 -0000

Author: pho
Date: Thu May  4 06:45:43 2017
New Revision: 317788
URL: https://svnweb.freebsd.org/changeset/base/317788

Log:
  Limit test runtime and note when the problems were fixed.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/oovm.sh
  user/pho/stress2/misc/oovm2.sh

Modified: user/pho/stress2/misc/oovm.sh
==============================================================================
--- user/pho/stress2/misc/oovm.sh	Thu May  4 06:41:17 2017	(r317787)
+++ user/pho/stress2/misc/oovm.sh	Thu May  4 06:45:43 2017	(r317788)
@@ -28,19 +28,23 @@
 # $FreeBSD$
 #
 
-# Out of VM deadlock seen.
+# Out of VM deadlock seen. Introduced by r285808.
 # https://people.freebsd.org/~pho/stress/log/oovm.txt
 # https://people.freebsd.org/~pho/stress/log/oovm-2.txt
 
+# Fixed by r290047 and <alc's PQ_LAUNDRY patch>
+
 # Test scenario suggestion by alc@
 
 . ../default.cfg
 
 [ `swapinfo | wc -l` -eq 1 ] && exit 0
+maxsize=$((2 * 1024)) # Limit size due to runtime reasons
 size=$((`sysctl -n hw.physmem` / 1024 / 1024))
-need=$((size * 2))
 [ $size -gt $((4 * 1024)) ] &&
-    echo "RAM should be be capped to 4G for this test."
+    echo "RAM should be capped to 4GB for this test."
+[ $size -gt $maxsize ] && size=$maxsize
+need=$((size * 2))
 d1=${diskimage}.1
 d2=${diskimage}.2
 rm -f $d1 $d2
@@ -49,7 +53,7 @@ rm -f $d1 $d2
 dd if=/dev/zero of=$d1 bs=1m count=$size 2>&1 | \
     egrep -v "records|transferred"
 cp $d1 $d2 || exit
-trap "rm -f $d1 $d2" EXIT SIGINT
+trap "rm -f $d1 $d2" EXIT INT
 
 dir=/tmp
 odir=`pwd`

Modified: user/pho/stress2/misc/oovm2.sh
==============================================================================
--- user/pho/stress2/misc/oovm2.sh	Thu May  4 06:41:17 2017	(r317787)
+++ user/pho/stress2/misc/oovm2.sh	Thu May  4 06:45:43 2017	(r317788)
@@ -31,12 +31,16 @@
 # Out of VM deadlock seen. Introduced by r285808. Variation of oovm.sh
 # https://people.freebsd.org/~pho/stress/log/oovm2.txt
 
+# Fixed by r290047 and <alc's PQ_LAUNDRY patch>
+
 # Test scenario suggestion by alc@
 
 . ../default.cfg
 
 [ `swapinfo | wc -l` -eq 1 ] && exit 0
+maxsize=$((2 * 1024)) # Limit size due to runtime reasons
 size=$((`sysctl -n hw.physmem` / 1024 / 1024))
+[ $size -gt $maxsize ] && size=$maxsize
 d1=${diskimage}.1
 d2=${diskimage}.2
 d3=${diskimage}.3

From owner-svn-src-user@freebsd.org  Thu May  4 12:19:01 2017
Return-Path: <owner-svn-src-user@freebsd.org>
Delivered-To: svn-src-user@mailman.ysv.freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org
 [IPv6:2001:1900:2254:206a::19:1])
 by mailman.ysv.freebsd.org (Postfix) with ESMTP id A3C4CD5C855
 for <svn-src-user@mailman.ysv.freebsd.org>;
 Thu,  4 May 2017 12:19:01 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org (repo.freebsd.org
 [IPv6:2610:1c1:1:6068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by mx1.freebsd.org (Postfix) with ESMTPS id 795D5680;
 Thu,  4 May 2017 12:19:01 +0000 (UTC) (envelope-from pho@FreeBSD.org)
Received: from repo.freebsd.org ([127.0.1.37])
 by repo.freebsd.org (8.15.2/8.15.2) with ESMTP id v44CJ0de045334;
 Thu, 4 May 2017 12:19:00 GMT (envelope-from pho@FreeBSD.org)
Received: (from pho@localhost)
 by repo.freebsd.org (8.15.2/8.15.2/Submit) id v44CIwA7045314;
 Thu, 4 May 2017 12:18:58 GMT (envelope-from pho@FreeBSD.org)
Message-Id: <201705041218.v44CIwA7045314@repo.freebsd.org>
X-Authentication-Warning: repo.freebsd.org: pho set sender to pho@FreeBSD.org
 using -f
From: Peter Holm <pho@FreeBSD.org>
Date: Thu, 4 May 2017 12:18:58 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-user@freebsd.org
Subject: svn commit: r317791 - user/pho/stress2/misc
X-SVN-Group: user
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-src-user@freebsd.org
X-Mailman-Version: 2.1.23
Precedence: list
List-Id: "SVN commit messages for the experimental &quot; user&quot;
 src tree" <svn-src-user.freebsd.org>
List-Unsubscribe: <https://lists.freebsd.org/mailman/options/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=unsubscribe>
List-Archive: <http://lists.freebsd.org/pipermail/svn-src-user/>
List-Post: <mailto:svn-src-user@freebsd.org>
List-Help: <mailto:svn-src-user-request@freebsd.org?subject=help>
List-Subscribe: <https://lists.freebsd.org/mailman/listinfo/svn-src-user>,
 <mailto:svn-src-user-request@freebsd.org?subject=subscribe>
X-List-Received-Date: Thu, 04 May 2017 12:19:01 -0000

Author: pho
Date: Thu May  4 12:18:58 2017
New Revision: 317791
URL: https://svnweb.freebsd.org/changeset/base/317791

Log:
  Style.
  
  Sponsored by:	Dell EMC Isilon

Modified:
  user/pho/stress2/misc/gnop.sh
  user/pho/stress2/misc/gnop2.sh
  user/pho/stress2/misc/gnop3.sh
  user/pho/stress2/misc/gnop4.sh
  user/pho/stress2/misc/graid1_5.sh
  user/pho/stress2/misc/maxmemdom.sh
  user/pho/stress2/misc/oovm.sh
  user/pho/stress2/misc/oovm2.sh
  user/pho/stress2/misc/suj12.sh
  user/pho/stress2/misc/suj3.sh
  user/pho/stress2/misc/vmio.sh
  user/pho/stress2/misc/zfs2.sh
  user/pho/stress2/misc/zfs3.sh
  user/pho/stress2/misc/zfs4.sh
  user/pho/stress2/misc/zfs5.sh

Modified: user/pho/stress2/misc/gnop.sh
==============================================================================
--- user/pho/stress2/misc/gnop.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/gnop.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -42,8 +42,8 @@ test() {
 
 	mdconfig -a -t swap -s 2g -u $mdstart || exit 1
 	gnop create -S $1 /dev/md$mdstart
-	newfs $newfs_flags /dev/md${mdstart}.nop > /dev/null
-	mount /dev/md${mdstart}.nop $mntpoint
+	newfs $newfs_flags /dev/md$mdstart.nop > /dev/null
+	mount /dev/md$mdstart.nop $mntpoint
 	chmod 777 $mntpoint
 
 	export runRUNTIME=4m
@@ -54,8 +54,8 @@ test() {
 	while mount | grep $mntpoint | grep -q /dev/md; do
 		umount $mntpoint || sleep 1
 	done
-	checkfs /dev/md${mdstart}.nop
-	gnop destroy /dev/md${mdstart}.nop
+	checkfs /dev/md$mdstart.nop
+	gnop destroy /dev/md$mdstart.nop
 	mdconfig -d -u $mdstart
 }
 

Modified: user/pho/stress2/misc/gnop2.sh
==============================================================================
--- user/pho/stress2/misc/gnop2.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/gnop2.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -55,8 +55,8 @@ test() {
 
 	mdconfig -a -t swap -s 2g -u $mdstart || exit 1
 	gnop create -S $1 /dev/md$mdstart
-	newfs $newfs_flags /dev/md${mdstart}.nop > /dev/null
-	mount /dev/md${mdstart}.nop $mntpoint
+	newfs $newfs_flags /dev/md$mdstart.nop > /dev/null
+	mount /dev/md$mdstart.nop $mntpoint
 	chmod 777 $mntpoint
 
 	dd if=/dev/zero of=$mntpoint/file bs=1k count=333 2>&1 | \
@@ -66,7 +66,7 @@ test() {
 	while mount | grep $mntpoint | grep -q /dev/md; do
 		umount $mntpoint || sleep 1
 	done
-	gnop destroy /dev/md${mdstart}.nop
+	gnop destroy /dev/md$mdstart.nop
 	mdconfig -d -u $mdstart
 }
 

Modified: user/pho/stress2/misc/gnop3.sh
==============================================================================
--- user/pho/stress2/misc/gnop3.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/gnop3.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -46,8 +46,8 @@ mount | grep $mntpoint | grep -q /dev/md
 
 mdconfig -a -t swap -s 8g -u $mdstart || exit 1
 gnop create -S 8k /dev/md$mdstart
-newfs $newfs_flags /dev/md${mdstart}.nop > /dev/null
-mount /dev/md${mdstart}.nop $mntpoint
+newfs $newfs_flags /dev/md$mdstart.nop > /dev/null
+mount /dev/md$mdstart.nop $mntpoint
 chmod 777 $mntpoint
 
 cp -a ../../stress2 $mntpoint
@@ -63,7 +63,7 @@ cd $here
 while mount | grep $mntpoint | grep -q /dev/md; do
 	umount $mntpoint || sleep 1
 done
-gnop destroy /dev/md${mdstart}.nop
+gnop destroy /dev/md$mdstart.nop
 mdconfig -d -u $mdstart
 [ $notloaded ] && gnop unload
 exit 0

Modified: user/pho/stress2/misc/gnop4.sh
==============================================================================
--- user/pho/stress2/misc/gnop4.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/gnop4.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -47,8 +47,8 @@ mount | grep $mntpoint | grep -q /dev/md
 
 mdconfig -a -t swap -s ${gigs}g -u $mdstart || exit 1
 gnop create -S 8k /dev/md$mdstart
-newfs $newfs_flags /dev/md${mdstart}.nop > /dev/null
-mount /dev/md${mdstart}.nop $mntpoint
+newfs $newfs_flags /dev/md$mdstart.nop > /dev/null
+mount /dev/md$mdstart.nop $mntpoint
 chmod 777 $mntpoint
 
 start=`date '+%s'`
@@ -69,7 +69,7 @@ cd /
 while mount | grep $mntpoint | grep -q /dev/md; do
 	umount $mntpoint || sleep 1
 done
-gnop destroy /dev/md${mdstart}.nop
+gnop destroy /dev/md$mdstart.nop
 mdconfig -d -u $mdstart
 [ $notloaded ] && gnop unload
 exit 0

Modified: user/pho/stress2/misc/graid1_5.sh
==============================================================================
--- user/pho/stress2/misc/graid1_5.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/graid1_5.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -62,7 +62,7 @@ gpart add -t freebsd-ufs -s 340m md$u3
 ) > /dev/null
 gnop create md$u2
 gnop create md$u3
-gmirror label test md${u1}p1 md${u2}.nopp1 md${u3}.nopp1
+gmirror label test md${u1}p1 md$u2.nopp1 md$u3.nopp1
 [ -c /dev/mirror/test ] || exit 1
 
 newfs /dev/mirror/test > /dev/null
@@ -77,18 +77,18 @@ rm -rf /tmp/stressX.control
 su $testuser -c 'cd ..; ./run.sh marcus.cfg' > /dev/null 2>&1 &
 pid=$!
 
-gnop configure -r 0 -w 1 md${u2}.nop
-gnop configure -r 0 -w 1 md${u3}.nop
+gnop configure -r 0 -w 1 md$u2.nop
+gnop configure -r 0 -w 1 md$u3.nop
 while kill -0 $pid > /dev/null 2>&1; do
-	if ! gmirror status test | grep -q md${u2}.nopp1; then
+	if ! gmirror status test | grep -q md$u2.nopp1; then
 		gmirror forget test
-		gmirror remove test md${u2}.nopp1 2>/dev/null
-		gmirror insert test md${u2}.nopp1 2>/dev/null
+		gmirror remove test md$u2.nopp1 2>/dev/null
+		gmirror insert test md$u2.nopp1 2>/dev/null
 	fi
-	if ! gmirror status test | grep -q md${u3}.nopp1; then
+	if ! gmirror status test | grep -q md$u3.nopp1; then
 		gmirror forget test
-		gmirror remove test md${u3}.nopp1 2>/dev/null
-		gmirror insert test md${u3}.nopp1 2>/dev/null
+		gmirror remove test md$u3.nopp1 2>/dev/null
+		gmirror insert test md$u3.nopp1 2>/dev/null
 	fi
 	sleep 1
 done

Modified: user/pho/stress2/misc/maxmemdom.sh
==============================================================================
--- user/pho/stress2/misc/maxmemdom.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/maxmemdom.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -38,8 +38,8 @@
 [ `sysctl -n vm.ndomains` -eq 1 ] && exit 0
 size=$((`sysctl -n hw.physmem` / 1024 / 1024))
 need=$((size * 2))
-d1=${diskimage}.1
-d2=${diskimage}.2
+d1=$diskimage.1
+d2=$diskimage.2
 rm -f $d1 $d2
 [ `df -k $(dirname $diskimage) | tail -1 | awk '{print int($4 / 1024)'}` -lt \
     $need ] && printf "Need %d MB on %s.\n" $need `dirname $diskimage` && exit

Modified: user/pho/stress2/misc/oovm.sh
==============================================================================
--- user/pho/stress2/misc/oovm.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/oovm.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -45,8 +45,8 @@ size=$((`sysctl -n hw.physmem` / 1024 / 
     echo "RAM should be capped to 4GB for this test."
 [ $size -gt $maxsize ] && size=$maxsize
 need=$((size * 2))
-d1=${diskimage}.1
-d2=${diskimage}.2
+d1=$diskimage.1
+d2=$diskimage.2
 rm -f $d1 $d2
 [ `df -k $(dirname $diskimage) | tail -1 | awk '{print int($4 / 1024)'}` -lt \
     $need ] && printf "Need %d MB on %s.\n" $need `dirname $diskimage` && exit

Modified: user/pho/stress2/misc/oovm2.sh
==============================================================================
--- user/pho/stress2/misc/oovm2.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/oovm2.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -41,10 +41,10 @@
 maxsize=$((2 * 1024)) # Limit size due to runtime reasons
 size=$((`sysctl -n hw.physmem` / 1024 / 1024))
 [ $size -gt $maxsize ] && size=$maxsize
-d1=${diskimage}.1
-d2=${diskimage}.2
-d3=${diskimage}.3
-d4=${diskimage}.4
+d1=$diskimage.1
+d2=$diskimage.2
+d3=$diskimage.3
+d4=$diskimage.4
 rm -f $d1 $d2 $d3 $d4
 [ `df -k $(dirname $diskimage) | tail -1 | awk '{print int($4 / 1024)'}` -lt \
     $size ] && printf "Need %d MB on %s.\n" $size `dirname $diskimage` && exit

Modified: user/pho/stress2/misc/suj12.sh
==============================================================================
--- user/pho/stress2/misc/suj12.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/suj12.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -41,8 +41,8 @@ mdconfig -l | grep -q md$mdstart &&  mdc
 mdconfig -a -t swap -s 1g -u $mdstart || exit 1
 gnop status || exit 1
 gnop create -S 4k /dev/md$mdstart
-newfs -j /dev/md${mdstart}.nop
-mount /dev/md${mdstart}.nop $mntpoint
+newfs -j /dev/md$mdstart.nop
+mount /dev/md$mdstart.nop $mntpoint
 chmod 777 $mntpoint
 
 export runRUNTIME=20m
@@ -53,7 +53,7 @@ su $testuser -c 'cd ..; ./run.sh marcus.
 while mount | grep $mntpoint | grep -q /dev/md; do
 	umount $mntpoint || sleep 1
 done
-checkfs /dev/md${mdstart}.nop
-gnop destroy /dev/md${mdstart}.nop
+checkfs /dev/md$mdstart.nop
+gnop destroy /dev/md$mdstart.nop
 gnop unload
 mdconfig -d -u $mdstart

Modified: user/pho/stress2/misc/suj3.sh
==============================================================================
--- user/pho/stress2/misc/suj3.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/suj3.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -46,20 +46,20 @@ echo test | geli init -s 4096 -J - -K /t
 echo test | geli attach -j - -k /tmp/suj3.key /dev/md$mdstart
 newfs /dev/md$mdstart.eli > /dev/null
 
-tunefs -j enable /dev/md${mdstart}.eli
-mount /dev/md${mdstart}.eli $mntpoint
+tunefs -j enable /dev/md$mdstart.eli
+mount /dev/md$mdstart.eli $mntpoint
 chmod 777 $mntpoint
 
 export RUNDIR=$mntpoint/stressX
 export runRUNTIME=5m
 
-mount | grep -q md${mdstart}.eli && \
+mount | grep -q md$mdstart.eli && \
 	su $testuser -c "cd ..; ./run.sh rw.cfg"
 
 while mount | grep $mntpoint | grep -q /dev/md; do
 	umount $mntpoint || sleep 1
 done
-checkfs /dev/md${mdstart}.eli
+checkfs /dev/md$mdstart.eli
 geli kill /dev/md$mdstart.eli
 mdconfig -d -u $mdstart
 rm -f /tmp/suj3.key

Modified: user/pho/stress2/misc/vmio.sh
==============================================================================
--- user/pho/stress2/misc/vmio.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/vmio.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -46,8 +46,8 @@
 [ `swapinfo | wc -l` -eq 1 ] || { swapoff -a; off=1; }
 size=$((`sysctl -n hw.physmem` / 1024 / 1024))
 need=$((size * 2))
-d1=${diskimage}.1
-d2=${diskimage}.2
+d1=$diskimage.1
+d2=$diskimage.2
 rm -f $d1 $d2 || exit 1
 [ `df -k $(dirname $diskimage) | tail -1 | awk '{print int($4 / 1024)'}` \
     -lt $need ] &&

Modified: user/pho/stress2/misc/zfs2.sh
==============================================================================
--- user/pho/stress2/misc/zfs2.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/zfs2.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -38,8 +38,8 @@
 kldstat -v | grep -q zfs.ko  || { kldload zfs.ko ||
     exit 0; loaded=1; }
 
-d1=${diskimage}.1
-d2=${diskimage}.2
+d1=$diskimage.1
+d2=$diskimage.2
 
 dd if=/dev/zero of=$d1 bs=1m count=1k 2>&1 | egrep -v "records|transferred"
 dd if=/dev/zero of=$d2 bs=1m count=1k 2>&1 | egrep -v "records|transferred"

Modified: user/pho/stress2/misc/zfs3.sh
==============================================================================
--- user/pho/stress2/misc/zfs3.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/zfs3.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -44,8 +44,8 @@
 kldstat -v | grep -q zfs.ko  || { kldload zfs.ko ||
     exit 0; loaded=1; }
 
-d1=${diskimage}.1
-d2=${diskimage}.2
+d1=$diskimage.1
+d2=$diskimage.2
 
 dd if=/dev/zero of=$d1 bs=1m count=1k 2>&1 | egrep -v "records|transferred"
 dd if=/dev/zero of=$d2 bs=1m count=1k 2>&1 | egrep -v "records|transferred"

Modified: user/pho/stress2/misc/zfs4.sh
==============================================================================
--- user/pho/stress2/misc/zfs4.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/zfs4.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -38,8 +38,8 @@
 kldstat -v | grep -q zfs.ko  || { kldload zfs.ko ||
     exit 0; loaded=1; }
 
-d1=${diskimage}.1
-d2=${diskimage}.2
+d1=$diskimage.1
+d2=$diskimage.2
 
 dd if=/dev/zero of=$d1 bs=1m count=1k 2>&1 | egrep -v "records|transferred"
 dd if=/dev/zero of=$d2 bs=1m count=1k 2>&1 | egrep -v "records|transferred"

Modified: user/pho/stress2/misc/zfs5.sh
==============================================================================
--- user/pho/stress2/misc/zfs5.sh	Thu May  4 11:57:52 2017	(r317790)
+++ user/pho/stress2/misc/zfs5.sh	Thu May  4 12:18:58 2017	(r317791)
@@ -38,8 +38,8 @@
 kldstat -v | grep -q zfs.ko  || { kldload zfs.ko ||
     exit 0; loaded=1; }
 
-d1=${diskimage}.1
-d2=${diskimage}.2
+d1=$diskimage.1
+d2=$diskimage.2
 
 dd if=/dev/zero of=$d1 bs=1m count=1k 2>&1 | egrep -v "records|transferred"
 dd if=/dev/zero of=$d2 bs=1m count=1k 2>&1 | egrep -v "records|transferred"