Date:      Wed, 30 Oct 2019 02:03:37 +0000 (UTC)
From:      Alan Somers <asomers@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org, svn-src-stable-12@freebsd.org
Subject:   svn commit: r354165 - in stable/12: . tests/sys/cddl/zfs/include tests/sys/cddl/zfs/tests/cli_root/zdb tests/sys/cddl/zfs/tests/cli_root/zpool_add tests/sys/cddl/zfs/tests/cli_root/zpool_create tes...
Message-ID:  <201910300203.x9U23b7U018593@repo.freebsd.org>

Author: asomers
Date: Wed Oct 30 02:03:37 2019
New Revision: 354165
URL: https://svnweb.freebsd.org/changeset/base/354165

Log:
  MFC r353117-r353118, r353281-r353282, r353284-r353289, r353309-r353310, r353360-r353361, r353366, r353379
  
  r353117:
  ZFS: the hotspare_add_004_neg test needs at least two disks
  
  Sponsored by:	Axcient
  
  r353118:
  ZFS: fix several of the "zpool create" tests
  
  * Remove zpool_create_013_neg.  FreeBSD doesn't have an equivalent of
    Solaris's metadevices.  GEOM would be the equivalent, but since all geoms
    are the same from ZFS's perspective, this test would be redundant with
    zpool_create_012_neg
  
  * Remove zpool_create_014_neg.  FreeBSD does not support swapping to regular
    files.
  
  * Remove zpool_create_016_pos.  This test is redundant with literally every
    other test that creates a disk-backed pool.
  
  * s:/etc/vfstab:/etc/fstab in zpool_create_011_neg
  
  * Delete the VTOC-related portion of zpool_create_008_pos.  FreeBSD doesn't
    use VTOC.
  
  * Replace dumpadm with dumpon and swap with swapon in multiple tests (a
    sketch of the substitution follows this entry).
  
  * In zpool_create_015_neg, don't require "zpool create -n" to fail.  It's
    reasonable for that variant to succeed, because it doesn't actually open
    the zvol.
  
  * Greatly simplify zpool_create_012_neg.  Make it safer, too, by not
    interfering with the system's regular swap devices.
  
  * Expect zpool_create_011_neg to fail (PR 241070)
  
  * Delete some redundant cleanup steps in various tests
  
  * Remove some unneeded ATF timeout specifications.  The default is fine.
  
  PR:		241070
  Sponsored by:	Axcient
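
  A minimal sketch of the dumpadm/swap substitution described above (the
  variable names follow the suite's conventions as they appear in the diff
  below; the Solaris lines are shown only for contrast):

    # Solaris idioms being replaced:
    #   log_must $DUMPADM -u -d /dev/$dump_dev
    #   log_must $SWAP -a /dev/zvol/$vol_name
    # FreeBSD equivalents used by the updated tests:
    log_must $DUMPON $dump_dev            # configure the dump device
    log_must $SWAPON /dev/zvol/$vol_name  # enable swapping to the zvol

    # ...and the matching cleanup:
    $DUMPON -r $dump_dev
    $SWAPOFF /dev/zvol/$vol_name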
  
  r353281:
  ZFS: fix several zvol_misc tests
  
  * Adapt zvol_misc_001_neg to use dumpon instead of Solaris's dumpadm
  * Disable zvol_misc_003_neg, zvol_misc_005_neg, and zvol_misc_006_pos,
    because they involve using a zvol as a dump device, which FreeBSD does not
    yet support.
  
  Sponsored by:	Axcient
  
  r353282:
  zfs: fix the slog_012_neg test
  
  This test attempts to corrupt a file-backed vdev by deleting it and then
  recreating it with truncate.  But that doesn't work, because the pool
  already has the vdev open, and it happily hangs on to the open-but-deleted
  file.  Fix by truncating the file without deleting it.
  
  Sponsored by:	Axcient
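
  A hedged sketch of the fix described above (the variable names and size
  are illustrative, not necessarily the test's actual ones):

    # Old approach: rm + recreate.  The pool still holds the deleted file
    # open, keeps using the old inode, and never sees any corruption.
    #   $RM -f $VDEV; $TRUNCATE -s $VDEV_SIZE $VDEV
    # New approach: truncate the file in place, which the pool's open
    # descriptor does observe.
    log_must $TRUNCATE -s 0 $VDEV
    log_must $TRUNCATE -s $VDEV_SIZE $VDEV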
  
  r353284:
  ZFS: fix the zpool_get_002_pos test
  
  ZFS has grown some additional properties that hadn't been added to the
  config file yet.  While I'm here, improve the error message, and remove a
  superfluous command.
  
  Sponsored by:	Axcient
  
  r353285:
  zfs: fix the zdb_001_neg test
  
  The test needed to be updated for r331701 (MFV illumos 8671400), which added
  a "-k" option.
  
  Sponsored by:	Axcient
  
  r353286:
  zfs: skip the zfsd tests if zfsd is not running
  
  Sponsored by:	Axcient
  Differential Revision:	https://reviews.freebsd.org/D21878
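
  A plausible sketch of the guard added to the zfsd test wrappers (the skip
  message is illustrative):

    # Skip the test case unless the zfsd daemon is actually running.
    if ! pgrep -q -x zfsd; then
        atf_skip "zfsd is not running"
    fi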
  
  r353287:
  ZFS: fix the delegate tests
  
  These tests have never worked correctly:
  
  * Replace runwattr with sudo (a sketch follows this entry)
  * Fix a scoping bug with the "dtst" variable
  * Cleanup user properties created during tests
  * Eliminate the checks for refreservation and send support. They will always
    be supported.
  * Fix verify_fs_snapshot. It seemed to assume that permissions would not yet
    be delegated, but that's not how it's actually used.
  * Combine verify_fs_promote with verify_vol_promote
  * Remove some useless sleeps
  * Fix backwards condition in verify_vol_volsize
  * Remove some redundant cleanup steps in the tests. cleanup.ksh will handle
    everything.
  * Disable some parts of the tests that FreeBSD doesn't support:
      * Creating snapshots with mkdir
      * devices
      * shareiscsi
      * sharenfs
      * xattr
      * zoned
  
  The sharenfs parts could probably be reenabled with more work to remove the
  Solarisms.
  
  Sponsored by:	Axcient
  Differential Revision:	https://reviews.freebsd.org/D21898
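
  A hedged sketch of the runwattr -> sudo change for exercising a delegated
  permission as an unprivileged user ($STAFF1 and the snapshot name are
  illustrative):

    # Grant the permission, then verify that it really works for that user.
    log_must $ZFS allow $STAFF1 snapshot $TESTPOOL/$TESTFS
    log_must sudo -u $STAFF1 $ZFS snapshot $TESTPOOL/$TESTFS@delegated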
  
  r353288:
  ZFS: mark hotspare_scrub_002_pos as an expected failure
  
  "zpool scrub" doesn't detect all errors on active spares in raidz arrays
  
  PR:		241069
  Sponsored by:	Axcient
  
  r353289:
  ZFS: fix the redundancy tests
  
  * Fix force_sync_path, which ensures that a file is fully flushed to disk.
    Apparently "zpool history"'s performance has improved, but exporting and
    importing the pool still works.
  * Fix file_dva by using undocumented zdb syntax to clarify that we're
    interested in the pool's root file system, not the pool itself. This
    should also fix the zpool_clear_001_pos test.
  * Remove a redundant cleanup step
  
  Sponsored by:	Axcient
  Differential Revision:	https://reviews.freebsd.org/D21901
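
  The two library fixes boil down to the following (both appear in the
  libtest.kshlib hunks below):

    # force_sync_path: a plain sync(8) only does a zil_commit() on FreeBSD,
    # so bounce the pool to get a real txg sync.
    log_must $ZPOOL export $TESTPOOL
    log_must $ZPOOL import -d $path $TESTPOOL

    # file_dva: ask zdb about the pool's root file system rather than the
    # pool itself by appending a trailing slash to the dataset name.
    $ZDB -P -vvvvv "$dataset/" $inode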
  
  r353309:
  zfs: fix the zfsd_autoreplace_003_pos test
  
  The test declared that it only needed 5 disks, but actually tried to use 6.
  Fix it to use just 5, which is all it really needs.
  
  Sponsored by:	Axcient
  
  r353310:
  zfs: fix the zfsd_hotspare_007_pos test
  
  It was trying to destroy the pool while zfsd was detaching the spare, and
  "zpool destroy" failed.  Fix by waiting until the spare has fully detached.
  
  Sponsored by:	Axcient
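
  A hedged sketch of the added wait (the polling loop, timeout, and
  $SPARE_DISK are illustrative; the real test may structure this
  differently):

    # Let zfsd finish detaching the spare before destroying the pool.
    typeset -i i=0
    while $ZPOOL status $TESTPOOL | $GREP -q "$SPARE_DISK"; do
        $SLEEP 1
        (( i += 1 ))
        (( i > 60 )) && log_fail "spare never detached"
    done
    destroy_pool $TESTPOOL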
  
  r353360:
  ZFS: multiple fixes to the zpool_import tests
  
  * Don't create a UFS mountpoint just to store some temporary files.  The
    tests should always be executed with a sufficiently large TMPDIR.
    Creating the UFS mountpoint is not only unnecessary, but also slowed
    zpool_import_missing_002_pos greatly, because that test moves large files
    between TMPDIR and the UFS mountpoint.  This change also allows many of
    the tests to be executed with just a single test disk, instead of two.
  
  * Move zpool_import_missing_002_pos's backup device dir from / to $PWD to
    prevent cross-device moves.  On my system, these two changes improved that
    test's speed by 39x.  It should also prevent ENOSPC errors seen in CI.
  
  * If insufficient disks are available, don't try to partition one of them.
    Just rely on Kyua to skip the test.  Users who care will configure Kyua
    with sufficient disks.
  
  Sponsored by:	Axcient
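
  A hedged sketch of the backup-directory change (the variable names are
  illustrative, not necessarily the test's actual ones):

    # Keep the backups on the same filesystem as the working files so that
    # mv(1) is a cheap rename instead of a cross-device copy, and so Kyua
    # removes them together with the work directory.
    BACKUP_DEVICE_DIR=$PWD/bakdev.${TESTCASE_ID}
    log_must $MKDIR -p $BACKUP_DEVICE_DIR
    log_must $MV $DEVICE_DIR/* $BACKUP_DEVICE_DIR/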
  
  r353361:
  ZFS: in the tests, don't override PWD
  
  The ZFS test suite was overriding the common $PWD variable with the path to
  the pwd command, even though no test wanted to use it that way.  Most tests
  didn't notice, because ksh93 eventually restored it to its proper meaning.
  
  Sponsored by:	Axcient
  
  r353366:
  ZFS: fix the zpool_add_010_pos test
  
  The test is necessarily racy, because it depends on being able to complete a
  "zpool add" before a previous resilver finishes.  But it was racier than it
  needed to be.  Move the first "zpool add" to before the resilver starts.
  
  Sponsored by:	Axcient
  
  r353379:
  zfs: multiple improvements to the zpool_add tests
  
  * Don't partition a disk if too few are available.  Just rely on Kyua to
    ensure that the tests aren't run with insufficient disks.
  
  * Remove redundant cleanup steps
  
  * In zpool_add_003_pos, store the temporary file in $PWD so Kyua will
    automatically clean it up.
  
  * Update zpool_add_005_pos to use dumpon instead of dumpadm.  This test had
    never been ported to FreeBSD.
  
  * In zpool_add_005_pos, don't format the dump disk with UFS.  That was
    pointless.
  
  Sponsored by:	Axcient

Deleted:
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_013_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_014_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_016_pos.ksh
Modified:
  stable/12/ObsoleteFiles.inc
  stable/12/tests/sys/cddl/zfs/include/commands.txt
  stable/12/tests/sys/cddl/zfs/include/libtest.kshlib
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zdb/zdb_001_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/cleanup.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/setup.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_001_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_002_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_003_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_004_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_005_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_007_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_008_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_009_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_010_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_test.sh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/Makefile
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create.kshlib
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_008_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_011_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_012_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_015_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_test.sh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_get/zpool_get.cfg
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_get/zpool_get_002_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_import/cleanup.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_import/setup.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import.cfg
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_all_001_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_import/zpool_import_test.sh
  stable/12/tests/sys/cddl/zfs/tests/delegate/delegate_common.kshlib
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_allow_001_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_allow_002_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_allow_003_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_allow_007_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_allow_010_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_allow_012_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_allow_test.sh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_unallow_007_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/delegate/zfs_unallow_test.sh
  stable/12/tests/sys/cddl/zfs/tests/hotspare/hotspare_test.sh
  stable/12/tests/sys/cddl/zfs/tests/redundancy/redundancy.kshlib
  stable/12/tests/sys/cddl/zfs/tests/redundancy/redundancy_001_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/slog/slog_012_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/zfsd/zfsd_autoreplace_003_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/zfsd/zfsd_hotspare_007_pos.ksh
  stable/12/tests/sys/cddl/zfs/tests/zfsd/zfsd_test.sh
  stable/12/tests/sys/cddl/zfs/tests/zvol/zvol_misc/zvol_misc_001_neg.ksh
  stable/12/tests/sys/cddl/zfs/tests/zvol/zvol_misc/zvol_misc_test.sh
Directory Properties:
  stable/12/   (props changed)

Modified: stable/12/ObsoleteFiles.inc
==============================================================================
--- stable/12/ObsoleteFiles.inc	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/ObsoleteFiles.inc	Wed Oct 30 02:03:37 2019	(r354165)
@@ -38,6 +38,10 @@
 #   xargs -n1 | sort | uniq -d;
 # done
 
+# 20191003: Remove useless ZFS tests
+OLD_FILES+=usr/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_013_neg.ksh
+OLD_FILES+=usr/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_014_neg.ksh
+OLD_FILES+=usr/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_016_pos.ksh
 # 20190811: sys/pwm.h renamed to dev/pwmc.h and pwm(9) removed
 OLD_FILES+=usr/include/sys/pwm.h usr/share/man/man9/pwm.9
 # 20190723: new clang import which bumps version from 8.0.0 to 8.0.1.

Modified: stable/12/tests/sys/cddl/zfs/include/commands.txt
==============================================================================
--- stable/12/tests/sys/cddl/zfs/include/commands.txt	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/include/commands.txt	Wed Oct 30 02:03:37 2019	(r354165)
@@ -65,6 +65,7 @@
 /usr/bin/dirname
 /usr/bin/du
 #%%STFSUITEDIR%%/bin/dumpadm
+/sbin/dumpon
 /bin/echo
 /usr/bin/egrep
 /usr/bin/env
@@ -131,7 +132,6 @@
 /bin/pkill
 /bin/ps
 #/usr/sbin/psrinfo
-/bin/pwd
 /usr/sbin/quotaon
 /bin/rcp
 /sbin/reboot

Modified: stable/12/tests/sys/cddl/zfs/include/libtest.kshlib
==============================================================================
--- stable/12/tests/sys/cddl/zfs/include/libtest.kshlib	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/include/libtest.kshlib	Wed Oct 30 02:03:37 2019	(r354165)
@@ -2676,8 +2676,7 @@ function gen_dataset_name
 #
 # Ensure that a given path has been synced, not just ZIL committed.
 #
-# XXX The implementation currently requires calling 'zpool history'.  On
-#     FreeBSD, the sync(8) command (via $SYNC) calls zfs_sync() which just
+# XXX On FreeBSD, the sync(8) command (via $SYNC) calls zfs_sync() which just
 #     does a zil_commit(), as opposed to a txg_wait_synced().  For things that
 #     require writing to their final destination (e.g. for intentional
 #     corruption purposes), zil_commit() is not good enough.
@@ -2686,10 +2685,8 @@ function force_sync_path # path
 {
 	typeset path="$1"
 
-	zfspath=$($DF $path 2>/dev/null | tail -1 | cut -d" " -f1 | cut -d/ -f1)
-	[ -z "$zfspath" ] && return false
-	log_note "Force syncing ${zfspath} for ${path} ..."
-	$ZPOOL history $zfspath >/dev/null 2>&1
+	log_must $ZPOOL export $TESTPOOL
+	log_must $ZPOOL import -d $path $TESTPOOL
 }
 
 #
@@ -3326,7 +3323,7 @@ function file_dva # dataset filepath [level] [offset] 
 	# The inner match is for 'DVA[0]=<0:1b412600:200>', in which the
 	# text surrounding the actual DVA is a fixed size with 8 characters
 	# before it and 1 after.
-	$ZDB -P -vvvvv $dataset $inode | \
+	$ZDB -P -vvvvv "$dataset/" $inode | \
 	    $AWK -v level=${level} -v dva_num=${dva_num} '
 		BEGIN { stage = 0; }
 		(stage == 0) && ($1=="Object") { stage = 1; next; }

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zdb/zdb_001_neg.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zdb/zdb_001_neg.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zdb/zdb_001_neg.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -69,7 +69,7 @@ set -A args "create" "add" "destroy" "import fakepool"
     "add mirror fakepool" "add raidz fakepool" \
     "add raidz1 fakepool" "add raidz2 fakepool" \
     "setvprop" "blah blah" "-%" "--?" "-*" "-=" \
-    "-a" "-f" "-g" "-h" "-j" "-k" "-m" "-n" "-p" "-p /tmp" \
+    "-a" "-f" "-g" "-h" "-j" "-m" "-n" "-p" "-p /tmp" \
     "-r" "-t" "-w" "-x" "-y" "-z" \
     "-D" "-E" "-G" "-H" "-I" "-J" "-K" "-M" \
     "-N" "-Q" "-T" "-W"

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/cleanup.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/cleanup.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/cleanup.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -32,6 +32,8 @@
 . $STF_SUITE/include/libtest.kshlib
 . $STF_SUITE/tests/cli_root/zpool_add/zpool_add.kshlib
 
+poolexists $TESTPOOL && \
+	destroy_pool $TESTPOOL
 cleanup_devices $DISKS
 
 log_pass

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/setup.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/setup.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/setup.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -34,19 +34,4 @@
 
 verify_runnable "global"
 
-if [[ -n $DISK ]]; then
-	#
-        # Use 'zpool create' to clean up the infomation in 
-        # in the given disk to avoid slice overlapping.
-        #
-	cleanup_devices $DISK
-
-        partition_disk $SIZE $DISK 7
-else 
-	for disk in `$ECHO $DISKSARRAY`; do
-		cleanup_devices $disk
-        	partition_disk $SIZE $disk 7
-	done
-fi	
-
 log_pass

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_001_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_001_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_001_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -58,45 +58,24 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-	poolexists $TESTPOOL && \
-		destroy_pool $TESTPOOL
-
-	partition_cleanup
-}
-
 log_assert "'zpool add <pool> <vdev> ...' can add devices to the pool." 
 
-log_onexit cleanup
-
 set -A keywords "" "mirror" "raidz" "raidz1" "spare"
 
-typeset diskname=${disk#/dev/}
+set_disks
+
 typeset diskname0=${DISK0#/dev/}
 typeset diskname1=${DISK1#/dev/}
+typeset diskname2=${DISK2#/dev/}
+typeset diskname3=${DISK3#/dev/}
+typeset diskname4=${DISK4#/dev/}
 
-case $DISK_ARRAY_NUM in
-0|1)
-        pooldevs="${diskname}p1 \
-                  /dev/${diskname}p1 \
-                  \"${diskname}p1 ${diskname}p2\""
-        mirrordevs="\"/dev/${diskname}p1 ${diskname}p2\""
-        raidzdevs="\"/dev/${diskname}p1 ${diskname}p2\""
+pooldevs="${diskname0}\
+	 \"/dev/${diskname0} ${diskname1}\" \
+	 \"${diskname0} ${diskname1} ${diskname2}\""
+mirrordevs="\"/dev/${diskname0} ${diskname1}\""
+raidzdevs="\"/dev/${diskname0} ${diskname1}\""
 
-        ;;
-2|*)
-        pooldevs="${diskname0}p1\
-                 \"/dev/${diskname0}p1 ${diskname1}p1\" \
-                 \"${diskname0}p1 ${diskname0}p2 ${diskname1}p2\"\
-                 \"${diskname0}p1 ${diskname1}p1 ${diskname0}p2\
-                   ${diskname1}p2\""
-        mirrordevs="\"/dev/${diskname0}p1 ${diskname1}p1\""
-        raidzdevs="\"/dev/${diskname0}p1 ${diskname1}p1\""
-
-        ;;
-esac
-
 typeset -i i=0
 typeset vdev
 eval set -A poolarray $pooldevs
@@ -107,7 +86,7 @@ while (( $i < ${#keywords[*]} )); do
         case ${keywords[i]} in
         ""|spare)     
 		for vdev in "${poolarray[@]}"; do
-			create_pool "$TESTPOOL" "${diskname}p6"
+			create_pool "$TESTPOOL" "${diskname3}"
 			log_must poolexists "$TESTPOOL"
                 	log_must $ZPOOL add -f "$TESTPOOL" ${keywords[i]} \
 				$vdev
@@ -119,7 +98,7 @@ while (( $i < ${#keywords[*]} )); do
         mirror) 
 		for vdev in "${mirrorarray[@]}"; do
 			create_pool "$TESTPOOL" "${keywords[i]}" \
-				"${diskname}p4" "${diskname}p5"
+				"${diskname3}" "${diskname4}"
 			log_must poolexists "$TESTPOOL"
                 	log_must $ZPOOL add "$TESTPOOL" ${keywords[i]} \
 				$vdev
@@ -131,7 +110,7 @@ while (( $i < ${#keywords[*]} )); do
         raidz|raidz1)  
 		for vdev in "${raidzarray[@]}"; do
 			create_pool "$TESTPOOL" "${keywords[i]}" \
-				"${diskname}p4" "${diskname}p5"
+				"${diskname3}" "${diskname4}"
 			log_must poolexists "$TESTPOOL"
                 	log_must $ZPOOL add "$TESTPOOL" ${keywords[i]} \
 				$vdev

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_002_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_002_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_002_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -60,26 +60,18 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-        poolexists $TESTPOOL && \
-                destroy_pool $TESTPOOL
+set_disks
 
-	partition_cleanup
-}
-
 log_assert "'zpool add -f <pool> <vdev> ...' can successfully add" \
 	"devices to the pool in some cases."
 
-log_onexit cleanup
-
-create_pool "$TESTPOOL" mirror "${disk}p1" "${disk}p2"
+create_pool "$TESTPOOL" mirror "${DISK0}" "${DISK1}"
 log_must poolexists "$TESTPOOL"
 
-log_mustnot $ZPOOL add "$TESTPOOL" ${disk}p3
-log_mustnot iscontained "$TESTPOOL" "${disk}p3"
+log_mustnot $ZPOOL add "$TESTPOOL" ${DISK2}
+log_mustnot iscontained "$TESTPOOL" "${DISK2}"
 
-log_must $ZPOOL add -f "$TESTPOOL" ${disk}p3
-log_must iscontained "$TESTPOOL" "${disk}p3"
+log_must $ZPOOL add -f "$TESTPOOL" ${DISK2}
+log_must iscontained "$TESTPOOL" "${DISK2}"
 
 log_pass "'zpool add -f <pool> <vdev> ...' executes successfully."

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_003_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_003_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_003_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -58,30 +58,19 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-        poolexists $TESTPOOL && \
-                destroy_pool $TESTPOOL
+set_disks
 
-	partition_cleanup
-
-	[[ -e $tmpfile ]] && \
-		log_must $RM -f $tmpfile
-}
-
 log_assert "'zpool add -n <pool> <vdev> ...' can display the configuration" \
 	"without actually adding devices to the pool."
 
-log_onexit cleanup
+tmpfile="zpool_add_003.tmp${TESTCASE_ID}"
 
-tmpfile="$TMPDIR/zpool_add_003.tmp${TESTCASE_ID}"
-
-create_pool "$TESTPOOL" "${disk}p1"
+create_pool "$TESTPOOL" "${DISK0}"
 log_must poolexists "$TESTPOOL"
 
-$ZPOOL add -n "$TESTPOOL" ${disk}p2 > $tmpfile
+$ZPOOL add -n "$TESTPOOL" ${DISK1} > $tmpfile
 
-log_mustnot iscontained "$TESTPOOL" "${disk}p2"
+log_mustnot iscontained "$TESTPOOL" "${DISK1}"
 
 str="would update '$TESTPOOL' to the following configuration:"
 $CAT $tmpfile | $GREP "$str" >/dev/null 2>&1

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_004_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_004_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_004_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -58,6 +58,8 @@
 
 verify_runnable "global"
 
+set_disks
+
 function cleanup
 {
 	poolexists $TESTPOOL && \
@@ -67,19 +69,16 @@ function cleanup
 		log_must $ZFS destroy -f $TESTPOOL1/$TESTVOL
 	poolexists $TESTPOOL1 && \
 		destroy_pool "$TESTPOOL1"	
-
-	partition_cleanup
-
 }
 
 log_assert "'zpool add <pool> <vdev> ...' can add zfs volume to the pool." 
 
 log_onexit cleanup
 
-create_pool "$TESTPOOL" "${disk}p1"
+create_pool "$TESTPOOL" "${DISK0}"
 log_must poolexists "$TESTPOOL"
 
-create_pool "$TESTPOOL1" "${disk}p2"
+create_pool "$TESTPOOL1" "${DISK1}"
 log_must poolexists "$TESTPOOL1"
 log_must $ZFS create -V $VOLSIZE $TESTPOOL1/$TESTVOL
 

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_005_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_005_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_005_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -61,6 +61,8 @@
 
 verify_runnable "global"
 
+set_disks
+
 function cleanup
 {
 	poolexists "$TESTPOOL" && \
@@ -68,11 +70,7 @@ function cleanup
 	poolexists "$TESTPOOL1" && \
 		destroy_pool "$TESTPOOL1"
 
-	if [[ -n $saved_dump_dev ]]; then
-		log_must eval "$DUMPADM -u -d $saved_dump_dev > /dev/null"
-	fi
-
-	partition_cleanup
+	$DUMPON -r $dump_dev
 }
 
 log_assert "'zpool add' should fail with inapplicable scenarios."
@@ -81,22 +79,20 @@ log_onexit cleanup
 
 mnttab_dev=$(find_mnttab_dev)
 vfstab_dev=$(find_vfstab_dev)
-saved_dump_dev=$(save_dump_dev)
-dump_dev=${disk}p3
+dump_dev=${DISK2}
 
-create_pool "$TESTPOOL" "${disk}p1"
+create_pool "$TESTPOOL" "${DISK0}"
 log_must poolexists "$TESTPOOL"
 
-create_pool "$TESTPOOL1" "${disk}p2"
+create_pool "$TESTPOOL1" "${DISK1}"
 log_must poolexists "$TESTPOOL1"
-log_mustnot $ZPOOL add -f "$TESTPOOL" ${disk}p2
+log_mustnot $ZPOOL add -f "$TESTPOOL" ${DISK1}
 
 log_mustnot $ZPOOL add -f "$TESTPOOL" $mnttab_dev
 
 log_mustnot $ZPOOL add -f "$TESTPOOL" $vfstab_dev
 
-log_must $ECHO "y" | $NEWFS /dev/$dump_dev > /dev/null 2>&1
-log_must $DUMPADM -u -d /dev/$dump_dev > /dev/null
+log_must $DUMPON $dump_dev
 log_mustnot $ZPOOL add -f "$TESTPOOL" $dump_dev
 
 log_pass "'zpool add' should fail with inapplicable scenarios."

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_006_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -66,16 +66,12 @@ function cleanup
 	poolexists $TESTPOOL1 && \
 		destroy_pool $TESTPOOL1
 
-	datasetexists $TESTPOOL/$TESTFS && \
-		log_must $ZFS destroy -f $TESTPOOL/$TESTFS
 	poolexists $TESTPOOL && \
 		destroy_pool $TESTPOOL
 
 	if [[ -d $TESTDIR ]]; then
 		log_must $RM -rf $TESTDIR
 	fi
-
-	partition_cleanup
 }
 
 	
@@ -101,7 +97,6 @@ function setup_vdevs #<disk> 
 		# Minus $largest_num/20 to leave 5% space for metadata.
 		(( vdevs_num=largest_num - largest_num/20 ))
 		file_size=64
-		vdev=$disk
 	else
 		vdevs_num=$VDEVS_NUM
 		(( file_size = fs_size / (1024 * 1024 * (vdevs_num + vdevs_num/20)) ))
@@ -112,8 +107,8 @@ function setup_vdevs #<disk> 
 		(( slice_size = file_size * (vdevs_num + vdevs_num/20) ))
 		wipe_partition_table $disk					
 		set_partition 0 "" ${slice_size}m $disk
-		vdev=${disk}p1
         fi
+	vdev=${disk}
 
 	create_pool $TESTPOOL $vdev  
 	[[ -d $TESTDIR ]] && \
@@ -143,17 +138,11 @@ log_assert " 'zpool add [-f]' can add large numbers of
 	   " pool without any errors."
 log_onexit cleanup
 
-if [[ $DISK_ARRAY_NUM == 0 ]]; then
-        disk=$DISK
-else
-        disk=$DISK0
-fi
-
 vdevs_list=""
 vdevs_num=$VDEVS_NUM
 file_size=$FILE_SIZE
 
-setup_vdevs $disk
+setup_vdevs $DISK0
 log_must $ZPOOL add -f "$TESTPOOL1" $vdevs_list
 log_must iscontained "$TESTPOOL1" "$vdevs_list"
 

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_007_neg.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_007_neg.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_007_neg.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -57,22 +57,14 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-	poolexists "$TESTPOOL" && \
-		destroy_pool "$TESTPOOL"
-	
-	partition_cleanup
-}
+set_disks
 
 log_assert "'zpool add' should return an error with badly-formed parameters."
 
-log_onexit cleanup
-
 set -A args "" "-f" "-n" "-?" "-nf" "-fn" "-f -n" "--f" "-blah" \
-	"-? $TESTPOOL ${disk}p2"
+	"-? $TESTPOOL ${DISK1}"
 
-create_pool "$TESTPOOL" "${disk}p1"
+create_pool "$TESTPOOL" "${DISK0}"
 log_must poolexists "$TESTPOOL"
 
 typeset -i i=0

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_008_neg.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_008_neg.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_008_neg.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -57,23 +57,12 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-
-        poolexists "$TESTPOOL" && \
-                destroy_pool "$TESTPOOL"
-
-	partition_cleanup
-}
-
 log_assert "'zpool add' should return an error with nonexistent pools and vdevs"
 
-log_onexit cleanup
-
-set -A args "" "-f nonexistent_pool ${disk}p2" \
+set -A args "" "-f nonexistent_pool ${DISK1}" \
 	"-f $TESTPOOL nonexistent_vdev"
 
-create_pool "$TESTPOOL" "${disk}p1"
+create_pool "$TESTPOOL" "${DISK0}"
 log_must poolexists "$TESTPOOL"
 
 typeset -i i=0

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_009_neg.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_009_neg.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_009_neg.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -58,26 +58,14 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-
-        poolexists "$TESTPOOL" && \
-                destroy_pool "$TESTPOOL"
-
-	partition_cleanup
-
-}
-
 log_assert "'zpool add' should fail if vdevs are the same or vdev is " \
 	"contained in the given pool."
 
-log_onexit cleanup
-
-create_pool "$TESTPOOL" "${disk}p1"
+create_pool "$TESTPOOL" "${DISK1}"
 log_must poolexists "$TESTPOOL"
 
-log_mustnot $ZPOOL add -f "$TESTPOOL" ${disk}p2 ${disk}p2
-log_mustnot $ZPOOL add -f "$TESTPOOL" ${disk}p1
+log_mustnot $ZPOOL add -f "$TESTPOOL" ${DISK0} ${DISK0}
+log_mustnot $ZPOOL add -f "$TESTPOOL" ${DISK1}
 
 log_pass "'zpool add' get fail as expected if vdevs are the same or vdev is " \
 	"contained in the given pool."

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_010_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_010_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_010_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -31,25 +31,15 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-	poolexists $TESTPOOL && \
-		destroy_pool $TESTPOOL
-
-	partition_cleanup
-}
-
 log_assert "'zpool add' can add devices, even if a replacing vdev with a spare child is present"
 
-log_onexit cleanup
-
 create_pool $TESTPOOL mirror ${DISK0} ${DISK1}
 # A replacing vdev will automatically detach the older member when resilvering
 # is complete.  We don't want that to happen during this test, so write some
 # data just to slow down resilvering.
 $TIMEOUT 60s $DD if=/dev/zero of=/$TESTPOOL/zerofile bs=128k
-log_must $ZPOOL replace $TESTPOOL ${DISK0} ${DISK2}
 log_must $ZPOOL add $TESTPOOL spare ${DISK3}
+log_must $ZPOOL replace $TESTPOOL ${DISK0} ${DISK2}
 log_must $ZPOOL replace $TESTPOOL ${DISK0} ${DISK3}
 log_must $ZPOOL add $TESTPOOL spare ${DISK4}
 

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_test.sh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_test.sh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_add/zpool_add_test.sh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -39,7 +39,7 @@ zpool_add_001_pos_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 2
+	verify_disk_count "$DISKS" 5
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_001_pos.ksh || atf_fail "Testcase failed"
 }
@@ -66,7 +66,7 @@ zpool_add_002_pos_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 1
+	verify_disk_count "$DISKS" 3
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_002_pos.ksh || atf_fail "Testcase failed"
 }
@@ -93,7 +93,7 @@ zpool_add_003_pos_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 1
+	verify_disk_count "$DISKS" 2
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_003_pos.ksh || atf_fail "Testcase failed"
 }
@@ -120,6 +120,7 @@ zpool_add_004_pos_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
+	verify_disk_count "$DISKS" 2
 	verify_zvol_recursive
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_004_pos.ksh || atf_fail "Testcase failed"
@@ -138,7 +139,7 @@ atf_test_case zpool_add_005_pos cleanup
 zpool_add_005_pos_head()
 {
 	atf_set "descr" "'zpool add' should fail with inapplicable scenarios."
-	atf_set "require.progs"  dumpadm zpool
+	atf_set "require.progs"  zpool
 	atf_set "timeout" 2400
 }
 zpool_add_005_pos_body()
@@ -147,8 +148,8 @@ zpool_add_005_pos_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 1
-	verify_disk_count "$DISKS" 1
+	verify_disk_count "$DISKS" 3
+	atf_expect_fail "PR 241070 dumpon opens geom devices non-exclusively"
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_005_pos.ksh || atf_fail "Testcase failed"
 }
@@ -175,7 +176,7 @@ zpool_add_006_pos_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 2
+	verify_disk_count "$DISKS" 1
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_006_pos.ksh || atf_fail "Testcase failed"
 }
@@ -202,7 +203,7 @@ zpool_add_007_neg_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 1
+	verify_disk_count "$DISKS" 2
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_007_neg.ksh || atf_fail "Testcase failed"
 }
@@ -229,7 +230,7 @@ zpool_add_008_neg_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 1
+	verify_disk_count "$DISKS" 2
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_008_neg.ksh || atf_fail "Testcase failed"
 }
@@ -256,7 +257,7 @@ zpool_add_009_neg_body()
 	. $(atf_get_srcdir)/zpool_add.kshlib
 	. $(atf_get_srcdir)/zpool_add.cfg
 
-	verify_disk_count "$DISKS" 1
+	verify_disk_count "$DISKS" 2
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_add_009_neg.ksh || atf_fail "Testcase failed"
 }

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/Makefile
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/Makefile	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/Makefile	Wed Oct 30 02:03:37 2019	(r354165)
@@ -13,8 +13,6 @@ TEST_METADATA+=		is_exclusive=true
 ${PACKAGE}FILES+=	zpool_create_003_pos.ksh
 ${PACKAGE}FILES+=	zpool_create_020_pos.ksh
 ${PACKAGE}FILES+=	zpool_create_017_neg.ksh
-${PACKAGE}FILES+=	zpool_create_013_neg.ksh
-${PACKAGE}FILES+=	zpool_create_016_pos.ksh
 ${PACKAGE}FILES+=	zpool_create_012_neg.ksh
 ${PACKAGE}FILES+=	zpool_create_006_pos.ksh
 ${PACKAGE}FILES+=	zpool_create_002_pos.ksh
@@ -22,7 +20,6 @@ ${PACKAGE}FILES+=	zpool_create_021_pos.ksh
 ${PACKAGE}FILES+=	zpool_create_007_neg.ksh
 ${PACKAGE}FILES+=	setup.ksh
 ${PACKAGE}FILES+=	cleanup.ksh
-${PACKAGE}FILES+=	zpool_create_014_neg.ksh
 ${PACKAGE}FILES+=	zpool_create_010_neg.ksh
 ${PACKAGE}FILES+=	zpool_create_019_pos.ksh
 ${PACKAGE}FILES+=	zpool_create_008_pos.ksh

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create.kshlib
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create.kshlib	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create.kshlib	Wed Oct 30 02:03:37 2019	(r354165)
@@ -96,36 +96,22 @@ function clean_blockfile
 #
 # Find the storage device in /etc/vfstab
 #
-function find_vfstab_dev
+function find_fstab_dev
 {
-	typeset vfstab="/etc/vfstab"
-	typeset tmpfile="$TMPDIR/vfstab.tmp"
-	typeset vfstabdev
-	typeset vfstabdevs=""
+	typeset fstab="/etc/fstab"
+	typeset tmpfile="$TMPDIR/fstab.tmp"
+	typeset fstabdev
+	typeset fstabdevs=""
 	typeset line
 
-	$CAT $vfstab | $GREP "^/dev" >$tmpfile
+	$CAT $fstab | $GREP "^/dev" >$tmpfile
 	while read -r line
 	do
-		vfstabdev=`$ECHO "$line" | $AWK '{print $1}'`
-		vfstabdev=${vfstabdev%%:}
-		vfstabdevs="$vfstabdev $vfstabdevs"
+		fstabdev=`$ECHO "$line" | $AWK '{print $1}'`
+		fstabdev=${fstabdev%%:}
+		fstabdevs="$fstabdev $fstabdevs"
 	done <$tmpfile
 
 	$RM -f $tmpfile
-	$ECHO $vfstabdevs	
+	$ECHO $fstabdevs	
 } 
-
-#
-# Save the systme current dump device configuration
-#
-function save_dump_dev
-{
-
-	typeset dumpdev
-	typeset fnd="Dump device"
-	
-	dumpdev=`$DUMPADM | $GREP "$fnd" | $CUT -f2 -d : | \
-		$AWK '{print $1}'`
-	$ECHO $dumpdev
-}

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_008_pos.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_008_pos.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_008_pos.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -57,102 +57,24 @@
 
 verify_runnable "global"
 
-function cleanup
-{
-	if [[ $exported_pool == true ]]; then
-		if [[ $force_pool == true ]]; then
-			log_must $ZPOOL create -f $TESTPOOL ${disk}p1
-		else
-			log_must $ZPOOL import $TESTPOOL
-		fi
-	fi
-
-	if poolexists $TESTPOOL ; then
-                destroy_pool $TESTPOOL
-	fi
-
-	if poolexists $TESTPOOL1 ; then
-                destroy_pool $TESTPOOL1
-	fi
-
-	#
-	# recover it back to EFI label
-	#
-	wipe_partition_table $disk
-}
-
-#
-# create overlap slice 0 and 1 on $disk
-#
-function create_overlap_slice
-{
-        typeset format_file=$TMPDIR/format_overlap.${TESTCASE_ID}
-        typeset disk=$1
-
-        $ECHO "partition" >$format_file
-        $ECHO "0" >> $format_file
-        $ECHO "" >> $format_file
-        $ECHO "" >> $format_file
-        $ECHO "0" >> $format_file
-        $ECHO "200m" >> $format_file
-        $ECHO "1" >> $format_file
-        $ECHO "" >> $format_file
-        $ECHO "" >> $format_file
-        $ECHO "0" >> $format_file
-        $ECHO "400m" >> $format_file
-        $ECHO "label" >> $format_file
-        $ECHO "" >> $format_file
-        $ECHO "q" >> $format_file
-        $ECHO "q" >> $format_file
-
-        $FORMAT -e -s -d $disk -f $format_file
-	typeset -i ret=$?
-        $RM -fr $format_file
-
-	if (( ret != 0 )); then
-                log_fail "unable to create overlap slice."
-        fi
-
-        return 0
-}
-
 log_assert "'zpool create' have to use '-f' scenarios"
-log_onexit cleanup
 
-typeset exported_pool=false
-typeset force_pool=false
-
 if [[ -n $DISK ]]; then
         disk=$DISK
 else
         disk=$DISK0
 fi
 
-# overlapped slices as vdev need -f to create pool
-
 # Make the disk is EFI labeled first via pool creation
 create_pool $TESTPOOL $disk
 destroy_pool $TESTPOOL
 
-# Make the disk is VTOC labeled since only VTOC label supports overlap
-log_must labelvtoc $disk
-log_must create_overlap_slice $disk
-
-log_mustnot $ZPOOL create $TESTPOOL ${disk}p1
-log_must $ZPOOL create -f $TESTPOOL ${disk}p1
-destroy_pool $TESTPOOL
-
 # exported device to be as spare vdev need -f to create pool
-
-log_must $ZPOOL create -f $TESTPOOL $disk
-destroy_pool $TESTPOOL
 log_must partition_disk $SIZE $disk 6
 create_pool $TESTPOOL ${disk}p1 ${disk}p2
 log_must $ZPOOL export $TESTPOOL
-exported_pool=true
 log_mustnot $ZPOOL create $TESTPOOL1 ${disk}p3 spare ${disk}p2 
 create_pool $TESTPOOL1 ${disk}p3 spare ${disk}p2
-force_pool=true
 destroy_pool $TESTPOOL1
 
 log_pass "'zpool create' have to use '-f' scenarios"

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_011_neg.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_011_neg.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_011_neg.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -41,7 +41,7 @@
 # 'zpool create' will fail in the following cases:
 # existent pool; device is part of an active pool; nested virtual devices;
 # differently sized devices without -f option; device being currently
-# mounted; devices in /etc/vfstab; specified as the dedicated dump device.
+# mounted; devices in /etc/fstab; specified as the dedicated dump device.
 #
 # STRATEGY:
 # 1. Create case scenarios
@@ -67,8 +67,8 @@ function cleanup
                 destroy_pool $pool
         done
 
-	if [[ -n $saved_dump_dev ]]; then
-		log_must $DUMPADM -u -d $saved_dump_dev
+	if [[ -n $specified_dump_dev ]]; then
+		$DUMPON -r $specified_dump_dev
 	fi
 }
 
@@ -87,11 +87,11 @@ mirror2="${disk}p4 ${disk}p5"
 raidz1=$mirror1
 raidz2=$mirror2
 diff_size_dev="${disk}p6 ${disk}p7"
-vfstab_dev=$(find_vfstab_dev)
-specified_dump_dev=${disk}p1
-saved_dump_dev=$(save_dump_dev)
+fstab_dev=$(find_fstab_dev)
+specified_dump_dev=${disk}
 
 lba=$(get_partition_end $disk 6)
+$GPART delete -i 7 $disk
 set_partition 7 "$lba" $SIZE1 $disk
 create_pool "$TESTPOOL" "$pooldev1"
 
@@ -112,7 +112,7 @@ set -A arg "$TESTPOOL $pooldev2" \
         "$TESTPOOL1 raidz $diff_size_dev" \
         "$TESTPOOL1 raidz1 $diff_size_dev" \
 	"$TESTPOOL1 mirror $mirror1 spare $mirror2 spare $diff_size_dev" \
-        "$TESTPOOL1 $vfstab_dev" \
+        "$TESTPOOL1 $fstab_dev" \
         "$TESTPOOL1 ${disk}s10" \
 	"$TESTPOOL1 spare $pooldev2"
 
@@ -130,7 +130,7 @@ log_must $ZPOOL destroy -f $TESTPOOL
 log_must $ZPOOL create -f $TESTPOOL3 $disk
 log_must $ZPOOL destroy -f $TESTPOOL3
 
-log_must $DUMPADM -d /dev/$specified_dump_dev
+log_must dumpon $specified_dump_dev
 log_mustnot $ZPOOL create -f $TESTPOOL1 "$specified_dump_dev"
 
 # Also check to see that in-use checking prevents us from creating

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_012_neg.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_012_neg.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_012_neg.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -38,13 +38,12 @@
 #
 #
 # DESCRIPTION:
-# 'zpool create' will fail with formal disk slice in swap
+# 'zpool create' will fail with disk in swap
 #
 #
 # STRATEGY:
-# 1. Get all the disk devices in swap
-# 2. For each device, try to create a new pool with this device
-# 3. Verify the creation is failed.
+# 1. Add a disk to swap
+# 2. Try to create a pool on that disk.  It should fail.
 #
 # TESTABILITY: explicit
 #
@@ -60,21 +59,14 @@ verify_runnable "global"
 
 function cleanup
 {
-	if poolexists $TESTPOOL; then
-		destroy_pool $TESTPOOL
-	fi
+	$SWAPOFF $DISK0
 
 }
-typeset swap_disks=`$SWAP -l | $GREP "c[0-9].*d[0-9].*s[0-9]" | \
-            $AWK '{print $1}'`
 
-log_assert "'zpool create' should fail with disk slice in swap."
+log_assert "'zpool create' should fail with disk in swap."
 log_onexit cleanup
 
-for sdisk in $swap_disks; do
-	for opt in "-n" "" "-f"; do
-		log_mustnot $ZPOOL create $opt $TESTPOOL $sdisk
-	done
-done
+log_must $SWAPON $DISK0
+log_mustnot $ZPOOL create $TESTPOOL $DISK0
 
-log_pass "'zpool create' passed as expected with inapplicable scenario."
+log_pass "'zpool create' cannot use a swap disk"

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_015_neg.ksh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_015_neg.ksh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_015_neg.ksh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -62,16 +62,7 @@ verify_runnable "global"
 
 function cleanup
 {
-	# cleanup zfs pool and dataset
-	if datasetexists $vol_name; then
-		$SWAP -l | $GREP /dev/zvol/$vol_name > /dev/null 2>&1
-		if [[ $? -eq 0 ]]; then
-			$SWAP -d /dev/zvol/${vol_name}
-		fi
-	fi
-
-	destroy_pool $TESTPOOL1
-	destroy_pool $TESTPOOL
+	$SWAPOFF /dev/zvol/${vol_name}
 }
 
 if [[ -n $DISK ]]; then
@@ -80,7 +71,7 @@ else
         disk=$DISK0
 fi
 
-typeset pool_dev=${disk}p1
+typeset pool_dev=${disk}
 typeset vol_name=$TESTPOOL/$TESTVOL
 
 log_assert "'zpool create' should fail with zfs vol device in swap."
@@ -91,13 +82,9 @@ log_onexit cleanup
 #
 create_pool $TESTPOOL $pool_dev
 log_must $ZFS create -V 100m $vol_name
-log_must $SWAP -a /dev/zvol/$vol_name
-for opt in "-n" "" "-f"; do
+log_must $SWAPON /dev/zvol/$vol_name
+for opt in "" "-f"; do
 	log_mustnot $ZPOOL create $opt $TESTPOOL1 /dev/zvol/${vol_name}
 done
-
-# cleanup
-log_must $SWAP -d /dev/zvol/${vol_name}
-log_must $ZFS destroy $vol_name
 
 log_pass "'zpool create' passed as expected with inapplicable scenario."

Modified: stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_test.sh
==============================================================================
--- stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_test.sh	Wed Oct 30 01:57:40 2019	(r354164)
+++ stable/12/tests/sys/cddl/zfs/tests/cli_root/zpool_create/zpool_create_test.sh	Wed Oct 30 02:03:37 2019	(r354165)
@@ -219,7 +219,7 @@ atf_test_case zpool_create_008_pos cleanup
 zpool_create_008_pos_head()
 {
 	atf_set "descr" "'zpool create' have to use '-f' scenarios"
-	atf_set "require.progs"  zpool format
+	atf_set "require.progs"  zpool
 	atf_set "timeout" 2400
 }
 zpool_create_008_pos_body()
@@ -300,7 +300,7 @@ atf_test_case zpool_create_011_neg cleanup
 zpool_create_011_neg_head()
 {
 	atf_set "descr" "'zpool create' should be failed with inapplicable scenarios."
-	atf_set "require.progs"  dumpadm zpool
+	atf_set "require.progs" zpool
 	atf_set "timeout" 2400
 }
 zpool_create_011_neg_body()
@@ -310,6 +310,7 @@ zpool_create_011_neg_body()
 	. $(atf_get_srcdir)/zpool_create.cfg
 
 	verify_disk_count "$DISKS" 1
+	atf_expect_fail "PR 241070 dumpon opens geom devices non-exclusively"
 	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
 	ksh93 $(atf_get_srcdir)/zpool_create_011_neg.ksh || atf_fail "Testcase failed"
 }
@@ -323,12 +324,11 @@ zpool_create_011_neg_cleanup()
 }
 
 
-atf_test_case zpool_create_012_neg cleanup
+atf_test_case zpool_create_012_neg
 zpool_create_012_neg_head()
 {
 	atf_set "descr" "'zpool create' should fail with disk slice in swap."
-	atf_set "require.progs"  zpool swap
-	atf_set "timeout" 2400
+	atf_set "require.progs"  zpool
 }
 zpool_create_012_neg_body()
 {
@@ -336,78 +336,16 @@ zpool_create_012_neg_body()
 	. $(atf_get_srcdir)/zpool_create.kshlib
 	. $(atf_get_srcdir)/zpool_create.cfg
 
-	ksh93 $(atf_get_srcdir)/setup.ksh || atf_fail "Setup failed"
+	verify_disk_count "$DISKS" 1
 	ksh93 $(atf_get_srcdir)/zpool_create_012_neg.ksh || atf_fail "Testcase failed"
 }
-zpool_create_012_neg_cleanup()
-{
-	. $(atf_get_srcdir)/../../../include/default.cfg
-	. $(atf_get_srcdir)/zpool_create.kshlib
-	. $(atf_get_srcdir)/zpool_create.cfg
 
-	ksh93 $(atf_get_srcdir)/cleanup.ksh || atf_fail "Cleanup failed"
-}
 
-
-atf_test_case zpool_create_013_neg cleanup
-zpool_create_013_neg_head()

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


