Date:      Mon, 26 May 2014 04:16:16 +0000 (UTC)
From:      Warren Block <wblock@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject:   svn commit: r44954 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-ID:  <201405260416.s4Q4GGMZ062013@svn.freebsd.org>

Author: wblock
Date: Mon May 26 04:16:16 2014
New Revision: 44954
URL: http://svnweb.freebsd.org/changeset/doc/44954

Log:
  Editorial pass done.  Sections zfs-zfs-snapshot-creation through
  zfs-zfs-clones need less confusing examples and rewrites to match.
  zfs-send-ssh needs a clearer explanation of the -d option and
  possibly examples.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Mon May 26 00:18:22 2014	(r44953)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Mon May 26 04:16:16 2014	(r44954)
@@ -49,10 +49,8 @@
     designs.</para>
 
   <para>Originally developed at &sun;, ongoing <acronym>ZFS</acronym>
-    development has moved to the
-    <link xlink:href="http://open-zfs.org">OpenZFS Project</link>.
-    <xref linkend="zfs-history"/> describes the development history in
-    more detail.</para>
+    development has moved to the <link
+      xlink:href="http://open-zfs.org">OpenZFS Project</link>.</para>
 
   <para><acronym>ZFS</acronym> has three major design goals:</para>
 
@@ -101,31 +99,31 @@
       created on a single disk at a time.  If there were two disks
       then two separate file systems would have to be created.  In a
       traditional hardware <acronym>RAID</acronym> configuration, this
-      problem was worked around by presenting the operating system
-      with a single logical disk made up of the space provided by a
-      number of disks, on top of which the operating system placed its
+      problem was avoided by presenting the operating system with a
+      single logical disk made up of the space provided by a number of
+      physical disks, on top of which the operating system placed a
       file system.  Even in the case of software
-      <acronym>RAID</acronym> solutions like <acronym>GEOM</acronym>,
-      the <acronym>UFS</acronym> file system living on top of the
-      <acronym>RAID</acronym> transform believed that it was dealing
-      with a single device.  <acronym>ZFS</acronym>'s combination of
-      the volume manager and the file system solves this and allows
-      the creation of many file systems all sharing a pool of
-      available storage.  One of the biggest advantages to
-      <acronym>ZFS</acronym>'s awareness of the physical layout of the
-      disks is that <acronym>ZFS</acronym> can grow the existing file
-      systems automatically when additional disks are added to the
+      <acronym>RAID</acronym> solutions like those provided by
+      <acronym>GEOM</acronym>, the <acronym>UFS</acronym> file system
+      living on top of the <acronym>RAID</acronym> transform believed
+      that it was dealing with a single device.
+      <acronym>ZFS</acronym>'s combination of the volume manager and
+      the file system solves this and allows the creation of many file
+      systems all sharing a pool of available storage.  One of the
+      biggest advantages to <acronym>ZFS</acronym>'s awareness of the
+      physical layout of the disks is that existing file systems can
+      be grown automatically when additional disks are added to the
       pool.  This new space is then made available to all of the file
       systems.  <acronym>ZFS</acronym> also has a number of different
-      properties that can be applied to each file system, creating
-      many advantages to creating a number of different filesystems
-      and datasets rather than a single monolithic filesystem.</para>
+      properties that can be applied to each file system, giving many
+      advantages to creating a number of different filesystems and
+      datasets rather than a single monolithic filesystem.</para>
   </sect1>
 
   <sect1 xml:id="zfs-quickstart">
     <title>Quick Start Guide</title>
 
-    <para>There is a start up mechanism that allows &os; to mount
+    <para>There is a startup mechanism that allows &os; to mount
       <acronym>ZFS</acronym> pools during system initialization.  To
       enable it, add this line to
       <filename>/etc/rc.conf</filename>:</para>
@@ -149,7 +147,7 @@
       <title>Single Disk Pool</title>
 
       <para>To create a simple, non-redundant pool using a single
-	disk device, use <command>zpool create</command>:</para>
+	disk device:</para>
 
       <screen>&prompt.root; <userinput>zpool create <replaceable>example</replaceable> <replaceable>/dev/da0</replaceable></userinput></screen>
 
@@ -164,8 +162,8 @@ devfs               1       1        0  
 example      17547136       0 17547136     0%    /example</screen>
 
       <para>This output shows that the <literal>example</literal> pool
-	has been created and <emphasis>mounted</emphasis>.  It is now
-	accessible as a file system.  Files may be created on it and
+	has been created and mounted.  It is now
+	accessible as a file system.  Files can be created on it and
 	users can browse it, like in this example:</para>
 
       <screen>&prompt.root; <userinput>cd /example</userinput>
@@ -186,15 +184,15 @@ drwxr-xr-x  21 root  wheel  512 Aug 29 2
 
       <para>The <literal>example/compressed</literal> dataset is now a
 	<acronym>ZFS</acronym> compressed file system.  Try copying
-	some large files to <filename
-	  class="directory">/example/compressed</filename>.</para>
+	some large files to
+	<filename>/example/compressed</filename>.</para>
 
       <para>Compression can be disabled with:</para>
 
       <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
 
       <para>To unmount a file system, use
-	<command>zfs umount</command> and then verify by using
+	<command>zfs umount</command> and then verify with
 	<command>df</command>:</para>
 
       <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
@@ -229,12 +227,13 @@ example on /example (zfs, local)
 example/data on /example/data (zfs, local)
 example/compressed on /example/compressed (zfs, local)</screen>
 
-      <para><acronym>ZFS</acronym> datasets, after creation, may be
+      <para>After creation, <acronym>ZFS</acronym> datasets can be
 	used like any file systems.  However, many other features are
 	available which can be set on a per-dataset basis.  In the
-	example below, a new file system, <literal>data</literal>
-	is created.  Important files will be stored here, the file
-	system is set to keep two copies of each data block:</para>
+	example below, a new file system called
+	<literal>data</literal> is created.  Important files will be
+	stored here, so it is configured to keep two copies of each
+	data block:</para>
 
       <screen>&prompt.root; <userinput>zfs create example/data</userinput>
 &prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
@@ -255,13 +254,12 @@ example/data        17547008       0 175
 	amount of available space.  This is the reason for using
 	<command>df</command> in these examples, to show that the file
 	systems use only the amount of space they need and all draw
-	from the same pool.  The <acronym>ZFS</acronym> file system
-	does away with concepts such as volumes and partitions, and
-	allows for several file systems to occupy the same
-	pool.</para>
+	from the same pool.  <acronym>ZFS</acronym> eliminates
+	concepts such as volumes and partitions, and allows multiple
+	file systems to occupy the same pool.</para>
 
       <para>To destroy the file systems and then destroy the pool as
-	they are no longer needed:</para>
+	it is no longer needed:</para>
 
       <screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
 &prompt.root; <userinput>zfs destroy example/data</userinput>
@@ -271,22 +269,21 @@ example/data        17547008       0 175
     <sect2>
       <title>RAID-Z</title>
 
-      <para>Disks fail.  One
-	method of avoiding data loss from disk failure is to
-	implement <acronym>RAID</acronym>.  <acronym>ZFS</acronym>
-	supports this feature in its pool design.
-	<acronym>RAID-Z</acronym> pools require three or more disks
-	but yield more usable space than mirrored pools.</para>
+      <para>Disks fail.  One method of avoiding data loss from disk
+	failure is to implement <acronym>RAID</acronym>.
+	<acronym>ZFS</acronym> supports this feature in its pool
+	design.  <acronym>RAID-Z</acronym> pools require three or more
+	disks but provide more usable space than mirrored
+	pools.</para>
 
-      <para>To create a <acronym>RAID-Z</acronym> pool, use this
-	command, specifying the disks to add to the
-	pool:</para>
+      <para>This example creates a <acronym>RAID-Z</acronym> pool,
+	specifying the disks to add to the pool:</para>
 
       <screen>&prompt.root; <userinput>zpool create storage raidz da0 da1 da2</userinput></screen>
 
       <note>
 	<para>&sun; recommends that the number of devices used in a
-	  <acronym>RAID</acronym>-Z configuration is between three and
+	  <acronym>RAID</acronym>-Z configuration be between three and
 	  nine.  For environments requiring a single pool consisting
 	  of 10 disks or more, consider breaking it up into smaller
 	  <acronym>RAID-Z</acronym> groups.  If only two disks are
@@ -295,21 +292,21 @@ example/data        17547008       0 175
 	  more details.</para>
       </note>
 
-      <para>This command creates the <literal>storage</literal> zpool.
-	This may be verified using &man.mount.8; and &man.df.1;.  This
-	command makes a new file system in the pool called
-	<literal>home</literal>:</para>
+      <para>The previous example created the
+	<literal>storage</literal> zpool.  This example makes a new
+	file system called <literal>home</literal> in that
+	pool:</para>
 
       <screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
 
-      <para>Now compression and keeping extra copies of directories
-	and files can be enabled with these commands:</para>
+      <para>Compression and keeping extra copies of directories
+	and files can be enabled:</para>
 
       <screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
 &prompt.root; <userinput>zfs set compression=gzip storage/home</userinput></screen>
 
       <para>To make this the new home directory for users, copy the
-	user data to this directory, and create the appropriate
+	user data to this directory and create the appropriate
 	symbolic links:</para>
 
       <screen>&prompt.root; <userinput>cp -rp /home/* /storage/home</userinput>
@@ -317,28 +314,30 @@ example/data        17547008       0 175
 &prompt.root; <userinput>ln -s /storage/home /home</userinput>
 &prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
 
-      <para>Users now have their data stored on the freshly
-	created <filename class="directory">/storage/home</filename>.
-	Test by adding a new user and logging in as that user.</para>
+      <para>User data is now stored on the freshly created
+	<filename>/storage/home</filename>.  Test by adding a new user
+	and logging in as that user.</para>
 
-      <para>Try creating a snapshot which can be rolled back
-	later:</para>
+      <para>Try creating a file system snapshot which can be rolled
+	back later:</para>
 
       <screen>&prompt.root; <userinput>zfs snapshot storage/home@08-30-08</userinput></screen>
 
-      <para>Note that the snapshot option will only capture a real
-	file system, not a home directory or a file.  The
-	<literal>@</literal> character is a delimiter used between the
-	file system name or the volume name.  When a user's home
-	directory is accidentally deleted, restore it with:</para>
+      <para>Snapshots can only be made of a full file system, not a
+	single directory or file.</para>
+
+      <para>The <literal>@</literal> character is a delimiter between
+	the file system or volume name and the snapshot name.  If an
+	important directory has been accidentally deleted, the file
+	system can be backed up, then rolled back to an earlier
+	snapshot when the directory still existed:</para>
 
       <screen>&prompt.root; <userinput>zfs rollback storage/home@08-30-08</userinput></screen>
 
       <para>To list all available snapshots, run
 	<command>ls</command> in the file system's
-	<filename class="directory">.zfs/snapshot</filename>
-	directory.  For example, to see the previously taken
-	snapshot:</para>
+	<filename>.zfs/snapshot</filename> directory.  For example, to
+	see the previously taken snapshot:</para>
 
       <screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
 
@@ -349,16 +348,15 @@ example/data        17547008       0 175
 
       <screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
 
-      <para>After testing,
-	<filename class="directory">/storage/home</filename> can be
-	made the real <filename class="directory">/home</filename>
-	using this command:</para>
+      <para>After testing, <filename>/storage/home</filename> can be
+	made the real <filename>/home</filename> using this
+	command:</para>
 
       <screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>
 
       <para>Run <command>df</command> and <command>mount</command> to
 	confirm that the system now treats the file system as the real
-	<filename class="directory">/home</filename>:</para>
+	<filename>/home</filename>:</para>
 
       <screen>&prompt.root; <userinput>mount</userinput>
 /dev/ad0s1a on / (ufs, local)
@@ -377,13 +375,14 @@ storage/home  26320512       0 26320512 
       <para>This completes the <acronym>RAID-Z</acronym>
 	configuration.  Daily status updates about the file systems
 	created can be generated as part of the nightly
-	&man.periodic.8; runs:</para>
+	&man.periodic.8; runs.  Add this line to
+	<filename>/etc/periodic.conf</filename>:</para>
 
-      <screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' &gt;&gt; /etc/periodic.conf</userinput></screen>
+      <programlisting>daily_status_zfs_enable="YES"</programlisting>
     </sect2>
 
     <sect2>
-      <title>Recovering <acronym>RAID</acronym>-Z</title>
+      <title>Recovering <acronym>RAID-Z</acronym></title>
 
       <para>Every software <acronym>RAID</acronym> has a method of
 	monitoring its <literal>state</literal>.  The status of
@@ -394,7 +393,7 @@ storage/home  26320512       0 26320512 
 
       <para>If all pools are
 	<link linkend="zfs-term-online">Online</link> and everything
-	is normal, the message indicates that:</para>
+	is normal, the message shows:</para>
 
       <screen>all pools are healthy</screen>
 
@@ -459,32 +458,29 @@ errors: No known data errors</screen>
 
       <para><acronym>ZFS</acronym> uses checksums to verify the
 	integrity of stored data.  These are enabled automatically
-	upon creation of file systems and may be disabled using the
-	command:</para>
-
-      <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
+	upon creation of file systems.</para>
 
       <warning>
-	<para>Doing so is <emphasis>not</emphasis> recommended!
-	  Checksums take very little storage space and provide data
-	  integrity.  Many <acronym>ZFS</acronym> features will not
-	  work properly with checksums disabled.  There is also no
-	  noticeable performance gain from disabling these
-	  checksums.</para>
+	<para>Checksums can be disabled, but it is
+	  <emphasis>not</emphasis> recommended!  Checksums take very
+	  little storage space and provide data integrity.  Many
+	  <acronym>ZFS</acronym> features will not work properly with
+	  checksums disabled.  There is also no noticeable performance
+	  gain from disabling these checksums.</para>
       </warning>
 
       <para>Checksum verification is known as
-	<quote>scrubbing</quote>.  Verify the data integrity of the
-	<literal>storage</literal> pool, with this command:</para>
+	<emphasis>scrubbing</emphasis>.  Verify the data integrity of
+	the <literal>storage</literal> pool with this command:</para>
 
       <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
 
       <para>The duration of a scrub depends on the amount of data
-	stored.  Larger amounts of data will take considerably longer
-	to verify.  Scrubs are very <acronym>I/O</acronym> intensive,
-	so much so that only one scrub may be run at a time.  After
-	the scrub has completed, the status is updated and may be
-	viewed with <command>status</command>:</para>
+	stored.  Larger amounts of data will take proportionally
+	longer to verify.  Scrubs are very <acronym>I/O</acronym>
+	intensive, and only one scrub is allowed to run at a time.
+	After the scrub completes, the status can be viewed with
+	<command>status</command>:</para>
 
       <screen>&prompt.root; <userinput>zpool status storage</userinput>
  pool: storage
@@ -503,8 +499,8 @@ errors: No known data errors</screen>
 
       <para>The completion date of the last scrub operation is
 	displayed to help track when another scrub is required.
-	Routine pool scrubs help protect data from silent corruption
-	and ensure the integrity of the pool.</para>
+	Routine scrubs help protect data from silent corruption and
+	ensure the integrity of the pool.</para>
 
       <para>Refer to &man.zfs.8; and &man.zpool.8; for other
 	<acronym>ZFS</acronym> options.</para>
@@ -514,24 +510,24 @@ errors: No known data errors</screen>
   <sect1 xml:id="zfs-zpool">
     <title><command>zpool</command> Administration</title>
 
-    <para>The administration of <acronym>ZFS</acronym> is divided
-      between two main utilities.  The <command>zpool</command>
-      utility which controls the operation of the pool and deals with
-      adding, removing, replacing and managing disks, and the
-      <link linkend="zfs-zfs"><command>zfs</command></link> utility,
-      which deals with creating, destroying and managing datasets
-      (both <link linkend="zfs-term-filesystem">filesystems</link> and
-      <link linkend="zfs-term-volume">volumes</link>).</para>
+    <para><acronym>ZFS</acronym> administration is divided between two
+      main utilities.  The <command>zpool</command> utility controls
+      the operation of the pool and deals with adding, removing,
+      replacing, and managing disks.  The
+      <link linkend="zfs-zfs"><command>zfs</command></link> utility
+      deals with creating, destroying, and managing datasets,
+      both <link linkend="zfs-term-filesystem">filesystems</link> and
+      <link linkend="zfs-term-volume">volumes</link>.</para>
 
     <sect2 xml:id="zfs-zpool-create">
-      <title>Creating &amp; Destroying Storage Pools</title>
+      <title>Creating and Destroying Storage Pools</title>
 
-      <para>Creating a <acronym>ZFS</acronym> Storage Pool
-	(<acronym>zpool</acronym>) involves making a number of
+      <para>Creating a <acronym>ZFS</acronym> storage pool
+	(<emphasis>zpool</emphasis>) involves making a number of
 	decisions that are relatively permanent because the structure
 	of the pool cannot be changed after the pool has been created.
-	The most important decision is what types of vdevs to group
-	the physical disks into.  See the list of
+	The most important decision is into which types of vdevs to
+	group the physical disks.  See the list of
 	<link linkend="zfs-term-vdev">vdev types</link> for details
 	about the possible options.  After the pool has been created,
 	most vdev types do not allow additional disks to be added to
@@ -540,43 +536,41 @@ errors: No known data errors</screen>
 	upgraded to mirrors by attaching an additional disk to the
 	vdev.  Although additional vdevs can be added to a pool, the
 	layout of the pool cannot be changed once the pool has been
-	created, instead the data must be backed up and the pool
-	recreated.</para>
+	created.  Instead, the data must be backed up and the pool
+	destroyed and recreated.</para>
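+
+      <para>For example, assuming two available disks
+	<replaceable>ada0</replaceable> and
+	<replaceable>ada1</replaceable>, a mirrored pool named
+	<replaceable>mypool</replaceable> could be created
+	with:</para>
+
+      <screen>&prompt.root; <userinput>zpool create <replaceable>mypool</replaceable> mirror <replaceable>ada0</replaceable> <replaceable>ada1</replaceable></userinput></screen>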
 
-      <para>A <acronym>ZFS</acronym> pool that is no longer needed can
-	be destroyed so that the disks making up the pool can be
-	reused in another pool or for other purposes.  Destroying a
-	pool involves unmounting all of the datasets in that pool.  If
-	the datasets are in use, the unmount operation will fail and
-	the pool will not be destroyed.  The destruction of the pool
-	can be forced with <option>-f</option>, but this can cause
-	undefined behavior in applications which had open files on
-	those datasets.</para>
+      <para>A pool that is no longer needed can be destroyed so that
+	the disks can be reused.  Destroying a pool involves first
+	unmounting all of the datasets in that pool.  If the datasets
+	are in use, the unmount operation will fail and the pool will
+	not be destroyed.  The destruction of the pool can be forced
+	with <option>-f</option>, but this can cause undefined
+	behavior in applications which had open files on those
+	datasets.</para>
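+
+      <para>For example, assuming a pool named
+	<replaceable>mypool</replaceable> is no longer needed, it
+	could be destroyed with:</para>
+
+      <screen>&prompt.root; <userinput>zpool destroy <replaceable>mypool</replaceable></userinput></screen>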
     </sect2>
 
     <sect2 xml:id="zfs-zpool-attach">
       <title>Adding and Removing Devices</title>
 
-      <para>Adding disks to a zpool can be broken down into two
-	separate cases: attaching a disk to an existing vdev with
+      <para>There are two cases for adding disks to a zpool: attaching
+	a disk to an existing vdev with
 	<command>zpool attach</command>, or adding vdevs to the pool
-	with <command>zpool add</command>.  Only some <link
-	linkend="zfs-term-vdev">vdev types</link> allow disks to be
-	added to the vdev after creation.</para>
+	with <command>zpool add</command>.  Only some
+	<link linkend="zfs-term-vdev">vdev types</link> allow disks to
+	be added to the vdev after creation.</para>
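+
+      <para>For example, assuming a pool named
+	<replaceable>mypool</replaceable> where
+	<replaceable>ada1</replaceable> is part of a mirror vdev, a
+	new disk <replaceable>ada2</replaceable> could be attached to
+	that vdev with:</para>
+
+      <screen>&prompt.root; <userinput>zpool attach <replaceable>mypool</replaceable> <replaceable>ada1</replaceable> <replaceable>ada2</replaceable></userinput></screen>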
 
       <para>When adding disks to the existing vdev is not an option,
-	as in the case of RAID-Z, the other option is to add a vdev to
-	the pool.  It is possible, but discouraged, to mix vdev types.
+	as for RAID-Z, the other option is to add a vdev to the pool.
+	It is possible, but discouraged, to mix vdev types.
 	<acronym>ZFS</acronym> stripes data across each of the vdevs.
 	For example, if there are two mirror vdevs, then this is
 	effectively a <acronym>RAID</acronym> 10, striping the writes
-	across the two sets of mirrors.  Because of the way that space
-	is allocated in <acronym>ZFS</acronym> to attempt to have each
-	vdev reach 100% full at the same time, there is a performance
-	penalty if the vdevs have different amounts of free
-	space.</para>
+	across the two sets of mirrors.  Space is allocated so that
+	each vdev reaches 100% full at the same time, so there is a
+	performance penalty if the vdevs have different amounts of
+	free space.</para>
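+
+      <para>For example, assuming a pool named
+	<replaceable>mypool</replaceable> and two unused disks
+	<replaceable>ada2</replaceable> and
+	<replaceable>ada3</replaceable>, an additional mirror vdev
+	could be added to the pool with:</para>
+
+      <screen>&prompt.root; <userinput>zpool add <replaceable>mypool</replaceable> mirror <replaceable>ada2</replaceable> <replaceable>ada3</replaceable></userinput></screen>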
 
-      <para>Currently, vdevs cannot be removed from a zpool, and disks
+      <para>Currently, vdevs cannot be removed from a pool, and disks
 	can only be removed from a mirror if there is enough remaining
 	redundancy.</para>
     </sect2>
@@ -585,13 +579,12 @@ errors: No known data errors</screen>
       <title>Checking the Status of a Pool</title>
 
       <para>Pool status is important.  If a drive goes offline or a
-	read, write, or checksum error is detected, the error
-	counter in <command>status</command> is incremented.  The
-	<command>status</command> output shows the configuration and
-	status of each device in the pool, in addition to the status
-	of the entire pool.  Actions that need to be taken and details
-	about the last <link
-	linkend="zfs-zpool-scrub"><command>scrub</command></link>
+	read, write, or checksum error is detected, the corresponding
+	error count is incremented.  The <command>status</command>
+	output shows the configuration and status of each device in
+	the pool, in addition to the status of the entire pool
+	Actions that need to be taken and details about the last <link
+	  linkend="zfs-zpool-scrub"><command>scrub</command></link>
 	are also shown.</para>
 
       <screen>&prompt.root; <userinput>zpool status</userinput>
@@ -619,7 +612,7 @@ errors: No known data errors</screen>
       <para>When an error is detected, the read, write, or checksum
 	counts are incremented.  The error message can be cleared and
 	the counts reset with <command>zpool clear
-	<replaceable>mypool</replaceable></command>.  Clearing the
+	  <replaceable>mypool</replaceable></command>.  Clearing the
 	error state can be important for automated scripts that alert
 	the administrator when the pool encounters an error.  Further
 	errors may not be reported if the old errors are not
@@ -637,20 +630,20 @@ errors: No known data errors</screen>
 	After this operation completes, the old disk is disconnected
 	from the vdev.  If the new disk is larger than the old disk,
 	it may be possible to grow the zpool, using the new space.
-	See <link linkend="zfs-zpool-online">Growing a
-	  Pool</link>.</para>
+	See
+	<link linkend="zfs-zpool-online">Growing a Pool</link>.</para>
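+
+      <para>For example, assuming a pool named
+	<replaceable>mypool</replaceable>, the working disk
+	<replaceable>ada1</replaceable> could be replaced by a new
+	disk <replaceable>ada3</replaceable> with:</para>
+
+      <screen>&prompt.root; <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>ada1</replaceable> <replaceable>ada3</replaceable></userinput></screen>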
     </sect2>
 
     <sect2 xml:id="zfs-zpool-resilver">
       <title>Dealing with Failed Devices</title>
 
-      <para>When a disk in a <acronym>ZFS</acronym> pool fails, the
-	vdev that the disk belongs to will enter the
+      <para>When a disk in a pool fails, the vdev to which the disk
+	belongs will enter the
 	<link linkend="zfs-term-degraded">Degraded</link> state.  In
 	this state, all of the data stored on the vdev is still
 	available, but performance may be impacted because missing
-	data will need to be calculated from the available redundancy.
-	To restore the vdev to a fully functional state, the failed
+	data must be calculated from the available redundancy.  To
+	restore the vdev to a fully functional state, the failed
 	physical device must be replaced, and <acronym>ZFS</acronym>
 	must be instructed to begin the
 	<link linkend="zfs-term-resilver">resilver</link> operation,
@@ -659,23 +652,23 @@ errors: No known data errors</screen>
 	device.  After the process has completed, the vdev will return
 	to <link linkend="zfs-term-online">Online</link> status.  If
 	the vdev does not have any redundancy, or if multiple devices
-	have failed and there is not enough redundancy to
-	compensate, the pool will enter the
-	<link linkend="zfs-term-faulted">Faulted</link> state.  If a
-	sufficient number of devices cannot be reconnected to the pool
-	then the pool will be inoperative, and data must be
+	have failed and there is not enough redundancy to compensate,
+	the pool will enter the
+	<link linkend="zfs-term-faulted">Faulted</link> state.  When a
+	sufficient number of devices cannot be reconnected to the
+	pool, the pool will be inoperative, and data must be
 	restored from backups.</para>
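+
+      <para>For example, assuming the failed disk
+	<replaceable>ada1</replaceable> in a pool named
+	<replaceable>mypool</replaceable> has been swapped for a new
+	disk in the same location, the resilver could be started and
+	its progress monitored with:</para>
+
+      <screen>&prompt.root; <userinput>zpool replace <replaceable>mypool</replaceable> <replaceable>ada1</replaceable></userinput>
+&prompt.root; <userinput>zpool status <replaceable>mypool</replaceable></userinput></screen>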
     </sect2>
 
     <sect2 xml:id="zfs-zpool-scrub">
       <title>Scrubbing a Pool</title>
 
-      <para>It is recommended that pools be <link
-	  linkend="zfs-term-scrub">scrubbed</link> regularly, ideally
-	at least once every month.  The <command>scrub</command>
-	operating is very disk-intensive and will reduce performance
-	while running.  Avoid high-demand periods when scheduling
-	<command>scrub</command> or use <link
+      <para>It is recommended that pools be
+	<link linkend="zfs-term-scrub">scrubbed</link> regularly,
+	ideally at least once every month.  The
+	<command>scrub</command> operation is very disk-intensive and
+	will reduce performance while running.  Avoid high-demand
+	periods when scheduling <command>scrub</command>, or use <link
 	  linkend="zfs-advanced-tuning-scrub_delay"><varname>vfs.zfs.scrub_delay</varname></link>
 	to adjust the relative priority of the
 	<command>scrub</command> to prevent it interfering with other
@@ -717,23 +710,21 @@ errors: No known data errors</screen>
 	pool.  For example, a mirror with two disks where one drive is
 	starting to malfunction and cannot properly store the data any
 	more.  This is even worse when the data has not been accessed
-	for a long time in long term archive storage for example.
+	for a long time, as with long term archive storage.
 	Traditional file systems need to run algorithms that check and
-	repair the data like the &man.fsck.8; program.  These commands
-	take time and in severe cases, an administrator has to
-	manually decide which repair operation has to be performed.
-	When <acronym>ZFS</acronym> detects that a data block is being
-	read whose checksum does not match, it will try to read the
-	data from the mirror disk.  If that disk can provide the
-	correct data, it will not only give that data to the
-	application requesting it, but also correct the wrong data on
-	the disk that had the bad checksum.  This happens without any
-	interaction of a system administrator during normal pool
-	operation.</para>
-
-      <para>The next example will demonstrate this self-healing
-	behavior in <acronym>ZFS</acronym>.  First, a mirrored pool of
-	two disks <filename>/dev/ada0</filename> and
+	repair the data like &man.fsck.8;.  These commands take time,
+	and in severe cases, an administrator has to manually decide
+	which repair operation must be performed.  When
+	<acronym>ZFS</acronym> detects a data block with a checksum
+	that does not match, it tries to read the data from the mirror
+	disk.  If that disk can provide the correct data, it will not
+	only give that data to the application requesting it, but also
+	correct the wrong data on the disk that had the bad checksum.
+	This happens without any interaction from a system
+	administrator during normal pool operation.</para>
+
+      <para>The next example demonstrates this self-healing behavior.
+	A mirrored pool of disks <filename>/dev/ada0</filename> and
 	<filename>/dev/ada1</filename> is created.</para>
 
       <screen>&prompt.root; <userinput>zpool create <replaceable>healer</replaceable> mirror <replaceable>/dev/ada0</replaceable> <replaceable>/dev/ada1</replaceable></userinput>
@@ -754,10 +745,9 @@ errors: No known data errors
 NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
 healer   960M  92.5K   960M     0%  1.00x  ONLINE  -</screen>
 
-      <para>Now, some important data that we want to protect from data
-	errors using the self-healing feature is copied to the pool.
-	A checksum of the pool is then created to compare it against
-	the pool later on.</para>
+      <para>Some important data to be protected from data errors
+	using the self-healing feature is copied to the pool.  A
+	checksum of the pool is created for later comparison.</para>
 
       <screen>&prompt.root; <userinput>cp /some/important/data /healer</userinput>
 &prompt.root; <userinput>zfs list</userinput>
@@ -767,22 +757,22 @@ healer   960M  67.7M   892M     7%  1.00
 &prompt.root; <userinput>cat checksum.txt</userinput>
 SHA1 (/healer) = 2753eff56d77d9a536ece6694bf0a82740344d1f</screen>
 
-      <para>Next, data corruption is simulated by writing random data
-	to the beginning of one of the disks that make up the mirror.
-	To prevent <acronym>ZFS</acronym> from healing the data as
-	soon as it detects it, we export the pool first and import it
-	again afterwards.</para>
+      <para>Data corruption is simulated by writing random data to the
+	beginning of one of the disks in the mirror.  To prevent
+	<acronym>ZFS</acronym> from healing the data as soon as it is
+	detected, the pool is exported before the corruption and
+	imported again afterwards.</para>
 
       <warning>
 	<para>This is a dangerous operation that can destroy vital
 	  data.  It is shown here for demonstrational purposes only
 	  and should not be attempted during normal operation of a
-	  <acronym>ZFS</acronym> storage pool.  Nor should this
-	  <command>dd</command> example be run on a disk with a
-	  different filesystem on it.  Do not use any other disk
-	  device names other than the ones that are part of the
-	  <acronym>ZFS</acronym> pool.  Make sure that proper backups
-	  of the pool are created before running the command!</para>
+	  storage pool.  Nor should this intentional corruption
+	  example be run on any disk with a different file system on
+	  it.  Do not use any disk device names other than the ones
+	  that are part of the pool.  Make certain that proper
+	  backups of the pool are created before running the
+	  command!</para>
       </warning>
 
       <screen>&prompt.root; <userinput>zpool export <replaceable>healer</replaceable></userinput>
@@ -792,15 +782,13 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66
 209715200 bytes transferred in 62.992162 secs (3329227 bytes/sec)
 &prompt.root; <userinput>zpool import healer</userinput></screen>
 
-      <para>The <acronym>ZFS</acronym> pool status shows that one
-	device has experienced an error.  It is important to know that
-	applications reading data from the pool did not receive any
-	data with a wrong checksum.  <acronym>ZFS</acronym> did
-	provide the application with the data from the
-	<filename>ada0</filename> device that has the correct
-	checksums.  The device with the wrong checksum can be found
-	easily as the <literal>CKSUM</literal> column contains a value
-	greater than zero.</para>
+      <para>The pool status shows that one device has experienced an
+	error.  Note that applications reading data from the pool did
+	not receive any incorrect data.  <acronym>ZFS</acronym>
+	provided data from the <filename>ada0</filename> device with
+	the correct checksums.  The device with the wrong checksum can
+	be found easily as the <literal>CKSUM</literal> column
+	contains a nonzero value.</para>
 
       <screen>&prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
     pool: healer
@@ -821,11 +809,10 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66
 
 errors: No known data errors</screen>
 
-      <para><acronym>ZFS</acronym> has detected the error and took
-	care of it by using the redundancy present in the unaffected
-	<filename>ada0</filename> mirror disk.  A checksum comparison
-	with the original one will reveal whether the pool is
-	consistent again.</para>
+      <para>The error was detected and handled by using the redundancy
+	present in the unaffected <filename>ada0</filename> mirror
+	disk.  A checksum comparison with the original one will reveal
+	whether the pool is consistent again.</para>
 
       <screen>&prompt.root; <userinput>sha1 /healer >> checksum.txt</userinput>
 &prompt.root; <userinput>cat checksum.txt</userinput>
@@ -835,17 +822,17 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66
       <para>The two checksums that were generated before and after the
 	intentional tampering with the pool data still match.  This
 	shows how <acronym>ZFS</acronym> is capable of detecting and
-	correcting any errors automatically when the checksums do not
-	match any more.  Note that this is only possible when there is
-	enough redundancy present in the pool.  A pool consisting of a
-	single device has no self-healing capabilities.  That is also
-	the reason why checksums are so important in
+	correcting any errors automatically when the checksums differ.
+	Note that this is only possible when there is enough
+	redundancy present in the pool.  A pool consisting of a single
+	device has no self-healing capabilities.  That is also the
+	reason why checksums are so important in
 	<acronym>ZFS</acronym> and should not be disabled for any
 	reason.  No &man.fsck.8; or similar filesystem consistency
 	check program is required to detect and correct this and the
-	pool was available the whole time.  A scrub operation is now
-	required to remove the falsely written data from
-	<filename>ada1</filename>.</para>
+	pool was still available while the problem existed.  A scrub
+	operation is now required to overwrite the corrupted data on
+	<filename>ada1</filename>.</para>
 
       <screen>&prompt.root; <userinput>zpool scrub <replaceable>healer</replaceable></userinput>
 &prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
@@ -869,12 +856,12 @@ config:
 
 errors: No known data errors</screen>
 
-      <para>The scrub operation is reading the data from
-	<filename>ada0</filename> and corrects all data that has a
-	wrong checksum on <filename>ada1</filename>.  This is
+      <para>The scrub operation reads data from
+	<filename>ada0</filename> and rewrites any data with an
+	incorrect checksum on <filename>ada1</filename>.  This is
 	indicated by the <literal>(repairing)</literal> output from
 	<command>zpool status</command>.  After the operation is
-	complete, the pool status has changed to:</para>
+	complete, the pool status changes to:</para>
 
       <screen>&prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
   pool: healer
@@ -895,12 +882,11 @@ config:
 
 errors: No known data errors</screen>
 
-      <para>After the scrub operation has completed and all the data
+      <para>After the scrub operation completes and all the data
 	has been synchronized from <filename>ada0</filename> to
-	<filename>ada1</filename>, the error messages can be <link
-	linkend="zfs-zpool-clear">cleared</link>
-	from the pool status by running <command>zpool
-	  clear</command>.</para>
+	<filename>ada1</filename>, the error messages can be
+	<link linkend="zfs-zpool-clear">cleared</link> from the pool
+	status by running <command>zpool clear</command>.</para>
 
       <screen>&prompt.root; <userinput>zpool clear <replaceable>healer</replaceable></userinput>
 &prompt.root; <userinput>zpool status <replaceable>healer</replaceable></userinput>
@@ -917,47 +903,46 @@ config:
 
 errors: No known data errors</screen>
 
-      <para>Our pool is now back to a fully working state and all the
+      <para>The pool is now back to a fully working state and all the
 	errors have been cleared.</para>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-online">
       <title>Growing a Pool</title>
 
-      <para>The usable size of a redundant <acronym>ZFS</acronym> pool
-	is limited by the size of the smallest device in the vdev.  If
-	each device in the vdev is replaced sequentially, after the
-	smallest device has completed the
-	<link linkend="zfs-zpool-replace">replace</link> or
-	<link linkend="zfs-term-resilver">resilver</link> operation,
-	the pool can grow based on the size of the new smallest
-	device.  This expansion can be triggered by using
-	<command>zpool online</command> with <option>-e</option>
-	on each device.  After expansion of all devices,
-	the additional space will become available to the pool.</para>
+      <para>The usable size of a redundant pool is limited by the size
+	of the smallest device in the vdev.  If each device in the
+	vdev is replaced sequentially, after the smallest device has
+	completed the <link linkend="zfs-zpool-replace">replace</link>
+	or <link linkend="zfs-term-resilver">resilver</link>
+	operation, the pool can grow based on the size of the new
+	smallest device.  This expansion is triggered by using
+	<command>zpool online</command> with <option>-e</option> on
+	each device.  After expansion of all devices, the additional
+	space becomes available to the pool.</para>
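+
+      <para>For example, assuming both disks of a mirrored pool named
+	<replaceable>mypool</replaceable> have already been replaced
+	with larger disks, the expansion could be triggered
+	with:</para>
+
+      <screen>&prompt.root; <userinput>zpool online -e <replaceable>mypool</replaceable> <replaceable>ada0</replaceable> <replaceable>ada1</replaceable></userinput></screen>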
     </sect2>
 
     <sect2 xml:id="zfs-zpool-import">
-      <title>Importing &amp; Exporting Pools</title>
+      <title>Importing and Exporting Pools</title>
 
-      <para>Pools can be exported in preparation for moving them to
-	another system.  All datasets are unmounted, and each device
-	is marked as exported but still locked so it cannot be used
-	by other disk subsystems.  This allows pools to be imported on
-	other machines, other operating systems that support
-	<acronym>ZFS</acronym>, and even different hardware
-	architectures (with some caveats, see &man.zpool.8;).  When a
-	dataset has open files, <option>-f</option> can be used to
-	force the export of a pool.  <option>-f</option> causes the
-	datasets to be forcibly unmounted, which can cause undefined
-	behavior in the applications which had open files on those
-	datasets.</para>
+      <para>Pools are <emphasis>exported</emphasis> before moving them
+	to another system.  All datasets are unmounted, and each
+	device is marked as exported but still locked so it cannot be
+	used by other disk subsystems.  This allows pools to be
+	<emphasis>imported</emphasis> on other machines, other
+	operating systems that support <acronym>ZFS</acronym>, and
+	even different hardware architectures (with some caveats, see
+	&man.zpool.8;).  When a dataset has open files,
+	<option>-f</option> can be used to force the export of a pool.
+	Use this with caution.  The datasets are forcibly unmounted,
+	potentially resulting in unexpected behavior by the
+	applications which had open files on those datasets.</para>
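+
+      <para>For example, assuming a pool named
+	<replaceable>mypool</replaceable> that is ready to be moved to
+	another machine, it could be exported with:</para>
+
+      <screen>&prompt.root; <userinput>zpool export <replaceable>mypool</replaceable></userinput></screen>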
 
       <para>Importing a pool automatically mounts the datasets.  This
 	may not be the desired behavior, and can be prevented with
 	<option>-N</option>.  <option>-o</option> sets temporary
 	properties for this import only.  <option>altroot=</option>
-	allows importing a zpool with a base mount point instead of
+	allows importing a pool with a base mount point instead of
 	the root of the file system.  If the pool was last used on a
 	different system and was not properly exported, an import
 	might have to be forced with <option>-f</option>.
@@ -971,9 +956,11 @@ errors: No known data errors</screen>
       <para>After upgrading &os;, or if a pool has been imported from
 	a system using an older version of <acronym>ZFS</acronym>, the
 	pool can be manually upgraded to the latest version of
-	<acronym>ZFS</acronym>.  Consider whether the pool may ever
-	need to be imported on an older system before upgrading.  The
-	upgrade process is unreversible and cannot be undone.</para>
+	<acronym>ZFS</acronym> to support newer features.  Consider
+	whether the pool may ever need to be imported on an older
+	system before upgrading.  Upgrading is a one-way process.
+	Older pools can be upgraded, but pools with newer features
+	cannot be downgraded.</para>
 
       <screen>&prompt.root; <userinput>zpool status</userinput>
   pool: mypool
@@ -1001,8 +988,8 @@ errors: No known data errors</screen>
 	features are already supported.</para>
 
       <warning>
-	<para>Systems that boot from a pool must have their boot code
-	  updated to support the new pool version.  Run
+	<para>The boot code on systems that boot from a pool must be
+	  updated to support the new pool version.  Use
 	  <command>gpart bootcode</command> on the partition that
 	  contains the boot code.  See &man.gpart.8; for more
 	  information.</para>
@@ -1012,15 +999,13 @@ errors: No known data errors</screen>
     <sect2 xml:id="zfs-zpool-history">
       <title>Displaying Recorded Pool History</title>
 
-      <para><acronym>ZFS</acronym> records all the commands that were
-	issued to administer the pool.  These include the creation of
-	datasets, changing properties, or when a disk has been
-	replaced in the pool.  This history is useful for reviewing
-	how a pool was created and which user did a specific action
-	and when.  History is not kept in a log file, but is part of
-	the pool itself.  Because of that, history cannot be altered
-	after the fact unless the pool is destroyed.  The command to
-	review this history is aptly named
+      <para>Commands that modify the pool are recorded.  Recorded
+	actions include the creation of datasets, changing properties,
+	or replacement of a disk.  This history is useful for
+	reviewing how a pool was created and which user performed a
+	specific action and when.  History is not kept in a log file,
+	but is part of the pool itself.  The command to review this
+	history is aptly named
 	<command>zpool history</command>:</para>
 
       <screen>&prompt.root; <userinput>zpool history</userinput>
@@ -1032,18 +1017,17 @@ History for 'tank':
 
       <para>The output shows <command>zpool</command> and
 	<command>zfs</command> commands that were executed on the pool
-	along with a timestamp.  Only commands that alter
-	the pool in some way are recorded.  Commands like
-	<command>zfs list</command> are not included.  When
-	no pool name is given to
-	<command>zpool history</command>, the history of all
-	pools is displayed.</para>
+	along with a timestamp.  Only commands that alter the pool in
+	some way are recorded.  Commands like
+	<command>zfs list</command> are not included.  When no pool
+	name is specified, the history of all pools is
+	displayed.</para>
 
       <para><command>zpool history</command> can show even more
 	information when the options <option>-i</option> or
-	<option>-l</option> are provided.  The option
-	<option>-i</option> displays user initiated events as well
-	as internally logged <acronym>ZFS</acronym> events.</para>
+	<option>-l</option> are provided.  <option>-i</option>
+	displays user-initiated events as well as internally logged
+	<acronym>ZFS</acronym> events.</para>
 
       <screen>&prompt.root; <userinput>zpool history -i</userinput>
 History for 'tank':
@@ -1056,9 +1040,9 @@ History for 'tank':
 2013-02-27.18:51:18 zfs create tank/backup</screen>
 
       <para>More details can be shown by adding <option>-l</option>.
-	History records are shown in a long format,
-	including information like the name of the user who issued the
-	command and the hostname on which the change was made.</para>
+	History records are shown in a long format, including
+	information like the name of the user who issued the command
+	and the hostname on which the change was made.</para>
 
       <screen>&prompt.root; <userinput>zpool history -l</userinput>
 History for 'tank':
@@ -1067,36 +1051,36 @@ History for 'tank':
 2013-02-27.18:51:09 zfs set checksum=fletcher4 tank [user 0 (root) on myzfsbox:global]
 2013-02-27.18:51:18 zfs create tank/backup [user 0 (root) on myzfsbox:global]</screen>
 
-      <para>This output clearly shows that the <systemitem
-	  class="username">root</systemitem> user created the mirrored
-	pool (consisting of <filename>/dev/ada0</filename> and
-	<filename>/dev/ada1</filename>).  In addition to that, the
-	hostname (<literal>myzfsbox</literal>) is also shown in the
-	commands after the pool's creation.  The hostname display
-	becomes important when the pool is exported from the current
-	and imported on another system.  The commands that are issued
+      <para>The output shows that the
+	<systemitem class="username">root</systemitem> user created
+	the mirrored pool with disks
+	<filename>/dev/ada0</filename> and
+	<filename>/dev/ada1</filename>.  The hostname
+	<systemitem class="systemname">myzfsbox</systemitem> is also
+	shown in the commands after the pool's creation.  The hostname
+	display becomes important when the pool is exported from one
+	system and imported on another.  The commands that are issued
 	on the other system can clearly be distinguished by the
 	hostname that is recorded for each command.</para>
 
       <para>Both options to <command>zpool history</command> can be
 	combined to give the most detailed information possible for
 	any given pool.  Pool history provides valuable information
-	when tracking down what actions were performed or when more
-	detailed output is needed for debugging.</para>
+	when tracking down the actions that were performed or when
+	more detailed output is needed for debugging.</para>
     </sect2>
 
     <sect2 xml:id="zfs-zpool-iostat">
       <title>Performance Monitoring</title>
 
-      <para>A built-in monitoring system can display
-	statistics about I/O on the pool in real-time.  It
-	shows the amount of free and used space on the pool, how many
-	read and write operations are being performed per second, and
-	how much I/O bandwidth is currently being utilized.
-	By default, all pools in the system
-	are monitored and displayed.  A pool name can be provided
-	to limit monitoring to just that pool.  A
-	basic example:</para>
+      <para>A built-in monitoring system can display pool
+	<acronym>I/O</acronym> statistics in real time.  It shows the
+	amount of free and used space on the pool, how many read and
+	write operations are being performed per second, and how much
+	<acronym>I/O</acronym> bandwidth is currently being utilized.
+	By default, all pools in the system are monitored and
+	displayed.  A pool name can be provided to limit monitoring to
+	just that pool.  A basic example:</para>
 
       <screen>&prompt.root; <userinput>zpool iostat</userinput>
                capacity     operations    bandwidth
@@ -1104,10 +1088,10 @@ pool        alloc   free   read  write  
 ----------  -----  -----  -----  -----  -----  -----
 data         288G  1.53T      2     11  11.3K  57.1K</screen>
 
-      <para>To continuously monitor I/O activity on the pool, a
-	number can be specified as the last parameter, indicating
-	the frequency in seconds to wait between updates.
-	The next statistic line is printed after each interval.  Press
+      <para>To continuously monitor <acronym>I/O</acronym> activity, a
+	number can be specified as the last parameter, indicating an
+	interval in seconds to wait between updates.  The next
+	statistic line is printed after each interval.  Press
 	<keycombo action="simul">
 	  <keycap>Ctrl</keycap>
 	  <keycap>C</keycap>
@@ -1116,14 +1100,13 @@ data         288G  1.53T      2     11  
 	the interval to specify the total number of statistics to
 	display.</para>
 
-      <para>Even more detailed pool I/O statistics can be displayed
-	with <option>-v</option>.  Each device in
-	the pool is shown with a statistics line.
-	This is useful in seeing how many read and write
-	operations are being performed on each device, and can help
-	determine if any individual device is slowing down the
-	pool.  This example shows a mirrored pool
-	consisting of two devices:</para>
+      <para>Even more detailed <acronym>I/O</acronym> statistics can
+	be displayed with <option>-v</option>.  Each device in the
+	pool is shown with a statistics line.  This is useful in
+	seeing how many read and write operations are being performed
+	on each device, and can help determine if any individual
+	device is slowing down the pool.  This example shows a
+	mirrored pool with two devices:</para>
 
       <screen>&prompt.root; <userinput>zpool iostat -v </userinput>
                             capacity     operations    bandwidth
@@ -1139,14 +1122,14 @@ data                      288G  1.53T   
     <sect2 xml:id="zfs-zpool-split">
       <title>Splitting a Storage Pool</title>
 
-      <para>A pool consisting of one or more mirror vdevs can be
-	split into a second pool.  The last member of each mirror
-	(unless otherwise specified) is detached and used to create a
-	new pool containing the same data.  It is recommended that the
-	operation first be attempted with the <option>-n</option>
-	parameter.  The details of the proposed operation are
-	displayed without actually performing it.  This helps ensure
-	the operation will happen as expected.</para>
+      <para>A pool consisting of one or more mirror vdevs can be split
+	into two pools.  Unless otherwise specified, the last member
+	of each mirror is detached and used to create a new pool

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


