Date:      Thu, 15 Aug 2013 01:08:24 +0000 (UTC)
From:      Warren Block <wblock@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject:   svn commit: r42544 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-ID:  <201308150108.r7F18OET056618@svn.freebsd.org>

Author: wblock
Date: Thu Aug 15 01:08:23 2013
New Revision: 42544
URL: http://svnweb.freebsd.org/changeset/doc/42544

Log:
  Move Terms section to end.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Thu Aug 15 01:04:54 2013	(r42543)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Thu Aug 15 01:08:23 2013	(r42544)
@@ -33,636 +33,6 @@
     designed to prevent data write corruption and to overcome some
     of the limitations of hardware <acronym>RAID</acronym>.</para>
 
-  <sect1 id="zfs-term">
-    <title>ZFS Features and Terminology</title>
-
-    <para>ZFS is a fundamentally different file system because it
-      is more than just a file system.  ZFS combines the roles of
-      file system and volume manager, enabling additional storage
-      devices to be added to a live system and having the new space
-      available on all of the existing file systems in that pool
-      immediately.  By combining the traditionally separate roles,
-      ZFS is able to overcome previous limitations that prevented
-      RAID groups from being able to grow.  Each top level device
-      in a zpool is called a vdev, which can be a simple disk or a
-      RAID transformation such as a mirror or RAID-Z array.  ZFS
-      file systems (called datasets) each have access to the
-      combined free space of the entire pool.  As blocks are
-      allocated, the free space in the pool available to each file
-      system is decreased.  This approach avoids the common pitfall
-      with extensive partitioning where free space becomes
-      fragmented across the partitions.</para>
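
A minimal sketch of the pooled-space behaviour described above (the pool,
device, and dataset names are hypothetical), run as root:

    # create a pool from two hypothetical disks, then two datasets in it
    zpool create mypool ada1 ada2
    zfs create mypool/home
    zfs create mypool/var
    # both datasets report the same pool-wide available space
    zfs list -o name,used,available mypool/home mypool/var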
-
-    <informaltable pgwide="1">
-      <tgroup cols="2">
-	<tbody>
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-zpool">zpool</entry>
-
-	    <entry>A storage pool is the most basic building block of
-	      ZFS.  A pool is made up of one or more vdevs, the
-	      underlying devices that store the data.  A pool is then
-	      used to create one or more file systems (datasets) or
-	      block devices (volumes).  These datasets and volumes
-	      share the pool of remaining free space.  Each pool is
-	      uniquely identified by a name and a
-	      <acronym>GUID</acronym>.  The zpool also controls the
-	      version number and therefore the features available for
-	      use with ZFS.
-
-	      <note>
-		<para>&os; 9.0 and 9.1 include support for ZFS version
-		  28.  Future versions use ZFS version 5000 with
-		  feature flags.  This allows greater
-		  cross-compatibility with other implementations of
-		  ZFS.</para>
-	      </note></entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-vdev">vdev&nbsp;Types</entry>
-
-	    <entry>A zpool is made up of one or more vdevs, which
-	      themselves can be a single disk or a group of disks, in
-	      the case of a RAID transform.  When multiple vdevs are
-	      used, ZFS spreads data across the vdevs to increase
-	      performance and maximize usable space.
-
-	      <itemizedlist>
-		<listitem>
-		  <para id="zfs-term-vdev-disk">
-		    <emphasis>Disk</emphasis> - The most basic type
-		    of vdev is a standard block device.  This can be
-		    an entire disk (such as
-		    <devicename><replaceable>/dev/ada0</replaceable></devicename>
-		    or
-		    <devicename><replaceable>/dev/da0</replaceable></devicename>)
-		    or a partition
-		    (<devicename><replaceable>/dev/ada0p3</replaceable></devicename>).
-		    Contrary to the Solaris documentation, on &os;
-		    there is no performance penalty for using a
-		    partition rather than an entire disk.</para>
-		</listitem>
-
-		<listitem>
-		  <para id="zfs-term-vdev-file">
-		    <emphasis>File</emphasis> - In addition to
-		    disks, ZFS pools can be backed by regular files;
-		    this is especially useful for testing and
-		    experimentation.  Use the full path to the file
-		    as the device path in the zpool create command.
-		    All vdevs must be at least 128&nbsp;MB in
-		    size.</para>
-		</listitem>
-
-		<listitem>
-		  <para id="zfs-term-vdev-mirror">
-		    <emphasis>Mirror</emphasis> - When creating a
-		    mirror, specify the <literal>mirror</literal>
-		    keyword followed by the list of member devices
-		    for the mirror.  A mirror consists of two or
-		    more devices; all data will be written to all
-		    member devices.  A mirror vdev will only hold as
-		    much data as its smallest member.  A mirror vdev
-		    can withstand the failure of all but one of its
-		    members without losing any data.</para>
-
-		  <note>
-		    <para>A regular single disk vdev can be upgraded to
-		      a mirror vdev at any time using the
-		      <command>zpool</command> <link
-			linkend="zfs-zpool-attach">attach</link>
-		      command.</para>
-		  </note>
-		</listitem>
-
-		<listitem>
-		  <para id="zfs-term-vdev-raidz">
-		    <emphasis><acronym>RAID</acronym>-Z</emphasis> -
-		    ZFS implements RAID-Z, a variation on standard
-		    RAID-5 that offers better distribution of parity
-		    and eliminates the "RAID-5 write hole" in which
-		    the data and parity information become
-		    inconsistent after an unexpected restart.  ZFS
-		    supports 3 levels of RAID-Z which provide
-		    varying levels of redundancy in exchange for
-		    decreasing levels of usable storage.  The types
-		    are named RAID-Z1 through Z3 based on the number
-		    of parity devices in the array and the number
-		    of disks that the pool can operate
-		    without.</para>
-
-		  <para>In a RAID-Z1 configuration with 4 disks,
-		    each 1&nbsp;TB, usable storage will be 3&nbsp;TB
-		    and the pool will still be able to operate in
-		    degraded mode with one faulted disk.  If an
-		    additional disk goes offline before the faulted
-		    disk is replaced and resilvered, all data in the
-		    pool can be lost.</para>
-
-		  <para>In a RAID-Z3 configuration with 8 disks of
-		    1&nbsp;TB, the volume would provide 5&nbsp;TB of
-		    usable space and still be able to operate with
-		    three faulted disks.  Sun recommends no more
-		    than 9 disks in a single vdev.  If the
-		    configuration has more disks, it is recommended
-		    to divide them into separate vdevs and the pool
-		    data will be striped across them.</para>
-
-		  <para>A configuration of 2 RAID-Z2 vdevs
-		    consisting of 8 disks each would create
-		    something similar to a RAID 60 array.  A RAID-Z
-		    group's storage capacity is approximately the
-		    size of the smallest disk, multiplied by the
-		    number of non-parity disks.  4x 1&nbsp;TB disks
-		    in Z1 have an effective size of approximately
-		    3&nbsp;TB, and an 8x 1&nbsp;TB array in Z3 will
-		    yield 5&nbsp;TB of usable space.</para>
-		</listitem>
-
-		<listitem>
-		  <para id="zfs-term-vdev-spare">
-		    <emphasis>Spare</emphasis> - ZFS has a special
-		    pseudo-vdev type for keeping track of available
-		    hot spares.  Note that installed hot spares are
-		    not deployed automatically; they must be manually
-		    configured to replace the failed device using
-		    the zpool replace command.</para>
-		</listitem>
-
-		<listitem>
-		  <para id="zfs-term-vdev-log">
-		    <emphasis>Log</emphasis> - ZFS Log Devices, also
-		    known as the ZFS Intent Log (<acronym>ZIL</acronym>),
-		    move the intent log from the regular pool
-		    devices to a dedicated device.  The ZIL
-		    accelerates synchronous transactions by using
-		    storage devices (such as
-		    <acronym>SSD</acronym>s) that are faster
-		    compared to those used for the main pool.  When
-		    data is being written and the application
-		    requests a guarantee that the data has been
-		    safely stored, the data is written to the faster
-		    ZIL storage, then later flushed out to the
-		    regular disks, greatly reducing the latency of
-		    synchronous writes.  Log devices can be
-		    mirrored, but RAID-Z is not supported.  When
-		    specifying multiple log devices, writes will be
-		    load balanced across all devices.</para>
-		</listitem>
-
-		<listitem>
-		  <para id="zfs-term-vdev-cache">
-		    <emphasis>Cache</emphasis> - Adding a cache vdev
-		    to a zpool will add the storage of the cache to
-		    the L2ARC.  Cache devices cannot be mirrored.
-		    Since a cache device only stores additional
-		    copies of existing data, there is no risk of
-		    data loss.</para>
-		</listitem>
-	      </itemizedlist></entry>
-	  </row>
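
A sketch of how the vdev types above are specified on the command line
(the pool and device names are hypothetical); the keywords mirror,
raidz1 through raidz3, spare, log, and cache name the vdev type:

    # a pool with one mirror vdev plus a hot spare
    zpool create mypool mirror ada1 ada2 spare ada3
    # a pool with a single RAID-Z2 vdev of four disks
    zpool create bigpool raidz2 da0 da1 da2 da3
    # add a dedicated log (ZIL) device and an L2ARC cache device
    zpool add mypool log ada4
    zpool add mypool cache ada5
    # attach another disk to an existing member, creating or growing a mirror
    zpool attach mypool ada1 ada6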
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-arc">Adaptive Replacement
-	      Cache (<acronym>ARC</acronym>)</entry>
-
-	    <entry>ZFS uses an Adaptive Replacement Cache
-	      (<acronym>ARC</acronym>), rather than a more
-	      traditional Least Recently Used
-	      (<acronym>LRU</acronym>) cache.  An
-	      <acronym>LRU</acronym> cache is a simple list of items
-	      in the cache sorted by when each object was most
-	      recently used; new items are added to the top of the
-	      list and once the cache is full items from the bottom
-	      of the list are evicted to make room for more active
-	      objects.  An <acronym>ARC</acronym> consists of four
-	      lists; the Most Recently Used (<acronym>MRU</acronym>)
-	      and Most Frequently Used (<acronym>MFU</acronym>)
-	      objects, plus a ghost list for each.  These ghost
-	      objects, plus a ghost list for each.  These ghost
-	      lists track recently evicted objects to prevent them
-	      from being added back to the cache.  This increases the
-	      history of only being used occasionally.  Another
-	      advantage of using both an <acronym>MRU</acronym> and
-	      <acronym>MFU</acronym> is that scanning an entire
-	      filesystem would normally evict all data from an
-	      <acronym>MRU</acronym> or <acronym>LRU</acronym> cache
-	      in favor of this freshly accessed content.  In the
-	      case of <acronym>ZFS</acronym>, since there is also an
-	      <acronym>MFU</acronym> that only tracks the most
-	      frequently used objects, the cache of the most
-	      commonly accessed blocks remains.</entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-l2arc">L2ARC</entry>
-
-	    <entry>The <acronym>L2ARC</acronym> is the second level
-	      of the <acronym>ZFS</acronym> caching system.  The
-	      primary <acronym>ARC</acronym> is stored in
-	      <acronym>RAM</acronym>; however, since the amount of
-	      available <acronym>RAM</acronym> is often limited,
-	      <acronym>ZFS</acronym> can also make use of <link
-		linkend="zfs-term-vdev-cache">cache</link>
-	      vdevs.  Solid State Disks (<acronym>SSD</acronym>s) are
-	      often used as these cache devices due to their higher
-	      speed and lower latency compared to traditional spinning
-	      disks.  An L2ARC is entirely optional, but having one
-	      will significantly increase read speeds for files that
-	      are cached on the <acronym>SSD</acronym> instead of
-	      having to be read from the regular spinning disks.  The
-	      L2ARC can also speed up <link
-		linkend="zfs-term-deduplication">deduplication</link>
-	      since a <acronym>DDT</acronym> that does not fit in
-	      <acronym>RAM</acronym> but does fit in the
-	      <acronym>L2ARC</acronym> will be much faster than if the
-	      <acronym>DDT</acronym> had to be read from disk.  The
-	      rate at which data is added to the cache devices is
-	      limited to prevent prematurely wearing out the
-	      <acronym>SSD</acronym> with too many writes.  Until the
-	      cache is full (the first block has been evicted to make
-	      room), writing to the <acronym>L2ARC</acronym> is
-	      limited to the sum of the write limit and the boost
-	      limit, then after that limited to the write limit.  A
-	      pair of sysctl values control these rate limits;
-	      <literal>vfs.zfs.l2arc_write_max</literal> controls how
-	      many bytes are written to the cache per second, while
-	      <literal>vfs.zfs.l2arc_write_boost</literal> adds to
-	      this limit during the "Turbo Warmup Phase" (Write
-	      Boost).</entry>
-	  </row>
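
A brief sketch of the knobs mentioned above (the cache device name is
hypothetical; the sysctl names are the two cited in the text):

    # add an SSD as an L2ARC cache device
    zpool add mypool cache ada3
    # inspect the L2ARC write rate limits
    sysctl vfs.zfs.l2arc_write_max
    sysctl vfs.zfs.l2arc_write_boost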
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-cow">Copy-On-Write</entry>
-
-	    <entry>Unlike a traditional file system, when data is
-	      overwritten on ZFS the new data is written to a
-	      different block rather than overwriting the old data in
-	      place.  Only once this write is complete is the metadata
-	      then updated to point to the new location of the data.
-	      This means that in the event of a shorn write (a system
-	      crash or power loss in the middle of writing a file) the
-	      entire original contents of the file are still available
-	      and the incomplete write is discarded.  This also means
-	      that ZFS does not require a fsck after an unexpected
-	      shutdown.</entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-dataset">Dataset</entry>
-
-	    <entry>Dataset is the generic term for a ZFS file system,
-	      volume, snapshot or clone.  Each dataset will have a
-	      unique name in the format:
-	      <literal>poolname/path@snapshot</literal>.  The root of
-	      the pool is technically a dataset as well.  Child
-	      datasets are named hierarchically like directories; in
-	      <literal>mypool/home</literal>, for example, the home
-	      dataset is a child of mypool and inherits properties
-	      from it.  This can be extended further by creating
-	      <literal>mypool/home/user</literal>.  This grandchild
-	      dataset will inherit properties from the parent and
-	      grandparent.  It is also possible to set properties
-	      on a child to override the defaults inherited from the
-	      parents and grandparents.  ZFS also allows
-	      administration of datasets and their children to be
-	      delegated.</entry>
-	  </row>
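
A short sketch of hierarchical dataset naming and property inheritance
(the pool and dataset names are hypothetical):

    # child and grandchild datasets inherit properties from mypool
    zfs create mypool/home
    zfs create mypool/home/user
    # override an inherited property on the grandchild only
    zfs set compression=gzip mypool/home/user
    # show the property and where each value comes from (SOURCE column)
    zfs get -r compression mypool/home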
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-volum">Volume</entry>
-
-	    <entry>In addition to regular file system datasets, ZFS
-	      can also create volumes, which are block devices.
-	      Volumes have many of the same features, including
-	      copy-on-write, snapshots, clones and checksumming.
-	      Volumes can be useful for running other file system
-	      formats on top of ZFS, such as UFS, for virtualization,
-	      or for exporting <acronym>iSCSI</acronym>
-	      extents.</entry>
-	  </row>
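
A sketch of creating a volume and running another file system on top of
it (the names and size are hypothetical):

    # create a 4 GB block device backed by the pool
    zfs create -V 4G mypool/vol0
    # the volume appears as a device node and can hold UFS
    newfs /dev/zvol/mypool/vol0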
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-snapshot">Snapshot</entry>
-
-	    <entry>The <link
-		linkend="zfs-term-cow">copy-on-write</link>
-
-	      design of ZFS allows for nearly instantaneous consistent
-	      snapshots with arbitrary names.  After taking a snapshot
-	      of a dataset (or a recursive snapshot of a parent
-	      dataset that will include all child datasets), new data
-	      is written to new blocks (as described above), however
-	      the old blocks are not reclaimed as free space.  There
-	      are then two versions of the file system, the snapshot
-	      (what the file system looked like before) and the live
-	      file system; however no additional space is used.  As
-	      new data is written to the live file system, new blocks
-	      are allocated to store this data.  The apparent size of
-	      the snapshot will grow as the blocks are no longer used
-	      in the live file system, but only in the snapshot.
-	      These snapshots can be mounted (read only) to allow for
-	      the recovery of previous versions of files.  It is also
-	      possible to <link
-		linkend="zfs-zfs-snapshot">rollback</link>
-	      a live file system to a specific snapshot, undoing any
-	      changes that took place after the snapshot was taken.
-	      Each block in the zpool has a reference counter which
-	      indicates how many snapshots, clones, datasets or
-	      volumes make use of that block.  As files and snapshots
-	      are deleted, the reference count is decremented; once a
-	      block is no longer referenced, it is reclaimed as free
-	      space.  Snapshots can also be marked with a <link
-		linkend="zfs-zfs-snapshot">hold</link>,
-	      once a snapshot is held, any attempt to destroy it will
-	      return an EBUSY error.  Each snapshot can have multiple
-	      holds, each with a unique name.  The <link
-		linkend="zfs-zfs-snapshot">release</link>
-	      command removes the hold so the snapshot can then be
-	      deleted.  Snapshots can be taken on volumes, however
-	      they can only be cloned or rolled back, not mounted
-	      independently.</entry>
-	  </row>
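
A sketch of the snapshot operations described above (the dataset,
snapshot, and hold names are hypothetical):

    # snapshot one dataset, and a parent recursively
    zfs snapshot mypool/home@before-upgrade
    zfs snapshot -r mypool@nightly
    # roll the live file system back to the snapshot
    zfs rollback mypool/home@before-upgrade
    # place, and later release, a named hold on a snapshot
    zfs hold keepme mypool/home@before-upgrade
    zfs release keepme mypool/home@before-upgrade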
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-clone">Clone</entry>
-
-	    <entry>Snapshots can also be cloned; a clone is a writable
-	      version of a snapshot, allowing the file system to be
-	      forked as a new dataset.  As with a snapshot, a clone
-	      initially consumes no additional space, only as new data
-	      is written to a clone and new blocks are allocated does
-	      the apparent size of the clone grow.  As blocks are
-	      overwritten in the cloned file system or volume, the
-	      reference count on the previous block is decremented.
-	      The snapshot upon which a clone is based cannot be
-	      deleted because the clone is dependent upon it (the
-	      snapshot is the parent, and the clone is the child).
-	      Clones can be <literal>promoted</literal>, reversing
-	      this dependency, making the clone the parent and the
-	      previous parent the child.  This operation requires no
-	      additional space, however it will change the way the
-	      used space is accounted.</entry>
-	  </row>
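
A sketch of cloning and promoting (the dataset names are hypothetical):

    # fork a writable clone from a snapshot
    zfs clone mypool/home@before-upgrade mypool/home-test
    # reverse the parent/child dependency so the clone becomes the parent
    zfs promote mypool/home-test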
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-checksum">Checksum</entry>
-
-	    <entry>Every block that is allocated is also checksummed
-	      (the algorithm used is a per-dataset property; see
-	      zfs set).  ZFS transparently validates the checksum of
-	      each block as it is read, allowing ZFS to detect silent
-	      corruption.  If the data that is read does not match the
-	      expected checksum, ZFS will attempt to recover the data
-	      from any available redundancy (mirrors, RAID-Z).
-	      Validation of all checksums can be triggered with the
-	      <link linkend="zfs-term-scrub">scrub</link>
-	      command.  The available checksum algorithms include:
-
-	      <itemizedlist>
-		<listitem>
-		  <para>fletcher2</para>
-		</listitem>
-
-		<listitem>
-		  <para>fletcher4</para>
-		</listitem>
-
-		<listitem>
-		  <para>sha256</para>
-		</listitem>
-	      </itemizedlist>
-
-	      The fletcher algorithms are faster, but sha256 is a
-	      strong cryptographic hash and has a much lower chance
-	      of collisions at the cost of some performance.
-	      Checksums can be disabled, but doing so is
-	      inadvisable.</entry>
-	  </row>
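
A sketch of selecting the checksum algorithm per dataset and verifying
all blocks (the names are hypothetical):

    # choose the checksum algorithm used for new blocks on one dataset
    zfs set checksum=sha256 mypool/home
    zfs get checksum mypool/home
    # validate every checksum in the pool
    zpool scrub mypool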
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-compression">Compression</entry>
-
-	    <entry>Each dataset in ZFS has a compression property,
-	      which defaults to off.  This property can be set to one
-	      of a number of compression algorithms, which will cause
-	      all new data that is written to this dataset to be
-	      compressed as it is written.  In addition to the
-	      reduction in disk usage, this can also increase read and
-	      write throughput, as only the smaller compressed version
-	      of the file needs to be read or written.
-
-	      <note>
-		<para>LZ4 compression is only available after &os;
-		  9.2.</para>
-	      </note></entry>
-	  </row>
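
A sketch of enabling compression on a dataset and checking the achieved
ratio (the names are hypothetical; lz4 may be used instead of gzip where
the note above permits):

    # compress all newly written data on this dataset
    zfs set compression=gzip mypool/home
    # report the achieved compression ratio
    zfs get compressratio mypool/home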
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-deduplication">Deduplication</entry>
-
-	    <entry>ZFS has the ability to detect duplicate blocks of
-	      data as they are written (thanks to the checksumming
-	      feature).  If deduplication is enabled, instead of
-	      writing the block a second time, the reference count of
-	      the existing block will be increased, saving storage
-	      space.  In order to do this, ZFS keeps a deduplication
-	      table (<acronym>DDT</acronym>) in memory, containing the
-	      list of unique checksums, the location of that block and
-	      a reference count.  When new data is written, the
-	      checksum is calculated and compared to the list.  If a
-	      match is found, the data is considered to be a
-	      duplicate.  When deduplication is enabled, the checksum
-	      algorithm is changed to <acronym>SHA256</acronym> to
-	      provide a secure cryptographic hash.  ZFS deduplication
-	      is tunable; if dedup is on, then a matching checksum is
-	      assumed to mean that the data is identical.  If dedup is
-	      set to verify, then the data in the two blocks will be
-	      checked byte-for-byte to ensure it is actually identical
-	      and if it is not, the hash collision will be noted by
-	      ZFS and the two blocks will be stored separately.  Due
-	      to the nature of the <acronym>DDT</acronym>, having to
-	      store the hash of each unique block, it consumes a very
-	      large amount of memory (a general rule of thumb is
-	      5-6&nbsp;GB of <acronym>RAM</acronym> per 1&nbsp;TB of
-	      deduplicated data).
-	      In situations where it is not practical to have enough
-	      <acronym>RAM</acronym> to keep the entire DDT in memory,
-	      performance will suffer greatly as the DDT will need to
-	      be read from disk before each new block is written.
-	      Deduplication can make use of the L2ARC to store the
-	      DDT, providing a middle ground between fast system
-	      memory and slower disks.  It is advisable to consider
-	      using ZFS compression instead, which often provides
-	      nearly as much space savings without the additional
-	      memory requirement.</entry>
-	  </row>
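
A sketch of the dedup settings described above (the names are
hypothetical):

    # trust matching checksums, or verify candidate blocks byte-for-byte
    zfs set dedup=on mypool/home
    zfs set dedup=verify mypool/home
    # the pool-wide dedup ratio appears in the DEDUP column
    zpool list mypool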
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-scrub">Scrub</entry>
-
-	    <entry>In place of a consistency check like fsck, ZFS has
-	      the <literal>scrub</literal> command, which reads all
-	      data blocks stored on the pool and verifies their
-	      checksums against the known good checksums stored
-	      in the metadata.  This periodic check of all the data
-	      stored on the pool ensures the recovery of any corrupted
-	      blocks before they are needed.  A scrub is not required
-	      after an unclean shutdown, but it is recommended to
-	      run a scrub at least once each quarter.  ZFS
-	      compares the checksum for each block as it is read in
-	      the normal course of use, but a scrub operation makes
-	      sure even infrequently used blocks are checked for
-	      silent corruption.</entry>
-	  </row>
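
A sketch of running and monitoring a scrub (the pool name is
hypothetical):

    zpool scrub mypool
    # progress and any repaired blocks are reported here
    zpool status mypool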
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-quota">Dataset Quota</entry>
-
-	    <entry>ZFS provides very fast and accurate dataset, user
-	      and group space accounting in addition to quotas and
-	      space reservations.  This gives the administrator
-	      fine-grained control over how space is allocated and
-	      allows critical file systems to reserve space to ensure
-	      other file systems do not take all of the free space.
-
-	      <para>ZFS supports different types of quotas: the
-		dataset quota, the <link
-		  linkend="zfs-term-refquota">reference
-		  quota (<acronym>refquota</acronym>)</link>, the
-		<link linkend="zfs-term-userquota">user
-		  quota</link>, and the
-		<link linkend="zfs-term-groupquota">group
-		  quota</link>.</para>
-
-	      <para>Quotas limit the amount of space that a dataset
-		and all of its descendants (snapshots of the dataset,
-		child datasets and the snapshots of those datasets)
-		can consume.</para>
-
-	      <note>
-		<para>Quotas cannot be set on volumes, as the
-		  <literal>volsize</literal> property acts as an
-		  implicit quota.</para>
-	      </note></entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-refquota">Reference
-	      Quota</entry>
-
-	    <entry>A reference quota limits the amount of space a
-	      dataset can consume by enforcing a hard limit on the
-	      space used.  However, this hard limit includes only
-	      space that the dataset references and does not include
-	      space used by descendants, such as file systems or
-	      snapshots.</entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-userquota">User
-	      Quota</entry>
-
-	    <entry>User quotas are useful to limit the amount of space
-	      that can be used by the specified user.</entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-groupquota">Group
-	      Quota</entry>
-
-	    <entry>The group quota limits the amount of space that a
-	      specified group can consume.</entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-reservation">Dataset
-	      Reservation</entry>
-
-	    <entry>The <literal>reservation</literal> property makes
-	      it possible to guarantee a minimum amount of space for
-	      the use of a specific dataset and its descendants.  This
-	      means that if a 10&nbsp;GB reservation is set on
-	      <filename>storage/home/bob</filename> and another
-	      dataset tries to use all of the free space, at least
-	      10&nbsp;GB of space is reserved for this dataset.  If a
-	      snapshot is taken of
-	      <filename>storage/home/bob</filename>, the space used by
-	      that snapshot is counted against the reservation.  The
-	      <link
-		linkend="zfs-term-refreservation">refreservation</link>
-	      property works in a similar way, except it
-	      <emphasis>excludes</emphasis> descendants, such as
-	      snapshots.
-
-	      <para>Reservations of any sort are useful in many
-		situations, such as planning and testing the
-		suitability of disk space allocation in a new system,
-		or ensuring that enough space is available on file
-		systems for audio logs or system recovery procedures
-		and files.</para></entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-refreservation">Reference
-	      Reservation</entry>
-
-	    <entry>The <literal>refreservation</literal> property
-	      makes it possible to guarantee a minimum amount of
-	      space for the use of a specific dataset
-	      <emphasis>excluding</emphasis> its descendants.  This
-	      means that if a 10&nbsp;GB reservation is set on
-	      <filename>storage/home/bob</filename> and another
-	      dataset tries to use all of the free space, at least
-	      10&nbsp;GB of space is reserved for this dataset.  In
-	      contrast to a regular <link
-		linkend="zfs-term-reservation">reservation</link>,
-	      space used by snapshots and descendant datasets is not
-	      counted against the reservation.  As an example, if a
-	      snapshot was taken of
-	      <filename>storage/home/bob</filename>, enough disk space
-	      would have to exist outside of the
-	      <literal>refreservation</literal> amount for the
-	      operation to succeed because descendants of the main
-	      data set are not counted by the
-	      <literal>refreservation</literal> amount and so do not
-	      encroach on the space set aside.</entry>
-	  </row>
-
-	  <row>
-	    <entry valign="top"
-	      id="zfs-term-resilver">Resilver</entry>
-
-	    <entry>When a disk fails and must be replaced, the new
-	      disk must be filled with the data that was lost.  This
-	      process of calculating and writing the missing data
-	      (using the parity information distributed across the
-	      remaining drives) to the new drive is called
-	      Resilvering.</entry>
-	  </row>
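
A sketch of replacing a failed disk, which triggers a resilver (the pool
and device names are hypothetical):

    # replace the failed da1 with the new da4
    zpool replace mypool da1 da4
    # watch resilver progress
    zpool status mypool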
-	</tbody>
-      </tgroup>
-    </informaltable>
-  </sect1>
-
   <sect1 id="zfs-differences">
     <title>What Makes ZFS Different</title>
 
@@ -1019,443 +389,1073 @@ config:
 
 errors: No known data errors</screen>
 
-      <para>As shown from this example, everything appears to be
-	normal.</para>
-    </sect2>
+      <para>As shown from this example, everything appears to be
+	normal.</para>
+    </sect2>
+
+    <sect2>
+      <title>Data Verification</title>
+
+      <para><acronym>ZFS</acronym> uses checksums to verify the
+	integrity of stored data.  These are enabled automatically
+	upon creation of file systems and may be disabled using the
+	following command:</para>
+
+      <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
+
+      <para>Doing so is <emphasis>not</emphasis> recommended as
+	checksums take very little storage space and are used to check
+	data integrity using checksum verification in a process
+	known as <quote>scrubbing.</quote> To verify the data
+	integrity of the <literal>storage</literal> pool, issue this
+	command:</para>
+
+      <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
+
+      <para>This process may take considerable time depending on the
+	amount of data stored.  It is also very <acronym>I/O</acronym>
+	intensive, so much so that only one scrub may be run at any
+	given time.  After the scrub has completed, the status is
+	updated and may be viewed by issuing a status request:</para>
+
+      <screen>&prompt.root; <userinput>zpool status storage</userinput>
+ pool: storage
+ state: ONLINE
+ scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
+config:
+
+	NAME        STATE     READ WRITE CKSUM
+	storage     ONLINE       0     0     0
+	  raidz1    ONLINE       0     0     0
+	    da0     ONLINE       0     0     0
+	    da1     ONLINE       0     0     0
+	    da2     ONLINE       0     0     0
+
+errors: No known data errors</screen>
+
+      <para>The completion time is displayed and helps to ensure data
+	integrity over a long period of time.</para>
+
+      <para>Refer to &man.zfs.8; and &man.zpool.8; for other
+	<acronym>ZFS</acronym> options.</para>
+    </sect2>
+  </sect1>
+
+  <sect1 id="zfs-zpool">
+    <title><command>zpool</command> Administration</title>
+
+    <para></para>
+
+    <sect2 id="zfs-zpool-create">
+      <title>Creating &amp; Destroying Storage Pools</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zpool-attach">
+      <title>Adding &amp; Removing Devices</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zpool-resilver">
+      <title>Dealing with Failed Devices</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zpool-import">
+      <title>Importing &amp; Exporting Pools</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zpool-upgrade">
+      <title>Upgrading a Storage Pool</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zpool-status">
+      <title>Checking the Status of a Pool</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zpool-iostat">
+      <title>Performance Monitoring</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zpool-split">
+      <title>Splitting a Storage Pool</title>
+
+      <para></para>
+    </sect2>
+  </sect1>
+
+  <sect1 id="zfs-zfs">
+    <title><command>zfs</command> Administration</title>
+
+    <para></para>
+
+    <sect2 id="zfs-zfs-create">
+      <title>Creating &amp; Destroying Datasets</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zfs-volume">
+      <title>Creating &amp; Destroying Volumes</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zfs-rename">
+      <title>Renaming a Dataset</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zfs-set">
+      <title>Setting Dataset Properties</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zfs-snapshot">
+      <title>Managing Snapshots</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zfs-clones">
+      <title>Managing Clones</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zfs-send">
+      <title>ZFS Replication</title>
+
+      <para></para>
+    </sect2>
+
+    <sect2 id="zfs-zfs-quota">
+      <title>Dataset, User and Group Quotas</title>
+
+      <para>To enforce a dataset quota of 10&nbsp;GB for
+	<filename>storage/home/bob</filename>, use the
+	following:</para>
+
+      <screen>&prompt.root; <userinput>zfs set quota=10G storage/home/bob</userinput></screen>
+
+      <para>To enforce a reference quota of 10&nbsp;GB for
+	<filename>storage/home/bob</filename>, use the
+	following:</para>
+
+      <screen>&prompt.root; <userinput>zfs set refquota=10G storage/home/bob</userinput></screen>
+
+      <para>The general format for setting a user quota is
+	<literal>userquota@<replaceable>user</replaceable>=<replaceable>size</replaceable></literal>,
+	and the user name must be in one of the following
+	formats:</para>
+
+      <itemizedlist>
+	<listitem>
+	  <para><acronym
+	      role="Portable Operating System
+	      Interface">POSIX</acronym> compatible name such as
+	    <replaceable>joe</replaceable>.</para>
+	</listitem>
+
+	<listitem>
+	  <para><acronym
+	      role="Portable Operating System
+	      Interface">POSIX</acronym> numeric ID such as
+	    <replaceable>789</replaceable>.</para>
+	</listitem>
+
+	<listitem>
+	  <para><acronym role="System Identifier">SID</acronym> name
+	    such as
+	    <replaceable>joe.bloggs@example.com</replaceable>.</para>
+	</listitem>
+
+	<listitem>
+	  <para><acronym role="System Identifier">SID</acronym>
+	    numeric ID such as
+	    <replaceable>S-1-123-456-789</replaceable>.</para>
+	</listitem>
+      </itemizedlist>
+
+      <para>For example, to enforce a user quota of 50&nbsp;GB for a
+	user named <replaceable>joe</replaceable>, use the
+	following:</para>
+
+      <screen>&prompt.root; <userinput>zfs set userquota@joe=50G storage/home/bob</userinput></screen>
+
+      <para>To remove the quota or make sure that one is not set,
+	instead use:</para>
+
+      <screen>&prompt.root; <userinput>zfs set userquota@joe=none storage/home/bob</userinput></screen>
+
+      <note>
+	<para>User quota properties are not displayed by
+	  <command>zfs get all</command>.
+	  Non-<username>root</username> users can only see their own
+	  quotas unless they have been granted the
+	  <literal>userquota</literal> privilege.  Users with this
+	  privilege are able to view and set everyone's quota.</para>
+      </note>
+
+      <para>The general format for setting a group quota is:
+	<literal>groupquota@<replaceable>group</replaceable>=<replaceable>size</replaceable></literal>.</para>
+
+      <para>To set the quota for the group
+	<replaceable>firstgroup</replaceable> to 50&nbsp;GB,
+	use:</para>
+
+      <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=50G storage/home/bob</userinput></screen>
 
-    <sect2>
-      <title>Data Verification</title>
+      <para>To remove the quota for the group
+	<replaceable>firstgroup</replaceable>, or to make sure that
+	one is not set, instead use:</para>
 
-      <para><acronym>ZFS</acronym> uses checksums to verify the
-	integrity of stored data.  These are enabled automatically
-	upon creation of file systems and may be disabled using the
-	following command:</para>
+      <screen>&prompt.root; <userinput>zfs set groupquota@firstgroup=none storage/home/bob</userinput></screen>
 
-      <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
+      <para>As with the user quota property,
+	non-<username>root</username> users can only see the quotas
+	associated with the groups that they belong to.  However,
+	<username>root</username> or a user with the
+	<literal>groupquota</literal> privilege can view and set all
+	quotas for all groups.</para>
 
-      <para>Doing so is <emphasis>not</emphasis> recommended as
-	checksums take very little storage space and are used to check
-	data integrity using checksum verification in a process is
-	known as <quote>scrubbing.</quote> To verify the data
-	integrity of the <literal>storage</literal> pool, issue this
-	command:</para>
+      <para>To display the amount of space consumed by each user on
+	the specified filesystem or snapshot, along with any specified
+	quotas, use <command>zfs userspace</command>.  For group
+	information, use <command>zfs groupspace</command>.  For more
+	information about supported options or how to display only
+	specific options, refer to &man.zfs.8;.</para>
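
A brief sketch of the two commands just mentioned, using the same
dataset as the surrounding examples:

    zfs userspace storage/home/bob
    zfs groupspace storage/home/bob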
 
-      <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
+      <para>Users with sufficient privileges and
+	<username>root</username> can list the quota for
+	<filename>storage/home/bob</filename> using:</para>
 
-      <para>This process may take considerable time depending on the
-	amount of data stored.  It is also very <acronym>I/O</acronym>
-	intensive, so much so that only one scrub may be run at any
-	given time.  After the scrub has completed, the status is
-	updated and may be viewed by issuing a status request:</para>
+      <screen>&prompt.root; <userinput>zfs get quota storage/home/bob</userinput></screen>
+    </sect2>
 
-      <screen>&prompt.root; <userinput>zpool status storage</userinput>
- pool: storage
- state: ONLINE
- scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013
-config:
+    <sect2 id="zfs-zfs-reservation">
+      <title>Reservations</title>
 
-	NAME        STATE     READ WRITE CKSUM
-	storage     ONLINE       0     0     0
-	  raidz1    ONLINE       0     0     0
-	    da0     ONLINE       0     0     0
-	    da1     ONLINE       0     0     0
-	    da2     ONLINE       0     0     0
+      <para></para>
 
-errors: No known data errors</screen>
+      <para>The general format of the <literal>reservation</literal>
+	property is
+	<literal>reservation=<replaceable>size</replaceable></literal>,
+	so to set a reservation of 10&nbsp;GB on
+	<filename>storage/home/bob</filename>, use:</para>
 
-      <para>The completion time is displayed and helps to ensure data
-	integrity over a long period of time.</para>
+      <screen>&prompt.root; <userinput>zfs set reservation=10G storage/home/bob</userinput></screen>
 
-      <para>Refer to &man.zfs.8; and &man.zpool.8; for other
-	<acronym>ZFS</acronym> options.</para>
-    </sect2>
-  </sect1>
+      <para>To make sure that no reservation is set, or to remove a
+	reservation, use:</para>
 
-  <sect1 id="zfs-zpool">
-    <title><command>zpool</command> Administration</title>
+      <screen>&prompt.root; <userinput>zfs set reservation=none storage/home/bob</userinput></screen>
 
-    <para></para>
+      <para>The same principle can be applied to the
+	<literal>refreservation</literal> property for setting a
+	refreservation, with the general format
+	<literal>refreservation=<replaceable>size</replaceable></literal>.</para>
 
-    <sect2 id="zfs-zpool-create">
-      <title>Creating &amp; Destroying Storage Pools</title>
+      <para>To check if any reservations or refreservations exist on
+	<filename>storage/home/bob</filename>, execute one of the
+	following commands:</para>
 
-      <para></para>
+      <screen>&prompt.root; <userinput>zfs get reservation storage/home/bob</userinput>
+&prompt.root; <userinput>zfs get refreservation storage/home/bob</userinput></screen>
     </sect2>
 
-    <sect2 id="zfs-zpool-attach">
-      <title>Adding &amp; Removing Devices</title>
+    <sect2 id="zfs-zfs-compression">
+      <title>Compression</title>
 
       <para></para>
     </sect2>
 
-    <sect2 id="zfs-zpool-resilver">
-      <title>Dealing with Failed Devices</title>
+    <sect2 id="zfs-zfs-deduplication">
+      <title>Deduplication</title>
 
       <para></para>
     </sect2>
 
-    <sect2 id="zfs-zpool-import">
-      <title>Importing &amp; Exporting Pools</title>
+    <sect2 id="zfs-zfs-allow">
+      <title>Delegated Administration</title>
 
       <para></para>
     </sect2>

*** DIFF OUTPUT TRUNCATED AT 1000 LINES ***


