From owner-svn-doc-projects@FreeBSD.ORG Wed Jun 4 01:31:24 2014
Message-Id: <201406040131.s541VOet037431@svn.freebsd.org>
From: Warren Block
Date: Wed, 4 Jun 2014 01:31:24 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject: svn commit: r45004 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs

Author: wblock
Date: Wed Jun 4 01:31:23 2014
New Revision: 45004
URL: http://svnweb.freebsd.org/changeset/doc/45004

Log: More assorted fixes and cleanups.

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Tue Jun 3 23:21:48 2014 (r45003) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Wed Jun 4 01:31:23 2014 (r45004) @@ -162,9 +162,9 @@ devfs 1 1 0 example 17547136 0 17547136 0% /example This output shows that the example pool - has been created and mounted. It is now - accessible as a file system. Files can be created on it and - users can browse it, like in this example: + has been created and mounted. It is now accessible as a file + system. Files can be created on it and users can browse it, + like in this example: &prompt.root; cd /example &prompt.root; ls @@ -578,18 +578,19 @@ config: errors: No known data errors Pools can also be constructed using partitions rather than - whole disks. Putting ZFS in a separate partition allows the - same disk to have other partitions for other purposes. In - particular, partitions with bootcode and file systems needed - for booting can be added. This allows booting from disks that - are also members of a pool. There is no performance penalty - on &os; when using a partition rather than a whole disk. - Using partitions also allows the administrator to - under-provision the disks, using less - than the full capacity. If a future replacement disk of the - same nominal size as the original actually has a slightly - smaller capacity, the smaller partition will still fit, and - the replacement disk can still be used. + whole disks.
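Building a pool on partitions rather than whole disks, as this hunk describes, might look like the following minimal sketch. The pool and partition names are only illustrative and are not part of the commit; they assume two disks already partitioned with gpart, each providing a freebsd-zfs partition:

# zpool create mypool mirror ada0p3 ada1p3
# zpool status mypool

Because the partitions can be made slightly smaller than the disks, a replacement disk with marginally less capacity can still hold the same partition layout.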
Putting ZFS in a separate + partition allows the same disk to have other partitions for + other purposes. In particular, partitions with bootcode and + file systems needed for booting can be added. This allows + booting from disks that are also members of a pool. There is + no performance penalty on &os; when using a partition rather + than a whole disk. Using partitions also allows the + administrator to under-provision the + disks, using less than the full capacity. If a future + replacement disk of the same nominal size as the original + actually has a slightly smaller capacity, the smaller + partition will still fit, and the replacement disk can still + be used. Create a RAID-Z2 pool using @@ -722,7 +723,7 @@ errors: No known data errors RAID-Z vdevs risks the data on the entire pool. Writes are distributed, so the failure of the non-redundant disk will result in the loss of a fraction of - every block that has been writen to the pool. + every block that has been written to the pool. Data is striped across each of the vdevs. For example, with two mirror vdevs, this is effectively a @@ -1278,16 +1279,16 @@ errors: No known data errors resilver operation, the pool can grow to use the capacity of the new device. For example, consider a mirror of a 1 TB drive and a - 2 drive. The usable space is 1 . Then the + 2 drive. The usable space is 1 TB. Then the 1 TB is replaced with another 2 TB drive, and the resilvering process duplicates existing data. Because both of the devices now have 2 TB capacity, the mirror's available space can be grown to 2 TB. Expansion is triggered by using - zpool online with on - each device. After expansion of all devices, the additional - space becomes available to the pool. + zpool online -e on each device. After + expansion of all devices, the additional space becomes + available to the pool. @@ -1301,10 +1302,11 @@ errors: No known data errors operating systems that support ZFS, and even different hardware architectures (with some caveats, see &man.zpool.8;). When a dataset has open files, - can be used to force the export of a pool. - Use this with caution. The datasets are forcibly unmounted, - potentially resulting in unexpected behavior by the - applications which had open files on those datasets. + zpool export -f can be used to force the + export of a pool. Use this with caution. The datasets are + forcibly unmounted, potentially resulting in unexpected + behavior by the applications which had open files on those + datasets. Export a pool that is not in use: @@ -1312,14 +1314,16 @@ errors: No known data errors Importing a pool automatically mounts the datasets. This may not be the desired behavior, and can be prevented with - . sets temporary - properties for this import only. - allows importing a pool with a base mount point instead of - the root of the file system. If the pool was last used on a - different system and was not properly exported, an import - might have to be forced with . - imports all pools that do not appear to be - in use by another system. + zpool import -N. + zpool import -o sets temporary properties + for this import only. + zpool import altroot= allows importing a + pool with a base mount point instead of the root of the file + system. If the pool was last used on a different system and + was not properly exported, an import might have to be forced + with zpool import -f. + zpool import -a imports all pools that do + not appear to be in use by another system. 
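The export and import behavior described above can be sketched with a pair of commands. The pool name and alternate root here are hypothetical, not taken from the commit:

# zpool export mypool
# zpool import -o altroot=/mnt mypool

Running zpool import with no pool name simply lists the pools available for import, which is what the next example in the chapter shows.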
List all available pools for import: @@ -1401,9 +1405,9 @@ Enabled the following features on 'mypoo The newer features of ZFS will not be available until zpool upgrade has - completed. can be used to see what new - features will be provided by upgrading, as well as which - features are already supported. + completed. zpool upgrade -v can be used to + see what new features will be provided by upgrading, as well + as which features are already supported. Upgrade a pool to support additional feature flags: @@ -1716,10 +1720,9 @@ mypool/var/log 178K 93.2G 178K mypool/var/mail 144K 93.2G 144K /var/mail mypool/var/tmp 152K 93.2G 152K /var/tmp - In modern versions of - ZFS, zfs destroy - is asynchronous, and the free space might take several - minutes to appear in the pool. Use + In modern versions of ZFS, + zfs destroy is asynchronous, and the free + space might take several minutes to appear in the pool. Use zpool get freeing poolname to see the freeing property, indicating how many @@ -2107,7 +2110,7 @@ M /var/tmp/ Snapshot Rollback - Once at least one snapshot is available, it can be + When at least one snapshot is available, it can be rolled back to at any time. Most of the time this is the case when the current state of the dataset is no longer required and an older version is preferred. Scenarios such @@ -2151,11 +2154,11 @@ vi.recover &prompt.user; At this point, the user realized that too many files - were deleted and wants them back. ZFS provides an easy way - to get them back using rollbacks, but only when snapshots of - important data are performed on a regular basis. To get the - files back and start over from the last snapshot, issue the - command: + were deleted and wants them back. ZFS + provides an easy way to get them back using rollbacks, but + only when snapshots of important data are performed on a + regular basis. To get the files back and start over from + the last snapshot, issue the command: &prompt.root; zfs rollback mypool/var/tmp@diff_snapshot &prompt.user; ls /var/tmp @@ -2164,8 +2167,8 @@ passwd passwd.copy vi.recov The rollback operation restored the dataset to the state of the last snapshot. It is also possible to roll back to a snapshot that was taken much earlier and has other snapshots - that were created after it. When trying to do this, ZFS - will issue this warning: + that were created after it. When trying to do this, + ZFS will issue this warning: &prompt.root; zfs list -rt snapshot mypool/var/tmp AME USED AVAIL REFER MOUNTPOINT @@ -2334,8 +2337,8 @@ usr/home/joenew 1.3G 31k 1.3G After a clone is created it is an exact copy of the state the dataset was in when the snapshot was taken. The clone can now be changed independently from its originating dataset. - The only connection between the two is the snapshot. ZFS - records this connection in the property + The only connection between the two is the snapshot. + ZFS records this connection in the property origin. Once the dependency between the snapshot and the clone has been removed by promoting the clone using zfs promote, the @@ -2368,7 +2371,7 @@ backup.txz loader.conf plans.txt Filesystem Size Used Avail Capacity Mounted on usr/home/joe 1.3G 128k 1.3G 0% /usr/home/joe - The cloned snapshot is now handled by ZFS like an ordinary + The cloned snapshot is now handled like an ordinary dataset. It contains all the data from the original snapshot plus the files that were added to it like loader.conf. 
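The snapshot, clone, and promote workflow covered in this part of the chapter can be summarized in three commands. The dataset names below are hypothetical stand-ins for the usr/home/joe example used in the text:

# zfs snapshot mypool/home/joe@backup
# zfs clone mypool/home/joe@backup mypool/home/joenew
# zfs promote mypool/home/joenew

After the promote, the clone is no longer dependent on the snapshot of the original dataset, and the original can be renamed or destroyed without affecting it.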
Clones can be used in @@ -2388,14 +2391,13 @@ usr/home/joe 1.3G 128k 1.3G Keeping data on a single pool in one location exposes it to risks like theft and natural or human disasters. Making regular backups of the entire pool is vital. - ZFS provides a built-in - serialization feature that can send a stream representation of - the data to standard output. Using this technique, it is - possible to not only store the data on another pool connected - to the local system, but also to send it over a network to - another system. Snapshots are the basis for - this replication (see the section on - ZFS + ZFS provides a built-in serialization + feature that can send a stream representation of the data to + standard output. Using this technique, it is possible to not + only store the data on another pool connected to the local + system, but also to send it over a network to another system. + Snapshots are the basis for this replication (see the section + on ZFS snapshots). The commands used for replicating data are zfs send and zfs receive. @@ -2503,11 +2505,11 @@ mypool 960M 50.2M 910M 5% 1.00x second snapshot contains only the changes that were made to the file system between now and the previous snapshot, replica1. Using - with zfs send and - indicating the pair of snapshots generates an incremental - replica stream containing only the data that has changed. - This can only succeed if the initial snapshot already exists - on the receiving side. + zfs send -i and indicating the pair of + snapshots generates an incremental replica stream containing + only the data that has changed. This can only succeed if + the initial snapshot already exists on the receiving + side. &prompt.root; zfs send -v -i mypool@replica1 mypool@replica2 | zfs receive /backup/mypool send from @replica1 to mypool@replica2 estimated size is 5.02M @@ -2874,7 +2876,7 @@ mypool/compressed_dataset logicalused Deduplication When enabled, - Deduplication + deduplication uses the checksum of each block to detect duplicate blocks. When a new block is a duplicate of an existing block, ZFS writes an additional reference to the @@ -3050,7 +3052,7 @@ dedup = 1.05, compress = 1.11, copies = vfs.zfs.arc_max - - The maximum size of the ARC. The default is all RAM less 1 GB, or one half of RAM, whichever is more. @@ -3063,7 +3065,7 @@ dedup = 1.05, compress = 1.11, copies = vfs.zfs.arc_meta_limit - - Limits the portion of the + - Limit the portion of the ARC that can be used to store metadata. The default is one fourth of vfs.zfs.arc_max. Increasing @@ -3079,7 +3081,7 @@ dedup = 1.05, compress = 1.11, copies = vfs.zfs.arc_min - - The minimum size of the ARC. The default is one half of vfs.zfs.arc_meta_limit. Adjust this @@ -3103,9 +3105,9 @@ dedup = 1.05, compress = 1.11, copies = vfs.zfs.min_auto_ashift - - The minimum ashift (sector size) - that will be used automatically at pool creation time. - The value is a power of two. The default value of + - Minimum ashift (sector size) that + will be used automatically at pool creation time. The + value is a power of two. The default value of 9 represents 2^9 = 512, a sector size of 512 bytes. To avoid write amplification and get @@ -3196,7 +3198,7 @@ dedup = 1.05, compress = 1.11, copies = vfs.zfs.top_maxinflight - - The maxmimum number of outstanding I/Os per top-level + - Maxmimum number of outstanding I/Os per top-level vdev. Limits the depth of the command queue to prevent high latency. 
The limit is per top-level vdev, meaning the limit applies to @@ -3964,8 +3966,7 @@ vfs.zfs.vdev.cache.size="5M"ZFS will attempt to recover the data from any available redundancy, like mirrors or RAID-Z). - Validation of all checksums can be triggered with - scrub. Checksum algorithms include: @@ -4071,11 +4072,11 @@ vfs.zfs.vdev.cache.size="5M"When set to a value greater than 1, the copies property instructs ZFS to maintain multiple copies of - each block in the File System or - Volume. Setting this - property on important datasets provides additional + each block in the + File System + or + Volume. Setting + this property on important datasets provides additional redundancy from which to recover a block that does not match its checksum. In pools without redundancy, the copies feature is the only form of redundancy. The @@ -4132,19 +4133,17 @@ vfs.zfs.vdev.cache.size="5M"ZFS has scrub. scrub reads all data blocks stored on the pool and verifies their checksums against the known - good checksums stored in the metadata. A periodic - check of all the data stored on the pool ensures the - recovery of any corrupted blocks before they are needed. - A scrub is not required after an unclean shutdown, but - is recommended at least once - every three months. The checksum of each block is - verified as blocks are read during normal use, but a - scrub makes certain that even + good checksums stored in the metadata. A periodic check + of all the data stored on the pool ensures the recovery + of any corrupted blocks before they are needed. A scrub + is not required after an unclean shutdown, but is + recommended at least once every three months. The + checksum of each block is verified as blocks are read + during normal use, but a scrub makes certain that even infrequently used blocks are checked for silent - corruption. Data security is improved, - especially in archival storage situations. The relative - priority of scrub can be adjusted - with scrub can be adjusted with vfs.zfs.scrub_delay to prevent the scrub from degrading the performance of other workloads on the pool. @@ -4257,10 +4256,9 @@ vfs.zfs.vdev.cache.size="5M"storage/home/bob, enough disk space must exist outside of the refreservation amount for the - operation to succeed. Descendants of the main - data set are not counted in the - refreservation amount and so do not - encroach on the space set. + operation to succeed. Descendants of the main data set + are not counted in the refreservation + amount and so do not encroach on the space set.
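For the incremental replication discussed in the diff, a minimal sketch follows. It assumes the hypothetical pools mypool and backup from the text, and that the first snapshot, replica1, has already been sent and received on the backup side:

# zfs snapshot mypool@replica2
# zfs send -i mypool@replica1 mypool@replica2 | zfs receive backup/mypool

Only the blocks that changed between the two snapshots travel over the pipe, which is why the incremental stream is much smaller than a full send.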
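The integrity features described at the end of the diff (checksums, the copies property, and scrub) can be exercised along these lines; the pool and dataset names are again hypothetical:

# zfs set copies=2 mypool/important
# zpool scrub mypool
# zpool status mypool

zpool status reports scrub progress and any checksum errors that were found and repaired.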