From owner-svn-doc-projects@FreeBSD.ORG Mon Feb 10 01:02:17 2014 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id DF7C4951; Mon, 10 Feb 2014 01:02:17 +0000 (UTC) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id C71E019CF; Mon, 10 Feb 2014 01:02:17 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.8/8.14.8) with ESMTP id s1A12HtG029579; Mon, 10 Feb 2014 01:02:17 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.8/8.14.8/Submit) id s1A12HdQ029578; Mon, 10 Feb 2014 01:02:17 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201402100102.s1A12HdQ029578@svn.freebsd.org> From: Warren Block Date: Mon, 10 Feb 2014 01:02:17 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r43855 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.17 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 10 Feb 2014 01:02:17 -0000 Author: wblock Date: Mon Feb 10 01:02:17 2014 New Revision: 43855 URL: http://svnweb.freebsd.org/changeset/doc/43855 Log: Giant whitespace and markup fix from Allan Jude. This document has not been merged to the Handbook, so separate whitespace and content patches should not yet be necessary. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Sun Feb 9 23:21:14 2014 (r43854) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Mon Feb 10 01:02:17 2014 (r43855) @@ -468,9 +468,10 @@ errors: No known data errors Doing so is not recommended! Checksums take very little storage space and provide data - integrity. Many ZFS features will not work properly with - checksums disabled. There is also no noticeable performance - gain from disabling these checksums. + integrity. Many ZFS features will not + work properly with checksums disabled. There is also no + noticeable performance gain from disabling these + checksums. Checksum verification is known as @@ -513,10 +514,10 @@ errors: No known data errors <command>zpool</command> Administration - The administration of ZFS is divided between two main - utilities. The zpool utility which controls - the operation of the pool and deals with adding, removing, - replacing and managing disks, and the + The administration of ZFS is divided + between two main utilities. 
The zpool + utility which controls the operation of the pool and deals with + adding, removing, replacing and managing disks, and the zfs utility, which deals with creating, destroying and managing datasets (both filesystems and @@ -525,12 +526,12 @@ errors: No known data errors Creating & Destroying Storage Pools - Creating a ZFS Storage Pool (zpool) - involves making a number of decisions that are relatively - permanent because the structure of the pool cannot be changed - after the pool has been created. The most important decision - is what types of vdevs to group the physical disks into. See - the list of + Creating a ZFS Storage Pool + (zpool) involves making a number of + decisions that are relatively permanent because the structure + of the pool cannot be changed after the pool has been created. + The most important decision is what types of vdevs to group + the physical disks into. See the list of vdev types for details about the possible options. After the pool has been created, most vdev types do not allow additional disks to be added to @@ -542,13 +543,13 @@ errors: No known data errors created, instead the data must be backed up and the pool recreated. - A ZFS pool that is no longer needed can be destroyed so - that the disks making up the pool can be reused in another - pool or for other purposes. Destroying a pool involves - unmounting all of the datasets in that pool. If the datasets - are in use, the unmount operation will fail and the pool will - not be destroyed. The destruction of the pool can be forced - with , but this can cause + A ZFS pool that is no longer needed can + be destroyed so that the disks making up the pool can be + reused in another pool or for other purposes. Destroying a + pool involves unmounting all of the datasets in that pool. If + the datasets are in use, the unmount operation will fail and + the pool will not be destroyed. The destruction of the pool + can be forced with , but this can cause undefined behavior in applications which had open files on those datasets. @@ -566,13 +567,14 @@ errors: No known data errors When adding disks to the existing vdev is not an option, as in the case of RAID-Z, the other option is to add a vdev to the pool. It is possible, but discouraged, to mix vdev types. - ZFS stripes data across each of the vdevs. For example, if - there are two mirror vdevs, then this is effectively a - RAID 10, striping the writes across the two - sets of mirrors. Because of the way that space is allocated - in ZFS to attempt to have each vdev reach - 100% full at the same time, there is a performance penalty if - the vdevs have different amounts of free space. + ZFS stripes data across each of the vdevs. + For example, if there are two mirror vdevs, then this is + effectively a RAID 10, striping the writes + across the two sets of mirrors. Because of the way that space + is allocated in ZFS to attempt to have each + vdev reach 100% full at the same time, there is a performance + penalty if the vdevs have different amounts of free + space. Currently, vdevs cannot be removed from a zpool, and disks can only be removed from a mirror if there is enough remaining @@ -597,8 +599,8 @@ errors: No known data errors Dealing with Failed Devices - When a disk in a ZFS pool fails, the vdev that the disk - belongs to will enter the + When a disk in a ZFS pool fails, the + vdev that the disk belongs to will enter the Degraded state. 
In this state, all of the data stored on the vdev is still available, but performance may be impacted because missing @@ -629,7 +631,7 @@ errors: No known data errors does not match the one recorded on another device that is part of the storage pool. For example, a mirror with two disks where one drive is starting to malfunction and cannot properly - store the data anymore. This is even worse when the data has + store the data any more. This is even worse when the data has not been accessed for a long time in long term archive storage for example. Traditional file systems need to run algorithms that check and repair the data like the &man.fsck.8; program. @@ -645,8 +647,8 @@ errors: No known data errors operation. The following example will demonstrate this self-healing - behavior in ZFS. First, a mirrored pool of two disks - /dev/ada0 and + behavior in ZFS. First, a mirrored pool of + two disks /dev/ada0 and /dev/ada1 is created. &prompt.root; zpool create healer mirror /dev/ada0 /dev/ada1 @@ -682,19 +684,20 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66 Next, data corruption is simulated by writing random data to the beginning of one of the disks that make up the mirror. - To prevent ZFS from healing the data as soon as it detects it, - we export the pool first and import it again - afterwards. + To prevent ZFS from healing the data as + soon as it detects it, we export the pool first and import it + again afterwards. This is a dangerous operation that can destroy vital data. It is shown here for demonstrational purposes only - and should not be attempted during normal operation of a ZFS - storage pool. Nor should this dd example - be run on a disk with a different filesystem on it. Do not - use any other disk device names other than the ones that are - part of the ZFS pool. Make sure that proper backups of the - pool are created before running the command! + and should not be attempted during normal operation of a + ZFS storage pool. Nor should this + dd example be run on a disk with a + different filesystem on it. Do not use any other disk + device names other than the ones that are part of the + ZFS pool. Make sure that proper backups + of the pool are created before running the command! &prompt.root; zpool export healer @@ -704,11 +707,12 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66 209715200 bytes transferred in 62.992162 secs (3329227 bytes/sec) &prompt.root; zpool import healer - The ZFS pool status shows that one device has experienced - an error. It is important to know that applications reading - data from the pool did not receive any data with a wrong - checksum. ZFS did provide the application with the data from - the ada0 device that has the correct + The ZFS pool status shows that one + device has experienced an error. It is important to know that + applications reading data from the pool did not receive any + data with a wrong checksum. ZFS did + provide the application with the data from the + ada0 device that has the correct checksums. The device with the wrong checksum can be found easily as the CKSUM column contains a value greater than zero. @@ -732,8 +736,8 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66 errors: No known data errors - ZFS has detected the error and took care of it by using - the redundancy present in the unaffected + ZFS has detected the error and took + care of it by using the redundancy present in the unaffected ada0 mirror disk. A checksum comparison with the original one should reveal whether the pool is consistent again. 
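Note that zpool status accepts a -v flag which, on a pool that could not repair every block, lists the affected files as permanent errors so they can be restored from backup. This is a supplementary check, not part of the original walkthrough; on the mirrored healer pool used here the verbose output reports no such files:

&prompt.root; zpool status -v healer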
@@ -745,17 +749,18 @@ SHA1 (/healer) = 2753eff56d77d9a536ece66 The two checksums that were generated before and after the intentional tampering with the pool data still match. This - shows how ZFS is capable of detecting and correcting any - errors automatically when the checksums do not match anymore. - Note that this is only possible when there is enough - redundancy present in the pool. A pool consisting of a single - device has no self-healing capabilities. That is also the - reason why checksums are so important in ZFS and should not be - disabled for any reason. No &man.fsck.8; or similar - filesystem consistency check program is required to detect and - correct this and the pool was available the whole time. A - scrub operation is now required to remove the falsely written - data from ada1. + shows how ZFS is capable of detecting and + correcting any errors automatically when the checksums do not + match any more. Note that this is only possible when there is + enough redundancy present in the pool. A pool consisting of a + single device has no self-healing capabilities. That is also + the reason why checksums are so important in + ZFS and should not be disabled for any + reason. No &man.fsck.8; or similar filesystem consistency + check program is required to detect and correct this and the + pool was available the whole time. A scrub operation is now + required to remove the falsely written data from + ada1. &prompt.root; zpool scrub healer &prompt.root; zpool status healer @@ -783,7 +788,7 @@ errors: No known data errors ada0 and corrects all data that has a wrong checksum on ada1. This is indicated by the (repairing) output from - the zpool status command. After the + zpool status. After the operation is complete, the pool status has changed to the following: @@ -810,7 +815,7 @@ errors: No known data errors has been synchronized from ada0 to ada1, the error messages can be cleared from the pool status by running zpool - clear. + clear. &prompt.root; zpool clear healer &prompt.root; zpool status healer @@ -834,10 +839,10 @@ errors: No known data errors Growing a Pool - The usable size of a redundant ZFS pool is limited by the - size of the smallest device in the vdev. If each device in - the vdev is replaced sequentially, after the smallest device - has completed the + The usable size of a redundant ZFS pool + is limited by the size of the smallest device in the vdev. If + each device in the vdev is replaced sequentially, after the + smallest device has completed the replace or resilver operation, the pool can grow based on the size of the new smallest @@ -854,13 +859,14 @@ errors: No known data errors another system. All datasets are unmounted, and each device is marked as exported but still locked so it cannot be used by other disk subsystems. This allows pools to be imported on - other machines, other operating systems that support ZFS, and - even different hardware architectures (with some caveats, see - &man.zpool.8;). When a dataset has open files, - can be used to force the export - of a pool. causes the datasets to be - forcibly unmounted, which can cause undefined behavior in the - applications which had open files on those datasets. + other machines, other operating systems that support + ZFS, and even different hardware + architectures (with some caveats, see &man.zpool.8;). When a + dataset has open files, can be used to + force the export of a pool. 
causes the + datasets to be forcibly unmounted, which can cause undefined + behavior in the applications which had open files on those + datasets. Importing a pool automatically mounts the datasets. This may not be the desired behavior, and can be prevented with @@ -878,17 +884,17 @@ errors: No known data errors Upgrading a Storage Pool After upgrading &os;, or if a pool has been imported from - a system using an older version of ZFS, the pool can be - manually upgraded to the latest version of ZFS. Consider - whether the pool may ever need to be imported on an older - system before upgrading. The upgrade process is unreversible - and cannot be undone. - - The newer features of ZFS will not be available until - zpool upgrade has completed. - can be used to see what new features will - be provided by upgrading, as well as which features are - already supported by the existing version. + a system using an older version of ZFS, the + pool can be manually upgraded to the latest version of + ZFS. Consider whether the pool may ever + need to be imported on an older system before upgrading. The + upgrade process is unreversible and cannot be undone. + + The newer features of ZFS will not be + available until zpool upgrade has + completed. can be used to see what new + features will be provided by upgrading, as well as which + features are already supported by the existing version. @@ -928,9 +934,9 @@ History for 'tank': pools is displayed. zpool history can show even more - information when the options -i or - -l are provided. The option - -i displays user initiated events as well + information when the options or + are provided. The option + displays user initiated events as well as internally logged ZFS events. &prompt.root; zpool history -i @@ -943,8 +949,8 @@ History for 'tank': 2013-02-27.18:51:13 [internal create txg:55] dataset = 39 2013-02-27.18:51:18 zfs create tank/backup - More details can be shown by adding - -l. History records are shown in a long format, + More details can be shown by adding . + History records are shown in a long format, including information like the name of the user who issued the command and the hostname on which the change was made. @@ -1051,11 +1057,12 @@ data 288G 1.53T Creating & Destroying Datasets Unlike traditional disks and volume managers, space - in ZFS is not preallocated. With traditional - file systems, once all of the space was partitioned and - assigned, there was no way to add an additional file system - without adding a new disk. With ZFS, new - file systems can be created at any time. Each ZFS is not preallocated. With + traditional file systems, once all of the space was + partitioned and assigned, there was no way to add an + additional file system without adding a new disk. With + ZFS, new file systems can be created at any + time. Each dataset has properties including features like compression, deduplication, caching and quoteas, as well as other useful @@ -1250,25 +1257,27 @@ tank custom:costcenter - ZFS Replication - Keeping the data on a single pool in one location exposes + Keeping data on a single pool in one location exposes it to risks like theft, natural and human disasters. Keeping regular backups of the entire pool is vital when data needs to - be restored. ZFS provides a built-in serialization feature - that can send a stream representation of the data to standard - output. 
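Because the stream is written to ordinary standard output, it can be combined with other standard tools. As a minimal sketch (the snapshot name matches the one created in the example below, while the host backuphost and the pool backuppool are placeholders), a stream can be compressed into a file or piped over ssh to another machine running ZFS:

&prompt.root; zfs send mypool@backup1 | gzip > /backup/backup1.gz
&prompt.root; zfs send mypool@backup1 | ssh backuphost zfs receive backuppool/mypool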
Using this technique, it is possible to not only - store the data on another pool connected to the local system, - but also to send it over a network to another system that runs - ZFS. To achieve this replication, ZFS uses filesystem - snapshots (see the section on ZFS snapshots for how they - work) to send them from one location to another. The commands - for this operation are zfs send and - zfs receive, respectively. + be restored. ZFS provides a built-in + serialization feature that can send a stream representation of + the data to standard output. Using this technique, it is + possible to not only store the data on another pool connected + to the local system, but also to send it over a network to + another system that runs ZFS. To achieve this replication, + ZFS uses filesystem snapshots (see the + section on ZFS snapshots) to send + them from one location to another. The commands for this + operation are zfs send and + zfs receive, respectively. The following examples will demonstrate the functionality - of ZFS replication using these two pools: + of ZFS replication using these two + pools: - &prompt.root; zpool list + &prompt.root; zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT backup 960M 77K 896M 0% 1.00x ONLINE - mypool 984M 43.7M 940M 4% 1.00x ONLINE - @@ -1277,36 +1286,42 @@ mypool 984M 43.7M 940M 4% 1.00x primary pool where data is written to and read from on a regular basis. A second pool, backup is used as a standby in case - the primary pool becomes offline. Note that this is not done - automatically by ZFS, but rather done by a system - administrator in case it is needed. First, a snapshot is - created on mypool to have a copy - of the current state of the data to send to the pool - backup. + the primary pool becomes unavailable. Note that this + fail-over is not done automatically by ZFS, + but rather must be done by a system administrator in the event + that it is needed. Replication requires a snapshot to provide + a consistent version of the file system to be transmitted. + Once a snapshot of mypool has been + created it can be copied to the + backup pool. + ZFS only replicates snapshots, changes + since the most recent snapshot will not be replicated. - &prompt.root; zfs snapshot mypool@backup1 -&prompt.root; zfs list -t snapshot + &prompt.root; zfs snapshot mypool@backup1 +&prompt.root; zfs list -t snapshot NAME USED AVAIL REFER MOUNTPOINT mypool@backup1 0 - 43.6M - Now that a snapshot exists, zfs send can be used to create a stream representing the contents of - the snapshot locally or remotely to another pool. The stream - must be written to the standard output, otherwise ZFS will - produce an error like in this example: + the snapshot, which can be stored as a file, or received by + another pool. The stream will be written to standard + output, which will need to be redirected to a file or pipe + otherwise ZFS will produce an error: - &prompt.root; zfs send mypool@backup1 + &prompt.root; zfs send mypool@backup1 Error: Stream can not be written to a terminal. You must redirect standard output. - The correct way to use zfs send is to - redirect it to a location like the mounted backup pool. - Afterwards, that pool should have the size of the snapshot - allocated, which means all the data contained in the snapshot - was stored on the backup pool. + To backup a dataset with zfs send, + redirect to a file located on the mounted backup pool. 
First + ensure that the pool has enough free space to accommodate the + size of the snapshot you are sending, which means all of the + data contained in the snapshot, not only the changes in that + snapshot. - &prompt.root; zfs send mypool@backup1 > /backup/backup1 -&prompt.root; zpool list + &prompt.root; zfs send mypool@backup1 > /backup/backup1 +&prompt.root; zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT backup 960M 63.7M 896M 6% 1.00x ONLINE - mypool 984M 43.7M 940M 4% 1.00x ONLINE - @@ -1314,8 +1329,32 @@ mypool 984M 43.7M 940M 4% 1.00x The zfs send transferred all the data in the snapshot called backup1 to the pool named backup. Creating - and sending these snapshots could be done automatically by a - cron job. + and sending these snapshots could be done automatically with a + &man.cron.8; job. + + Instead of storing the backups as archive files, + ZFS can receive them as a live file system, + allowing the backed up data to be accessed directly. + To get to the actual data contained in those streams, the + reverse operation of zfs send must be used + to transform the streams back into files and directories. The + command is zfs receive. The example below + combines zfs send and + zfs receive using a pipe to copy the data + from one pool to another. This way, the data can be used + directly on the receiving pool after the transfer is complete. + A dataset can only be replicated to an empty dataset. + + &prompt.root; zfs snapshot mypool@replica1 +&prompt.root; zfs send -v mypool@replica1 | zfs receive backup/mypool +send from @ to mypool@replica1 estimated size is 50.1M +total estimated size is 50.1M +TIME SENT SNAPSHOT + +&prompt.root; zpool list +NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT +backup 960M 63.7M 896M 6% 1.00x ONLINE - +mypool 984M 43.7M 940M 4% 1.00x ONLINE - ZFS Incremental Backups @@ -1652,8 +1691,8 @@ mypool 50.0M 878M 44. When a new block is a duplicate of an existing block, ZFS writes an additional reference to the existing data instead of the whole duplicate block. - Tremendous space savings are possible if the data contains many - duplicated files or repeated information. Be warned: + Tremendous space savings are possible if the data contains + many duplicated files or repeated information. Be warned: deduplication requires an extremely large amount of memory, and most of the space savings can be had without the extra cost by enabling compression instead. @@ -1761,15 +1800,16 @@ dedup = 1.05, compress = 1.11, copies = Delegated Administration A comprehensive permission delegation system allows - unprivileged users to perform ZFS administration functions. For - example, if each user's home directory is a dataset, users can - be given permission to create and destroy snapshots of their - home directories. A backup user can be given permission to use - ZFS replication features. A usage statistics script can be - allowed to run with access only to the space utilization data - for all users. It is even possible to delegate the ability to - delegate permissions. Permission delegation is possible for - each subcommand and most ZFS properties. + unprivileged users to perform ZFS + administration functions. For example, if each user's home + directory is a dataset, users can be given permission to create + and destroy snapshots of their home directories. A backup user + can be given permission to use ZFS + replication features. A usage statistics script can be allowed + to run with access only to the space utilization data for all + users. 
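As a minimal sketch (the user someuser and the dataset mypool/home/someuser are placeholders), zfs allow grants an unprivileged user the rights needed to snapshot and send their own home dataset, and zfs unallow revokes those rights again:

&prompt.root; zfs allow someuser snapshot,send mypool/home/someuser
&prompt.root; zfs unallow someuser snapshot,send mypool/home/someuser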
It is even possible to delegate the ability to delegate + permissions. Permission delegation is possible for each + subcommand and most ZFS properties. Delegating Dataset Creation @@ -2115,8 +2155,8 @@ vfs.zfs.vdev.cache.size="5M" Log - ZFS - Log Devices, also known as ZFS Intent Log - (ZFS + Intent Log (ZIL) move the intent log from the regular pool devices to a dedicated device, typically an