From owner-svn-doc-projects@FreeBSD.ORG Mon Nov 25 06:13:43 2013
Return-Path:
Delivered-To: svn-doc-projects@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1])
 (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by hub.freebsd.org (Postfix) with ESMTPS id D3B5969C;
 Mon, 25 Nov 2013 06:13:43 +0000 (UTC)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1.freebsd.org (Postfix) with ESMTPS id C2BDD2D25;
 Mon, 25 Nov 2013 06:13:43 +0000 (UTC)
Received: from svn.freebsd.org ([127.0.1.70])
 by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id rAP6Dh6c004623;
 Mon, 25 Nov 2013 06:13:43 GMT (envelope-from wblock@svn.freebsd.org)
Received: (from wblock@localhost)
 by svn.freebsd.org (8.14.7/8.14.5/Submit) id rAP6DhIU004622;
 Mon, 25 Nov 2013 06:13:43 GMT (envelope-from wblock@svn.freebsd.org)
Message-Id: <201311250613.rAP6DhIU004622@svn.freebsd.org>
From: Warren Block
Date: Mon, 25 Nov 2013 06:13:43 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject: svn commit: r43243 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
X-SVN-Group: doc-projects
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-doc-projects@freebsd.org
X-Mailman-Version: 2.1.16
Precedence: list
List-Id: SVN commit messages for doc projects trees
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Mon, 25 Nov 2013 06:13:43 -0000

Author: wblock
Date: Mon Nov 25 06:13:43 2013
New Revision: 43243

URL: http://svnweb.freebsd.org/changeset/doc/43243

Log:
  Whitespace-only fixes, translators please ignore.
Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Mon Nov 25 05:58:08 2013	(r43242)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Mon Nov 25 06:13:43 2013	(r43243)
@@ -567,12 +567,12 @@ errors: No known data errors
      as in the case of RAID-Z, the other option is to add a vdev
      to the pool.  It is possible, but discouraged, to mix vdev
      types.  ZFS stripes data across each of the vdevs.  For example, if
-      there are two mirror vdevs, then this is effectively a RAID
-      10, striping the writes across the two sets of mirrors.
-      Because of the way that space is allocated in ZFS to attempt
-      to have each vdev reach 100% full at the same time, there is a
-      performance penalty if the vdevs have different amounts of
-      free space.
+      there are two mirror vdevs, then this is effectively a
+      RAID 10, striping the writes across the two
+      sets of mirrors.  Because of the way that space is allocated
+      in ZFS to attempt to have each vdev reach
+      100% full at the same time, there is a performance penalty if
+      the vdevs have different amounts of free space.

      Currently, vdevs cannot be removed from a zpool, and disks
      can only be removed from a mirror if there is enough remaining
@@ -604,8 +604,8 @@ errors: No known data errors
      available, but performance may be impacted because missing
      data will need to be calculated from the available redundancy.
      To restore the vdev to a fully functional state, the failed
-      physical device must be replaced, and ZFS must
-      be instructed to begin the
+      physical device must be replaced, and ZFS
+      must be instructed to begin the
      resilver operation, where data
      that was on the failed device will be recalculated from
      available redundancy and written to the replacement
@@ -689,13 +689,13 @@ errors: No known data errors
      Displaying Recorded Pool History

-      ZFS records all the commands that were issued to
-      administer the pool.  These include the creation of datasets,
-      changing properties, or when a disk has been replaced in
-      the pool.  This history is useful for reviewing how a pool was created and
-      which user did a specific action and when.
-      History is not kept in a log file, but is a part of the pool
-      itself.  Because of that, history cannot be altered
+      ZFS records all the commands that were
+      issued to administer the pool.  These include the creation of
+      datasets, changing properties, or when a disk has been
+      replaced in the pool.  This history is useful for reviewing
+      how a pool was created and which user did a specific action
+      and when.  History is not kept in a log file, but is a part of
+      the pool itself.  Because of that, history cannot be altered
      after the fact unless the pool is destroyed.  The command to
      review this history is aptly named
      zpool history:
@@ -732,11 +732,10 @@ History for 'tank':
 2013-02-27.18:51:13 [internal create txg:55] dataset = 39
 2013-02-27.18:51:18 zfs create tank/backup

-      A more-detailed history is invoked by
-      adding -l.
-      Log records are shown in long format, including information
-      like the name of the user who issued the command and the hostname on
-      which the change was made.
+      A more-detailed history is invoked by adding
+      -l.  Log records are shown in long format,
+      including information like the name of the user who issued the
+      command and the hostname on which the change was made.
&prompt.root; zpool history -l
History for 'tank':
@@ -758,9 +757,9 @@ History for 'tank':
      Both options to zpool history can
      be combined to give the most detailed information possible for
-      any given pool.  Pool history provides valuable
-      information when tracking down what actions were
-      performed or when more detailed output is needed for debugging.
+      any given pool.  Pool history provides valuable information
+      when tracking down what actions were performed or when more
+      detailed output is needed for debugging.
@@ -820,11 +819,11 @@ data 288G 1.53T
      A pool consisting of one or more mirror vdevs can be
      split into a second pool.  The last member of each mirror
      (unless otherwise specified) is detached and used to create a
-      new pool containing the same data.  It is recommended that
-      the operation first be attempted with the
-      parameter.  The details of the proposed
-      operation are displayed without actually performing it.  This helps
-      ensure the operation will happen as expected.
+      new pool containing the same data.  It is recommended that the
+      operation first be attempted with the
+      parameter.  The details of the proposed operation are
+      displayed without actually performing it.  This helps ensure
+      the operation will happen as expected.
@@ -841,18 +840,17 @@ data 288G 1.53T
      Creating & Destroying Datasets

      Unlike traditional disks and volume managers, space
-      in ZFS is not preallocated.
-      With traditional file systems, once all of the space was
-      partitioned and assigned, there was no way to
-      add an additional file system without adding a new disk.
-      With ZFS, new file systems can be created at any time.
-      Each
-      dataset has
-      properties including features like compression, deduplication,
-      caching and quotas, as well as other useful properties like
-      readonly, case sensitivity, network file sharing, and a mount
-      point.  Each separate dataset can be administered,
-      delegated,
+      in ZFS is not preallocated.  With traditional
+      file systems, once all of the space was partitioned and
+      assigned, there was no way to add an additional file system
+      without adding a new disk.  With ZFS, new
+      file systems can be created at any time.  Each dataset
+      has properties including features like compression,
+      deduplication, caching and quotas, as well as other useful
+      properties like readonly, case sensitivity, network file
+      sharing, and a mount point.  Each separate dataset can be
+      administered, delegated,
      replicated, snapshotted, jailed, and destroyed as a
@@ -871,7 +869,7 @@ data 288G 1.53T
      is asynchronous, and the free space may take several minutes
      to appear in the pool.  The freeing
      property, accessible with zpool get freeing
-      poolname indicates how
+      poolname indicates how
      many datasets are having their blocks freed in the
      background.  If there are child datasets, like snapshots or other
@@ -894,16 +892,17 @@ data 288G 1.53T
      /dev/zvol/poolname/dataset.
      This allows the volume to be used for other file systems, to
      back the disks of a virtual machine, or to be exported using
-      protocols like iSCSI or HAST.
+      protocols like iSCSI or
+      HAST.

-      A volume can be formatted with any file system.
-      To the user, it will appear as if they are working with
-      a regular disk using that specific filesystem and not ZFS.
-      Putting ordinary file systems on
-      ZFS volumes provides features those file systems would not normally have. For example,
-      using the compression property on a
-      250 MB volume allows creation of a compressed FAT
-      filesystem.
+      A volume can be formatted with any file system.  To the
+      user, it will appear as if they are working with a regular
+      disk using that specific filesystem and not
+      ZFS.  Putting ordinary file systems on
+      ZFS volumes provides features those file
+      systems would not normally have.  For example, using the
+      compression property on a 250 MB volume allows creation
+      of a compressed FAT filesystem.
&prompt.root; zfs create -V 250m -o compression=on tank/fat32
&prompt.root; zfs list tank
@@ -927,16 +926,16 @@ Filesystem           Size Used Avail Cap
      Renaming a Dataset

      The name of a dataset can be changed with
-      zfs rename.  rename can also be
-      used to change the parent of a dataset.  Renaming a dataset to
-      be under a different parent dataset will change the value of
-      those properties that are inherited by the child dataset.
-      When a dataset is renamed, it is unmounted and then remounted
-      in the new location (inherited from the parent dataset).  This
-      behavior can be prevented with .
-      Due to the nature of snapshots, they cannot be
-      renamed outside of the parent dataset.  To rename a recursive
-      snapshot, specify , and all
+      zfs rename.  rename can
+      also be used to change the parent of a dataset.  Renaming a
+      dataset to be under a different parent dataset will change the
+      value of those properties that are inherited by the child
+      dataset.  When a dataset is renamed, it is unmounted and then
+      remounted in the new location (inherited from the parent
+      dataset).  This behavior can be prevented with
+      .  Due to the nature of snapshots, they
+      cannot be renamed outside of the parent dataset.  To rename a
+      recursive snapshot, specify , and all
      snapshots with the same specified snapshot will be
      renamed.
@@ -949,19 +948,21 @@ Filesystem           Size Used Avail Cap
      automatically inherited from the parent dataset, but can be
      overridden locally.  Set a property on a dataset with
      zfs set
-      property=value
-      dataset.  Most properties
-      have a limited set of valid values, zfs get
-      will display each possible property and its valid values.
-      Most properties can be reverted to their inherited values
-      using zfs inherit.
-
-      It is possible to set user-defined properties.
-      They become part of the dataset configuration and can be used
-      to provide additional information about the dataset or its
+      property=value
+      dataset.
+      Most
+      properties have a limited set of valid values,
+      zfs get will display each possible property
+      and its valid values.  Most properties can be reverted to
+      their inherited values using
+      zfs inherit.
+
+      It is possible to set user-defined properties.  They
+      become part of the dataset configuration and can be used to
+      provide additional information about the dataset or its
      contents.  To distinguish these custom properties from the
-      ones supplied as part of ZFS, a colon (:)
-      is used to create a custom namespace for the property.
+      ones supplied as part of ZFS, a colon
+      (:) is used to create a custom namespace
+      for the property.

&prompt.root; zfs set custom:costcenter=1234 tank
&prompt.root; zfs get custom:costcenter tank
@@ -969,11 +970,10 @@ NAME PROPERTY           VALUE SOURCE
 tank custom:costcenter 1234  local

      To remove a custom property, use
-      zfs inherit with
-      .  If the custom property is not
-      defined in any of the parent datasets, it will be removed
-      completely (although the changes are still recorded in the
-      pool's history).
+      zfs inherit with .  If
+      the custom property is not defined in any of the parent
+      datasets, it will be removed completely (although the changes
+      are still recorded in the pool's history).

&prompt.root; zfs inherit -r custom:costcenter tank
&prompt.root; zfs get custom:costcenter tank
@@ -989,12 +989,11 @@ tank custom:costcenter -
      Snapshots are one of the most
      powerful features of ZFS.  A snapshot
      provides a point-in-time copy of the dataset.  The
-      parent dataset can be easily rolled back to that snapshot state.  Create a
-      snapshot with zfs snapshot
-      dataset@snapshotname.
+      parent dataset can be easily rolled back to that snapshot
+      state.  Create a snapshot with zfs snapshot
+      dataset@snapshotname.
      Adding creates a snapshot recursively,
-      with the same name on all child
-      datasets.
+      with the same name on all child datasets.
      Snapshots are mounted in a hidden directory under the
      parent dataset: , with the general format
      refreservation=size.

-      This command shows any reservations or refreservations that exist on
-      storage/home/bob:
+      This command shows any reservations or refreservations
+      that exist on storage/home/bob:

&prompt.root; zfs get reservation storage/home/bob
&prompt.root; zfs get refreservation storage/home/bob
@@ -1202,25 +1201,24 @@ tank custom:costcenter -
      Deduplication uses the
      checksum of each block to detect duplicate blocks.  When a new
      block is a duplicate of an existing block,
-      ZFS writes an additional reference to
-      the existing data instead of the whole duplicate block.  This can offer
-      tremendous space savings if the data contains many discrete
-      copies of the file information.  Be warned: deduplication requires an
-      extremely large amount of memory, and most of the space
-      savings can be had without the extra cost by enabling
-      compression instead.
+      ZFS writes an additional reference to the
+      existing data instead of the whole duplicate block.  This can
+      offer tremendous space savings if the data contains many
+      discrete copies of the file information.  Be warned:
+      deduplication requires an extremely large amount of memory,
+      and most of the space savings can be had without the extra
+      cost by enabling compression instead.

      To activate deduplication, set the
      dedup property on the target pool:

&prompt.root; zfs set dedup=on pool

-      Only new data being
-      written to the pool will be deduplicated.  Data that has
-      already been written to the pool will not be deduplicated merely by
-      activating this option.  As such, a pool with a freshly
-      activated deduplication property will look something like this
-      example:
+      Only new data being written to the pool will be
+      deduplicated.  Data that has already been written to the pool
+      will not be deduplicated merely by activating this option.
+      As
+      such, a pool with a freshly activated deduplication property
+      will look something like this example:

&prompt.root; zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
@@ -1228,10 +1226,10 @@ pool 2.84G 2.19M 2.83G    0%    1.00x ONLINE
      The DEDUP column shows the actual rate of
      deduplication for the pool.  A value of
-      1.00x shows that data has not been deduplicated
-      yet.  In the next example,
-      the ports tree is copied three times into different
-      directories on the deduplicated pool created above.
+      1.00x shows that data has not been
+      deduplicated yet.  In the next example, the ports tree is
+      copied three times into different directories on the
+      deduplicated pool created above.

&prompt.root; zpool list
for d in dir1 dir2 dir3; do
@@ -1247,13 +1245,14 @@ pool 2.84G 20.9M 2.82G    0%    3.00x ONLINE -
      The DEDUP column now shows a factor of
      3.00x.  The multiple copies of the ports tree
      data were detected and deduplicated, taking only a third
-      of the space.  The potential for space savings
-      can be enormous, but comes at the cost of having enough memory
-      to keep track of the deduplicated blocks.
+      of the space.  The potential for space savings can be
+      enormous, but comes at the cost of having enough memory to
+      keep track of the deduplicated blocks.

      Deduplication is not always beneficial, especially when
-      there is not much redundant data on a pool.  ZFS
-      can show potential space savings by simulating deduplication on an existing pool:
+      there is not much redundant data on a pool.
+      ZFS can show potential space savings by
+      simulating deduplication on an existing pool:

&prompt.root; zdb -S pool
Simulated DDT histogram:
@@ -1282,12 +1281,12 @@ dedup = 1.05, compress = 1.11, copies =
 1.16 is a very poor ratio that is
      mostly influenced by compression.  Activating deduplication on
      this pool would not save any significant amount of space.
      Using
-      the formula dedup * compress / copies = deduplication
-      ratio, system administrators can plan the
-      storage allocation more towards having multiple copies of data
-      or by having a decent compression rate in order to utilize the
-      space savings that deduplication provides.  As a rule of
-      thumb, compression should be used before deduplication
+      the formula dedup * compress / copies =
+      deduplication ratio, system administrators can plan
+      the storage allocation more towards having multiple copies of
+      data or by having a decent compression rate in order to
+      utilize the space savings that deduplication provides.  As a
+      rule of thumb, compression should be used before deduplication
      due to the much lower memory requirements.
@@ -1296,15 +1295,16 @@ dedup = 1.05, compress = 1.11, copies =
      zfs jail and the corresponding
      jailed property are used to delegate a
-      ZFS dataset to a Jail.  zfs jail
-      jailid attaches a dataset
-      to the specified jail, and zfs unjail
-      detaches it.  For the dataset to be administered from
-      within a jail, the jailed property must be
-      set.  Once a dataset is jailed, it can no longer be mounted on
-      the host because the jail administrator may have set
-      unacceptable mount points.
+      ZFS dataset to a
+      Jail.
+      zfs jail jailid
+      attaches a dataset to the specified jail, and
+      zfs unjail detaches it.  For the dataset to
+      be administered from within a jail, the
+      jailed property must be set.  Once a
+      dataset is jailed, it can no longer be mounted on the host
+      because the jail administrator may have set unacceptable mount
+      points.
@@ -1633,8 +1633,8 @@ vfs.zfs.vdev.cache.size="5M"
@@ -1793,13 +1792,12 @@ vfs.zfs.vdev.cache.size="5M"
      ZFS
-      does not require a &man.fsck.8; after an unexpected
-      shutdown.
+      In the event of a shorn write (a system crash or power
+      loss in the middle of writing a file), the entire
+      original contents of the file are still available and
+      the incomplete write is discarded.
+      This also means that
+      ZFS does not require a &man.fsck.8;
+      after an unexpected shutdown.
@@ -2019,11 +2018,11 @@ vfs.zfs.vdev.cache.size="5M"
      scrub is run at least once
-      each quarter.  Checksums of each block are tested as
-      they are read in normal use, but a scrub operation makes
-      sure even infrequently used blocks are checked for
-      silent corruption.
+      it is recommended that a scrub is run
+      at least once each quarter.  Checksums of each block are
+      tested as they are read in normal use, but a scrub
+      operation makes sure even infrequently used blocks are
+      checked for silent corruption.