From owner-svn-doc-projects@FreeBSD.ORG Mon Nov 25 04:36:46 2013
From: Warren Block
Date: Mon, 25 Nov 2013 04:36:45 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject: svn commit: r43241 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-Id: <201311250436.rAP4aji3071287@svn.freebsd.org>

Author: wblock
Date: Mon Nov 25 04:36:45 2013
New Revision: 43241
URL: http://svnweb.freebsd.org/changeset/doc/43241

Log:
  More whitespace fixes, translators please ignore.
Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Mon Nov 25 03:53:50 2013 (r43240) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Mon Nov 25 04:36:45 2013 (r43241) @@ -303,9 +303,8 @@ example/data 17547008 0 175 &prompt.root; zfs create storage/home - Now compression and keeping extra - copies of directories and files can be enabled with these - commands: + Now compression and keeping extra copies of directories + and files can be enabled with these commands: &prompt.root; zfs set copies=2 storage/home &prompt.root; zfs set compression=gzip storage/home @@ -394,15 +393,15 @@ storage/home 26320512 0 26320512 &prompt.root; zpool status -x - If all pools are Online and everything is - normal, the message indicates that: + If all pools are + Online and everything + is normal, the message indicates that: all pools are healthy - If there is an issue, perhaps a disk is in the Offline state, the pool - state will look similar to: + If there is an issue, perhaps a disk is in the + Offline state, the + pool state will look similar to: pool: storage state: DEGRADED @@ -424,8 +423,7 @@ config: errors: No known data errors This indicates that the device was previously taken - offline by the administrator with this - command: + offline by the administrator with this command: &prompt.root; zpool offline storage da1 @@ -436,8 +434,8 @@ errors: No known data errors &prompt.root; zpool replace storage da1 From here, the status may be checked again, this time - without so that all pools - are shown: + without so that all pools are + shown: &prompt.root; zpool status storage pool: storage @@ -518,25 +516,25 @@ errors: No known data errors The administration of ZFS is divided between two main utilities. The zpool utility which controls the operation of the pool and deals with adding, removing, - replacing and managing disks, and the zfs utility, which - deals with creating, destroying and managing datasets (both - filesystems and volumes). + replacing and managing disks, and the + zfs utility, + which deals with creating, destroying and managing datasets + (both filesystems and + volumes). Creating & Destroying Storage Pools Creating a ZFS Storage Pool (zpool) involves making a number of decisions that are relatively - permanent because the structure of the pool cannot be - changed after the pool has been created. The most important - decision is what types of vdevs to group the physical disks - into. See the list of vdev types for details about - the possible options. After the pool has been created, most - vdev types do not allow additional disks to be added to the - vdev. The exceptions are mirrors, which allow additional + permanent because the structure of the pool cannot be changed + after the pool has been created. The most important decision + is what types of vdevs to group the physical disks into. See + the list of + vdev types for details + about the possible options. After the pool has been created, + most vdev types do not allow additional disks to be added to + the vdev. The exceptions are mirrors, which allow additional disks to be added to the vdev, and stripes, which can be upgraded to mirrors by attaching an additional disk to the vdev. 
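As an illustrative sketch of the stripe-to-mirror upgrade mentioned above (the pool name mypool and the disks da0 and da1 are placeholder names, not taken from the committed text), a single-disk pool can be converted into a mirror by attaching a second disk to the existing vdev:

&prompt.root; zpool create mypool da0
&prompt.root; zpool attach mypool da0 da1
&prompt.root; zpool status mypool

Once the resilver triggered by the attach completes, zpool status should show da0 and da1 grouped under a single mirror vdev.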
Although additional vdevs can be added to a pool, the @@ -565,21 +563,20 @@ errors: No known data errors linkend="zfs-term-vdev">vdev types allow disks to be added to the vdev after creation. - When adding disks to the existing vdev is not - an option, as in the case of RAID-Z, the other option is - to add a vdev to the pool. It is possible, but - discouraged, to mix vdev types. ZFS stripes data across each - of the vdevs. For example, if there are two mirror vdevs, - then this is effectively a RAID 10, striping the writes across - the two sets of mirrors. Because of the way that space is - allocated in ZFS to attempt to have each vdev reach - 100% full at the same time, there is a performance penalty if - the vdevs have different amounts of free space. + When adding disks to the existing vdev is not an option, + as in the case of RAID-Z, the other option is to add a vdev to + the pool. It is possible, but discouraged, to mix vdev types. + ZFS stripes data across each of the vdevs. For example, if + there are two mirror vdevs, then this is effectively a RAID + 10, striping the writes across the two sets of mirrors. + Because of the way that space is allocated in ZFS to attempt + to have each vdev reach 100% full at the same time, there is a + performance penalty if the vdevs have different amounts of + free space. Currently, vdevs cannot be removed from a zpool, and disks can only be removed from a mirror if there is enough remaining redundancy. - @@ -601,23 +598,23 @@ errors: No known data errors Dealing with Failed Devices When a disk in a ZFS pool fails, the vdev that the disk - belongs to will enter the Degraded state. In this - state, all of the data stored on the vdev is still available, - but performance may be impacted because missing data will need - to be calculated from the available redundancy. To restore - the vdev to a fully functional state the failed physical - device will need to be replace replaced, and ZFS must be - instructed to begin the resilver operation, where - data that was on the failed device will be recalculated + belongs to will enter the + Degraded state. In + this state, all of the data stored on the vdev is still + available, but performance may be impacted because missing + data will need to be calculated from the available redundancy. + To restore the vdev to a fully functional state the failed + physical device will need to be replace replaced, and ZFS must + be instructed to begin the + resilver operation, + where data that was on the failed device will be recalculated from the available redundancy and written to the replacement device. Once this process has completed the vdev will return to Online status. If the vdev does not have any redundancy, or if multiple devices have failed and there is insufficient redundancy to - compensate, the pool will enter the Faulted state. If a + compensate, the pool will enter the + Faulted state. If a sufficient number of devices cannot be reconnected to the pool then the pool will be inoperative, and data will need to be restored from backups. @@ -629,14 +626,14 @@ errors: No known data errors The usable size of a redundant ZFS pool is limited by the size of the smallest device in the vdev. If each device in the vdev is replaced sequentially, after the smallest device - has completed the replace or resilver operation, the - pool can grow based on the size of the new smallest device. - This expansion can be triggered by using zpool - online with the parameter on - each device. 
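The sequential-replacement expansion described here can be sketched with placeholder names (pool mypool, old disks da0 and da1, larger replacements da2 and da3); the option name is not preserved in the text above, so the expand flag -e of zpool online is assumed in this sketch:

&prompt.root; zpool replace mypool da0 da2
&prompt.root; zpool replace mypool da1 da3
&prompt.root; zpool online -e mypool da2 da3
&prompt.root; zpool list mypool

Each replace should be allowed to finish resilvering before the next disk is swapped, so that the vdev never loses its remaining redundancy.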
After the expansion of each device, the - additional space will become available in the pool. + has completed the + replace or + resilver operation, + the pool can grow based on the size of the new smallest + device. This expansion can be triggered by using + zpool online with the + parameter on each device. After the expansion of each device, + the additional space will become available in the pool. @@ -759,26 +756,26 @@ History for 'tank': on the other system can clearly be distinguished by the hostname that is recorded for each command. - Both options to zpool history - can be combined to give the most detailed - information possible for any given pool. The pool history can - be a valuable information source when tracking down what - actions were performed or when more - detailed output is needed for debugging a ZFS pool. + Both options to zpool history can be + combined to give the most detailed information possible for + any given pool. The pool history can be a valuable + information source when tracking down what actions were + performed or when more detailed output is needed for debugging + a ZFS pool. Performance Monitoring ZFS has a built-in monitoring system that can display - statistics about I/O happening on the pool in real-time. - It shows the amount of free and used space on the pool, how - many read and write operations are being performed per second, - and how much I/O bandwidth is currently being utilized for - read and write operations. By default, all pools in the - system will be monitored and displayed. A pool name can be - provided as part of the command to monitor just that specific - pool. A basic example: + statistics about I/O happening on the pool in real-time. It + shows the amount of free and used space on the pool, how many + read and write operations are being performed per second, and + how much I/O bandwidth is currently being utilized for read + and write operations. By default, all pools in the system + will be monitored and displayed. A pool name can be provided + as part of the command to monitor just that specific pool. A + basic example: &prompt.root; zpool iostat capacity operations bandwidth @@ -790,11 +787,13 @@ data 288G 1.53T 2 11 number can be specified as the last parameter, indicating the frequency in seconds to wait between updates. ZFS will print the next statistic line after each interval. Press - CtrlC - to stop this continuous monitoring. Alternatively, give a - second number on the command line after the interval to - specify the total number of statistics to display. + + Ctrl + C + to stop this continuous monitoring. + Alternatively, give a second number on the command line after + the interval to specify the total number of statistics to + display. Even more detailed pool I/O statistics can be displayed with . In this case each storage device in @@ -850,22 +849,22 @@ data 288G 1.53T partitioned and assigned to a file system, there was no way to add an additional file system without adding a new disk. ZFS also allows you to set a number of - properties on each dataset. These properties - include features like compression, deduplication, caching and - quoteas, as well as other useful properties like readonly, - case sensitivity, network file sharing and mount point. Each - separate dataset can be administered, delegated, replicated, snapshoted, jailed, and destroyed as a unit. - This offers many advantages to creating a separate dataset for - each different type or set of files. 
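A minimal sketch of the per-dataset property model described above, with placeholder names (the pool storage and the datasets logs and archive are illustrative only):

&prompt.root; zfs create storage/logs
&prompt.root; zfs set compression=lz4 storage/logs
&prompt.root; zfs create storage/archive
&prompt.root; zfs set readonly=on storage/archive
&prompt.root; zfs get compression,readonly storage/logs storage/archive

Each dataset can then be snapshotted, replicated, or destroyed on its own without affecting its siblings.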
The only drawback to - having an extremely large number of datasets, is that some - commands like zfs list will be slower, - and the mounting of an extremely large number of datasets - (100s or 1000s) can make the &os; boot process take + properties on each + dataset. These + properties include features like compression, deduplication, + caching and quoteas, as well as other useful properties like + readonly, case sensitivity, network file sharing and mount + point. Each separate dataset can be administered, + delegated, + replicated, + snapshoted, + jailed, and destroyed as a + unit. This offers many advantages to creating a separate + dataset for each different type or set of files. The only + drawback to having an extremely large number of datasets, is + that some commands like zfs list will be + slower, and the mounting of an extremely large number of + datasets (100s or 1000s) can make the &os; boot process take longer. Destroying a dataset is much quicker than deleting all @@ -878,8 +877,8 @@ data 288G 1.53T property, accessible with zpool get freeing poolname indicates how many datasets are having their blocks freed in the background. - If there are child datasets, such as snapshots or other + If there are child datasets, such as + snapshots or other datasets, then the parent cannot be destroyed. To destroy a dataset and all of its children, use the parameter to recursively destroy the dataset and all of its @@ -926,16 +925,15 @@ Filesystem Size Used Avail Cap regular filesystem dataset. The operation is nearly instantaneous, but it may take several minutes for the free space to be reclaimed in the background. - Renaming a Dataset - The name of a dataset can be changed using zfs - rename. The rename command can also be used to - change the parent of a dataset. Renaming a dataset to be - under a different parent dataset will change the value of + The name of a dataset can be changed using + zfs rename. The rename command can also be + used to change the parent of a dataset. Renaming a dataset to + be under a different parent dataset will change the value of those properties that are inherited by the child dataset. When a dataset is renamed, it is unmounted and then remounted in the new location (inherited from the parent dataset). This @@ -1004,12 +1002,12 @@ tank custom:costcenter - By default, snapshots are mounted in a hidden directory under the parent dataset: .zfs/snapshots/snapshotname. + class="directory">.zfs/snapshots/snapshotname. Individual files can easily be restored to a previous state by copying them from the snapshot back to the parent dataset. It is also possible to revert the entire dataset back to the - point-in-time of the snapshot using zfs - rollback. + point-in-time of the snapshot using + zfs rollback. Snapshots consume space based on how much the parent file system has changed since the time of the snapshot. The @@ -1018,7 +1016,7 @@ tank custom:costcenter - To destroy a snapshot and recover the space consumed by the overwritten or deleted files, run zfs destroy - dataset@snapshot. + dataset@snapshot. The parameter will recursively remove all snapshots with the same name under the parent dataset. Adding the parameters to the destroy command @@ -1035,12 +1033,12 @@ tank custom:costcenter - only, is mounted, and can have its own properties. Once a clone has been created, the snapshot it was created from cannot be destroyed. The child/parent relationship between - the clone and the snapshot can be reversed using zfs - promote. 
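The snapshot, clone, and promote sequence described here can be sketched with placeholder names (the dataset storage/home, the snapshot name today, and the clone name storage/testing are illustrative, not from the committed text):

&prompt.root; zfs snapshot storage/home@today
&prompt.root; zfs clone storage/home@today storage/testing
&prompt.root; zfs promote storage/testing

After the promotion, the snapshot is accounted against storage/testing rather than storage/home, but the amount of space consumed does not change.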
After a clone has been promoted, the - snapshot becomes a child of the clone, rather than of the - original parent dataset. This will change how the space is - accounted, but not actually change the amount of space - consumed. + the clone and the snapshot can be reversed using + zfs promote. After a clone has been + promoted, the snapshot becomes a child of the clone, rather + than of the original parent dataset. This will change how the + space is accounted, but not actually change the amount of + space consumed. @@ -1052,17 +1050,17 @@ tank custom:costcenter - Dataset, User and Group Quotas - Dataset - quotas can be used to restrict the amount of space - that can be consumed by a particular dataset. Reference Quotas work in - very much the same way, except they only count the space used - by the dataset itself, excluding snapshots and child - datasets. Similarly user and group quotas can be used - to prevent users or groups from consuming all of the available - space in the pool or dataset. + Dataset quotas can + be used to restrict the amount of space that can be consumed + by a particular dataset. + Reference Quotas work + in very much the same way, except they only count the space + used by the dataset itself, excluding snapshots and child + datasets. Similarly + user and + group quotas can be + used to prevent users or groups from consuming all of the + available space in the pool or dataset. To enforce a dataset quota of 10 GB for storage/home/bob, use the @@ -1167,11 +1165,10 @@ tank custom:costcenter - Reservations guarantee a minimum amount of space will always be available - to a dataset. The reserved space will not - be available to any other dataset. This feature can be - especially useful to ensure that users cannot comsume all of - the free space, leaving none for an important dataset or log - files. + to a dataset. The reserved space will not be available to any + other dataset. This feature can be especially useful to + ensure that users cannot comsume all of the free space, + leaving none for an important dataset or log files. The general format of the reservation property is @@ -1189,7 +1186,7 @@ tank custom:costcenter - The same principle can be applied to the refreservation property for setting a Reference - Reservation, with the general format + Reservation, with the general format refreservation=size. To check if any reservations or refreservations exist on @@ -1209,13 +1206,13 @@ tank custom:costcenter - Deduplication - When enabled, Deduplication uses - the checksum of each block to detect duplicate blocks. When a - new block is about to be written and it is determined to be a - duplicate of an existing block, rather than writing the same - data again, ZFS just references the - existing data on disk an additional time. This can offer + When enabled, + Deduplication + uses the checksum of each block to detect duplicate blocks. + When a new block is about to be written and it is determined + to be a duplicate of an existing block, rather than writing + the same data again, ZFS just references + the existing data on disk an additional time. This can offer tremendous space savings if your data contains many discreet copies of the file information. Deduplication requires an extremely large amount of memory, and most of the space @@ -1343,12 +1340,12 @@ dedup = 1.05, compress = 1.11, copies = Delegating Dataset Creation zfs allow - someuser create - mydataset - gives the specified user permission to create child datasets - under the selected parent dataset. 
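A hedged sketch of the delegation just described, with placeholder names (the user alice and the dataset storage/home are illustrative; the mount permission is also delegated here so that the new dataset can be mounted, subject to the caveat that follows):

&prompt.root; zfs allow alice create,mount storage/home
&prompt.user; zfs create storage/home/projects

The second command is run by the unprivileged user, shown with the &prompt.user; prompt.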
There is a caveat: - creating a new dataset involves mounting it. That requires - setting the vfs.usermount &man.sysctl.8; to + someuser create + mydataset gives the + specified user permission to create child datasets under the + selected parent dataset. There is a caveat: creating a new + dataset involves mounting it. That requires setting the + vfs.usermount &man.sysctl.8; to 1 to allow non-root users to mount a filesystem. There is another restriction aimed at preventing abuse: non-root users must own the mountpoint where the file @@ -1359,14 +1356,14 @@ dedup = 1.05, compress = 1.11, copies = Delegating Permission Delegation zfs allow - someuser allow - mydataset - gives the specified user the ability to assign any permission - they have on the target dataset (or its children) to other - users. If a user has the snapshot - permission and the allow permission, that - user can then grant the snapshot permission - to some other users. + someuser allow + mydataset gives the + specified user the ability to assign any permission they have + on the target dataset (or its children) to other users. If a + user has the snapshot permission and the + allow permission, that user can then grant + the snapshot permission to some other + users. @@ -1401,8 +1398,8 @@ dedup = 1.05, compress = 1.11, copies = ZFS on i386 Some of the features provided by ZFS - are RAM-intensive, and may require tuning for - maximum efficiency on systems with limited + are RAM-intensive, and may require tuning for maximum + efficiency on systems with limited RAM. @@ -1411,16 +1408,15 @@ dedup = 1.05, compress = 1.11, copies = As a bare minimum, the total system memory should be at least one gigabyte. The amount of recommended RAM depends upon the size of the pool and - which ZFS features are used. A - general rule of thumb is 1 GB of RAM for every - 1 TB of storage. If the deduplication feature is used, - a general rule of thumb is 5 GB of RAM per TB of - storage to be deduplicated. While some users successfully - use ZFS with less RAM, - systems under heavy load - may panic due to memory exhaustion. Further tuning may be - required for systems with less than the recommended RAM - requirements. + which ZFS features are used. A general + rule of thumb is 1 GB of RAM for every 1 TB of + storage. If the deduplication feature is used, a general + rule of thumb is 5 GB of RAM per TB of storage to be + deduplicated. While some users successfully use + ZFS with less RAM, + systems under heavy load may panic due to memory exhaustion. + Further tuning may be required for systems with less than + the recommended RAM requirements. @@ -1686,7 +1682,7 @@ vfs.zfs.vdev.cache.size="5M"Log - ZFS Log Devices, also known as ZFS Intent Log (ZIL) + linkend="zfs-term-zil">ZIL) move the intent log from the regular pool devices to a dedicated device, typically an SSD. Having a dedicated log @@ -1703,7 +1699,7 @@ vfs.zfs.vdev.cache.size="5M"Cache - Adding a cache vdev to a zpool will add the storage of the cache to the L2ARC. + linkend="zfs-term-l2arc">L2ARC. Cache devices cannot be mirrored. Since a cache device only stores additional copies of existing data, there is no risk of data loss. @@ -1870,9 +1866,9 @@ vfs.zfs.vdev.cache.size="5M" Snapshot - The copy-on-write - (COW) design of + The + copy-on-write + (COW) design of ZFS allows for nearly instantaneous consistent snapshots with arbitrary names. 
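The dedicated log and cache vdevs described earlier in this section can be added to an existing pool; a minimal sketch with placeholder names (the pool tank, the SSD partition ada1p4 for the log, and the disk ada2 for the cache are illustrative only):

&prompt.root; zpool add tank log ada1p4
&prompt.root; zpool add tank cache ada2
&prompt.root; zpool status tank

The log device absorbs synchronous writes before they reach the main pool devices, while the cache device only holds extra copies of existing data, so its loss does not cause data loss.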
After taking a snapshot of a dataset (or a recursive snapshot of a @@ -1974,11 +1970,10 @@ vfs.zfs.vdev.cache.size="5M" LZ4 compression is only @@ -2082,11 +2077,10 @@ vfs.zfs.vdev.cache.size="5M" A reference quota limits the amount of space a - dataset can consume by enforcing a hard limit. - However, this hard limit includes only - space that the dataset references and does not include - space used by descendants, such as file systems or - snapshots. + dataset can consume by enforcing a hard limit. However, + this hard limit includes only space that the dataset + references and does not include space used by + descendants, such as file systems or snapshots. @@ -2145,8 +2139,8 @@ vfs.zfs.vdev.cache.size="5M"storage/home/bob, and another dataset tries to use all of the free space, at least 10 GB of space is reserved for this dataset. In - contrast to a regular reservation, + contrast to a regular + reservation, space used by snapshots and decendant datasets is not counted against the reservation. As an example, if a snapshot was taken of @@ -2186,9 +2180,9 @@ vfs.zfs.vdev.cache.size="5M"Individual devices can be put in an Offline state by the administrator if there is sufficient redundancy to avoid putting the pool - or vdev into a Faulted state. An - administrator may choose to offline a disk in + or vdev into a + Faulted state. + An administrator may choose to offline a disk in preparation for replacing it, or to make it easier to identify.
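The offline-for-replacement workflow that closes this section can be sketched with placeholder names (the pool storage and the disk da1 are illustrative, and the new disk is assumed to be inserted in the same slot):

&prompt.root; zpool offline storage da1
(physically swap the disk)
&prompt.root; zpool replace storage da1
&prompt.root; zpool status storage

The last command can be repeated to watch the resilver until the vdev returns to the Online state.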