From owner-svn-doc-projects@FreeBSD.ORG Sun Nov 24 23:53:50 2013
Return-Path:
Delivered-To: svn-doc-projects@freebsd.org
Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id CEBFA798; Sun, 24 Nov 2013 23:53:50 +0000 (UTC)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id BD4FD2D31; Sun, 24 Nov 2013 23:53:50 +0000 (UTC)
Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id rAONroub073314; Sun, 24 Nov 2013 23:53:50 GMT (envelope-from wblock@svn.freebsd.org)
Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id rAONrovn073313; Sun, 24 Nov 2013 23:53:50 GMT (envelope-from wblock@svn.freebsd.org)
Message-Id: <201311242353.rAONrovn073313@svn.freebsd.org>
From: Warren Block
Date: Sun, 24 Nov 2013 23:53:50 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject: svn commit: r43238 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
X-SVN-Group: doc-projects
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-doc-projects@freebsd.org
X-Mailman-Version: 2.1.16
Precedence: list
List-Id: SVN commit messages for doc projects trees
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Sun, 24 Nov 2013 23:53:51 -0000

Author: wblock
Date: Sun Nov 24 23:53:50 2013
New Revision: 43238

URL: http://svnweb.freebsd.org/changeset/doc/43238

Log:
  Edit for clarity, spelling, and redundancy.

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml  Sun Nov 24 23:25:14 2013  (r43237)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml  Sun Nov 24 23:53:50 2013  (r43238)
@@ -689,7 +689,7 @@ errors: No known data errors
- Displaying recorded Pool history
+ Displaying Recorded Pool History
 ZFS records all the commands that were issued to administer the pool. These include the creation of datasets,
@@ -709,13 +709,13 @@ History for 'tank':
 2013-02-27.18:51:09 zfs set checksum=fletcher4 tank
 2013-02-27.18:51:18 zfs create tank/backup
- The command output shows in it's basic form a timestamp
- followed by each zpool or
- zfs command that was executed on the pool.
+ The output shows
+ zpool and
+ zfs commands that were executed on the pool along with a timestamp.
 Note that only commands that altered the pool in some way are being recorded. Commands like zfs list are not part of the history. When there is no pool name provided
- for zpool history then the history of all
+ for zpool history, then the history of all
 pools will be displayed. The zpool history can show even more
@@ -758,12 +758,12 @@ History for 'tank':
 on the other system can clearly be distinguished by the hostname that is recorded for each command.
- Both options to the zpool history
- command can be combined together to give the most detailed
+ Both options to zpool history
+ can be combined to give the most detailed
 information possible for any given pool.
The pool history can - become a valuable information source when tracking down what - actions were performed or when it is needed to provide more - detailed output for debugging a ZFS pool. + be a valuable information source when tracking down what + actions were performed or when more + detailed output is needed for debugging a ZFS pool. @@ -974,9 +974,9 @@ Filesystem Size Used Avail Cap NAME PROPERTY VALUE SOURCE tank custom:costcenter 1234 local - To remove such a custom property again, use the - zfs inherit command with the - option. If the custom property is not + To remove such a custom property again, use + zfs inherit with + . If the custom property is not defined in any of the parent datasets, it will be removed completely (although the changes are still recorded in the pool's history). @@ -1057,7 +1057,7 @@ tank custom:costcenter - that can be consumed by a particular dataset. Reference Quotas work in very much the same way, except they only count the space used - by the dataset it self, excluding snapshots and child + by the dataset itself, excluding snapshots and child datasets. Similarly user and group quotas can be used @@ -1258,7 +1258,7 @@ for> done NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT pool 2.84G 20.9M 2.82G 0% 3.00x ONLINE - - The DEDUP column does now contain the + The DEDUP column now contains the value 3.00x. This indicates that ZFS detected the copies of the ports tree data and was able to deduplicate it at a ratio of 1/3. The space savings that this @@ -1269,8 +1269,7 @@ pool 2.84G 20.9M 2.82G 0% 3.00x ONLINE - there is not much redundant data on a ZFS pool. To see how much space could be saved by deduplication for a given set of data that is already stored in a pool, ZFS can simulate the - effects that deduplication would have. To do that, the - following command can be invoked on the pool. + effects that deduplication would have: &prompt.root; zdb -S pool Simulated DDT histogram: @@ -1293,9 +1292,9 @@ refcnt blocks LSIZE PSIZE DSIZE dedup = 1.05, compress = 1.11, copies = 1.00, dedup * compress / copies = 1.16 - After zdb -S finished analyzing the - pool, it outputs a summary that shows the ratio that would - result in activating deduplication. In this case, + After zdb -S finishes analyzing the + pool, it shows the space reduction ratio that would be achieved by + activating deduplication. In this case, 1.16 is a very poor rate that is mostly influenced by compression. Activating deduplication on this pool would not save any significant amount of space. Keeping @@ -1316,8 +1315,8 @@ dedup = 1.05, compress = 1.11, copies = ZFS dataset to a Jail. zfs jail jailid attaches a dataset - to the specified jail, and the zfs unjail - detaches it. In order for the dataset to be administered from + to the specified jail, and zfs unjail + detaches it. For the dataset to be administered from within a jail, the jailed property must be set. Once a dataset is jailed it can no longer be mounted on the host, because the jail administrator may have set @@ -1328,46 +1327,45 @@ dedup = 1.05, compress = 1.11, copies = Delegated Administration - ZFS features a comprehensive delegation system to assign - permissions to perform the various ZFS administration functions - to a regular (non-root) user. For example, if each users' home - directory is a dataset, then each user could be delegated + A comprehensive permission delegation system allows unprivileged + users to perform ZFS administration functions. 
+ For example, if each user's home + directory is a dataset, users can be given permission to create and destroy snapshots of their home - directory. A backup user could be assigned the permissions - required to make use of the ZFS replication features without - requiring root access, or isolate a usage collection script to - run as an unprivileged user with access to only the space - utilization data of all users. It is even possible to delegate - the ability to delegate permissions. ZFS allows to delegate - permissions over each subcommand and most ZFS properties. + directories. A backup user can be given permission + to use ZFS replication features. + A usage statistics script can be allowed to + run with access only to the space + utilization data for all users. It is even possible to delegate + the ability to delegate permissions. Permission delegation is + possible for each subcommand and most ZFS properties. Delegating Dataset Creation - Using the zfs allow + zfs allow someuser create - mydataset command will - give the indicated user the required permissions to create + mydataset + gives the specified user permission to create child datasets under the selected parent dataset. There is a - caveat: creating a new dataset involves mounting it, which - requires the vfs.usermount sysctl to be - enabled in order to allow non-root users to mount a - filesystem. There is another restriction that non-root users - must own the directory they are mounting the filesystem to, in - order to prevent abuse. + caveat: creating a new dataset involves mounting it. + That requires setting the vfs.usermount &man.sysctl.8; to 1 + to allow non-root users to mount a + filesystem. There is another restriction aimed at preventing abuse: non-root users + must own the mountpoint where the file system is being mounted. Delegating Permission Delegation - Using the zfs allow + zfs allow someuser allow - mydataset command will - give the indicated user the ability to assign any permission + mydataset + gives the specified user the ability to assign any permission they have on the target dataset (or its children) to other users. If a user has the snapshot - permission and the allow permission that - user can then grant the snapshot permission to some other + permission and the allow permission, that + user can then grant the snapshot permission to some other users. @@ -1403,23 +1401,23 @@ dedup = 1.05, compress = 1.11, copies = ZFS on i386 Some of the features provided by ZFS - are RAM-intensive, so some tuning may be required to provide + are RAM-intensive, and may require tuning for maximum efficiency on systems with limited RAM. Memory - At a bare minimum, the total system memory should be at + As a bare minimum, the total system memory should be at least one gigabyte. The amount of recommended RAM depends upon the size of the pool and - the ZFS features which are used. A + which ZFS features are used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general rule of thumb is 5 GB of RAM per TB of storage to be deduplicated. While some users successfully use ZFS with less RAM, - it is possible that when the system is under heavy load, it + systems under heavy load may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended RAM requirements. 
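A concrete form of the tuning mentioned above is to cap the ARC so that ZFS cannot claim nearly all of the installed RAM. The sketch below only illustrates the idea: vfs.zfs.arc_max is a standard FreeBSD loader tunable, but the 512M figure is a placeholder that would have to be sized to the actual machine and workload, and it is not taken from the commit above.

# Illustrative /boot/loader.conf entry for a memory-constrained system.
# vfs.zfs.arc_max caps the maximum size of the ZFS ARC; 512M is only an
# example value, not a recommendation.
vfs.zfs.arc_max="512M"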
@@ -1429,19 +1427,19 @@ dedup = 1.05, compress = 1.11, copies = Kernel Configuration Due to the RAM limitations of the - &i386; platform, users using ZFS on the - &i386; architecture should add the following option to a + &i386; platform, ZFS users on the + &i386; architecture should add this option to a custom kernel configuration file, rebuild the kernel, and reboot: options KVA_PAGES=512 - This option expands the kernel address space, allowing + This expands the kernel address space, allowing the vm.kvm_size tunable to be pushed beyond the currently imposed limit of 1 GB, or the limit of 2 GB for PAE. To find the most suitable value for this option, divide the desired - address space in megabytes by four (4). In this example, it + address space in megabytes by four. In this example, it is 512 for 2 GB. @@ -1450,8 +1448,8 @@ dedup = 1.05, compress = 1.11, copies = The kmem address space can be increased on all &os; architectures. On a test system with - one gigabyte of physical memory, success was achieved with - the following options added to + 1 GB of physical memory, success was achieved with + these options added to /boot/loader.conf, and the system restarted: @@ -1638,12 +1636,12 @@ vfs.zfs.vdev.cache.size="5M"RAID-Z1 through RAID-Z3 based on the number of - parity devinces in the array and the number of - disks that the pool can operate without. + parity devices in the array and the number of + disks which can fail while the pool remains operational. In a RAID-Z1 configuration - with 4 disks, each 1 TB, usable storage will - be 3 TB and the pool will still be able to + with 4 disks, each 1 TB, usable storage is + 3 TB and the pool will still be able to operate in degraded mode with one faulted disk. If an additional disk goes offline before the faulted disk is replaced and resilvered, all data @@ -1663,7 +1661,7 @@ vfs.zfs.vdev.cache.size="5M"RAID-60 array. A RAID-Z group's storage capacity - is approximately the size of the smallest disk, + is approximately the size of the smallest disk multiplied by the number of non-parity disks. Four 1 TB disks in RAID-Z1 has an effective size of approximately 3 TB, @@ -1749,17 +1747,17 @@ vfs.zfs.vdev.cache.size="5M"L2ARC - The L2ARC is the second level + L2ARC is the second level of the ZFS caching system. The primary ARC is stored in - RAM, however since the amount of + RAM. Since the amount of available RAM is often limited, - ZFS can also make use of + ZFS can also use cache vdevs. Solid State Disks (SSDs) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning - disks. An L2ARC is entirely + disks. L2ARC is entirely optional, but having one will significantly increase read speeds for files that are cached on the SSD instead of having to be read from @@ -1789,7 +1787,7 @@ vfs.zfs.vdev.cache.size="5M"ZIL - The ZIL accelerates synchronous + ZIL accelerates synchronous transactions by using storage devices (such as SSDs) that are faster than those used for the main storage pool. When data is being written @@ -1809,11 +1807,11 @@ vfs.zfs.vdev.cache.size="5M"Copy-On-Write Unlike a traditional file system, when data is - overwritten on ZFS the new data is + overwritten on ZFS, the new data is written to a different block rather than overwriting the - old data in place. Only once this write is complete is - the metadata then updated to point to the new location - of the data. This means that in the event of a shorn + old data in place. 
Only when this write is complete is + the metadata then updated to point to the new location. + In the event of a shorn write (a system crash or power loss in the middle of writing a file), the entire original contents of the file are still available and the incomplete write is @@ -1825,23 +1823,23 @@ vfs.zfs.vdev.cache.size="5M" Dataset - Dataset is the generic term for a + Dataset is the generic term for a ZFS file system, volume, snapshot or - clone. Each dataset will have a unique name in the + clone. Each dataset has a unique name in the format: poolname/path@snapshot. The root of the pool is technically a dataset as well. Child datasets are named hierarchically like - directories; for example, + directories. For example, mypool/home, the home dataset, is a child of mypool and inherits properties from it. This can be expanded further by creating mypool/home/user. This grandchild dataset will inherity properties from the - parent and grandparent. It is also possible to set - properties on a child to override the defaults inherited + parent and grandparent. + Properties on a child can be set to override the defaults inherited from the parents and grandparents. - ZFS also allows administration of - datasets and their children to be delegated. @@ -1852,7 +1850,7 @@ vfs.zfs.vdev.cache.size="5M"ZFS file system is mounted somewhere in the systems directory heirarchy and contains files - and directories of its own with permissions, flags and + and directories of its own with permissions, flags, and other metadata. @@ -1903,7 +1901,7 @@ vfs.zfs.vdev.cache.size="5M"hold, once a snapshot is held, any attempt to destroy it will return - an EBUY error. Each snapshot can have multiple holds, + an EBUSY error. Each snapshot can have multiple holds, each with a unique name. The release command removes the hold so the snapshot can then be deleted. @@ -1924,12 +1922,12 @@ vfs.zfs.vdev.cache.size="5M"promoted, reversing - this dependeancy, making the clone the parent and the + this dependency, making the clone the parent and the previous parent the child. This operation requires no - additional space, however it will change the way the + additional space, but it will change the way the used space is accounted. @@ -1937,9 +1935,9 @@ vfs.zfs.vdev.cache.size="5M"Checksum Every block that is allocated is also checksummed - (the algorithm used is a per dataset property, see: - zfs set). ZFS - transparently validates the checksum of each block as it + (the algorithm used is a per dataset property, see + zfs set). The checksum of each block + is transparently validated as it is read, allowing ZFS to detect silent corruption. If the data that is read does not match the expected checksum, ZFS will @@ -1967,7 +1965,7 @@ vfs.zfs.vdev.cache.size="5M" + can be disabled, but it is inadvisable. @@ -1977,8 +1975,8 @@ vfs.zfs.vdev.cache.size="5M" Deduplication - ZFS has the ability to detect - duplicate blocks of data as they are written (thanks to - the checksumming feature). If deduplication is enabled, + Checksums make it possible to detect + duplicate blocks of data as they are written. + If deduplication is enabled, instead of writing the block a second time, the reference count of the existing block will be increased, saving storage space. To do this, @@ -2011,23 +2009,23 @@ vfs.zfs.vdev.cache.size="5M"ZFS and - the two blocks will be stored separately. Due to the - nature of the DDT, having to store + it is actually identical. 
If the data is not identical, the hash + collision will be noted and + the two blocks will be stored separately. Because + DDT must store the hash of each unique block, it consumes a very large amount of memory (a general rule of thumb is 5-6 GB of ram per 1 TB of deduplicated data). In situations where it is not practical to have enough RAM to keep the entire DDT in memory, performance will - suffer greatly as the DDT will need - to be read from disk before each new block is written. - Deduplication can make use of the + suffer greatly as the DDT must + be read from disk before each new block is written. + Deduplication can use L2ARC to store the DDT, providing a middle ground between fast system memory and slower disks. Consider - using ZFS compression instead, which + using compression instead, which often provides nearly as much space savings without the additional memory requirement. @@ -2035,17 +2033,17 @@ vfs.zfs.vdev.cache.size="5M" Scrub - In place of a consistency check like &man.fsck.8;, - ZFS has the scrub - command, which reads all data blocks stored on the pool - and verifies their checksums them against the known good + Instead of a consistency check like &man.fsck.8;, + ZFS has the scrub. + scrub reads all data blocks stored on the pool + and verifies their checksums against the known good checksums stored in the metadata. This periodic check of all the data stored on the pool ensures the recovery of any corrupted blocks before they are needed. A scrub is not required after an unclean shutdown, but it is recommended that you run a scrub at least once each - quarter. ZFS compares the checksum - for each block as it is read in the normal course of + quarter. Checksums + of each block are tested as they are read in normal use, but a scrub operation makes sure even infrequently used blocks are checked for silent corruption. @@ -2054,7 +2052,7 @@ vfs.zfs.vdev.cache.size="5M"Dataset Quota ZFS provides very fast and - accurate dataset, user and group space accounting in + accurate dataset, user, and group space accounting in addition to quotas and space reservations. This gives the administrator fine grained control over how space is allocated and allows critical file systems to reserve @@ -2087,8 +2085,8 @@ vfs.zfs.vdev.cache.size="5M" A reference quota limits the amount of space a - dataset can consume by enforcing a hard limit on the - space used. However, this hard limit includes only + dataset can consume by enforcing a hard limit. + However, this hard limit includes only space that the dataset references and does not include space used by descendants, such as file systems or snapshots. @@ -2115,10 +2113,10 @@ vfs.zfs.vdev.cache.size="5M" The reservation property makes - it possible to guaranteed a minimum amount of space for - the use of a specific dataset and its descendants. This + it possible to guarantee a minimum amount of space for + a specific dataset and its descendants. This means that if a 10 GB reservation is set on - storage/home/bob, if another + storage/home/bob, and another dataset tries to use all of the free space, at least 10 GB of space is reserved for this dataset. If a snapshot is taken of @@ -2127,7 +2125,7 @@ vfs.zfs.vdev.cache.size="5M"refreservation property works in a similar way, except it - excludes descendants, such as + excludes descendants like snapshots. 
Reservations of any sort are useful in many @@ -2143,11 +2141,11 @@ vfs.zfs.vdev.cache.size="5M" The refreservation property - makes it possible to guaranteed a minimum amount of + makes it possible to guarantee a minimum amount of space for the use of a specific dataset excluding its descendants. This means that if a 10 GB reservation is set on - storage/home/bob, if another + storage/home/bob, and another dataset tries to use all of the free space, at least 10 GB of space is reserved for this dataset. In contrast to a regular Resilver When a disk fails and must be replaced, the new - disk must be filled with the data that was lost. This - process of calculating and writing the missing data - (using the parity information distributed across the - remaining drives) to the new drive is called + disk must be filled with the data that was lost. The + process of using the parity information distributed across the remaining drives + to calculate and write the missing data to the new drive + is called resilvering. @@ -2194,7 +2192,7 @@ vfs.zfs.vdev.cache.size="5M"Faulted state. An administrator may choose to offline a disk in - preperation for replacing it, or to make it easier to + preparation for replacing it, or to make it easier to identify. @@ -2204,10 +2202,10 @@ vfs.zfs.vdev.cache.size="5M"A ZFS pool or vdev that is in the Degraded state has one or more disks that have been disconnected or have failed. The pool is - still usable however if additional devices fail the pool + still usable, however if additional devices fail, the pool could become unrecoverable. Reconnecting the missing - device(s) or replacing the failed disks will return the - pool to a Online state after the reconnected or new device has completed the Resilver @@ -2228,7 +2226,7 @@ vfs.zfs.vdev.cache.size="5M"Online state. If there is insufficient redundancy to compensate for the number of failed disks, then the contents of the pool - are lost and will need to be restored from + are lost and must be restored from backups.
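To connect the Offline, Resilver, and Online states above with concrete commands, here is a minimal sketch of swapping out a failed disk. The pool name mypool and the device names ada1 and ada3 are placeholders rather than anything taken from the commit; zpool status is run afterwards to watch the resilver progress.

&prompt.root; zpool status mypool
# take the failing disk offline before physically removing it
&prompt.root; zpool offline mypool ada1
# after the new disk is installed, rebuild the missing data onto it
&prompt.root; zpool replace mypool ada1 ada3
# the pool reports DEGRADED until the resilver completes
&prompt.root; zpool status mypool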