From owner-svn-doc-projects@FreeBSD.ORG Mon Nov 25 00:20:23 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTPS id D754BFD6; Mon, 25 Nov 2013 00:20:23 +0000 (UTC) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id C569D2E71; Mon, 25 Nov 2013 00:20:23 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id rAP0KNC0084559; Mon, 25 Nov 2013 00:20:23 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id rAP0KNwr084558; Mon, 25 Nov 2013 00:20:23 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201311250020.rAP0KNwr084558@svn.freebsd.org> From: Warren Block Date: Mon, 25 Nov 2013 00:20:23 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r43239 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.16 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 25 Nov 2013 00:20:23 -0000 Author: wblock Date: Mon Nov 25 00:20:23 2013 New Revision: 43239 URL: http://svnweb.freebsd.org/changeset/doc/43239 Log: Whitespace-only fixes, translators please ignore. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Sun Nov 24 23:53:50 2013 (r43238) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Mon Nov 25 00:20:23 2013 (r43239) @@ -4,9 +4,13 @@ $FreeBSD$ --> - + + The Z File System (<acronym>ZFS</acronym>) + @@ -54,15 +58,14 @@ - Data integrity: All data - includes a checksum of the data. When - data is written, the checksum is calculated and written along - with it. When that data is later read back, the - checksum is calculated again. If the checksums do not match, a - data error has been detected. ZFS will attempt to - automatically correct errors when data - redundancy is available. + Data integrity: All data includes a + checksum of the data. + When data is written, the checksum is calculated and written + along with it. When that data is later read back, the + checksum is calculated again. If the checksums do not match, + a data error has been detected. ZFS will + attempt to automatically correct errors when data redundancy + is available. @@ -73,13 +76,12 @@ - Performance: multiple - caching mechanisms provide increased performance. - ARC is an advanced - memory-based read cache. A second level of + Performance: multiple caching mechanisms provide increased + performance. ARC is an + advanced memory-based read cache. 
A second level of disk-based read cache can be added with - L2ARC, and disk-based synchronous - write cache is available with + L2ARC, and disk-based + synchronous write cache is available with ZIL. @@ -91,34 +93,33 @@ What Makes <acronym>ZFS</acronym> Different ZFS is significantly different from any - previous file system because it is more than just - a file system. Combining the - traditionally separate roles of volume manager and file system - provides ZFS with unique advantages. The file system is now - aware of the underlying structure of the disks. Traditional - file systems could only be created on a single disk at a time. - If there were two disks then two separate file systems would - have to be created. In a traditional hardware - RAID configuration, this problem was worked - around by presenting the operating system with a single logical - disk made up of the space provided by a number of disks, on top - of which the operating system placed its file system. Even in - the case of software RAID solutions like - GEOM, the UFS file system - living on top of the RAID transform believed - that it was dealing with a single device. - ZFS's combination of the volume manager and - the file system solves this and allows the creation of many file - systems all sharing a pool of available storage. One of the - biggest advantages to ZFS's awareness of the - physical layout of the disks is that ZFS can - grow the existing file systems automatically when additional - disks are added to the pool. This new space is then made - available to all of the file systems. ZFS - also has a number of different properties that can be applied to - each file system, creating many advantages to creating a number - of different filesystems and datasets rather than a single - monolithic filesystem. + previous file system because it is more than just a file system. + Combining the traditionally separate roles of volume manager and + file system provides ZFS with unique + advantages. The file system is now aware of the underlying + structure of the disks. Traditional file systems could only be + created on a single disk at a time. If there were two disks + then two separate file systems would have to be created. In a + traditional hardware RAID configuration, this + problem was worked around by presenting the operating system + with a single logical disk made up of the space provided by a + number of disks, on top of which the operating system placed its + file system. Even in the case of software + RAID solutions like GEOM, + the UFS file system living on top of the + RAID transform believed that it was dealing + with a single device. ZFS's combination of + the volume manager and the file system solves this and allows + the creation of many file systems all sharing a pool of + available storage. One of the biggest advantages to + ZFS's awareness of the physical layout of the + disks is that ZFS can grow the existing file + systems automatically when additional disks are added to the + pool. This new space is then made available to all of the file + systems. ZFS also has a number of different + properties that can be applied to each file system, creating + many advantages to creating a number of different filesystems + and datasets rather than a single monolithic filesystem. @@ -473,10 +474,10 @@ errors: No known data errors checksums disabled. There is also no noticeable performance gain from disabling these checksums. - - Checksum verification is known as scrubbing. 
- Verify the data integrity of the storage - pool, with this command: + + Checksum verification is known as + scrubbing. Verify the data integrity of the + storage pool, with this command: &prompt.root; zpool scrub storage @@ -699,9 +700,9 @@ errors: No known data errors history is not kept in a log file, but is a part of the pool itself. That is the reason why the history cannot be altered after the fact unless the pool is destroyed. The command to - review this history is aptly named zpool - history: - + review this history is aptly named + zpool history: + &prompt.root; zpool history History for 'tank': 2013-02-26.23:02:35 zpool create tank mirror /dev/ada0 /dev/ada1 @@ -709,13 +710,13 @@ History for 'tank': 2013-02-27.18:51:09 zfs set checksum=fletcher4 tank 2013-02-27.18:51:18 zfs create tank/backup - The output shows - zpool and - zfs commands that were executed on the pool along with a timestamp. - Note that only commands that altered the pool in some way are - being recorded. Commands like zfs list are - not part of the history. When there is no pool name provided - for zpool history, then the history of all + The output shows zpool and + zfs commands that were executed on the pool + along with a timestamp. Note that only commands that altered + the pool in some way are being recorded. Commands like + zfs list are not part of the history. When + there is no pool name provided for + zpool history, then the history of all pools will be displayed. The zpool history can show even more @@ -728,7 +729,7 @@ History for 'tank': History for 'tank': 2013-02-26.23:02:35 [internal pool create txg:5] pool spa 28; zfs spa 28; zpl 5;uts 9.1-RELEASE 901000 amd64 2013-02-27.18:50:53 [internal property set txg:50] atime=0 dataset = 21 -2013-02-27.18:50:58 zfs set atime=off tank +2013-02-27.18:50:58 zfs set atime=off tank 2013-02-27.18:51:04 [internal property set txg:53] checksum=7 dataset = 21 2013-02-27.18:51:09 zfs set checksum=fletcher4 tank 2013-02-27.18:51:13 [internal create txg:55] dataset = 39 @@ -795,16 +796,15 @@ data 288G 1.53T 2 11 second number on the command line after the interval to specify the total number of statistics to display. - Even more detailed pool I/O statistics can be - displayed with . In this case - each storage device in the pool will be shown with a - corresponding statistics line. This is helpful to - determine how many read and write operations are being - performed on each device, and can help determine if any - specific device is slowing down I/O on the entire pool. The - following example shows a mirrored pool consisting of two - devices. For each of these, a separate line is shown with - the current I/O activity. + Even more detailed pool I/O statistics can be displayed + with . In this case each storage device in + the pool will be shown with a corresponding statistics line. + This is helpful to determine how many read and write + operations are being performed on each device, and can help + determine if any specific device is slowing down I/O on the + entire pool. The following example shows a mirrored pool + consisting of two devices. For each of these, a separate line + is shown with the current I/O activity. &prompt.root; zpool iostat -v capacity operations bandwidth @@ -1119,8 +1119,8 @@ tank custom:costcenter - User quota properties are not displayed by zfs get all. - Non-root users can only see their own - quotas unless they have been granted the + Non-root users can + only see their own quotas unless they have been granted the userquota privilege. 
Users with this privilege are able to view and set everyone's quota. @@ -1141,11 +1141,12 @@ tank custom:costcenter - &prompt.root; zfs set groupquota@firstgroup=none As with the user quota property, - non-root users can only see the quotas - associated with the groups that they belong to. However, - root or a user with the - groupquota privilege can view and set all - quotas for all groups. + non-root users can + only see the quotas associated with the groups that they + belong to. However, + root or a user with + the groupquota privilege can view and set + all quotas for all groups. To display the amount of space consumed by each user on the specified filesystem or snapshot, along with any specified @@ -1155,8 +1156,8 @@ tank custom:costcenter - specific options, refer to &man.zfs.1;. Users with sufficient privileges and - root can list the quota for - storage/home/bob using: + root can list the + quota for storage/home/bob using: &prompt.root; zfs get quota storage/home/bob @@ -1259,7 +1260,7 @@ NAME SIZE ALLOC FREE CAP DEDUP HEALTH A pool 2.84G 20.9M 2.82G 0% 3.00x ONLINE - The DEDUP column now contains the - value 3.00x. This indicates that ZFS + value 3.00x. This indicates that ZFS detected the copies of the ports tree data and was able to deduplicate it at a ratio of 1/3. The space savings that this yields can be enormous, but only when there is enough memory @@ -1293,8 +1294,8 @@ refcnt blocks LSIZE PSIZE DSIZE dedup = 1.05, compress = 1.11, copies = 1.00, dedup * compress / copies = 1.16 After zdb -S finishes analyzing the - pool, it shows the space reduction ratio that would be achieved by - activating deduplication. In this case, + pool, it shows the space reduction ratio that would be + achieved by activating deduplication. In this case, 1.16 is a very poor rate that is mostly influenced by compression. Activating deduplication on this pool would not save any significant amount of space. Keeping @@ -1327,18 +1328,16 @@ dedup = 1.05, compress = 1.11, copies = Delegated Administration - A comprehensive permission delegation system allows unprivileged - users to perform ZFS administration functions. - For example, if each user's home - directory is a dataset, users can be given - permission to create and destroy snapshots of their home - directories. A backup user can be given permission - to use ZFS replication features. - A usage statistics script can be allowed to - run with access only to the space - utilization data for all users. It is even possible to delegate - the ability to delegate permissions. Permission delegation is - possible for each subcommand and most ZFS properties. + A comprehensive permission delegation system allows + unprivileged users to perform ZFS administration functions. For + example, if each user's home directory is a dataset, users can + be given permission to create and destroy snapshots of their + home directories. A backup user can be given permission to use + ZFS replication features. A usage statistics script can be + allowed to run with access only to the space utilization data + for all users. It is even possible to delegate the ability to + delegate permissions. Permission delegation is possible for + each subcommand and most ZFS properties. Delegating Dataset Creation @@ -1346,13 +1345,14 @@ dedup = 1.05, compress = 1.11, copies = zfs allow someuser create mydataset - gives the specified user permission to create - child datasets under the selected parent dataset. There is a - caveat: creating a new dataset involves mounting it. 
- That requires setting the vfs.usermount &man.sysctl.8; to 1 - to allow non-root users to mount a - filesystem. There is another restriction aimed at preventing abuse: non-root users - must own the mountpoint where the file system is being mounted. + gives the specified user permission to create child datasets + under the selected parent dataset. There is a caveat: + creating a new dataset involves mounting it. That requires + setting the vfs.usermount &man.sysctl.8; to + 1 to allow non-root users to mount a + filesystem. There is another restriction aimed at preventing + abuse: non-root users must own the mountpoint where the file + system is being mounted. @@ -1365,8 +1365,8 @@ dedup = 1.05, compress = 1.11, copies = they have on the target dataset (or its children) to other users. If a user has the snapshot permission and the allow permission, that - user can then grant the snapshot permission to some other - users. + user can then grant the snapshot permission + to some other users. @@ -1470,14 +1470,14 @@ vfs.zfs.vdev.cache.size="5M" - FreeBSD Wiki - - ZFS + FreeBSD + Wiki - ZFS FreeBSD Wiki - - ZFS Tuning + xlink:href="https://wiki.freebsd.org/ZFSTuningGuide">FreeBSD + Wiki - ZFS Tuning @@ -1489,8 +1489,7 @@ vfs.zfs.vdev.cache.size="5M" Oracle - Solaris ZFS Administration - Guide + Solaris ZFS Administration Guide @@ -1637,7 +1636,8 @@ vfs.zfs.vdev.cache.size="5M"RAID-Z1 through RAID-Z3 based on the number of parity devices in the array and the number of - disks which can fail while the pool remains operational. + disks which can fail while the pool remains + operational. In a RAID-Z1 configuration with 4 disks, each 1 TB, usable storage is @@ -1823,11 +1823,11 @@ vfs.zfs.vdev.cache.size="5M" Dataset - Dataset is the generic term for a - ZFS file system, volume, snapshot or - clone. Each dataset has a unique name in the - format: poolname/path@snapshot. The - root of the pool is technically a dataset as well. + Dataset is the generic term + for a ZFS file system, volume, + snapshot or clone. Each dataset has a unique name in + the format: poolname/path@snapshot. + The root of the pool is technically a dataset as well. Child datasets are named hierarchically like directories. For example, mypool/home, the home dataset, is a @@ -1835,12 +1835,11 @@ vfs.zfs.vdev.cache.size="5M"mypool/home/user. This grandchild dataset will inherity properties from the - parent and grandparent. - Properties on a child can be set to override the defaults inherited - from the parents and grandparents. - Administration of - datasets and their children can be delegated. + parent and grandparent. Properties on a child can be + set to override the defaults inherited from the parents + and grandparents. Administration of datasets and their + children can be + delegated. @@ -1901,8 +1900,8 @@ vfs.zfs.vdev.cache.size="5M"hold, once a snapshot is held, any attempt to destroy it will return - an EBUSY error. Each snapshot can have multiple holds, - each with a unique name. The + an EBUSY error. Each snapshot can + have multiple holds, each with a unique name. The release command removes the hold so the snapshot can then be deleted. Snapshots can be taken on volumes, however they can only @@ -1988,12 +1987,12 @@ vfs.zfs.vdev.cache.size="5M" - Deduplication + Deduplication - Checksums make it possible to detect - duplicate blocks of data as they are written. 
- If deduplication is enabled, - instead of writing the block a second time, the + Checksums make it possible to detect duplicate + blocks of data as they are written. If deduplication is + enabled, instead of writing the block a second time, the reference count of the existing block will be increased, saving storage space. To do this, ZFS keeps a deduplication table @@ -2009,25 +2008,23 @@ vfs.zfs.vdev.cache.size="5M"DDT must store - the hash of each unique block, it consumes a very large - amount of memory (a general rule of thumb is 5-6 GB - of ram per 1 TB of deduplicated data). In - situations where it is not practical to have enough + it is actually identical. If the data is not identical, + the hash collision will be noted and the two blocks will + be stored separately. Because DDT + must store the hash of each unique block, it consumes a + very large amount of memory (a general rule of thumb is + 5-6 GB of ram per 1 TB of deduplicated data). + In situations where it is not practical to have enough RAM to keep the entire DDT in memory, performance will - suffer greatly as the DDT must - be read from disk before each new block is written. - Deduplication can use - L2ARC to store the - DDT, providing a middle ground + suffer greatly as the DDT must be + read from disk before each new block is written. + Deduplication can use L2ARC to store + the DDT, providing a middle ground between fast system memory and slower disks. Consider - using compression instead, which - often provides nearly as much space savings without the - additional memory requirement. + using compression instead, which often provides nearly + as much space savings without the additional memory + requirement. @@ -2035,17 +2032,17 @@ vfs.zfs.vdev.cache.size="5M"Instead of a consistency check like &man.fsck.8;, ZFS has the scrub. - scrub reads all data blocks stored on the pool - and verifies their checksums against the known good - checksums stored in the metadata. This periodic check - of all the data stored on the pool ensures the recovery - of any corrupted blocks before they are needed. A scrub - is not required after an unclean shutdown, but it is - recommended that you run a scrub at least once each - quarter. Checksums - of each block are tested as they are read in normal - use, but a scrub operation makes sure even infrequently - used blocks are checked for silent corruption. + scrub reads all data blocks stored on + the pool and verifies their checksums against the known + good checksums stored in the metadata. This periodic + check of all the data stored on the pool ensures the + recovery of any corrupted blocks before they are needed. + A scrub is not required after an unclean shutdown, but + it is recommended that you run a scrub at least once + each quarter. Checksums of each block are tested as + they are read in normal use, but a scrub operation makes + sure even infrequently used blocks are checked for + silent corruption. @@ -2113,9 +2110,9 @@ vfs.zfs.vdev.cache.size="5M" The reservation property makes - it possible to guarantee a minimum amount of space for - a specific dataset and its descendants. This - means that if a 10 GB reservation is set on + it possible to guarantee a minimum amount of space for a + specific dataset and its descendants. This means that + if a 10 GB reservation is set on storage/home/bob, and another dataset tries to use all of the free space, at least 10 GB of space is reserved for this dataset. 
If a @@ -2167,9 +2164,9 @@ vfs.zfs.vdev.cache.size="5M"When a disk fails and must be replaced, the new disk must be filled with the data that was lost. The - process of using the parity information distributed across the remaining drives - to calculate and write the missing data to the new drive - is called + process of using the parity information distributed + across the remaining drives to calculate and write the + missing data to the new drive is called resilvering. @@ -2202,13 +2199,13 @@ vfs.zfs.vdev.cache.size="5M"A ZFS pool or vdev that is in the Degraded state has one or more disks that have been disconnected or have failed. The pool is - still usable, however if additional devices fail, the pool - could become unrecoverable. Reconnecting the missing - devices or replacing the failed disks will return the - pool to an Online state after - the reconnected or new device has completed the Resilver + still usable, however if additional devices fail, the + pool could become unrecoverable. Reconnecting the + missing devices or replacing the failed disks will + return the pool to an + Online state + after the reconnected or new device has completed the + Resilver process. @@ -2217,17 +2214,16 @@ vfs.zfs.vdev.cache.size="5M"A ZFS pool or vdev that is in the Faulted state is no longer - operational and the data residing on it can no longer - be accessed. A pool or vdev enters the + operational and the data residing on it can no longer be + accessed. A pool or vdev enters the Faulted state when the number of missing or failed devices exceeds the level of redundancy in the vdev. If missing devices can be - reconnected the pool will return to a Online state. If + reconnected the pool will return to a + Online state. If there is insufficient redundancy to compensate for the number of failed disks, then the contents of the pool - are lost and must be restored from - backups. + are lost and must be restored from backups.
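
For anyone skimming the reflowed glossary entries above, here is a minimal shell sketch of the workflow those paragraphs describe (setting a reservation, scrubbing a pool, and replacing a failed disk so the pool resilvers back to Online). It is an illustrative sketch only: the pool name "storage", the dataset "storage/home/bob" (borrowed from the chapter's own example), and the disk names ada1/ada3 are placeholders and are not part of this commit.

  # guarantee a minimum of 10 GB of space for one dataset, as in the reservation example
  zfs set reservation=10G storage/home/bob

  # read every block in the pool and verify its checksum in the background
  zpool scrub storage

  # report pool health (ONLINE/DEGRADED/FAULTED) and any scrub or resilver progress
  zpool status storage

  # swap a failed disk for a new one; ZFS resilvers the missing data onto ada3 automatically
  zpool replace storage ada1 ada3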