From owner-svn-doc-projects@FreeBSD.ORG Tue Oct 29 05:25:32 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 563A77D9; Tue, 29 Oct 2013 05:25:32 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 351492998; Tue, 29 Oct 2013 05:25:32 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r9T5PWwv007870; Tue, 29 Oct 2013 05:25:32 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r9T5PWHH007869; Tue, 29 Oct 2013 05:25:32 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201310290525.r9T5PWHH007869@svn.freebsd.org> From: Warren Block Date: Tue, 29 Oct 2013 05:25:32 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r43070 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 29 Oct 2013 05:25:32 -0000 Author: wblock Date: Tue Oct 29 05:25:31 2013 New Revision: 43070 URL: http://svnweb.freebsd.org/changeset/doc/43070 Log: Apply the latest patch from Allan Jude. Submitted by: Allan Jude Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Tue Oct 29 02:03:09 2013 (r43069) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Tue Oct 29 05:25:31 2013 (r43070) @@ -12,6 +12,16 @@ Rhodes Written by + + Allan + Jude + Written by + + + Benedict + Reuschling + Written by + @@ -470,12 +480,52 @@ errors: No known data errors Creating & Destroying Storage Pools + Creating a ZFS Storage Pool (zpool) + involves making a number of decisions that are relatively + permanent because the structure of the pool cannot be + changed after the pool has been created. The most important + decision is what type(s) of vdevs to group the physical disks + into. See the list of vdev types for details about + the possible options. Once the pool has been created, most + vdev types do not allow additional disks to be added to the + vdev. The exceptions are mirrors, which allow additional + disks to be added to the vdev, and stripes, which can be + upgraded to mirrors by attaching an additional to the vdev. + Although additional vdevs can be added to a pool, the layout + of the pool cannot be changed once the pool has been created, + instead the data must be backed up and the pool + recreated. 
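 As a minimal sketch of the two layouts described above (the pool name mypool and the devices ada0 and ada1 are placeholder examples, not taken from the patch), a mirror can be created outright, or a single-disk stripe can be upgraded to a mirror later by attaching a second disk:

&prompt.root; zpool create mypool mirror ada0 ada1

&prompt.root; zpool create mypool ada0
&prompt.root; zpool attach mypool ada0 ada1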
+ Adding & Removing Devices + Adding additional disks to a zpool can be broken down into + two separate cases, attaching an additional disk to an + existing vdev with the zpool attach + command, or adding additional vdevs to the pool with the + zpool add command. Only some + vdev types allow disks to + be added to the vdev after the fact. + + When adding additional disks to the existing vdev is not + an option, such as in the case of RAID-Z, the other option is + to add an additional vdev to the pool. It is possible, but + discouraged, to mix vdev types. ZFS stripes data across each + of the vdevs, for example if there are two mirror vdevs, then + this is effectively a RAID 10, striping the writes across the + two sets of mirrors. Because of the way that space is + allocated in ZFS in order to attempt to have each vdev reach + 100% full at the same time, there is a performance penalty if + the vdevs have different amounts of free space. + + Currently, vdevs cannot be removed from a zpool, and disks + can only be removed from a mirror if there is enough remaining + redundancy. + Creating a ZFS Storage Pool (zpool) involves making a number of decisions that are relatively permanent. Although additional vdevs can be added to a pool, @@ -485,22 +535,84 @@ errors: No known data errors zpool. + + Replacing a Working Devices + + There are a number of situations in which it may be + desirable to replacing a disk with a different disk. This + process requires connecting the new disk at the same time as + the disk to be replaced. The + zpool replace command will copy all of the + data from the old disk to the new one. Once this operation + completes, the old disk is disconnected from the vdev. If the + newer disk is larger this may allow your zpool to grow, see + the Growing a Pool + section. + + Dealing with Failed Devices - + When a disk fails and the physical device is replaced, ZFS + needs to be told to begin the resilver operation, where + the data that was on the failed device will be recalculated + from the available redundancy and written to the new + device. + + + + Growing a Pool + + The usable size of a redundant ZFS pool is limited by the + size of the smallest device in the vdev. If you sequentially + replace each device in the vdev then when the smallest device + has completed the replace or resilver operation, the pool + can then grow based on the size of the new smallest device. + This expansion can be triggered with the + zpool online command with the -e flag on + each device. Once the expansion of each device is complete, + the additional space will be available in the pool. Importing & Exporting Pools - + Pools can be exported in preperation for moving them to + another system. All datasets are unmounted, and each device + is marked as exported but still locked so it cannot be used + by other disk subsystems. This allows pools to be imported on + other machines, other operating systems that support ZFS, and + even different hardware architectures (with some caveats, see + the zpool man page). The -f flag can be used to force + exporting a pool, in cases such as when a dataset has open + files. If you force an export, the datasets will be forcibly + unmounted such can have unexpected side effects. + + Importing a pool will automatically mount the datasets, + which may not be the desired behavior. The -N command line + param will skip mounting. The command line parameter -o sets + temporary properties for this import only. 
The altroot= + property allows you to import a zpool with a base of some + mount point, instead of the root of the file system. If the + pool was last used on a different system and was not properly + exported, you may have to force an import with the -f flag. + The -a flag will import all pools that do not appear to be + in use by another system. Upgrading a Storage Pool - + After FreeBSD has been upgraded, or if a pool has been + imported from a system using an older verison of ZFS, the pool + must be manually upgraded to the latest version of ZFS. This + process is unreversable, so consider if the pool may ever need + to be imported on an older system before upgrading. Onle once + the zpool upgrade command has completed + will the newer features of ZFS be available. An upgrade + cannot be undone. The -v flag can be used to see what new + features will be supported by upgrading. @@ -556,7 +668,7 @@ data 288G 1.53T ada1 - - 0 4 5.61K 61.7K ada2 - - 1 4 5.04K 61.7K ----------------------- ----- ----- ----- ----- ----- ----- - + Splitting a Storage Pool @@ -1389,7 +1501,8 @@ vfs.zfs.vdev.cache.size="5M"Snapshot The copy-on-write (COW) design of + linkend="zfs-term-cow">copy-on-write + (COW) design of ZFS allows for nearly instantaneous consistent snapshots with arbitrary names. After taking a snapshot of a dataset (or a recursive snapshot of a From owner-svn-doc-projects@FreeBSD.ORG Tue Oct 29 06:21:24 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 611C350A; Tue, 29 Oct 2013 06:21:24 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4C3F62C84; Tue, 29 Oct 2013 06:21:24 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r9T6LOvY027455; Tue, 29 Oct 2013 06:21:24 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r9T6LO5F027454; Tue, 29 Oct 2013 06:21:24 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201310290621.r9T6LO5F027454@svn.freebsd.org> From: Warren Block Date: Tue, 29 Oct 2013 06:21:24 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r43071 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 29 Oct 2013 06:21:24 -0000 Author: wblock Date: Tue Oct 29 06:21:23 2013 New Revision: 43071 URL: http://svnweb.freebsd.org/changeset/doc/43071 Log: Make an edit pass up to line 696. Fix spelling errors, remove redundancy, reorder passive sentences, add markup. 
Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Tue Oct 29 05:25:31 2013 (r43070) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Tue Oct 29 06:21:23 2013 (r43071) @@ -239,15 +239,15 @@ example/data 17547008 0 175 <acronym>ZFS</acronym> RAID-Z - There is no way to prevent a disk from failing. One - method of avoiding data loss due to a failed hard disk is to + Disks fail. One + method of avoiding data loss from disk failure is to implement RAID. ZFS supports this feature in its pool design. - RAID-Z pools require 3 or more disks but + RAID-Z pools require three or more disks but yield more usable space than mirrored pools. - To create a RAID-Z pool, issue the - following command and specify the disks to add to the + To create a RAID-Z pool, use this + command, specifying the disks to add to the pool: &prompt.root; zpool create storage raidz da0 da1 da2 @@ -270,8 +270,8 @@ example/data 17547008 0 175 &prompt.root; zfs create storage/home - It is now possible to enable compression and keep extra - copies of directories and files using the following + Now compression and keeping extra + copies of directories and files can be enabled with these commands: &prompt.root; zfs set copies=2 storage/home @@ -286,11 +286,11 @@ example/data 17547008 0 175 &prompt.root; ln -s /storage/home /home &prompt.root; ln -s /storage/home /usr/home - Users should now have their data stored on the freshly + Users now have their data stored on the freshly created /storage/home. Test by adding a new user and logging in as that user. - Try creating a snapshot which may be rolled back + Try creating a snapshot which can be rolled back later: &prompt.root; zfs snapshot storage/home@08-30-08 @@ -299,11 +299,11 @@ example/data 17547008 0 175 file system, not a home directory or a file. The @ character is a delimiter used between the file system name or the volume name. When a user's home - directory gets trashed, restore it with: + directory is accidentally deleted, restore it with: &prompt.root; zfs rollback storage/home@08-30-08 - To get a list of all available snapshots, run + To list all available snapshots, run ls in the file system's .zfs/snapshot directory. For example, to see the previously taken @@ -312,8 +312,8 @@ example/data 17547008 0 175 &prompt.root; ls /storage/home/.zfs/snapshot It is possible to write a script to perform regular - snapshots on user data. However, over time, snapshots may - consume a great deal of disk space. The previous snapshot may + snapshots on user data. However, over time, snapshots can + consume a great deal of disk space. The previous snapshot can be removed using the following command: &prompt.root; zfs destroy storage/home@08-30-08 @@ -344,9 +344,8 @@ storage 26320512 0 26320512 storage/home 26320512 0 26320512 0% /home This completes the RAID-Z - configuration. To get status updates about the file systems - created during the nightly &man.periodic.8; runs, issue the - following command: + configuration. 
Daily status updates about the file systems + created can be generated as part of the nightly &man.periodic.8; runs: &prompt.root; echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf @@ -362,7 +361,7 @@ storage/home 26320512 0 26320512 &prompt.root; zpool status -x If all pools are healthy and everything is normal, the - following message will be returned: + message indicates that: all pools are healthy @@ -389,21 +388,21 @@ config: errors: No known data errors This indicates that the device was previously taken - offline by the administrator using the following + offline by the administrator with this command: &prompt.root; zpool offline storage da1 - It is now possible to replace da1 - after the system has been powered down. When the system is - back online, the following command may issued to replace the - disk: + Now the system can be powered down to replace da1. + When the system is + back online, the failed disk can replaced + in the pool: &prompt.root; zpool replace storage da1 From here, the status may be checked again, this time - without the flag to get state - information: + without so that all pools + are shown: &prompt.root; zpool status storage pool: storage @@ -419,8 +418,7 @@ config: da2 ONLINE 0 0 0 errors: No known data errors - - As shown from this example, everything appears to be + In this example, everything is normal. @@ -434,20 +432,20 @@ errors: No known data errors &prompt.root; zfs set checksum=off storage/home - Doing so is not recommended as - checksums take very little storage space and are used to check - data integrity using checksum verification in a process is - known as scrubbing. To verify the data - integrity of the storage pool, issue this + Doing so is not recommended. + Checksums take very little storage space and provide + data integrity. Checksum verification is + known as scrubbing. Verify the data + integrity of the storage pool, with this command: &prompt.root; zpool scrub storage - This process may take considerable time depending on the - amount of data stored. It is also very I/O - intensive, so much so that only one scrub may be run at any + The duration of a scrub depends on the + amount of data stored. Large amounts of data can take a considerable amount of time to verify. It is also very I/O + intensive, so much so that only one scrub> may be run at any given time. After the scrub has completed, the status is - updated and may be viewed by issuing a status request: + updated and may be viewed with a status request: &prompt.root; zpool status storage pool: storage @@ -466,6 +464,7 @@ errors: No known data errors The completion time is displayed and helps to ensure data integrity over a long period of time. + Refer to &man.zfs.8; and &man.zpool.8; for other ZFS options. @@ -484,14 +483,14 @@ errors: No known data errors involves making a number of decisions that are relatively permanent because the structure of the pool cannot be changed after the pool has been created. The most important - decision is what type(s) of vdevs to group the physical disks + decision is what types of vdevs to group the physical disks into. See the list of vdev types for details about - the possible options. Once the pool has been created, most + the possible options. After the pool has been created, most vdev types do not allow additional disks to be added to the vdev. The exceptions are mirrors, which allow additional disks to be added to the vdev, and stripes, which can be - upgraded to mirrors by attaching an additional to the vdev. 
+ upgraded to mirrors by attaching an additional disk to the vdev. Although additional vdevs can be added to a pool, the layout of the pool cannot be changed once the pool has been created, instead the data must be backed up and the pool @@ -503,22 +502,22 @@ errors: No known data errors Adding & Removing Devices - Adding additional disks to a zpool can be broken down into - two separate cases, attaching an additional disk to an + Adding disks to a zpool can be broken down into + two separate cases: attaching a disk to an existing vdev with the zpool attach - command, or adding additional vdevs to the pool with the + command, or adding vdevs to the pool with the zpool add command. Only some vdev types allow disks to - be added to the vdev after the fact. + be added to the vdev after creation. - When adding additional disks to the existing vdev is not - an option, such as in the case of RAID-Z, the other option is - to add an additional vdev to the pool. It is possible, but + When adding disks to the existing vdev is not + an option, as in the case of RAID-Z, the other option is + to add a vdev to the pool. It is possible, but discouraged, to mix vdev types. ZFS stripes data across each - of the vdevs, for example if there are two mirror vdevs, then + of the vdevs. For example, if there are two mirror vdevs, then this is effectively a RAID 10, striping the writes across the two sets of mirrors. Because of the way that space is - allocated in ZFS in order to attempt to have each vdev reach + allocated in ZFS to attempt to have each vdev reach 100% full at the same time, there is a performance penalty if the vdevs have different amounts of free space. @@ -539,24 +538,23 @@ errors: No known data errors Replacing a Working Devices There are a number of situations in which it may be - desirable to replacing a disk with a different disk. This + desirable to replace a disk with a different disk. This process requires connecting the new disk at the same time as the disk to be replaced. The zpool replace command will copy all of the - data from the old disk to the new one. Once this operation + data from the old disk to the new one. After this operation completes, the old disk is disconnected from the vdev. If the - newer disk is larger this may allow your zpool to grow, see - the Growing a Pool - section. + new disk is larger than the old disk, it may be possible to grow the zpool, using the new space. See + Growing a Pool. Dealing with Failed Devices When a disk fails and the physical device is replaced, ZFS - needs to be told to begin the resilver operation, where - the data that was on the failed device will be recalculated + data that was on the failed device will be recalculated from the available redundancy and written to the new device. @@ -565,54 +563,57 @@ errors: No known data errors Growing a Pool The usable size of a redundant ZFS pool is limited by the - size of the smallest device in the vdev. If you sequentially - replace each device in the vdev then when the smallest device + size of the smallest device in the vdev. If each device in the vdev is replaced sequentially, + after the smallest device has completed the replace or resilver operation, the pool - can then grow based on the size of the new smallest device. + can grow based on the size of the new smallest device. This expansion can be triggered with the zpool online command with the -e flag on - each device. Once the expansion of each device is complete, + each device. 
After the expansion of each device, the additional space will be available in the pool. Importing & Exporting Pools - Pools can be exported in preperation for moving them to + Pools can be exported in preparation for moving them to another system. All datasets are unmounted, and each device is marked as exported but still locked so it cannot be used by other disk subsystems. This allows pools to be imported on other machines, other operating systems that support ZFS, and even different hardware architectures (with some caveats, see - the zpool man page). The -f flag can be used to force - exporting a pool, in cases such as when a dataset has open - files. If you force an export, the datasets will be forcibly - unmounted such can have unexpected side effects. - - Importing a pool will automatically mount the datasets, - which may not be the desired behavior. The -N command line - param will skip mounting. The command line parameter -o sets - temporary properties for this import only. The altroot= - property allows you to import a zpool with a base of some - mount point, instead of the root of the file system. If the + &man.zpool.8;). When a dataset has open files, can be used to force the + export of a pool. + causes the datasets to be forcibly + unmounted. This can have unexpected side effects. + + Importing a pool automatically mounts the datasets. + This may not be the desired behavior, and can be prevented with . + sets + temporary properties for this import only. + allows importing a zpool with a base + mount point instead of the root of the file system. If the pool was last used on a different system and was not properly - exported, you may have to force an import with the -f flag. - The -a flag will import all pools that do not appear to be + exported, an import might have to be forced with . + imports all pools that do not appear to be in use by another system. Upgrading a Storage Pool - After FreeBSD has been upgraded, or if a pool has been - imported from a system using an older verison of ZFS, the pool + After upgrading &os;, or if a pool has been + imported from a system using an older version of ZFS, the pool must be manually upgraded to the latest version of ZFS. This - process is unreversable, so consider if the pool may ever need - to be imported on an older system before upgrading. Onle once - the zpool upgrade command has completed - will the newer features of ZFS be available. An upgrade - cannot be undone. The -v flag can be used to see what new - features will be supported by upgrading. + process is unreversible. Consider whether the pool may ever need + to be imported on an older system before upgrading. An upgrade + cannot be undone. + + The newer features of ZFS will not be available until + the zpool upgrade command has completed. + will the newer features of ZFS be available. + can be used to see what new + features will be provided by upgrading. @@ -624,13 +625,13 @@ errors: No known data errors Performance Monitoring - ZFS has a built-in monitoring isystem that can display + ZFS has a built-in monitoring system that can display statistics about I/O happening on the pool in real-time. Additionally, it shows the free and used space on the pool and how much I/O bandwidth is currently utilized for read and write operations. By default, all pools in the system will be - monitored and displayed. A pool name can be provided to just - monitor one pool. A basic example is provided below: + monitored and displayed. 
A pool name can be provided to monitor + just that single pool. A basic example: &prompt.root; zpool iostat capacity operations bandwidth @@ -638,23 +639,23 @@ pool alloc free read write ---------- ----- ----- ----- ----- ----- ----- data 288G 1.53T 2 11 11.3K 57.1K - To monitor I/O activity on the pool continuously, a - number indicating the seconds after which to refresh the - display can be specified. ZFS will then print the next - statistic line after each interval has been reached. Press + To continuously monitor I/O activity on the pool, specify + a number as the last parameter, indicating the number of seconds + to wait between updates. ZFS will print the next + statistic line after each interval. Press CtrlC - to stop this continuous monitoring. Alternatively, a second - whole number can be provided on the command line after the - interval to indicate how many of these statistics should be - displayed in total. + to stop this continuous monitoring. Alternatively, give a second + number on the command line after the + interval to specify the total number of statistics to + display. Even more detailed pool I/O statistics can be - displayed using the -v parameter. For - each storage device that is part of the pool ZFS will - provide a separate statistic line. This is helpful to + displayed with parameter. + Each storage device in the pool will be shown with a + separate statistic line. This is helpful to determine reads and writes on devices that slow down I/O on - the whole pool. In the following example, we have a + the whole pool. The following example shows a mirrored pool consisting of two devices. For each of these, a separate line is shown with the current I/O activity.
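 A hedged sketch of such an invocation (the pool name data comes from the earlier iostat output; the five-second refresh interval is an arbitrary choice): per-device statistics for that pool are printed every five seconds until interrupted with CtrlC.

&prompt.root; zpool iostat -v data 5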