From owner-svn-doc-projects@FreeBSD.ORG Tue Oct 29 05:25:32 2013
From: Warren Block <wblock@FreeBSD.org>
Date: Tue, 29 Oct 2013 05:25:32 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject: svn commit: r43070 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-Id: <201310290525.r9T5PWHH007869@svn.freebsd.org>
List-Id: SVN commit messages for doc projects trees
X-SVN-Group: doc-projects
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Author: wblock
Date: Tue Oct 29 05:25:31 2013
New Revision: 43070
URL: http://svnweb.freebsd.org/changeset/doc/43070

Log:
  Apply the latest patch from Allan Jude.

  Submitted by:  Allan Jude

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Tue Oct 29 02:03:09 2013	(r43069)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Tue Oct 29 05:25:31 2013	(r43070)
@@ -12,6 +12,16 @@ Rhodes
     Written by
+
+    Allan
+    Jude
+    Written by
+
+    Benedict
+    Reuschling
+    Written by
+
@@ -470,12 +480,52 @@ errors: No known data errors

     Creating & Destroying Storage Pools

+    Creating a ZFS Storage Pool (zpool) involves making a number of
+    decisions that are relatively permanent, because the structure of
+    the pool cannot be changed after the pool has been created.  The
+    most important decision is what type(s) of vdevs to group the
+    physical disks into.  See the list of vdev types for details
+    about the possible options.  Once the pool has been created, most
+    vdev types do not allow additional disks to be added to the vdev.
+    The exceptions are mirrors, which allow additional disks to be
+    added to the vdev, and stripes, which can be upgraded to mirrors
+    by attaching an additional disk to the vdev.  Although additional
+    vdevs can be added to a pool, the layout of the pool cannot be
+    changed once the pool has been created; instead, the data must be
+    backed up and the pool recreated.
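
    As a rough sketch of the commands involved (the pool name mypool
    and the ada device names are placeholders), a pool built from a
    single two-disk mirror vdev, or from a single three-disk RAID-Z
    vdev, could be created with:

      # zpool create mypool mirror ada1 ada2
      # zpool create mypool raidz ada1 ada2 ada3

    The vdev type chosen at creation time is the part that cannot be
    changed later; the following sections cover what can still be
    attached or added.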
+
+    Adding & Removing Devices
+
+    Adding disks to a zpool breaks down into two separate cases:
+    attaching an additional disk to an existing vdev with the
+    zpool attach command, or adding additional vdevs to the pool
+    with the zpool add command.  Only some vdev types allow disks
+    to be added to the vdev after the fact.
+
+    When adding disks to the existing vdev is not an option, as in
+    the case of RAID-Z, the alternative is to add another vdev to
+    the pool.  It is possible, but discouraged, to mix vdev types.
+    ZFS stripes data across each of the vdevs.  For example, if
+    there are two mirror vdevs, this is effectively RAID 10,
+    striping writes across the two sets of mirrors.  Because space
+    is allocated so that each vdev reaches 100% full at the same
+    time, there is a performance penalty if the vdevs have
+    different amounts of free space.
+
+    Currently, vdevs cannot be removed from a zpool, and disks can
+    only be removed from a mirror if there is enough remaining
+    redundancy.
+
     Creating a ZFS Storage Pool (zpool) involves making a number of
     decisions that are relatively permanent.  Although additional
     vdevs can be added to a pool,
@@ -485,22 +535,84 @@ errors: No known data errors
     zpool.
+
+    Replacing a Working Device
+
+    There are a number of situations in which it may be desirable
+    to replace a disk with a different disk.  This process requires
+    connecting the new disk at the same time as the disk to be
+    replaced.  The zpool replace command copies all of the data
+    from the old disk to the new one.  Once this operation
+    completes, the old disk is disconnected from the vdev.  If the
+    new disk is larger, the zpool may be able to grow; see the
+    Growing a Pool section.
+

     Dealing with Failed Devices
-
+    When a disk fails and the physical device is replaced, ZFS
+    needs to be told to begin the resilver operation, in which the
+    data that was on the failed device is recalculated from the
+    available redundancy and written to the new device.
+
+
+    Growing a Pool
+
+    The usable size of a redundant ZFS pool is limited by the size
+    of the smallest device in the vdev.  If each device in the vdev
+    is replaced sequentially, then once the smallest device has
+    completed the replace or resilver operation, the pool can grow
+    based on the size of the new smallest device.  This expansion
+    is triggered by running the zpool online command with the -e
+    flag on each device.  Once the expansion of each device is
+    complete, the additional space becomes available in the pool.
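
    A minimal sketch of the commands discussed above (mypool and the
    ada device names are again placeholders): a disk can be attached
    to an existing disk or mirror, a new mirror vdev can be added to
    the pool, and a disk can be replaced and then expanded once every
    disk in the vdev has been swapped for a larger one:

      # zpool attach mypool ada1 ada3
      # zpool add mypool mirror ada4 ada5
      # zpool replace mypool ada1 ada6
      # zpool online -e mypool ada6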

     Importing & Exporting Pools
-
+    Pools can be exported in preparation for moving them to another
+    system.  All datasets are unmounted, and each device is marked
+    as exported but still locked so it cannot be used by other disk
+    subsystems.  This allows pools to be imported on other machines,
+    by other operating systems that support ZFS, and even on
+    different hardware architectures (with some caveats; see the
+    zpool man page).  The -f flag can be used to force exporting a
+    pool, in cases such as when a dataset has open files.  If an
+    export is forced, the datasets are forcibly unmounted, which can
+    have unexpected side effects.
+
+    Importing a pool automatically mounts the datasets, which may
+    not be the desired behavior.  The -N command line parameter
+    skips mounting.  The -o command line parameter sets temporary
+    properties for this import only.  The altroot= property allows
+    a zpool to be imported with a base of some mount point other
+    than the root of the file system.  If the pool was last used on
+    a different system and was not properly exported, the import
+    may have to be forced with the -f flag.  The -a flag imports
+    all pools that do not appear to be in use by another system.
+

     Upgrading a Storage Pool
-
+    After FreeBSD has been upgraded, or if a pool has been imported
+    from a system using an older version of ZFS, the pool must be
+    manually upgraded to the latest version of ZFS.  This process is
+    not reversible and cannot be undone, so consider whether the
+    pool may ever need to be imported on an older system before
+    upgrading.  Only once the zpool upgrade command has completed
+    will the newer features of ZFS be available.  The -v flag shows
+    which new features will be supported by upgrading.
@@ -556,7 +668,7 @@ data                      288G  1.53T
   ada1                      -      -      0      4  5.61K  61.7K
   ada2                      -      -      1      4  5.04K  61.7K
 -----------------------  -----  -----  -----  -----  -----  -----
-
+

     Splitting a Storage Pool
@@ -1389,7 +1501,8 @@ vfs.zfs.vdev.cache.size="5M"
     Snapshot
-    The copy-on-write (COW) design of
+    The <link
+      linkend="zfs-term-cow">copy-on-write</link>
+      (COW) design of
     ZFS allows for nearly instantaneous consistent snapshots with
     arbitrary names.  After taking a snapshot of a dataset (or a
     recursive snapshot of a