From owner-svn-doc-projects@FreeBSD.ORG Sat May 17 04:28:45 2014
From: Benedict Reuschling
Date: Sat, 17 May 2014 04:28:44 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject: svn commit: r44851 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs
Message-Id: <201405170428.s4H4SicD082033@svn.freebsd.org>

Author: bcr
Date: Sat May 17 04:28:44 2014
New Revision: 44851
URL: http://svnweb.freebsd.org/changeset/doc/44851

Log:
  Reducing the output of igor -y chapter.xml to only include those
  sentences where these fill-words actually make sense.  In addition
  to that, add acronym tags around another occurrence of RAM.

  With help from: Allan Jude

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml  Sat May 17 03:35:45 2014  (r44850)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml  Sat May 17 04:28:44 2014  (r44851)
@@ -166,7 +166,7 @@ example 17547136 0 17547136
     This output shows that the example pool has been created and
     mounted.  It is now accessible as a file system.  Files may be
     created on it and
-    users can browse it, as seen in the following example:
+    users can browse it, like in this example:

     &prompt.root; cd /example
     &prompt.root; ls
@@ -232,7 +232,7 @@ example/compressed on /example/compresse
     ZFS datasets, after creation, may be used like any file
     systems.  However, many other features are available which can
     be set on a per-dataset basis.  In the
-    following example, a new file system, data
+    example below, a new file system, data
     is created.  Important files will be stored here, the file
     system is set to keep two copies of each data block:
@@ -345,7 +345,7 @@ example/data 17547008 0 175
     It is possible to write a script to perform regular snapshots
     on user data.  However, over time, snapshots can consume a
     great deal of disk space.
     The previous snapshot can
-    be removed using the following command:
+    be removed using the command:

     &prompt.root; zfs destroy storage/home@08-30-08
@@ -460,7 +460,7 @@ errors: No known data errors
     ZFS uses checksums to verify the integrity of stored data.
     These are enabled automatically upon creation of file systems
     and may be disabled using the
-    following command:
+    command:

     &prompt.root; zfs set checksum=off storage/home
@@ -670,13 +670,13 @@ errors: No known data errors
     Scrubbing a Pool

-    Pools should be
-    scrubbed regularly,
-    ideally at least once every three months.  The
-    scrub operating is very disk-intensive and
-    will reduce performance while running.  Avoid high-demand
-    periods when scheduling scrub or use vfs.zfs.scrub_delay
+    It is recommended that pools be scrubbed regularly, ideally
+    at least once every month.  The scrub
+    operation is very disk-intensive and will reduce performance
+    while running.  Avoid high-demand periods when scheduling
+    scrub or use vfs.zfs.scrub_delay
     to adjust the relative priority of the scrub to prevent it
     interfering with other workloads.
@@ -731,7 +731,7 @@ errors: No known data errors
     interaction of a system administrator during normal pool
     operation.

-    The following example will demonstrate this self-healing
+    The next example will demonstrate this self-healing
     behavior in ZFS.  First, a mirrored pool of two disks
     /dev/ada0 and /dev/ada1 is created.
@@ -824,7 +824,7 @@ errors: No known data errors
     ZFS has detected the error and took care of it by using the
     redundancy present in the unaffected ada0 mirror disk.  A
     checksum comparison
-    with the original one should reveal whether the pool is
+    with the original one will reveal whether the pool is
     consistent again.

     &prompt.root; sha1 /healer >> checksum.txt
@@ -873,9 +873,8 @@ errors: No known data errors
     ada0 and corrects all data that has a wrong checksum on
     ada1.  This is indicated by the
     (repairing) output from
-    zpool status.  After the
-    operation is complete, the pool status has changed to the
-    following:
+    zpool status.  After the operation is
+    complete, the pool status has changed to:

     &prompt.root; zpool status healer
   pool: healer
@@ -1073,7 +1072,7 @@ History for 'tank':
     pool (consisting of /dev/ada0 and
     /dev/ada1).  In addition to that, the
     hostname (myzfsbox) is also shown in the
-    commands following the pool's creation.  The hostname display
+    commands after the pool's creation.  The hostname display
     becomes important when the pool is exported from the current
     and imported on another system.  The commands that are issued
     on the other system can clearly be distinguished by the
@@ -1317,16 +1316,15 @@ tank custom:costcenter -
     of the most powerful features of ZFS.  A snapshot provides a
     read-only, point-in-time copy of the dataset.  Due to
     ZFS' Copy-On-Write (COW) implementation,
-    snapshots can be created quickly simply by preserving the
-    older version of the data on disk.  When no snapshot is
-    created, ZFS simply reclaims the space for future use.
-    Snapshots preserve disk space by recording only the
-    differences that happened between snapshots.  ZFS allows
-    snapshots only on whole datasets, not on individual files or
-    directories.  When a snapshot is created from a dataset,
-    everything contained in it, including the filesystem
-    properties, files, directories, permissions, etc., is
-    duplicated.
+    snapshots can be created quickly by preserving the older
+    version of the data on disk.  When no snapshot is created, ZFS
+    reclaims the space for future use.  Snapshots preserve disk
+    space by recording only the differences that happened between
+    snapshots.  ZFS allows snapshots only on whole datasets, not
+    on individual files or directories.  When a snapshot is
+    created from a dataset, everything contained in it, including
+    the filesystem properties, files, directories, permissions,
+    etc., is duplicated.

     ZFS Snapshots provide a variety of uses that other
     filesystems with snapshot functionality do not have.  A
@@ -1354,8 +1352,8 @@ tank custom:costcenter -
     Create a snapshot with zfs snapshot
     dataset@snapshotname.
     Adding  creates a snapshot recursively,
-    with the same name on all child datasets.  The following
-    example creates a snapshot of a home directory:
+    with the same name on all child datasets.  This example
+    creates a snapshot of a home directory:

     &prompt.root; zfs snapshot bigpool/work/joe@backup
@@ -1419,7 +1417,7 @@ bigpool/work/joe@after_cp 0 -
     is that still contains a file that was accidentally deleted
     using zfs diff.  Doing this for the two snapshots that were
     created in the previous section yields
-    the following output:
+    this output:

     &prompt.root; zfs list -rt all bigpool/work/joe
   NAME USED AVAIL REFER MOUNTPOINT
@@ -1435,7 +1433,7 @@ M /usr/home/bcr/
     bigpool/work/joe@after_cp) and the one
     provided as a parameter to zfs diff.
     The first column indicates the type of
-    change according to the following table:
+    change according to this table:
@@ -1532,7 +1530,7 @@ santaletter.txt summerholiday.txt
     to get them back using rollbacks, but only when snapshots of
     important data are performed on a regular basis.  To get the
     files back and start over from the last snapshot, issue the
-    following command:
+    command:

     &prompt.root; zfs rollback bigpool/work/joe@summerplan
     &prompt.user; ls
     santaletter.txt summerholiday.txt

     The rollback operation restored the dataset to the state of
     the last snapshot.  It is also possible to roll back to a
     snapshot that was taken much earlier and has other snapshots
-    following after it.  When trying to do this, ZFS will issue
-    the following warning:
+    that were created after it.  When trying to do this, ZFS
+    will issue this warning:

     &prompt.root; zfs list -t snapshot
   NAME USED AVAIL REFER MOUNTPOINT
@@ -1611,11 +1609,11 @@ bigpool/work/joe snapdir hidden
     dataset.  The directory structure below
     .zfs/snapshot has a directory named
     exactly like the snapshots taken earlier to make it
-    easier to identify them.  In the following example, it is
-    assumed that a file should be restored from the hidden
-    .zfs directory by
-    copying it from the snapshot that contained the latest
-    version of the file:
+    easier to identify them.  In the next example, it is assumed
+    that a file is to be restored from the hidden
+    .zfs directory by copying it
+    from the snapshot that contained the latest version of the
+    file:

     &prompt.root; ls .zfs/snapshot
     santa summerplan
     summerholiday.txt

     snapdir could be set to hidden and it would still be possible
     to list the contents of that directory.  It is up to the
     administrator to decide whether
-    these directories should be displayed.  Of course, it is
+    these directories will be displayed.  Of course, it is
     possible to display these for certain datasets and prevent it
     for others.  Copying files or directories from these hidden
     .zfs/snapshot is simple enough.
     Trying it the other way around results in
-    the following error:
+    this error:

     &prompt.root; cp /etc/rc.conf .zfs/snapshot/santa/
     cp: .zfs/snapshot/santa/rc.conf: Read-only file system
@@ -1678,8 +1676,8 @@ cp: .zfs/snapshot/santa/rc.conf: Read-on
     point within the ZFS filesystem hierarchy, not just below the
     original location of the snapshot.

-    To demonstrate the clone feature, the following example
-    dataset is used:
+    To demonstrate the clone feature, this example dataset is
+    used:

     &prompt.root; zfs list -rt all camino/home/joe
   NAME USED AVAIL REFER MOUNTPOINT
@@ -1718,8 +1716,7 @@ usr/home/joenew 1.3G 31k 1.3G
     snapshot and the clone has been removed by promoting the
     clone using zfs promote, the origin of
     the clone is removed as it is now
-    an independent dataset.  The following example demonstrates
-    this:
+    an independent dataset.  This example demonstrates it:

     &prompt.root; zfs get origin camino/home/joenew
   NAME PROPERTY VALUE SOURCE
@@ -1732,7 +1729,7 @@ camino/home/joenew origin -
     After making some changes like copying
     loader.conf to the promoted clone, for
     example, the old directory becomes obsolete in this case.
-    Instead, the promoted clone should replace it.  This can be
+    Instead, the promoted clone can replace it.  This can be
     achieved by two consecutive commands: zfs destroy on the old
     dataset and zfs rename on the clone to name it like the old
@@ -1781,8 +1778,8 @@ usr/home/joe 1.3G 128k 1.3G
     zfs send and zfs receive, respectively.

-    The following examples will demonstrate the functionality
-    of ZFS replication using these two
+    These examples will demonstrate the functionality of
+    ZFS replication using these two
     pools:

     &prompt.root; zpool list
@@ -1961,8 +1958,8 @@ mypool@replica2
     before this can be done.  Since this chapter is about
     ZFS and not about configuring SSH, it
     only lists the things required to perform the
-    zfs send operation.  The following
-    configuration is required:
+    zfs send operation.  This configuration
+    is required:
@@ -2024,18 +2021,17 @@ vfs.usermount: 0 -> 1
     zfs receive on the remote host
     backuphost via SSH.  A fully qualified domain
-    name or IP address should be used here.  The receiving
-    machine will write the data to
+    name or IP address is recommended here.  The
+    receiving machine will write the data to
     backup dataset on the recvpool pool.  Using
-     with zfs recv
-    will remove the original name of the pool on the receiving
-    side and just takes the name of the snapshot instead.
+     with zfs recv will
+    remove the original name of the pool on the receiving side
+    and just take the name of the snapshot instead.
     causes the filesystem(s) to not be mounted on the receiving
     side.  When  is
-    included, more detail about the transfer is shown.
-    Included are elapsed time and the amount of data
-    transferred.
+    included, more detail about the transfer is shown.  Included
+    are elapsed time and the amount of data transferred.
@@ -2056,20 +2052,19 @@ vfs.usermount: 0 -> 1
     To enforce a dataset quota of 10 GB for
     storage/home/bob, use the
-    following:
+    command:

     &prompt.root; zfs set quota=10G storage/home/bob

     To enforce a reference quota of 10 GB for
     storage/home/bob, use the
-    following:
+    command:

     &prompt.root; zfs set refquota=10G storage/home/bob

     The general format is
     userquota@user=size,
-    and the user's name must be in one of the following
-    formats:
+    and the user's name must be in one of these formats:
@@ -2437,13 +2432,13 @@ mypool/compressed_dataset logicalused
     vfs.zfs.arc_max

-    Sets the maximum size of the ARC.
-    The default is all RAM less 1 GB,
-    or 1/2 of ram, whichever is more.  However a lower value
-    should be used if the system will be running any other
-    daemons or processes that may require memory.  This value
-    can only be adjusted at boot time, and is set in
-    /boot/loader.conf.
+    Sets the maximum size of the ARC.  The
+    default is all RAM less 1 GB, or 1/2
+    of RAM, whichever is more.  However, a
+    lower value should be used if the system will be running any
+    other daemons or processes that may require memory.  This
+    value can only be adjusted at boot time, and is set in
+    /boot/loader.conf.
@@ -2722,7 +2717,7 @@ mypool/compressed_dataset logicalused
     Due to the address space limitations of the &i386; platform,
     ZFS users on the
-    &i386; architecture should add this option to a
+    &i386; architecture must add this option to a
     custom kernel configuration file, rebuild the kernel, and
     reboot:
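
For anyone applying the vfs.zfs.arc_max tuning discussed in the hunk
further up in this diff, a minimal /boot/loader.conf sketch is shown
below; the 4G figure is an assumed example, not a value taken from the
handbook text, and it needs to be sized to the machine at hand:

  # /boot/loader.conf
  # Cap the ZFS ARC (example value; leave headroom for any other
  # daemons or processes that may require memory, as the text
  # above advises).
  vfs.zfs.arc_max="4G"

Loader tunables such as this one only take effect after a reboot.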