From owner-svn-doc-projects@FreeBSD.ORG Fri May 16 14:10:39 2014
Message-Id: <201405161410.s4GEAdlD062501@svn.freebsd.org>
From: Benedict Reuschling
Date: Fri, 16 May 2014 14:10:39 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org
Subject: svn commit: r44847 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs

Author: bcr
Date: Fri May 16 14:10:39 2014
New Revision: 44847
URL: http://svnweb.freebsd.org/changeset/doc/44847

Log:
  Corrections on the ZFS chapter:
  - updates on sysctls for limiting IOPS during a scrub or resilver
  - wording and grammar fixes
  - comment out sections that will come in later once the chapter is
    officially available in the handbook

  Submitted by:	Allan Jude

Modified:
  projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml

Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
==============================================================================
--- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Fri May 16 12:32:45 2014	(r44846)
+++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml	Fri May 16 14:10:39 2014	(r44847)

@@ -671,7 +671,7 @@ errors: No known data errors
     Scrubbing a Pool

     Pools should be
-    Scrubbed regularly,
+    scrubbed regularly,
     ideally at least once every three months.  The
     scrub operation is very disk-intensive and will
     reduce performance while running.  Avoid high-demand

@@ -691,7 +691,7 @@ errors: No known data errors
 config:

 	NAME        STATE     READ WRITE CKSUM
-	mypool      ONLINE       0     0     0
+	mypool      ONLINE       0     0     0
 	  raidz2-0  ONLINE       0     0     0
 	    ada0p3  ONLINE       0     0     0
 	    ada1p3  ONLINE       0     0     0

@@ -701,6 +701,10 @@ config:
 	    ada5p3  ONLINE       0     0     0

 errors: No known data errors
+
+      In the event that a scrub operation needs to be cancelled,
+      issue zpool scrub -s
+      mypool.
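    For reference, the scrub workflow documented in the hunks above
    comes down to three commands; a minimal sketch, using the
    chapter's example pool mypool (status output elided):

&prompt.root; zpool scrub mypool
&prompt.root; zpool status mypool
&prompt.root; zpool scrub -s mypool

    The first command starts the scrub, zpool status shows its
    progress, and zpool scrub -s cancels a scrub that is
    interfering with other work on the pool.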
@@ -1247,17 +1251,20 @@ Filesystem  Size  Used  Avail  Cap
     Renaming a Dataset

     The name of a dataset can be changed with zfs
-    rename.  rename can also be
-    used to change the parent of a dataset.  Renaming a dataset to
-    be under a different parent dataset will change the value of
-    those properties that are inherited by the child dataset.
-    When a dataset is renamed, it is unmounted and then remounted
-    in the new location (inherited from the parent dataset).  This
-    behavior can be prevented with .  Due to
-    the nature of snapshots, they cannot be renamed outside of the
-    parent dataset.  To rename a recursive snapshot, specify
-    , and all snapshots with the same specified
-    snapshot will be renamed.
+    rename.  To change the parent of a dataset,
+    rename can also be used.  Renaming a
+    dataset to be under a different parent dataset will change the
+    value of those properties that are inherited from the parent
+    dataset.  When a dataset is renamed, it is unmounted and then
+    remounted in the new location (which is inherited from the new
+    parent dataset).  This behavior can be prevented with
+    .
+
+    Snapshots can also be renamed in this way.  Due to
+    the nature of snapshots, they cannot be renamed into a
+    different parent dataset.  To rename a recursive snapshot,
+    specify , and all snapshots with the same
+    name in child datasets will also be renamed.

@@ -1314,7 +1321,7 @@ tank  custom:costcenter  -
     older version of the data on disk.  When no snapshot is
     created, ZFS simply reclaims the space for future use.
     Snapshots preserve disk space by recording only the
-    differences that happened between snapshots.  ZFS llow
+    differences that happened between snapshots.  ZFS allows
     snapshots only on whole datasets, not on individual files or
     directories.  When a snapshot is created from a dataset,
     everything contained in it, including the filesystem

@@ -1357,17 +1364,17 @@ NAME                     USED  AVAIL  R
 bigpool/work/joe@backup     0      -  85.5K  -

     Snapshots are not listed by a normal zfs
-    list operation.  In order to list the snapshot
-    that was just created, the option -t
-    snapshot has to be appended to zfs
-    list.  The output clearly indicates that
-    snapshots can not be mounted directly into the system as
-    there is no path shown in the MOUNTPOINT
-    column.  Additionally, there is no mention of available disk
-    space in the AVAIL column as snapshots
-    cannot be written after they are created.  It becomes more
-    clear when comparing the snapshot with the original dataset
-    from which it was created:
+    list operation.  To list the snapshot that was
+    just created, the option -t snapshot has
+    to be appended to zfs list.  The output
+    clearly indicates that snapshots cannot be mounted directly
+    into the system as there is no path shown in the
+    MOUNTPOINT column.  Additionally, there
+    is no mention of available disk space in the
+    AVAIL column as snapshots cannot be
+    written after they are created.  It becomes clearer when
+    comparing the snapshot with the original dataset from which
+    it was created:

 &prompt.root; zfs list -rt all bigpool/work/joe
 NAME                     USED  AVAIL  REFER  MOUNTPOINT
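    To illustrate the listing behaviour described in the rewritten
    paragraph, a short sketch with the dataset used in the chapter's
    examples (the sizes are taken from the surrounding output):

&prompt.root; zfs snapshot bigpool/work/joe@backup
&prompt.root; zfs list -t snapshot
NAME                     USED  AVAIL  REFER  MOUNTPOINT
bigpool/work/joe@backup     0      -  85.5K  -

    The - entries in the AVAIL and MOUNTPOINT columns show that the
    snapshot can neither be written to nor mounted directly.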
@@ -2262,16 +2269,21 @@ dedup = 1.05, compress = 1.11, copies =
     After zdb -S finishes analyzing the
     pool, it shows the space reduction ratio that would be
     achieved by activating deduplication.  In this case,
-    1.16 is a very poor ratio that is mostly
-    influenced by compression.  Activating deduplication on this
-    pool would not save any significant amount of space.  Using
-    the formula dedup * compress / copies =
-    deduplication ratio, system administrators can plan
-    the storage allocation more towards having multiple copies of
-    data or by having a decent compression rate in order to
-    utilize the space savings that deduplication provides.  As a
-    rule of thumb, compression should be used before deduplication
-    due to the much lower memory requirements.
+    1.16 is a very poor space saving ratio that
+    is mostly provided by compression.  Activating deduplication
+    on this pool would not save any significant amount of space,
+    and is not worth the amount of memory required to enable
+    deduplication.  Using the formula dedup * compress /
+    copies = deduplication ratio, system administrators
+    can plan the storage allocation, deciding if the workload will
+    contain enough duplicate blocks to make the memory
+    requirements pay off.  If the data is reasonably compressible,
+    the space savings may be very good and compression can also
+    provide greatly increased performance.  It is recommended to
+    use compression first and only enable deduplication in cases
+    where the additional savings will be considerable and there is
+    sufficient memory for the DDT.

@@ -2567,45 +2579,30 @@ mypool/compressed_dataset  logicalused
-    vfs.zfs.no_scrub_io
-
-    Disable scrub
-    I/O.  Causes scrub to not actually read
-    the data blocks and verify their checksums, effectively
-    turning any scrub in progress into a
-    no-op.  This may be useful if a scrub
-    is interferring with other operations on the pool.  This
-    value can be adjusted at any time with
-    &man.sysctl.8;.
-
-    If this tunable is set to cancel an
-    in-progress scrub, be sure to unset
-    it afterwards or else all future
-    scrub and resilver operations
-    will be ineffective.
-
-    vfs.zfs.scrub_delay
-
-    Determines the milliseconds of delay inserted between
+    Determines the number of ticks to delay between
     each I/O during a
     scrub.  To ensure that a
     scrub does not interfere with the normal
     operation of the pool, if any other I/O is
     happening the scrub will
-    delay between each command.  This value allows you to
-    limit the total IOPS (I/Os Per Second)
-    generated by the scrub.  The default
-    value is 4, resulting in a limit of: 1000 ms / 4 =
+    delay between each command.  This value controls the limit
+    on the total IOPS (I/Os Per Second)
+    generated by the scrub.  The
+    granularity of the setting is determined by the value of
+    kern.hz which defaults to 1000 ticks
+    per second.  This setting may be changed, resulting in
+    a different effective IOPS limit.  The
+    default value is 4, resulting in a limit of:
+    1000 ticks/sec / 4 =
     250 IOPS.  Using a value of
     20 would give a limit of:
-    1000 ms / 20 = 50 IOPS.  The
-    speed of scrub is only limited when
-    there has been only recent activity on the pool, as
-    determined by IOPS.
+    1000 ticks/sec / 20 = 50 IOPS.  The speed of
+    scrub is only limited when there has
+    been recent activity on the pool, as determined by
+    vfs.zfs.scan_idle.
     This value can be adjusted at any time with
     &man.sysctl.8;.

@@ -2620,10 +2617,15 @@ mypool/compressed_dataset  logicalused
     that a resilver does not interfere
     with the normal operation of the pool, if any other
     I/O is happening the
     resilver will delay
-    between each command.  This value allows you to limit the
+    between each command.  This value controls the limit of
     total IOPS (I/Os Per Second) generated
-    by the resilver.  The default value is
-    2, resulting in a limit of: 1000 ms / 2 =
+    by the resilver.  The granularity of
+    the setting is determined by the value of
+    kern.hz which defaults to 1000 ticks
+    per second.  This setting may be changed, resulting in
+    a different effective IOPS limit.  The
+    default value is 2, resulting in a limit of:
+    1000 ticks/sec / 2 =
     500 IOPS.

     Returning the pool to an Online
     state may be more important if another device failing could
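    Both delays can be inspected and changed at runtime with
    &man.sysctl.8;; a sketch, assuming the default kern.hz of
    1000 (the printed values are illustrative):

&prompt.root; sysctl kern.hz
kern.hz: 1000
&prompt.root; sysctl vfs.zfs.scrub_delay=20
vfs.zfs.scrub_delay: 4 -> 20

    With kern.hz at 1000, the new value of 20 ticks limits the
    scrub to 1000 / 20 = 50 IOPS, matching the calculation in
    the text.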
+    <acronym>ZFS</acronym> on i386

@@ -2851,10 +2855,10 @@ vfs.zfs.vdev.cache.size="5M"
     &os; 9.0 and 9.1 include support for
-    ZFS version 28.  Future versions
+    ZFS version 28.  Later versions
     use ZFS version 5000 with feature
-    flags.  This allows greater cross-compatibility with
-    other implementations of
+    flags.  The new feature flags system allows greater
+    cross-compatibility with other implementations of
     ZFS.

@@ -3407,7 +3411,7 @@

@@ -3476,7 +3480,7 @@ vfs.zfs.vdev.cache.size="5M"
     vfs.zfs.scrub_delay to
     prevent the scrub from degrading the performance of
-    other workloads on your pool.
+    other workloads on the pool.

@@ -3563,7 +3567,8 @@ vfs.zfs.vdev.cache.size="5M"
+    and files.
+
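    As a follow-up to the version 28 / feature flags hunk above,
    whether a pool still uses a legacy version or already uses
    feature flags can be checked along these lines (output is
    illustrative and varies by release):

&prompt.root; zpool get version mypool
NAME    PROPERTY  VALUE    SOURCE
mypool  version   -        default
&prompt.root; zpool upgrade -v

    A VALUE of - indicates a feature flags (version 5000) pool;
    zpool upgrade -v lists the ZFS versions and feature flags
    supported by the running system.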