From owner-svn-doc-projects@FreeBSD.ORG Sun Jul 14 08:24:02 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id CCC91538; Sun, 14 Jul 2013 08:24:02 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) by mx1.freebsd.org (Postfix) with ESMTP id BFFF5940; Sun, 14 Jul 2013 08:24:02 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r6E8O2Xj048711; Sun, 14 Jul 2013 08:24:02 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r6E8O2AX048710; Sun, 14 Jul 2013 08:24:02 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201307140824.r6E8O2AX048710@svn.freebsd.org> From: Gabor Kovesdan Date: Sun, 14 Jul 2013 08:24:02 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42278 - projects/db5/share/xsl X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sun, 14 Jul 2013 08:24:02 -0000 Author: gabor Date: Sun Jul 14 08:24:02 2013 New Revision: 42278 URL: http://svnweb.freebsd.org/changeset/doc/42278 Log: - Enable hyphenation Modified: projects/db5/share/xsl/freebsd-fo.xsl Modified: projects/db5/share/xsl/freebsd-fo.xsl ============================================================================== --- projects/db5/share/xsl/freebsd-fo.xsl Sat Jul 13 22:45:46 2013 (r42277) +++ projects/db5/share/xsl/freebsd-fo.xsl Sun Jul 14 08:24:02 2013 (r42278) @@ -75,7 +75,7 @@ 1 - false + true @@ -418,4 +418,12 @@ + + + + + + + + From owner-svn-doc-projects@FreeBSD.ORG Mon Jul 15 08:29:05 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 79D66ED1; Mon, 15 Jul 2013 08:29:05 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) by mx1.freebsd.org (Postfix) with ESMTP id 69C89347; Mon, 15 Jul 2013 08:29:05 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r6F8T4pt006606; Mon, 15 Jul 2013 08:29:04 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r6F8T4XK006605; Mon, 15 Jul 2013 08:29:04 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201307150829.r6F8T4XK006605@svn.freebsd.org> From: Gabor Kovesdan Date: Mon, 15 Jul 2013 08:29:04 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42283 - projects/db5/ja_JP.eucJP/share/xsl X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Jul 2013 08:29:05 -0000 Author: gabor Date: Mon Jul 15 08:29:04 2013 New Revision: 42283 URL: 
http://svnweb.freebsd.org/changeset/doc/42283 Log: - Add Japanese customization Modified: projects/db5/ja_JP.eucJP/share/xsl/freebsd-fo.xsl Modified: projects/db5/ja_JP.eucJP/share/xsl/freebsd-fo.xsl ============================================================================== --- projects/db5/ja_JP.eucJP/share/xsl/freebsd-fo.xsl Sun Jul 14 20:59:45 2013 (r42282) +++ projects/db5/ja_JP.eucJP/share/xsl/freebsd-fo.xsl Mon Jul 15 08:29:04 2013 (r42283) @@ -12,4 +12,33 @@ + + + + + IPAPMincho + IPAPGothic + IPAPGothic + + + + 8pt + + + + + + + + + + + + + + 10pt + + From owner-svn-doc-projects@FreeBSD.ORG Mon Jul 15 13:15:07 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 4C1D8496; Mon, 15 Jul 2013 13:15:07 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) by mx1.freebsd.org (Postfix) with ESMTP id 3F6A7A22; Mon, 15 Jul 2013 13:15:07 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r6FDF73M003545; Mon, 15 Jul 2013 13:15:07 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r6FDF67q003543; Mon, 15 Jul 2013 13:15:06 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201307151315.r6FDF67q003543@svn.freebsd.org> From: Gabor Kovesdan Date: Mon, 15 Jul 2013 13:15:06 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42284 - projects/db5/share/xsl X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Jul 2013 13:15:07 -0000 Author: gabor Date: Mon Jul 15 13:15:06 2013 New Revision: 42284 URL: http://svnweb.freebsd.org/changeset/doc/42284 Log: - Like in XHTML, do not generate outer links in TOC entries since TOC entries themselves are cross-refereces, so move a customization to the common part. 
Modified: projects/db5/share/xsl/freebsd-common.xsl projects/db5/share/xsl/freebsd-xhtml-common.xsl Modified: projects/db5/share/xsl/freebsd-common.xsl ============================================================================== --- projects/db5/share/xsl/freebsd-common.xsl Mon Jul 15 08:29:04 2013 (r42283) +++ projects/db5/share/xsl/freebsd-common.xsl Mon Jul 15 13:15:06 2013 (r42284) @@ -29,4 +29,14 @@ png + + + + + + + + + + Modified: projects/db5/share/xsl/freebsd-xhtml-common.xsl ============================================================================== --- projects/db5/share/xsl/freebsd-xhtml-common.xsl Mon Jul 15 08:29:04 2013 (r42283) +++ projects/db5/share/xsl/freebsd-xhtml-common.xsl Mon Jul 15 13:15:06 2013 (r42284) @@ -66,14 +66,6 @@ - - - - - - - - From owner-svn-doc-projects@FreeBSD.ORG Mon Jul 15 15:23:14 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) by hub.freebsd.org (Postfix) with ESMTP id 4A74BC17; Mon, 15 Jul 2013 15:23:14 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) by mx1.freebsd.org (Postfix) with ESMTP id 2385C2E0; Mon, 15 Jul 2013 15:23:14 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r6FFNEKW045332; Mon, 15 Jul 2013 15:23:14 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r6FFNEOs045331; Mon, 15 Jul 2013 15:23:14 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201307151523.r6FFNEOs045331@svn.freebsd.org> From: Gabor Kovesdan Date: Mon, 15 Jul 2013 15:23:14 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42285 - projects/db5/share/xsl X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Jul 2013 15:23:14 -0000 Author: gabor Date: Mon Jul 15 15:23:13 2013 New Revision: 42285 URL: http://svnweb.freebsd.org/changeset/doc/42285 Log: - Enable FOP extensions, which eliminate most of the warnings and turn on some advanced features, like PDF bookmarks. - Allow breaking lines of verbatim elements at spaces. Such a break is denoted with a special arrow symbol. This technique is commonly used in technical books to present long source lines. - Use other monospace font, that has an appropriate arrow symbol for this. - Add some padding to verbatim environments. 
Modified: projects/db5/share/xsl/freebsd-fo.xsl Modified: projects/db5/share/xsl/freebsd-fo.xsl ============================================================================== --- projects/db5/share/xsl/freebsd-fo.xsl Mon Jul 15 13:15:06 2013 (r42284) +++ projects/db5/share/xsl/freebsd-fo.xsl Mon Jul 15 15:23:13 2013 (r42285) @@ -18,6 +18,9 @@ FO-SPECIFIC PARAMETER SETTINGS --> + + + B5 @@ -76,11 +79,12 @@ true - + 9.5 + DejaVu Sans Mono @@ -139,14 +143,17 @@ 12pt 0 false - no-wrap false preserve preserve start rgb(192,192,192) wrap - + + 3pt + 3pt + 3pt + 3pt From owner-svn-doc-projects@FreeBSD.ORG Mon Jul 15 22:56:24 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) by hub.freebsd.org (Postfix) with ESMTP id 4983F7BB; Mon, 15 Jul 2013 22:56:24 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) by mx1.freebsd.org (Postfix) with ESMTP id 3A7DAF6D; Mon, 15 Jul 2013 22:56:24 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r6FMuO62080788; Mon, 15 Jul 2013 22:56:24 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r6FMuOZZ080787; Mon, 15 Jul 2013 22:56:24 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201307152256.r6FMuOZZ080787@svn.freebsd.org> From: Warren Block Date: Mon, 15 Jul 2013 22:56:24 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42288 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 15 Jul 2013 22:56:24 -0000 Author: wblock Date: Mon Jul 15 22:56:23 2013 New Revision: 42288 URL: http://svnweb.freebsd.org/changeset/doc/42288 Log: Commit Allan Jude's modifications to the ZFS section so we can get to work on it. Submitted by: Allan Jude Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Mon Jul 15 20:59:25 2013 (r42287) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Mon Jul 15 22:56:23 2013 (r42288) @@ -100,81 +100,635 @@ The Z File System (ZFS) The Z file system, originally developed by &sun;, - is designed to use a pooled storage method in that space is only - used as it is needed for data storage. It is also designed for - maximum data integrity, supporting data snapshots, multiple - copies, and data checksums. It uses a software data replication - model, known as RAID-Z. - RAID-Z provides redundancy similar to - hardware RAID, but is designed to prevent - data write corruption and to overcome some of the limitations - of hardware RAID. - - - ZFS Tuning - - Some of the features provided by ZFS - are RAM-intensive, so some tuning may be required to provide - maximum efficiency on systems with limited RAM. 
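Before changing any tunables it can be useful to see what the system is currently doing. As a rough, purely illustrative check, the configured ARC ceiling and its current size can be read with sysctl(8) (the second name assumes the usual ZFS kstat tree on &os;):

&prompt.root; sysctl vfs.zfs.arc_max
&prompt.root; sysctl kstat.zfs.misc.arcstats.size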
- - - Memory - - At a bare minimum, the total system memory should be at - least one gigabyte. The amount of recommended RAM depends - upon the size of the pool and the ZFS features which are - used. A general rule of thumb is 1GB of RAM for every 1TB - of storage. If the deduplication feature is used, a general - rule of thumb is 5GB of RAM per TB of storage to be - deduplicated. While some users successfully use ZFS with - less RAM, it is possible that when the system is under heavy - load, it may panic due to memory exhaustion. Further tuning - may be required for systems with less than the recommended - RAM requirements. - - - - Kernel Configuration - - Due to the RAM limitations of the &i386; platform, users - using ZFS on the &i386; architecture should add the - following option to a custom kernel configuration file, - rebuild the kernel, and reboot: - - options KVA_PAGES=512 - - This option expands the kernel address space, allowing - the vm.kvm_size tunable to be pushed - beyond the currently imposed limit of 1 GB, or the - limit of 2 GB for PAE. To find the - most suitable value for this option, divide the desired - address space in megabytes by four (4). In this example, it - is 512 for 2 GB. - - - - Loader Tunables + is designed to future proof the file system by removing many of + the arbitrary limits imposed on previous file systems. ZFS + allows continuous growth of the pooled storage by adding + additional devices. ZFS allows you to create many file systems + (in addition to block devices) out of a single shared pool of + storage. Space is allocated as needed, so all remaining free + space is available to each file system in the pool. It is also + designed for maximum data integrity, supporting data snapshots, + multiple copies, and cryptographic checksums. It uses a + software data replication model, known as + RAID-Z. RAID-Z provides + redundancy similar to hardware RAID, but is + designed to prevent data write corruption and to overcome some + of the limitations of hardware RAID. + + + ZFS Features and Terminology + + ZFS is a fundamentally different file system because it + is more than just a file system. ZFS combines the roles of + file system and volume manager, enabling additional storage + devices to be added to a live system and having the new space + available on all of the existing file systems in that pool + immediately. By combining the traditionally separate roles, + ZFS is able to overcome previous limitations that prevented + RAID groups being able to grow. Each top level device in a + zpool is called a vdev, which can be a simple disk or a RAID + transformation such as a mirror or RAID-Z array. ZFS file + systems (called datasets), each have access to the combined + free space of the entire pool. As blocks are allocated the + free space in the pool available to of each file system is + decreased. This approach avoids the common pitfall with + extensive partitioning where free space becomes fragmentated + across the partitions. + + + + + + zpool + + A storage pool is the most basic building block + of ZFS. A pool is made up of one or more vdevs, the + underlying devices that store the data. A pool is + then used to create one or more file systems + (datasets) or block devices (volumes). These datasets + and volumes share the pool of remaining free space. + Each pool is uniquely identified by a name and a + GUID. The zpool also controls the + version number and therefore the features available + for use with ZFS. 
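The version in use, and with it the available feature set, can be checked from the command line. A brief sketch, assuming a pool named example (substitute any pool name):

&prompt.root; zpool get version example
&prompt.root; zpool upgrade -v

The second command lists the ZFS versions supported by the running system.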
+ &os; 9.0 and 9.1 include + support for ZFS version 28. Future versions use ZFS + version 5000 with feature flags. This allows + greater cross-compatibility with other + implementations of ZFS. + + + + + vdev Types + + A zpool is made up of one or more vdevs, which + themselves can be a single disk or a group of disks, + in the case of a RAID transform. When multiple vdevs + are used, ZFS spreads data across the vdevs to + increase performance and maximize usable space. + + + + Disk - The most basic type + of vdev is a standard block device. This can be + an entire disk (such as + /dev/ada0 + or + /dev/da0) + or a partition + (/dev/ada0p3). + Contrary to the Solaris documentation, on &os; + there is no performance penalty for using a + partition rather than an entire disk. + + + + + File - In addition to + disks, ZFS pools can be backed by regular files, + this is especially useful for testing and + experimentation. Use the full path to the file + as the device path in the zpool create command. + All vdevs must be atleast 128 MB in + size. + + + + + Mirror - When creating a + mirror, specify the mirror + keyword followed by the list of member devices + for the mirror. A mirror consists of two or + more devices, all data will be written to all + member devices. A mirror vdev will only hold as + much data as its smallest member. A mirror vdev + can withstand the failure of all but one of its + members without losing any data. + + + + A regular single disk vdev can be + upgraded to a mirror vdev at any time using + the zpool attach + command. + + + + + + RAID-Z - + ZFS implements RAID-Z, a variation on standard + RAID-5 that offers better distribution of parity + and eliminates the "RAID-5 write hole" in which + the data and parity information become + inconsistent after an unexpected restart. ZFS + supports 3 levels of RAID-Z which provide + varying levels of redundancy in exchange for + decreasing levels of usable storage. The types + are named RAID-Z1 through Z3 based on the number + of parity devinces in the array and the number + of disks that the pool can operate + without. + + In a RAID-Z1 configuration with 4 disks, + each 1 TB, usable storage will be 3 TB + and the pool will still be able to operate in + degraded mode with one faulted disk. If an + additional disk goes offline before the faulted + disk is replaced and resilvered, all data in the + pool can be lost. + + In a RAID-Z3 configuration with 8 disks of + 1 TB, the volume would provide 5TB of + usable space and still be able to operate with + three faulted disks. Sun recommends no more + than 9 disks in a single vdev. If the + configuration has more disks, it is recommended + to divide them into separate vdevs and the pool + data will be striped across them. + + A configuration of 2 RAID-Z2 vdevs + consisting of 8 disks each would create + something similar to a RAID 60 array. A RAID-Z + group's storage capacity is approximately the + size of the smallest disk, multiplied by the + number of non-parity disks. 4x 1 TB disks + in Z1 has an effective size of approximately + 3 TB, and a 8x 1 TB array in Z3 will + yeild 5 TB of usable space. + + + + + Spare - ZFS has a special + pseudo-vdev type for keeping track of available + hot spares. Note that installed hot spares are + not deployed automatically; they must manually + be configured to replace the failed device using + the zfs replace command. 
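The vdev types described so far map directly onto zpool(8) commands. A minimal sketch, using a placeholder pool named tank and placeholder disks ada1 through ada4 (the replacement itself is performed with zpool replace):

&prompt.root; zpool create tank mirror ada1 ada2
&prompt.root; zpool attach tank ada1 ada3
&prompt.root; zpool add tank spare ada4
&prompt.root; zpool replace tank ada2 ada4

The first command creates a two-way mirror, the second grows it into a three-way mirror, the third registers a hot spare, and the last swaps the spare in for a failed member.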
+ + + + + Log - ZFS Log Devices, also + known as ZFS Intent Log (ZIL) + move the intent log from the regular pool + devices to a dedicated device. The ZIL + accelerates synchronous transactions by using + storage devices (such as + SSDs) that are faster + compared to those used for the main pool. When + data is being written and the application + requests a guarantee that the data has been + safely stored, the data is written to the faster + ZIL storage, then later flushed out to the + regular disks, greatly reducing the latency of + synchronous writes. Log devices can be + mirrored, but RAID-Z is not supported. When + specifying multiple log devices writes will be + load balanced across all devices. + + + + + Cache - Adding a cache vdev + to a zpool will add the storage of the cache to + the L2ARC. Cache devices cannot be mirrored. + Since a cache device only stores additional + copies of existing data, there is no risk of + data loss. + + + + + + Adaptive Replacement + Cache (ARC) + + ZFS uses an Adaptive Replacement Cache + (ARC), rather than a more + traditional Least Recently Used + (LRU) cache. An + LRU cache is a simple list of items + in the cache sorted by when each object was most + recently used; new items are added to the top of the + list and once the cache is full items from the bottom + of the list are evicted to make room for more active + objects. An ARC consists of four + lists; the Most Recently Used (MRU) + and Most Frequently Used (MFU) + objects, plus a ghost list for each. These ghost + lists tracks recently evicted objects to provent them + being added back to the cache. This increases the + cache hit ratio by avoiding objects that have a + history of only being used occasionally. Another + advantage of using both an MRU and + MFU is that scanning an entire + filesystem would normally evict all data from an + MRU or LRU cache + in favor of this freshly accessed content. In the + case of ZFS since there is also an + MFU that only tracks the most + frequently used objects, the cache of the most + commonly accessed blocks remains. + + + + L2ARC + + The L2ARC is the second level + of the ZFS caching system. The + primary ARC is stored in + RAM, however since the amount of + available RAM is often limited, + ZFS can also make use of cache + vdevs. Solid State Disks (SSDs) + are often used as these cache devices due to their + higher speed and lower latency compared to traditional + spinning disks. An L2ARC is entirely optional, but + having one will significantly increase read speeds for + files that are cached on the SSD + instead of having to be read from the regular spinning + disks. The L2ARC can also speed up deduplication + since a DDT that does not fit in + RAM but does fit in the + L2ARC will be much faster than if + the DDT had to be read from disk. + The rate at which data is added to the cache devices + is limited to prevent prematurely wearing out the + SSD with too many writes. Until + the cache is full (the first block has been evicted to + make room), writing to the L2ARC is + limited to the sum of the write limit and the boost + limit, then after that limited to the write limit. A + pair of sysctl values control these rate limits; + vfs.zfs.l2arc_write_max controls + how many bytes are written to the cache per second, + while vfs.zfs.l2arc_write_boost + adds to this limit during the "Turbo Warmup Phase" + (Write Boost). 
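Both auxiliary vdev types are attached with zpool add. A short sketch, assuming a placeholder pool named tank and spare SSDs ada5 and ada6; the two rate-limit sysctls named above can be inspected like any other tunable:

&prompt.root; zpool add tank log ada5
&prompt.root; zpool add tank cache ada6
&prompt.root; sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost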
+ + + + Copy-On-Write + + Unlike a traditional file system, when data is + overwritten on ZFS the new data is written to a + different block rather than overwriting the old data + in place. Only once this write is complete is the + metadata then updated to point to the new location of + the data. This means that in the event of a shorn + write (a system crash or power loss in the middle of + writing a file) the entire original contents of the + file are still available and the incomplete write is + discarded. This also means that ZFS does not require + a fsck after an unexpected shutdown. + + + + Dataset + + + + + + Volume + + In additional to regular file systems (datasets), + ZFS can also create volumes, which are block devices. + Volumes have many of the same features, including + copy-on-write, snapshots, clones and + checksumming. + + + + Snapshot + + The copy-on-write + design of ZFS allows for nearly instantaneous + consistent snapshots with arbitrary names. After + taking a snapshot of a dataset (or a recursive + snapshot of a parent dataset that will include all + child datasets), new data is written to new blocks (as + described above), however the old blocks are not + reclaimed as free space. There are then two versions + of the file system, the snapshot (what the file system + looked like before) and the live file system; however + no additional space is used. As new data is written + to the live file system, new blocks are allocated to + store this data. The apparent size of the snapshot + will grow as the blocks are no longer used in the live + file system, but only in the snapshot. These + snapshots can be mounted (read only) to allow for the + recovery of previous versions of files. It is also + possible to rollback + a live file system to a specific snapshot, undoing any + changes that took place after the snapshot was taken. + Each block in the zpool has a reference counter which + indicates how many snapshots, clones, datasets or + volumes make use of that block. As files and + snapshots are deleted, the reference count is + decremented; once a block is no longer referenced, it + is reclaimed as free space. Snapshots can also be + marked with a hold, + once a snapshot is held, any attempt to destroy it + will return an EBUY error. Each snapshot can have + multiple holds, each with a unique name. The release + command removes the hold so the snapshot can then be + deleted. Snapshots can be taken on volumes, however + they can only be cloned or rolled back, not mounted + independently. + + + + Clone + + Snapshots can also be cloned; a clone is a + writable version of a snapshot, allowing the file + system to be forked as a new dataset. As with a + snapshot, a clone initially consumes no additional + space, only as new data is written to a clone and new + blocks are allocated does the apparent size of the + clone grow. As blocks are overwritten in the cloned + file system or volume, the reference count on the + previous block is decremented. The snapshot upon + which a clone is based cannot be deleted because the + clone is dependeant upon it (the snapshot is the + parent, and the clone is the child). Clones can be + promoted, reversing this + dependeancy, making the clone the parent and the + previous parent the child. This operation requires no + additional space, however it will change the way the + used space is accounted. + + + + Checksum + + Every block that is allocated is also checksummed + (which algorithm is used is a per dataset property, + see: zfs set). 
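For example, the current algorithm can be shown with zfs get and changed with zfs set; the dataset name below is only a placeholder:

&prompt.root; zfs get checksum example/data
&prompt.root; zfs set checksum=sha256 example/data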
ZFS transparently validates the + checksum of each block as it is read, allowing ZFS to + detect silent corruption. If the data that is read + does not match the expected checksum, ZFS will attempt + to recover the data from any available redundancy + (mirrors, RAID-Z). You can trigger the validation of + all checksums using the scrub + command. The available checksum algorithms include: + + fletcher2 + fletcher4 + sha256 + The fletcher algorithms are faster, + but sha256 is a strong cryptographic hash and has a + much lower chance of a collisions at the cost of some + performance. Checksums can be disabled but it is + inadvisable. + + + + Compression + + Each dataset in ZFS has a compression property, + which defaults to off. This property can be set to + one of a number of compression algorithms, which will + cause all new data that is written to this dataset to + be compressed as it is written. In addition to the + reduction in disk usage, this can also increase read + and write throughput, as only the smaller compressed + version of the file needs to be read or + written. + LZ4 compression is only available after &os; + 9.2 + + + + + Deduplication + + ZFS has the ability to detect duplicate blocks of + data as they are written (thanks to the checksumming + feature). If deduplication is enabled, instead of + writing the block a second time, the reference count + of the existing block will be increased, saving + storage space. In order to do this, ZFS keeps a + deduplication table (DDT) in + memory, containing the list of unique checksums, the + location of that block and a reference count. When + new data is written, the checksum is calculated and + compared to the list. If a match is found, the data + is considered to be a duplicate. When deduplication + is enabled, the checksum algorithm is changed to + SHA256 to provide a secure + cryptographic hash. ZFS deduplication is tunable; if + dedup is on, then a matching checksum is assumed to + mean that the data is identical. If dedup is set to + verify, then the data in the two blocks will be + checked byte-for-byte to ensure it is actually + identical and if it is not, the hash collision will be + noted by ZFS and the two blocks will be stored + separately. Due to the nature of the + DDT, having to store the hash of + each unique block, it consumes a very large amount of + memory (a general rule of thumb is 5-6 GB of ram + per 1 TB of deduplicated data). In situations + where it is not practical to have enough + RAM to keep the entire DDT in + memory, performance will suffer greatly as the DDT + will need to be read from disk before each new block + is written. Deduplication can make use of the L2ARC + to store the DDT, providing a middle ground between + fast system memory and slower disks. It is advisable + to consider using ZFS compression instead, which often + provides nearly as much space savings without the + additional memory requirement. + + + + Scrub + + In place of a consistency check like fsck, ZFS + has the scrub command, which reads + all data blocks stored on the pool and verifies their + checksums them against the known good checksums stored + in the metadata. This periodic check of all the data + stored on the pool ensures the recovery of any + corrupted blocks before they are needed. A scrub is + not required after an unclean shutdown, but it is + recommended that you run a scrub at least once each + quarter. 
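A scrub is started with zpool scrub and its progress can be followed with zpool status; a brief sketch, assuming a pool named storage (any pool name may be substituted):

&prompt.root; zpool scrub storage
&prompt.root; zpool status storage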
ZFS compares the checksum for each block as + it is read in the normal course of use, but a scrub + operation makes sure even infrequently used blocks are + checked for silent corruption. + + + + Dataset + Quota + + ZFS provides very fast and accurate dataset, user + and group space accounting in addition to quotes and + space reservations. This gives the administrator fine + grained control over how space is allocated and allows + critical file systems to reserve space to ensure other + file systems do not take all of the free space. + ZFS supports different types of quotas: the + dataset quota, the reference + quota (refquota), the + user + quota, and the + group quota. + + Quotas limit the amount of space that a dataset + and all of its descendants (snapshots of the + dataset, child datasets and the snapshots of those + datasets) can consume. + + + Quotas cannot be set on volumes, as the + volsize property acts as an + implicit quota. + + + + + Reference + Quota + + A reference quota limits the amount of space a + dataset can consume by enforcing a hard limit on the + space used. However, this hard limit includes only + space that the dataset references and does not include + space used by descendants, such as file systems or + snapshots. + + + + User + Quota + + User quotas are useful to limit the amount of + space that can be used by the specified user. + + + + + Group + Quota + + The group quota limits the amount of space that a + specified group can consume. + + + + Dataset + Reservation + + The reservation property makes + it possible to guaranteed a minimum amount of space + for the use of a specific dataset and its descendants. + This means that if a 10 GB reservation is set on + storage/home/bob, if another + dataset tries to use all of the free space, at least + 10 GB of space is reserved for this dataset. If + a snapshot is taken of + storage/home/bob, the space used + by that snapshot is counted against the reservation. + The refreservation + property works in a similar way, except it + excludes descendants, such as + snapshots. + Reservations of any sort are useful + in many situations, such as planning and testing the + suitability of disk space allocation in a new + system, or ensuring that enough space is available + on file systems for audio logs or system recovery + procedures and files. + + + + Reference + Reservation + + The refreservation property + makes it possible to guaranteed a minimum amount of + space for the use of a specific dataset + excluding its descendants. This + means that if a 10 GB reservation is set on + storage/home/bob, if another + dataset tries to use all of the free space, at least + 10 GB of space is reserved for this dataset. In + contrast to a regular reservation, + space used by snapshots and decendant datasets is not + counted against the reservation. As an example, if a + snapshot was taken of + storage/home/bob, enough disk + space would have to exist outside of the + refreservation amount for the + operation to succeed because descendants of the main + data set are not counted by the + refreservation amount and so do not + encroach on the space set. + + + + Resilver + + + + + + + + - The kmem address space can - be increased on all &os; architectures. 
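The current limits can be confirmed at run time before deciding whether tuning is needed; an illustrative check with sysctl(8):

&prompt.root; sysctl vm.kmem_size vm.kmem_size_max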
On a test system - with one gigabyte of physical memory, success was achieved - with the following options added to - /boot/loader.conf, and the system - restarted: + + What Makes ZFS Different - vm.kmem_size="330M" -vm.kmem_size_max="330M" -vfs.zfs.arc_max="40M" -vfs.zfs.vdev.cache.size="5M" - - For a more detailed list of recommendations for - ZFS-related tuning, see . - + - - Using <acronym>ZFS</acronym> + + <acronym>ZFS</acronym> Quick Start Guide There is a start up mechanism that allows &os; to mount ZFS pools during system @@ -189,8 +743,8 @@ vfs.zfs.vdev.cache.size="5M"da0, da1, and da2. - Users of IDE hardware should instead use - ad + Users of SATA hardware should instead use + ada device names. @@ -200,7 +754,7 @@ vfs.zfs.vdev.cache.size="5M"zpool: - &prompt.root; zpool create example /dev/da0 + &prompt.root; zpool create example /dev/da0 To view the new pool, review the output of df: @@ -324,7 +878,9 @@ example/data 17547008 0 175 There is no way to prevent a disk from failing. One method of avoiding data loss due to a failed hard disk is to implement RAID. ZFS - supports this feature in its pool design. + supports this feature in its pool design. RAID-Z pools + require 3 or more disks but yield more usable space than + mirrored pools. To create a RAID-Z pool, issue the following command and specify the disks to add to the @@ -333,7 +889,7 @@ example/data 17547008 0 175 &prompt.root; zpool create storage raidz da0 da1 da2 - &sun; recommends that the amount of devices used in + &sun; recommends that the number of devices used in a RAID-Z configuration is between three and nine. For environments requiring a single pool consisting of 10 disks or more, consider breaking it up @@ -553,42 +1109,126 @@ errors: No known data errors Refer to &man.zfs.8; and &man.zpool.8; for other ZFS options. + - - ZFS Quotas + + <command>zpool</command> Administration - ZFS supports different types of quotas: the refquota, - the general quota, the user quota, and the group quota. - This section explains the basics of each type and includes - some usage instructions. - - Quotas limit the amount of space that a dataset and its - descendants can consume, and enforce a limit on the amount - of space used by filesystems and snapshots for the - descendants. Quotas are useful to limit the amount of space - a particular user can use. + - - Quotas cannot be set on volumes, as the - volsize property acts as an implicit - quota. - + + Creating & Destroying Storage Pools + + + + + + Adding & Removing Devices + + + + + + Dealing with Failed Devices + + + + + + Importing & Exporting Pools + + + + + + Upgrading a Storage Pool + + + + + + Checking the Status of a Pool + + + + + + Performance Monitoring + + + + + + Splitting a Storage Pool + + + + + + + <command>zfs</command> Administration + + + + + Creating & Destroying Datasets + + + + + + Creating & Destroying Volumes + + + - The - refquota=size - limits the amount of space a dataset can consume by - enforcing a hard limit on the space used. However, this - hard limit does not include space used by descendants, such - as file systems or snapshots. 
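Whether descendant space counts against the limit is the difference between quota and refquota. As an illustrative check of how a dataset's usage breaks down between referenced data, snapshots, and children (any dataset name may be substituted):

&prompt.root; zfs list -o space storage/home/bob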
+ + Renaming a Dataset - To enforce a general quota of 10 GB for + + + + + Setting Dataset Properties + + + + + + Managing Snapshots + + + + + + Managing Clones + + + + + + ZFS Replication + + + + + + Dataset, User and Group Quotes + + To enforce a dataset quota of 10 GB for storage/home/bob, use the following: &prompt.root; zfs set quota=10G storage/home/bob - User quotas limit the amount of space that can be used - by the specified user. The general format is + To enforce a reference quota of 10 GB for + storage/home/bob, use the + following: + + &prompt.root; zfs set refquota=10G storage/home/bob + + The general + format is userquota@user=size, and the user's name must be in one of the following formats: @@ -622,8 +1262,8 @@ errors: No known data errors - For example, to enforce a quota of 50 GB for a user - named joe, use the + For example, to enforce a user quota of 50 GB + for a user named joe, use the following: &prompt.root; zfs set userquota@joe=50G @@ -633,15 +1273,17 @@ errors: No known data errors &prompt.root; zfs set userquota@joe=none - User quota properties are not displayed by - zfs get all. - Non-root users can only see their own - quotas unless they have been granted the - userquota privilege. Users with this - privilege are able to view and set everyone's quota. + + User quota properties are not displayed by + zfs get all. + Non-root users can only see their own + quotas unless they have been granted the + userquota privilege. Users with this + privilege are able to view and set everyone's + quota. + - The group quota limits the amount of space that a - specified group can consume. The general format is + The general format for setting a group quota is: groupquota@group=size. To set the quota for the group @@ -678,35 +1320,10 @@ errors: No known data errors &prompt.root; zfs get quota storage/home/bob - - ZFS Reservations + + Reservations - ZFS supports two types of space reservations. This - section explains the basics of each and includes some usage - instructions. - - The reservation property makes it - possible to reserve a minimum amount of space guaranteed - for a dataset and its descendants. This means that if a - 10 GB reservation is set on - storage/home/bob, if disk - space gets low, at least 10 GB of space is reserved - for this dataset. The refreservation - property sets or indicates the minimum amount of space - guaranteed to a dataset excluding descendants, such as - snapshots. As an example, if a snapshot was taken of - storage/home/bob, enough disk space - would have to exist outside of the - refreservation amount for the operation - to succeed because descendants of the main data set are - not counted by the refreservation - amount and so do not encroach on the space set. - - Reservations of any sort are useful in many situations, - such as planning and testing the suitability of disk space - allocation in a new system, or ensuring that enough space is - available on file systems for system recovery procedures and - files. + The general format of the reservation property is @@ -733,6 +1350,141 @@ errors: No known data errors &prompt.root; zfs get reservation storage/home/bob &prompt.root; zfs get refreservation storage/home/bob + + + Compression + + + + + + Deduplication + + + + + + Delegated Administration + + + *** DIFF OUTPUT TRUNCATED AT 1000 LINES ***