Date: Sun, 20 Apr 2025 12:00:25 GMT
From: Benedict Reuschling <bcr@FreeBSD.org>
To: doc-committers@FreeBSD.org, dev-commits-doc-all@FreeBSD.org
Subject: git: 04bb6ab4fb - main - Whitespace change: remove them from end of lines
Message-ID: <202504201200.53KC0PH2091942@gitrepo.freebsd.org>
The branch main has been updated by bcr:

URL: https://cgit.FreeBSD.org/doc/commit/?id=04bb6ab4fb212e926e8ee08ba6b3ed032cf2e0a1

commit 04bb6ab4fb212e926e8ee08ba6b3ed032cf2e0a1
Author:     Benedict Reuschling <bcr@FreeBSD.org>
AuthorDate: 2025-04-20 11:59:14 +0000
Commit:     Benedict Reuschling <bcr@FreeBSD.org>
CommitDate: 2025-04-20 11:59:14 +0000

    Whitespace change: remove them from end of lines

    Purely cosmetic change.
---
 .../content/en/books/handbook/zfs/_index.adoc | 28 +++++++++++-----------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/documentation/content/en/books/handbook/zfs/_index.adoc b/documentation/content/en/books/handbook/zfs/_index.adoc
index a4915c24cd..42ba9f0bf8 100644
--- a/documentation/content/en/books/handbook/zfs/_index.adoc
+++ b/documentation/content/en/books/handbook/zfs/_index.adoc
@@ -207,7 +207,7 @@ It assumes the file system contains important files and configures it to store t
 # zfs set copies=2 example/data
 ....
 
-Use `df` to see the data and space usage: 
+Use `df` to see the data and space usage:
 
 [source,shell]
 ....
@@ -497,7 +497,7 @@ crossref:zfs[zfs-term-volume,volumes].
 === Creating and Destroying Storage Pools
 
 Creating a ZFS storage pool requires permanent decisions, as the pool structure cannot change after creation.
-The most important decision is which types of vdevs to group the physical disks into. 
+The most important decision is which types of vdevs to group the physical disks into.
 See the list of crossref:zfs[zfs-term-vdev,vdev types] for details about the possible options.
 After creating the pool, most vdev types do not allow adding disks to the vdev.
 The exceptions are mirrors, which allow adding new disks to the vdev, and stripes, which upgrade to mirrors by attaching a new disk to the vdev.
@@ -1164,7 +1164,7 @@ The pool is now back to a fully working state, with all error counts now zero.
 The smallest device in each vdev limits the usable size of a redundant pool.
 Replace the smallest device with a larger device.
 After completing a crossref:zfs[zfs-zpool-replace,replace] or
-crossref:zfs[zfs-term-resilver,resilver] operation, the pool can grow to use the capacity of the new device. 
+crossref:zfs[zfs-term-resilver,resilver] operation, the pool can grow to use the capacity of the new device.
 For example, consider a mirror of a 1 TB drive and a 2 TB drive.
 The usable space is 1 TB.
 When replacing the 1 TB drive with another 2 TB drive, the resilvering process copies the existing data onto the new drive.
@@ -1445,7 +1445,7 @@ This example shows a mirrored pool with two devices:
 
 [source,shell]
 ....
-# zpool iostat -v 
+# zpool iostat -v
                           capacity     operations    bandwidth
 pool                     alloc   free   read  write   read  write
 -----------------------  -----  -----  -----  -----  -----  -----
@@ -1461,7 +1461,7 @@ data                      288G  1.53T      2     12  9.23K  61.5K
 
 ZFS can split a pool consisting of one or more mirror vdevs into two pools.
 Unless otherwise specified, ZFS detaches the last member of each mirror and creates a new pool containing the same data.
-Be sure to make a dry run of the operation with `-n` first. 
+Be sure to make a dry run of the operation with `-n` first.
 This displays the details of the requested operation without actually performing it.
 This helps confirm that the operation will do what the user intends.
@@ -1475,7 +1475,7 @@ To manage the pool itself, use crossref:zfs[zfs-zpool,`zpool`].
 === Creating and Destroying Datasets
 
 Unlike traditional disks and volume managers, space in ZFS is _not_ preallocated.
-With traditional file systems, after partitioning and assigning the space, there is no way to add a new file system without adding a new disk. 
+With traditional file systems, after partitioning and assigning the space, there is no way to add a new file system without adding a new disk.
 With ZFS, creating new file systems is possible at any time.
 Each crossref:zfs[zfs-term-dataset,_dataset_] has properties including features like compression, deduplication, caching, and quotas, as well as other useful properties like readonly, case sensitivity, network file sharing, and a mount point.
 Nesting datasets within each other is possible and child datasets will inherit properties from their ancestors.
@@ -2086,7 +2086,7 @@ Changing the clone independently from its originating dataset is possible now.
 The connection between the two is the snapshot.
 ZFS records this connection in the property `origin`.
 Promoting the clone with `zfs promote` makes the clone an independent dataset.
-This removes the value of the `origin` property and disconnects the newly independent dataset from the snapshot. 
+This removes the value of the `origin` property and disconnects the newly independent dataset from the snapshot.
 This example shows it:
 
 [source,shell]
 ....
@@ -2279,10 +2279,10 @@ To keep the contents of the file system encrypted in transit and on the remote s
 Change some settings and take security precautions first.
 This describes the necessary steps required for the `zfs send` operation; for more information on SSH, see crossref:security[openssh,"OpenSSH"].
 
-Change the configuration as follows: 
+Change the configuration as follows:
 
 * Passwordless SSH access between sending and receiving host using SSH keys
-* ZFS requires the privileges of the `root` user to send and receive streams. This requires logging in to the receiving system as `root`. 
+* ZFS requires the privileges of the `root` user to send and receive streams. This requires logging in to the receiving system as `root`.
 * Security reasons prevent `root` from logging in by default.
 * Use the crossref:zfs[zfs-zfs-allow,ZFS Delegation] system to allow a non-`root` user on each system to perform the respective send and receive operations.
 
 On the sending system:
@@ -2796,7 +2796,7 @@ This approach avoids the common pitfall with extensive partitioning where free s
 These datasets and volumes share the pool of remaining free space. Each pool is uniquely identified by a name and a GUID. The ZFS version number on the pool determines the features available.
 
 |[[zfs-term-vdev]]vdev Types
-a|A pool consists of one or more vdevs, which themselves are a single disk or a group of disks, transformed to a RAID. When using a lot of vdevs, ZFS spreads data across the vdevs to increase performance and maximize usable space. All vdevs must be at least 128 MB in size. 
+a|A pool consists of one or more vdevs, which themselves are a single disk or a group of disks, transformed to a RAID. When using a lot of vdevs, ZFS spreads data across the vdevs to increase performance and maximize usable space. All vdevs must be at least 128 MB in size.
 
 * [[zfs-term-vdev-disk]] _Disk_ - The most basic vdev type is a standard block device. This can be an entire disk (such as [.filename]#/dev/ada0# or [.filename]#/dev/da0#) or a partition ([.filename]#/dev/ada0p3#). On FreeBSD, there is no performance penalty for using a partition rather than the entire disk. This differs from recommendations made by the Solaris documentation.
@@ -2830,7 +2830,7 @@ A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create som
 storage of the cache to the crossref:zfs[zfs-term-l2arc,L2ARC]. Mirroring cache devices is impossible. Since a cache device stores only new copies of existing data, there is no risk of data loss.
 
 |[[zfs-term-txg]] Transaction Group (TXG)
-|Transaction Groups are the way ZFS groups blocks changes together and writes them to the pool. Transaction groups are the atomic unit that ZFS uses to ensure consistency. ZFS assigns each transaction group a unique 64-bit consecutive identifier. There can be up to three active transaction groups at a time, one in each of these three states: 
+|Transaction Groups are the way ZFS groups blocks changes together and writes them to the pool. Transaction groups are the atomic unit that ZFS uses to ensure consistency. ZFS assigns each transaction group a unique 64-bit consecutive identifier. There can be up to three active transaction groups at a time, one in each of these three states:
 
 * _Open_ - A new transaction group begins in the open state and accepts new writes. There is always a transaction group in the open state, but the
@@ -2918,7 +2918,7 @@ returns an `EBUSY` error. Each snapshot can have holds with a unique name each.
 The crossref:zfs[zfs-zfs-snapshot,release] command removes the hold so the snapshot can deleted. Snapshots, cloning, and rolling back works on volumes, but independently mounting does not.
 
 |[[zfs-term-clone]]Clone
-|Cloning a snapshot is also possible. A clone is a writable version of a snapshot, allowing the file system to fork as a new dataset. As with a snapshot, a clone initially consumes no new space. As new data written to a clone uses new blocks, the size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block decreases. Removing the snapshot upon which a clone bases is impossible because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be _promoted_, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no new space. Since the amount of space used by the parent and child reverses, it may affect existing quotas and reservations. 
+|Cloning a snapshot is also possible. A clone is a writable version of a snapshot, allowing the file system to fork as a new dataset. As with a snapshot, a clone initially consumes no new space. As new data written to a clone uses new blocks, the size of the clone grows. When blocks are overwritten in the cloned file system or volume, the reference count on the previous block decreases. Removing the snapshot upon which a clone bases is impossible because the clone depends on it. The snapshot is the parent, and the clone is the child. Clones can be _promoted_, reversing this dependency and making the clone the parent and the previous parent the child. This operation requires no new space. Since the amount of space used by the parent and child reverses, it may affect existing quotas and reservations.
 
 |[[zfs-term-checksum]]Checksum
 |Every block is also checksummed. The checksum algorithm used is a per-dataset
@@ -2934,7 +2934,7 @@ validation of all checksums with crossref:zfs[zfs-term-scrub,`scrub`]. Checksum
 The `fletcher` algorithms are faster, but `sha256` is a strong cryptographic hash and has a much lower chance of collisions at the cost of some performance. Deactivating checksums is possible, but strongly discouraged.
 
 |[[zfs-term-compression]]Compression
-|Each dataset has a compression property, which defaults to off. Set this property to an available compression algorithm. This causes compression of all new data written to the dataset. Beyond a reduction in space used, read and write throughput often increases because fewer blocks need reading or writing. 
+|Each dataset has a compression property, which defaults to off. Set this property to an available compression algorithm. This causes compression of all new data written to the dataset. Beyond a reduction in space used, read and write throughput often increases because fewer blocks need reading or writing.
 
 [[zfs-term-compression-lz4]]
 * _LZ4_ - Added in ZFS pool version 5000 (feature flags), LZ4 is now the recommended compression algorithm. LZ4 works about 50% faster than LZJB when operating on compressible data, and is over three times faster when operating on uncompressible data. LZ4 also decompresses about 80% faster than LZJB. On modern CPUs, LZ4 can often compress at over 500 MB/s, and decompress at over 1.5 GB/s (per single CPU core).
@@ -3008,7 +3008,7 @@ dataset tries to use the free space, reserving at least 10 GB of space for this
 dataset. In contrast to a regular crossref:zfs[zfs-term-reservation,reservation], space used by snapshots and descendant datasets is not counted against the reservation. For example, if taking a snapshot of [.filename]#storage/home/bob#, enough disk space other than the `refreservation` amount must exist for the operation to succeed. Descendants of the main data set are not counted in the `refreservation` amount and so do not encroach on the space set.
 
 |[[zfs-term-resilver]]Resilver
-|When replacing a failed disk, ZFS must fill the new disk with the lost data. _Resilvering_ is the process of using the parity information distributed across the remaining drives to calculate and write the missing data to the new drive. 
+|When replacing a failed disk, ZFS must fill the new disk with the lost data. _Resilvering_ is the process of using the parity information distributed across the remaining drives to calculate and write the missing data to the new drive.
 
 |[[zfs-term-online]]Online
 |A pool or vdev in the `Online` state has its member devices connected and fully operational. Individual devices in the `Online` state are functioning.
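
The commit message does not say how the trailing whitespace was found or stripped. As a minimal sketch, one way to do both from a doc repository checkout, assuming the stock BSD grep(1) and sed(1) shipped with FreeBSD (GNU sed takes -i without the '' argument):

  # Show any lines that still end in a space or tab, with line numbers.
  grep -n '[[:blank:]]$' documentation/content/en/books/handbook/zfs/_index.adoc

  # Strip trailing blanks in place, then review the result before committing.
  sed -i '' -e 's/[[:blank:]]*$//' documentation/content/en/books/handbook/zfs/_index.adoc
  git diff documentation/content/en/books/handbook/zfs/_index.adoc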