From: Dru Lavigne
Date: Tue, 8 Apr 2014 15:48:47 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org
Subject: svn commit: r44487 - head/en_US.ISO8859-1/books/handbook/disks

Author: dru
Date: Tue Apr 8 15:48:46 2014
New Revision: 44487

URL: http://svnweb.freebsd.org/changeset/doc/44487

Log:
  White space fix only. Translators can ignore.

  Sponsored by: iXsystems

Modified:
  head/en_US.ISO8859-1/books/handbook/disks/chapter.xml

Modified: head/en_US.ISO8859-1/books/handbook/disks/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/disks/chapter.xml Tue Apr 8 15:41:16 2014 (r44486)
+++ head/en_US.ISO8859-1/books/handbook/disks/chapter.xml Tue Apr 8 15:48:46 2014 (r44487)

@@ -530,7 +530,7 @@ add path 'da*' mode 0660 group operator<
 If SCSI disks are installed in the
- system, change the second line as follows:
+ system, change the second line as follows:
 add path 'da[3-9]*' mode 0660 group operator

@@ -559,11 +559,12 @@ add path 'da*' mode 0660 group operator<
 system is to be mounted. This directory needs to be owned by the
 user that is to mount the file system. One way to do that is for
 root to
- create a subdirectory owned by that user as
- /mnt/username. In the following example,
- replace username with the login
- name of the user and usergroup with
- the user's primary group:
+ create a subdirectory owned by that user as /mnt/username.
+ In the following example, replace
+ username with the login name of the
+ user and usergroup with the user's
+ primary group:

 &prompt.root; mkdir /mnt/username
 &prompt.root; chown username:usergroup /mnt/username

@@ -893,8 +894,8 @@ scsibus1:
 <acronym>ATAPI</acronym> Drives
- With the help of the
- ATAPI/CAM module,
+ With the help of the ATAPI/CAM module,
 cdda2wav can also be used on ATAPI drives.
 This tool is usually a better choice for most users, as it supports jitter

@@ -905,11 +906,11 @@ scsibus1:
 The ATAPI CD driver makes each track available as
- /dev/acddtnn, where
- d is the drive number, and
- nn is the track number written
- with two decimal digits, prefixed with zero as needed. So
- the first track on the first disk is
+ /dev/acddtnn,
+ where d is the drive number,
+ and nn is the track number
+ written with two decimal digits, prefixed with zero as
+ needed. So the first track on the first disk is
 /dev/acd0t01, the second is /dev/acd0t02, the third is
 /dev/acd0t03, and so on.

@@ -1173,69 +1174,69 @@ cd0: Attempt to query device size failed
 burning
- Compared to the CD, the
- DVD is the next generation of optical media
- storage technology. The DVD can hold more
- data than any CD and is the standard for
- video publishing.
+ Compared to the CD, the
+ DVD is the next generation of optical media
+ storage technology. The DVD can hold more
+ data than any CD and is the standard for
+ video publishing.

- Five physical recordable formats can be defined for a
- recordable DVD:
+ Five physical recordable formats can be defined for a
+ recordable DVD:

-
-
- DVD-R: This was the first DVD
- recordable format available. The DVD-R standard is
- defined by the DVD
- Forum. This format is write once.
-
+
+
+ DVD-R: This was the first DVD
+ recordable format available. The DVD-R standard is defined
+ by the DVD
+ Forum. This format is write once.
+

-
- DVD-RW: This is the rewritable
- version of the DVD-R standard. A
- DVD-RW can be rewritten about 1000
- times.
-
+
+ DVD-RW: This is the rewritable
+ version of the DVD-R standard. A
+ DVD-RW can be rewritten about 1000
+ times.
+

-
- DVD-RAM: This is a rewritable
- format which can be seen as a removable hard drive.
- However, this media is not compatible with most
- DVD-ROM drives and DVD-Video players
- as only a few DVD writers support the
- DVD-RAM format. Refer to for more information on
- DVD-RAM use.
-
+
+ DVD-RAM: This is a rewritable format
+ which can be seen as a removable hard drive. However, this
+ media is not compatible with most
+ DVD-ROM drives and DVD-Video players as
+ only a few DVD writers support the
+ DVD-RAM format. Refer to for more information on
+ DVD-RAM use.
+

-
- DVD+RW: This is a rewritable format
- defined by the DVD+RW
+
+ DVD+RW: This is a rewritable format
+ defined by the DVD+RW
 Alliance. A DVD+RW can be
- rewritten about 1000 times.
-
+ rewritten about 1000 times.
+

-
- DVD+R: This format is the write once variation
- of the DVD+RW format.
-
-
+
+ DVD+R: This format is the write once variation of the
+ DVD+RW format.
+
+

- A single layer recordable DVD can hold
- up to 4,700,000,000 bytes which is actually 4.38 GB
- or 4485 MB as 1 kilobyte is 1024 bytes.
+ A single layer recordable DVD can hold up
+ to 4,700,000,000 bytes which is actually 4.38 GB or
+ 4485 MB as 1 kilobyte is 1024 bytes.

-
- A distinction must be made between the physical media
- and the application. For example, a DVD-Video is a specific
- file layout that can be written on any recordable
- DVD physical media such as DVD-R, DVD+R,
- or DVD-RW. Before choosing the type of
- media, ensure that both the burner and the DVD-Video player
- are compatible with the media under consideration.
-
+
+ A distinction must be made between the physical media and
+ the application. For example, a DVD-Video is a specific file
+ layout that can be written on any recordable
+ DVD physical media such as DVD-R, DVD+R, or
+ DVD-RW. Before choosing the type of media,
+ ensure that both the burner and the DVD-Video player are
+ compatible with the media under consideration.
+

 Configuration

@@ -1540,7 +1541,8 @@ cd0: Attempt to query device size failed
 For More Information
 To obtain more information about a DVD,
- use dvd+rw-mediainfo /dev/cd0 while the
+ use dvd+rw-mediainfo
+ /dev/cd0 while the
 disc is in the specified drive.

 More information about

@@ -2067,7 +2069,7 @@ cd0: Attempt to query device size failed
 livefs
- CD
+ CD
 Store this printout and a copy of the installation media in a
 secure location. Should an emergency restore be

@@ -2754,8 +2756,8 @@ Filesystem 1K-blocks Used Avail Capacity
 . For the purposes of this example, a new hard drive partition
 has been added as /dev/ad4s1c and
- /dev/ad0s1* represents the existing
- standard &os; partitions.
+ /dev/ad0s1*
+ represents the existing standard &os; partitions.

 &prompt.root; ls /dev/ad*
 /dev/ad0 /dev/ad0s1b /dev/ad0s1e /dev/ad4s1

@@ -2868,7 +2870,8 @@ sector_size = 2048
 &man.newfs.8; must be performed on an attached
 gbde partition which is
- identified by a *.bde
+ identified by a
+ *.bde
 extension to the device name.

@@ -3297,7 +3300,8 @@ Device 1K-blocks Used Av
- Highly Available Storage (<acronym>HAST</acronym>)
+ Highly Available Storage
+ (<acronym>HAST</acronym>)

 High availability is one of the main requirements in serious
 business applications and highly-available storage is a
- key component in such environments. In &os;, the Highly Available STorage
- (HAST)
- framework allows transparent storage of
- the same data across several physically separated machines
- connected by a TCP/IP network. HAST can be
- understood as a network-based RAID1 (mirror), and is similar to
- the DRBD® storage system used in the GNU/&linux;
- platform. In combination with other high-availability features
- of &os; like CARP, HAST
- makes it possible to build a highly-available storage cluster
- that is resistant to hardware failures.
+ key component in such environments. In &os;, the Highly
+ Available STorage (HAST) framework allows
+ transparent storage of the same data across several physically
+ separated machines connected by a TCP/IP
+ network. HAST can be understood as a
+ network-based RAID1 (mirror), and is similar to the DRBD®
+ storage system used in the GNU/&linux; platform. In combination
+ with other high-availability features of &os; like
+ CARP, HAST makes it
+ possible to build a highly-available storage cluster that is
+ resistant to hardware failures.

- The following are the main features of
- HAST:
+ The following are the main features of
+ HAST:

-
-
- Can be used to mask I/O errors on local hard
- drives.
-
+
+
+ Can be used to mask I/O errors on
+ local hard drives.
+

-
- File system agnostic as it works with any file
- system supported by &os;.
-
+
+ File system agnostic as it works with any file system
+ supported by &os;.
+

-
- Efficient and quick resynchronization as
- only the blocks that were modified during the downtime of a
- node are synchronized.
-
+
+ Efficient and quick resynchronization as only the blocks
+ that were modified during the downtime of a node are
+ synchronized.
+

-
+

-
- Can be used in an already deployed environment to add
- additional redundancy.
-
+
+ Can be used in an already deployed environment to add
+ additional redundancy.
+

-
- Together with CARP,
- Heartbeat, or other tools, it
- can be used to build a robust and durable storage
- system.
-
-
+
+ Together with CARP,
+ Heartbeat, or other tools, it can
+ be used to build a robust and durable storage system.
+
+

 After reading this section, you will know:

@@ -3442,48 +3445,47 @@ Device 1K-blocks Used Av
 The HAST project was sponsored by The &os;
 Foundation with support from http://www.omc.net/ and
 http://www.omc.net/
+ and
 http://www.transip.nl/.

 HAST Operation

- HAST provides synchronous
- block-level replication between two
- physical machines:
- the primary, also known as the
+ HAST provides synchronous block-level
+ replication between two physical machines: the
+ primary, also known as the
 master node, and the
 secondary, or slave
 node. These two machines together are referred to as a
 cluster.

- Since HAST works in a
- primary-secondary configuration, it allows only one of the
- cluster nodes to be active at any given time. The
- primary node, also called
+ Since HAST works in a primary-secondary
+ configuration, it allows only one of the cluster nodes to be
+ active at any given time. The primary node, also called
 active, is the one which will handle all
- the I/O requests to HAST-managed
- devices. The secondary node is
- automatically synchronized from the primary
- node.
+ the I/O requests to
+ HAST-managed devices. The secondary node
+ is automatically synchronized from the primary node.

 The physical components of the HAST
- system are the local disk on the primary node, and the
- disk on the remote, secondary node.
+ system are the local disk on the primary node, and the disk on
+ the remote, secondary node.

 HAST operates synchronously on a block
 level, making it transparent to file systems and applications.
 HAST provides regular GEOM providers in
- /dev/hast/ for use by
- other tools or applications. There is no difference
- between using HAST-provided devices and
- raw disks or partitions.
+ /dev/hast/ for use by other tools or
+ applications. There is no difference between using
+ HAST-provided devices and raw disks or
+ partitions.

 Each write, delete, or flush operation is sent to both the
- local disk and to the remote disk over TCP/IP. Each read
- operation is served from the local disk, unless the local disk
- is not up-to-date or an I/O error occurs. In such cases, the
- read operation is sent to the secondary node.
+ local disk and to the remote disk over
+ TCP/IP. Each read operation is served from
+ the local disk, unless the local disk is not up-to-date or an
+ I/O error occurs. In such cases, the read
+ operation is sent to the secondary node.

 HAST tries to provide fast failure
 recovery. For this reason, it is important to reduce

@@ -3499,30 +3501,31 @@ Device 1K-blocks Used Av
- memsync: This mode reports a write operation
- as completed when the local write operation is finished
- and when the remote node acknowledges data arrival, but
- before actually storing the data. The data on the remote
- node will be stored directly after sending the
- acknowledgement. This mode is intended to reduce
- latency, but still provides good
+ memsync: This mode reports a
+ write operation as completed when the local write
+ operation is finished and when the remote node
+ acknowledges data arrival, but before actually storing the
+ data. The data on the remote node will be stored directly
+ after sending the acknowledgement. This mode is intended
+ to reduce latency, but still provides good
 reliability.

- fullsync: This mode reports a write
- operation as completed when both the local write and the
- remote write complete. This is the safest and the
+ fullsync: This mode reports a
+ write operation as completed when both the local write and
+ the remote write complete. This is the safest and the
 slowest replication mode. This mode is the default.

- async: This mode reports a write operation as
- completed when the local write completes. This is the
- fastest and the most dangerous replication mode. It
- should only be used when replicating to a distant node where
- latency is too high for other modes.
+ async: This mode reports a write
+ operation as completed when the local write completes.
+ This is the fastest and the most dangerous replication
+ mode. It should only be used when replicating to a
+ distant node where latency is too high for other
+ modes.

@@ -3541,8 +3544,8 @@ Device 1K-blocks Used Av
- The userland management
- utility, &man.hastctl.8;.
+ The userland management utility,
+ &man.hastctl.8;.

 Users who prefer to statically build
- GEOM_GATE support into the kernel
- should add this line to the custom kernel configuration
- file, then rebuild the kernel using the instructions in
+ GEOM_GATE support into the kernel should
+ add this line to the custom kernel configuration file, then
+ rebuild the kernel using the instructions in
 :

 options GEOM_GATE

 The following example describes how to configure two nodes
- in master-slave/primary-secondary
- operation using HAST to replicate the data
- between the two. The nodes will be called
- hasta, with an IP address of
- 172.16.0.1, and
- hastb, with an IP address of
+ in master-slave/primary-secondary operation using
+ HAST to replicate the data between the two.
+ The nodes will be called hasta, with an
+ IP address of
+ 172.16.0.1, and hastb,
+ with an IP address of
 172.16.0.2. Both nodes will have a dedicated
 hard drive /dev/ad6 of the same size for
 HAST operation. The
 HAST pool, sometimes referred to as a
- resource or the GEOM provider in
- /dev/hast/, will be called
+ resource or the GEOM provider in /dev/hast/, will be called
 test.

 Configuration of HAST is done using

@@ -3596,14 +3599,14 @@ Device 1K-blocks Used Av
 It is also possible to use host names in the
- remote statements if
- the hosts are resolvable and defined either in
+ remote statements if the hosts are
+ resolvable and defined either in
 /etc/hosts or in the local
 DNS.

- Once the configuration exists on both nodes,
- the HAST pool can be created. Run these
+ Once the configuration exists on both nodes, the
+ HAST pool can be created. Run these
 commands on both nodes to place the initial metadata onto the
 local disk and to start &man.hastd.8;:

@@ -3615,17 +3618,16 @@ Device 1K-blocks Used Av
 providers with an existing file system or to convert an
 existing storage to a HAST-managed pool.
 This procedure needs to store some metadata on the provider
- and there will not be enough space
- available on an existing provider.
+ and there will not be enough space available on an
+ existing provider.

 A HAST node's primary or
 secondary role is selected by an
 administrator, or software like
 Heartbeat, using &man.hastctl.8;.
- On the primary node,
- hasta, issue
- this command:
+ On the primary node, hasta, issue this
+ command:

 &prompt.root; hastctl role primary test

 Similarly, this command designates the secondary node:

 &prompt.root; hastctl role secondary test

- Verify the result by running hastctl on each
- node:
+ Verify the result by running hastctl on
+ each node:

 &prompt.root; hastctl status test

 Check the status line in the output.
- If it says degraded,
- something is wrong with the configuration file. It should say complete
- on each node, meaning that the synchronization
- between the nodes has started. The synchronization
- completes when hastctl status
- reports 0 bytes of dirty extents.
-
+ If it says degraded, something is wrong
+ with the configuration file. It should say
+ complete on each node, meaning that the
+ synchronization between the nodes has started. The
+ synchronization completes when hastctl
+ status reports 0 bytes of dirty
+ extents.

 The next step is to create a file system on the
- GEOM provider and mount it. This must be done on the
- primary node. Creating
- the file system can take a few minutes, depending on the size
- of the hard drive. This example creates a UFS
+ GEOM provider and mount it. This must be
+ done on the primary node. Creating the
+ file system can take a few minutes, depending on the size of
+ the hard drive. This example creates a UFS
 file system on /dev/hast/test:

 &prompt.root; newfs -U /dev/hast/test
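
For reference, the hast.conf discussed above falls outside the quoted
hunk context. A minimal /etc/hast.conf matching this example is
sketched below from the names used in the text (resource test, disk
/dev/ad6, node addresses 172.16.0.1 and 172.16.0.2); the exact file in
the Handbook may be laid out differently:

    # /etc/hast.conf, identical on both nodes.
    # Each "on" block names one cluster member; "local" is that node's
    # backing disk and "remote" is the address of the other node.
    resource test {
        on hasta {
            local /dev/ad6
            remote 172.16.0.2
        }
        on hastb {
            local /dev/ad6
            remote 172.16.0.1
        }
    }

The initialization commands referenced before hunk
@@ -3615,17 +3618,16 @@ are likewise not shown in the context; they
would be along the lines of:

    &prompt.root; hastctl create test
    &prompt.root; service hastd onestart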