Date:      Tue, 31 Jan 2012 10:49:53 +0000
From:      Daniel Gerzo <danger@freebsd.org>
To:        doc@freebsd.org
Subject:   New Handbook Section for Review - graid3
Message-ID:  <20120131104953.GA55314@freefall.freebsd.org>


--SUOF0GtieIMvvwua
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hello everybody,

  A new Handbook section covering graid3 is now available for review;
  the patch is attached.

  The text is based on PR 164228. A built version is available at
  http://people.freebsd.org/~danger/geom-raid3.html.

  Comments are welcome.

-- 
Kind regards
  Daniel Gerzo

--SUOF0GtieIMvvwua
Content-Type: text/x-diff; charset=us-ascii
Content-Disposition: attachment; filename="geom.diff"

Index: chapter.sgml
===================================================================
RCS file: /home/dcvs/doc/en_US.ISO8859-1/books/handbook/geom/chapter.sgml,v
retrieving revision 1.51
diff -u -r1.51 chapter.sgml
--- chapter.sgml	21 Nov 2011 18:11:25 -0000	1.51
+++ chapter.sgml	31 Jan 2012 10:44:44 -0000
@@ -436,6 +436,164 @@
     </sect2>
   </sect1>
 
+  <sect1 id="GEOM-raid3">
+    <sect1info>
+      <authorgroup>
+	<author>
+	  <firstname>Mark</firstname>
+	  <surname>Gladman</surname>
+	  <contrib>Written by </contrib>
+	</author>
+	<author>
+	  <firstname>Daniel</firstname>
+	  <surname>Gerzo</surname>
+	</author>
+      </authorgroup>
+      <authorgroup>
+	<author>
+	  <firstname>Tom</firstname>
+	  <surname>Rhodes</surname>
+	  <contrib>Based on documentation by </contrib>
+	</author>
+	<author>
+	  <firstname>Murray</firstname>
+	  <surname>Stokely</surname>
+	</author>
+      </authorgroup>
+    </sect1info>
+
+    <indexterm>
+      <primary>GEOM</primary>
+    </indexterm>
+    <indexterm>
+      <primary>RAID3</primary>
+    </indexterm>
+
+    <title><acronym>RAID</acronym>3 - byte-level striping with dedicated
+      parity</title>
+
+    <para><acronym>RAID</acronym>3 is a method used to combine several
+      disk drives into a single volume with a dedicated parity
+      disk.  In a <acronym>RAID</acronym>3 system, data is split up
+      into a number of bytes that are written across all the drives
+      in the array except for one disk, which acts as a dedicated
+      parity disk.  This means that reading 1024&nbsp;kB from a
+      <acronym>RAID</acronym>3 implementation will access all disks
+      in the array.  Performance can be enhanced by using multiple
+      disk controllers.  A <acronym>RAID</acronym>3 array tolerates
+      the failure of one drive, and provides a usable capacity of
+      1 - 1/n of the total size of all drives in the array, where n
+      is the number of hard drives in the array.  Such a
+      configuration is mostly suitable for storing data of larger
+      sizes, e.g. multimedia files.</para>
+
+    <para>At least 3 physical hard drives are required to build a
+      <acronym>RAID</acronym>3 array.  Each disk must be of the same
+      size, since I/O requests are interleaved to read from or write
+      to multiple disks in parallel.  Also, due to the nature of
+      <acronym>RAID</acronym>3, the number of components must be
+      equal to 3, 5, 9, 17, etc. (2^n + 1).</para>
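[Reviewer aside, not part of the patch: the two rules quoted above — usable capacity of 1 - 1/n and component counts of the form 2^n + 1 — can be sanity-checked with a few lines of shell arithmetic.]

```shell
# Illustrative arithmetic only, not part of the Handbook text:
# an n-disk RAID3 array stores data on n - 1 disks, i.e. 1 - 1/n
# of the raw capacity, and n must be of the form 2^k + 1.

n=3            # number of drives in the array
size=1000      # capacity of each drive, in GB
usable=$(( (n - 1) * size ))
echo "usable capacity: ${usable} GB"       # 3 x 1000 GB drives -> 2000 GB

# valid component counts (2^k + 1): 3, 5, 9, 17
for k in 1 2 3 4; do
  echo "valid component count: $(( (1 << k) + 1 ))"
done
```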
+
+    <sect2>
+      <title>Creating a Dedicated <acronym>RAID</acronym>3
+	Array</title>
+
+      <para>In &os;, support for <acronym>RAID</acronym>3 is
+	implemented by the &man.graid3.8; <acronym>GEOM</acronym>
+	class.  Creating a dedicated <acronym>RAID</acronym>3 array
+	on &os; requires the following steps.</para>
+
+      <note>
+	<para>While it is theoretically possible to boot from a
+	  <acronym>RAID</acronym>3 array on &os;, such a
+	  configuration is uncommon and is not advised.  As such,
+	  this section does not describe how to accomplish
+	  it.</para>
+      </note>
+
+      <procedure>
+	<step>
+	  <para>The first step is to load the appropriate kernel
+	    module.  This can be done by invoking the following
+	    command:</para>
+
+	  <screen>&prompt.root; <userinput>graid3 load</userinput></screen>
+
+	  <para>Alternatively, it is possible to manually load the
+	    <filename>geom_raid3.ko</filename> module:</para>
+
+	  <screen>&prompt.root; <userinput>kldload geom_raid3.ko</userinput></screen>
+	</step>
+
+	<step>
+	  <para>Create or ensure that a suitable mount point
+	    exists:</para>
+	  
+	  <screen>&prompt.root; <userinput>mkdir <replaceable>/multimedia/</replaceable></userinput></screen>
+	</step>
+
+	<step>
+	  <para>Determine the device names for the disks which will be
+	    added to the array, and create the new
+	    <acronym>RAID</acronym>3 device. The final device listed
+	    will act as the dedicated parity disk.  The following
+	    example will use three unpartitioned
+	    <acronym>ATA</acronym> drives &mdash;
+	    <devicename><replaceable>ada1</replaceable></devicename>
+	    and <devicename><replaceable>ada2</replaceable></devicename>
+	    for data and
+	    <devicename><replaceable>ada3</replaceable></devicename>
+	    for parity:</para>
+
+	  <screen>&prompt.root; <userinput>graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3</userinput>
+Metadata value stored on /dev/ada1.
+Metadata value stored on /dev/ada2.
+Metadata value stored on /dev/ada3.
+Done.</screen>
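[Reviewer suggestion, not in the original patch: a short verification step could follow the label command, using the standard <command>status</command> verb that <acronym>GEOM</acronym> control utilities provide:]

```
+	  <para>To verify that the array was created successfully,
+	    its status can be queried:</para>
+
+	  <screen>&prompt.root; <userinput>graid3 status</userinput></screen>
```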
+	</step>
+
+	<step>
+	  <para>Partition the newly created
+	    <devicename>gr0</devicename> device and put a UFS file
+	    system on it:</para>
+
+	  <screen>&prompt.root; <userinput>gpart create -s GPT /dev/raid3/gr0</userinput>
+&prompt.root; <userinput>gpart add -t freebsd-ufs /dev/raid3/gr0</userinput>
+&prompt.root; <userinput>newfs -j /dev/raid3/gr0p1</userinput></screen>
+	 
+	  <para>Many numbers will glide across the screen, and after a
+	    few seconds, the process will be complete.  The volume has
+	    been created and is ready to be mounted.</para>
+	</step>
+
+	<step>
+	  <para>The last step is to mount the file system:</para>
+
+	  <screen>&prompt.root; <userinput>mount /dev/raid3/gr0p1 /multimedia/</userinput></screen>
+
+	  <para>The <acronym>RAID</acronym>3 array is now ready to
+	    use.</para>
+	</step>
+      </procedure>
+
+      <note>
+	<para>To retain this configuration across reboots, the
+	  <filename>geom_raid3.ko</filename> module must be
+	  automatically loaded during system initialization.  To
+	  accomplish this, invoke the following command:</para>
+
+	<screen>&prompt.root; <userinput>echo 'geom_raid3_load="YES"' >> /boot/loader.conf</userinput></screen>
+
+	<para>The system must also be instructed to automatically
+	  mount the array's file system during the boot process.  For
+	  this purpose, place the volume information in the
+	  <filename>/etc/fstab</filename> file:</para>
+
+	<screen>&prompt.root; <userinput>echo "/dev/raid3/gr0p1 /multimedia ufs rw 2 2" >> /etc/fstab</userinput></screen>
+      </note>
+    </sect2>
+  </sect1>
+
   <sect1 id="geom-ggate">
     <title>GEOM Gate Network Devices</title>
 

--SUOF0GtieIMvvwua--


