Date:      Mon, 29 May 2017 14:50:37 +0000 (UTC)
From:      Benedict Reuschling <bcr@FreeBSD.org>
To:        doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org
Subject:   svn commit: r50292 - head/en_US.ISO8859-1/books/handbook/virtualization
Message-ID:  <201705291450.v4TEobit039355@repo.freebsd.org>

Author: bcr
Date: Mon May 29 14:50:37 2017
New Revision: 50292
URL: https://svnweb.freebsd.org/changeset/doc/50292

Log:
  Add a new section about Xen to the virtualization chapter.
  It is based on the entries in the FreeBSD Wiki and Xen's
  own instructions specific to FreeBSD.
  
  In particular, it describes how to configure the host machine,
  set up the Dom0, and add a DomU VM afterwards.
  
  Reviewed by:	royger, wblock
  Differential Revision:	https://reviews.freebsd.org/D10774

Modified:
  head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml

Modified: head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml	Mon May 29 13:57:38 2017	(r50291)
+++ head/en_US.ISO8859-1/books/handbook/virtualization/chapter.xml	Mon May 29 14:50:37 2017	(r50292)
@@ -30,6 +30,16 @@
 	<contrib>bhyve section by </contrib>
       </author>
     </authorgroup>
+
+    <authorgroup>
+      <author>
+	<personname>
+	  <firstname>Benedict</firstname>
+	  <surname>Reuschling</surname>
+	</personname>
+	<contrib>Xen section by </contrib>
+      </author>
+    </authorgroup>
   </info>
 
   <sect1 xml:id="virtualization-synopsis">
@@ -1354,17 +1364,338 @@ ifconfig_bridge0="addm <replaceable>igb0
 	</step>
       </procedure>
     </sect2>
-<!--
-  Note:  There is no working/end-user ready Xen support for FreeBSD as of 07-2010.
-         Hide all information regarding Xen under FreeBSD.
-
-    <sect2 id="virtualization-other">
-      <title>Other Virtualization Options</title>
-
-      <para>There is ongoing work in getting
-	<application>&xen;</application>
-	to work as a host environment on &os;.</para>
-    </sect2>
+  </sect1>
+
+  <sect1 xml:id="virtualization-host-xen">
+    <title>&os; as a &xen;-Host</title>
+
+    <para><application>Xen</application> is a GPLv2-licensed <link
+	xlink:href="https://en.wikipedia.org/wiki/Hypervisor#Classification">type
+	1 hypervisor</link> for &intel; and &arm; architectures.  &os;
+      has included &i386; and &amd;&nbsp;64-Bit <link
+	xlink:href="https://wiki.xenproject.org/wiki/DomU">DomU</link>
+      and <link
+	xlink:href="https://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud">Amazon
+	EC2</link> unprivileged domain (virtual machine) support since
+      &os;&nbsp;8.0 and includes Dom0 control domain (host) support in
+      &os;&nbsp;11.0.  Support for para-virtualized (PV) domains has
+      been removed from &os;&nbsp;11 in favor of hardware virtualized
+      (HVM) domains, which provide better performance.</para>
+
+    <para>&xen; is a bare-metal hypervisor, which means that it is the
+      first program loaded after the BIOS. A special privileged guest
+      called the Domain-0 (<literal>Dom0</literal> for short) is then
+      started.  The Dom0 uses its special privileges to directly
+      access the underlying physical hardware, making it a
+      high-performance solution.  It is able to access the disk
+      controllers and network adapters directly.  The &xen;
+      management tools for controlling the hypervisor are also run
+      from the Dom0 to create, list, and destroy VMs.  Dom0 provides
+      virtual disks and networking for unprivileged domains, often
+      called <literal>DomU</literal>.  &xen; Dom0 can be compared to
+      the service console of other hypervisor solutions, while the
+      DomU is where individual guest VMs are run.</para>
+
+<!-- Hidden until the mode in which FreeBSD uses Xen is supported.
+    <para>Features of &xen; include GPU passthrough from the host
+      running the Dom0 into a DomU guest machine.  This requires a
+      CPU, chipset, and BIOS with VT-D support and might require extra
+      patches or not work with all graphics cards.  A list of adapters
+      can be found in the <link
+	xlink:href="https://wiki.xenproject.org/wiki/Xen_VGA_Passthrough_Tested_Adapters">Xen
+	Wiki</link>.  Note that not all GPUs listed there are
+      supported on &os;.  The  &xen; hypervisor also supports PCI
+      passthrough to give a DomU guest full, direct access to a PCI
+      device like NIC, disk controller, or soundcard.</para>
 -->
+    <para>&xen; can migrate VMs between different &xen; servers.  When
+      the two &xen; hosts share the same underlying storage, the
+      migration can be done without having to shut the VM down first.
+      Instead, the migration is performed live while the DomU is
+      running, with no need to restart it or plan downtime.  This is
+      useful in maintenance scenarios or upgrade windows to ensure
+      that the services provided by the DomU remain available.  Many
+      more features of &xen; are listed on the <link
+	xlink:href="https://wiki.xenproject.org/wiki/Category:Overview">Xen
+	Wiki Overview page</link>.  Note that not all features are
+      supported on &os; yet.</para>
+
+    <sect2 xml:id="virtualization-host-xen-requirements">
+      <title>Hardware Requirements for &xen; Dom0</title>
+
+      <para>To run the &xen; hypervisor on a host, certain hardware
+	functionality is required.  Hardware virtualized domains
+	require Extended Page Table (<link
+	  xlink:href="http://en.wikipedia.org/wiki/Extended_Page_Table">EPT</link>)
+	and Input/Output Memory Management Unit (<link
+	  xlink:href="http://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware">IOMMU</link>)
+	support in the host processor.</para>
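+
+      <para>On an &intel; system, one way to verify EPT support is to
+	search the boot messages for the VT-x feature list.  The
+	exact output varies by processor model; the presence of
+	<literal>EPT</literal> indicates Extended Page Table
+	support:</para>
+
+      <screen>&prompt.root; <userinput>grep VT-x /var/run/dmesg.boot</userinput>
+  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VID</screen>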
+    </sect2>
+
+    <sect2 xml:id="virtualization-host-xen-dom0-setup">
+      <title>&xen; Dom0 Control Domain Setup</title>
+
+      <para>The <package>emulators/xen</package> package works with
+	&os;&nbsp;11 amd64 binary snapshots and equivalent systems
+	built from source.  This example assumes VNC output for
+	unprivileged domains which is accessed from another system
+	using a tool such as <package>net/tightvnc</package>.</para>
+
+      <para>Install <package>emulators/xen</package>:</para>
+
+      <screen>&prompt.root; <userinput>pkg install xen</userinput></screen>
+
+      <para>Configuration files must be edited to prepare the host
+	for the Dom0 integration.  An entry to
+	<filename>/etc/sysctl.conf</filename> disables the limit on
+	how many pages of memory are allowed to be wired.  Otherwise,
+	DomU VMs with higher memory requirements will not run.</para>
+
+      <screen>&prompt.root; <userinput>sysrc -f /etc/sysctl.conf vm.max_wired=-1</userinput></screen>
+
+      <para>Another memory-related setting involves changing
+	<filename>/etc/login.conf</filename>, setting the
+	<literal>memorylocked</literal> option to
+	<literal>unlimited</literal>.  Otherwise, creating DomU
+	domains may fail with <literal>Cannot allocate
+	  memory</literal> errors.  After making the change to
+	<filename>/etc/login.conf</filename>, run
+	<command>cap_mkdb</command> to update the capability database.
+	See <xref linkend="security-resourcelimits"/> for
+	details.</para>
+
+      <screen>&prompt.root; <userinput>sed -i '' -e 's/memorylocked=64K/memorylocked=unlimited/' /etc/login.conf</userinput>
+&prompt.root; <userinput>cap_mkdb /etc/login.conf</userinput></screen>
+
+      <para>Add an entry for the &xen; console to
+	<filename>/etc/ttys</filename>:</para>
+
+      <screen>&prompt.root; <userinput>echo 'xc0     "/usr/libexec/getty Pc"         xterm   on  secure' >> /etc/ttys</userinput></screen>
+
+      <para>Selecting a &xen; kernel in
+	<filename>/boot/loader.conf</filename> activates the Dom0.
+	&xen; also requires resources like CPU and memory from the
+	host machine for itself and other DomU domains.  How much CPU
+	and memory depends on the individual requirements and hardware
+	capabilities.  In this example, 8&nbsp;GB of memory and 4
+	virtual CPUs are made available for the Dom0. The serial
+	console is also activated and logging options are
+	defined.</para>
+
+      <screen>&prompt.root; <userinput>sysrc -f /boot/loader.conf hw.pci.mcfg=0</userinput>
+&prompt.root; <userinput>sysrc -f /boot/loader.conf xen_kernel="/boot/xen"</userinput>
+&prompt.root; <userinput>sysrc -f /boot/loader.conf xen_cmdline="dom0_mem=<replaceable>8192M</replaceable> dom0_max_vcpus=<replaceable>4</replaceable> dom0pvh=1 console=com1,vga com1=115200,8n1 guest_loglvl=all loglvl=all"</userinput></screen>
+
+	<para>Log files that &xen; creates for the Dom0 and DomU VMs
+	  are stored in <filename>/var/log/xen</filename>.  This
+	  directory does not exist by default and must be
+	  created.</para>
+
+	<screen>&prompt.root; <userinput>mkdir -p /var/log/xen</userinput>
+&prompt.root; <userinput>chmod 755 /var/log/xen</userinput></screen>
+
+	<para>&xen; provides a boot menu to activate and de-activate
+	  the hypervisor on demand in
+	  <filename>/boot/menu.rc.local</filename>:</para>
+
+	<screen>&prompt.root; <userinput>echo "try-include /boot/xen.4th" >> /boot/menu.rc.local</userinput></screen>
+
+	<para>Activate the xencommons service during system
+	  startup:</para>
+
+	<screen>&prompt.root; <userinput>sysrc xencommons_enable=yes</userinput></screen>
+
+	<para>These settings are enough to start a Dom0-enabled
+	  system.  However, it lacks network functionality for the
+	  DomU machines.  To fix that, define a bridged interface with
+	  the main NIC of the system which the DomU VMs can use to
+	  connect to the network.  Replace
+	  <replaceable>igb0</replaceable> with the host network
+	  interface name.</para>
+
+	<screen>&prompt.root; <userinput>sysrc autobridge_interfaces=bridge0</userinput>
+&prompt.root; <userinput>sysrc autobridge_bridge0=<replaceable>igb0</replaceable></userinput>
+&prompt.root; <userinput>sysrc ifconfig_bridge0=SYNCDHCP</userinput></screen>
+
+	<para>Restart the host to load the &xen; kernel and start the
+	  Dom0.</para>
+
+	<screen>&prompt.root; <userinput>reboot</userinput></screen>
+
+	<para>After successfully booting the &xen; kernel and logging
+	  into the system again, the &xen; management tool
+	  <command>xl</command> is used to show information about the
+	  domains.</para>
+
+	<screen>&prompt.root; <userinput>xl list</userinput>
+Name                                        ID   Mem VCPUs      State   Time(s)
+Domain-0                                     0  8192     4     r-----     962.0</screen>
+
+	<para>The output confirms that the Dom0 (called
+	  <literal>Domain-0</literal>) has the ID <literal>0</literal>
+	  and is running.  It also has the memory and virtual CPUs
+	  that were defined in <filename>/boot/loader.conf</filename>
+	  earlier.  More information can be found in the <link
+	    xlink:href="https://www.xenproject.org/help/documentation.html">&xen;
+	    Documentation</link>.  DomU guest VMs can now be
+	  created.</para>
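+
+	<para>As a further quick check, <command>xl info</command>
+	  prints details about the hypervisor itself, such as the
+	  &xen; version and the total amount of host memory:</para>
+
+	<screen>&prompt.root; <userinput>xl info</userinput></screen>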
+      </sect2>
+
+      <sect2 xml:id="virtualization-host-xen-domu-setup">
+	<title>&xen; DomU Guest VM Configuration</title>
+
+	<para>Unprivileged domains consist of a configuration file and
+	  virtual or physical hard disks.  Virtual disk storage for
+	  the DomU can be files created by &man.truncate.1; or ZFS
+	  volumes as described in <xref linkend="zfs-zfs-volume"/>.
+	  In this example, a 20&nbsp;GB volume is used.  A VM is
+	  created with the ZFS volume, a &os; ISO image, 1&nbsp;GB of
+	  RAM and two virtual CPUs.  The ISO installation file is
+	  retrieved with &man.fetch.1; and saved locally in a file
+	  called <filename>freebsd.iso</filename>.</para>
+
+      <screen>&prompt.root; <userinput>fetch <replaceable>ftp://ftp.freebsd.org/pub/FreeBSD/releases/ISO-IMAGES/10.3/FreeBSD-10.3-RELEASE-amd64-bootonly.iso</replaceable> -o <replaceable>freebsd.iso</replaceable></userinput></screen>
+
+      <para>A ZFS volume of 20&nbsp;GB called
+	<filename>xendisk0</filename> is created to serve as the disk
+	space for the VM.</para>
+
+      <screen>&prompt.root; <userinput>zfs create -V20G -o volmode=dev zroot/xendisk0</userinput></screen>
+
+      <para>The new DomU guest VM is defined in a file.  Specific
+	settings like the name, the amount of memory, and the VNC
+	connection details are also defined there.  The following
+	<filename>freebsd.cfg</filename> contains a minimum DomU
+	configuration for this example:</para>
+
+      <screen>&prompt.root; <userinput>cat freebsd.cfg</userinput>
+builder = "hvm" <co xml:id="co-xen-builder"/>
+name = "freebsd" <co xml:id="co-xen-name"/>
+memory = 1024 <co xml:id="co-xen-memory"/>
+vcpus = 2 <co xml:id="co-xen-vcpus"/>
+vif = [ 'mac=00:16:3E:74:34:32,bridge=bridge0' ] <co xml:id="co-xen-vif"/>
+disk = [
+'/dev/zvol/zroot/xendisk0,raw,hda,rw', <co xml:id="co-xen-disk"/>
+'/root/freebsd.iso,raw,hdc:cdrom,r' <co xml:id="co-xen-cdrom"/>
+  ]
+vnc = 1 <co xml:id="co-xen-vnc"/>
+vnclisten = "0.0.0.0"
+serial="pty"
+usbdevice="tablet"</screen>
+
+      <para>These lines are explained in more detail:</para>
+
+      <calloutlist>
+	<callout arearefs="co-xen-builder">
+	  <para>This defines what kind of virtualization to use.
+	    <literal>hvm</literal> refers to hardware-assisted
+	    virtualization or hardware virtual machine.  Guest
+	    operating systems can run unmodified on CPUs with
+	    virtualization extensions, providing nearly the same
+	    performance as running on physical hardware.
+	    <literal>generic</literal> is the default value and
+	    creates a PV domain.</para>
+	</callout>
+
+	<callout arearefs="co-xen-name">
+	  <para>Name of this virtual machine to distinguish it from
+	    others running on the same Dom0.  Required.</para>
+	</callout>
+
+	<callout arearefs="co-xen-memory">
+	  <para>Quantity of RAM in megabytes to make available to the
+	    VM.  This amount is subtracted from the hypervisor's total
+	    available memory, not the memory of the Dom0.</para>
+	</callout>
+
+	<callout arearefs="co-xen-vcpus">
+	  <para>Number of virtual CPUs available to the guest VM.  For
+	    best performance, do not create guests with more virtual
+	    CPUs than the number of physical CPUs on the host.</para>
+	</callout>
+
+	<callout arearefs="co-xen-vif">
+	  <para>Virtual network adapter.  This is the bridge connected
+	    to the network interface of the host.  The
+	    <literal>mac</literal> parameter is the MAC address set on
+	    the virtual network interface.  This parameter is
+	    optional; if no MAC address is provided, &xen; will
+	    generate a random one.</para>
+	</callout>
+
+	<callout arearefs="co-xen-disk">
+	  <para>Full path to the disk, file, or ZFS volume of the disk
+	    storage for this VM.  Options and multiple disk
+	    definitions are separated by commas.</para>
+	</callout>
+
+	<callout arearefs="co-xen-cdrom">
+	  <para>Defines the boot medium from which the initial
+	    operating system is installed.  In this example, it is the
+	    ISO image downloaded earlier.  Consult the &xen;
+	    documentation for other kinds of devices and options to
+	    set.</para>
+	</callout>
+
+	<callout arearefs="co-xen-vnc">
+	  <para>Options controlling VNC connectivity to the serial
+	    console of the DomU.  In order, these are: activate VNC
+	    support, define the IP address on which to listen, the
+	    device node for the serial console, and the input method
+	    for precise positioning of the mouse and other devices.
+	    <literal>keymap</literal> defines which keymap to use, and
+	    is <literal>english</literal> by default.</para>
+	</callout>
+      </calloutlist>
+
+      <para>After the file has been created with all the necessary
+	options, the DomU is created by passing it to <command>xl
+	  create</command> as a parameter.</para>
+
+      <screen>&prompt.root; <userinput>xl create freebsd.cfg</userinput></screen>
+
+      <note>
+	<para>Each time the Dom0 is restarted, the configuration file
+	  must be passed to <command>xl create</command> again to
+	  re-create the DomU.  By default, only the Dom0 is started
+	  after a reboot, not the individual VMs.  The VMs can
+	  continue where they left off because the operating system
+	  is stored on the virtual disk.  The virtual machine
+	  configuration can change over time (for example, when adding
+	  more memory).  The virtual machine configuration files must
+	  be properly backed up and kept available to be able to
+	  re-create the guest VM when needed.</para>
+      </note>
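+
+      <para>As a sketch, re-creating all guests after a reboot can be
+	automated with a small loop over the configuration files,
+	assuming they are collected in a directory such as
+	<filename>/usr/local/etc/xen</filename> (an example
+	path):</para>
+
+      <screen>&prompt.root; <userinput>for cfg in /usr/local/etc/xen/*.cfg; do xl create "$cfg"; done</userinput></screen>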
+
+      <para>The output of <command>xl list</command> confirms that the
+	DomU has been created.</para>
+
+      <screen>&prompt.root; <userinput>xl list</userinput>
+Name                                        ID   Mem VCPUs      State   Time(s)
+Domain-0                                     0  8192     4     r-----  1653.4
+freebsd                                      1  1024     2     -b----   663.9</screen>
+
+      <para>To begin the installation of the base operating system,
+	start the VNC client, directing it to the main network address
+	of the host or to the IP address defined on the
+	<literal>vnclisten</literal> line of
+	<filename>freebsd.cfg</filename>.  After the operating system
+	has been installed, shut down the DomU and disconnect the VNC
+	viewer.  Edit <filename>freebsd.cfg</filename>, removing the
+	line with the <literal>cdrom</literal> definition or
+	commenting it out by inserting a <literal>#</literal>
+	character at the beginning of the line.  To load this new
+	configuration, it is necessary to remove the old DomU with
+	<command>xl destroy</command>, passing either the name or the
+	ID as the parameter.  Afterwards, recreate it using the
+	modified <filename>freebsd.cfg</filename>.</para>
+
+      <screen>&prompt.root; <userinput>xl destroy freebsd</userinput>
+&prompt.root; <userinput>xl create freebsd.cfg</userinput></screen>
+
+      <para>The machine can then be accessed again using the VNC
+	viewer.  This time, it will boot from the virtual disk where
+	the operating system has been installed and can be used as a
+	virtual machine.</para>
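+
+      <para>Because the example configuration also defines a serial
+	console (<literal>serial="pty"</literal>), the guest can
+	alternatively be reached from the Dom0 without VNC.  The
+	console is detached again with <keycombo
+	  action="simul"><keycap>Ctrl</keycap><keycap>]</keycap></keycombo>:</para>
+
+      <screen>&prompt.root; <userinput>xl console <replaceable>freebsd</replaceable></userinput></screen>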
+    </sect2>
   </sect1>
 </chapter>


