Date:      Thu, 4 Feb 2016 21:26:10 GMT
From:      kczekirda@FreeBSD.org
To:        svn-soc-all@FreeBSD.org
Subject:   socsvn commit: r298401 - soc2015/kczekirda/asiabsdcon2016
Message-ID:  <201602042126.u14LQAlM000440@socsvn.freebsd.org>

Author: kczekirda
Date: Thu Feb  4 21:26:10 2016
New Revision: 298401
URL: http://svnweb.FreeBSD.org/socsvn/?view=rev&rev=298401

Log:
  corrects

Modified:
  soc2015/kczekirda/asiabsdcon2016/paper.pdf
  soc2015/kczekirda/asiabsdcon2016/paper.tex

Modified: soc2015/kczekirda/asiabsdcon2016/paper.pdf
==============================================================================
Binary file (source and/or target). No diff available.

Modified: soc2015/kczekirda/asiabsdcon2016/paper.tex
==============================================================================
--- soc2015/kczekirda/asiabsdcon2016/paper.tex	Thu Feb  4 20:55:49 2016	(r298400)
+++ soc2015/kczekirda/asiabsdcon2016/paper.tex	Thu Feb  4 21:26:10 2016	(r298401)
@@ -50,13 +50,13 @@
 \subsection*{Abstract}
 "FreeBSD Test Cluster Automation" is a Google Summer of Code 2015 project for FreeBSD organization to create an infrastructure for automated tests building, installing and first booting process of FreeBSD. 
 
-The base of this project is iPXE - Open Source Boot Firmware, which is used for controlling nodes. A small webapplication written in python is a frontend for the database where are saved informations about nodes, current states and states of revisions. The project is using also mfsBSD and bsdinstall extension for an automatic and non-interactive installation process, was done during Google Summer of Code 2014. 
+The base of this project is iPXE - Open Source Boot Firmware, which is used for controlling nodes. A small web application written in Python is a frontend for the database where information about nodes, current states and states of revisions is saved. The project also uses mfsBSD and a bsdinstall extension for an automatic, non-interactive installation process, which was done during Google Summer of Code 2014. 
 
 On the server side the main part of the project is FreeNAS, which is used to provide shared storage and jails for applications. The ZFS filesystem with deduplication enabled on the source code dataset makes it possible to keep every tested revision of the source code while still saving space. 
 
-The scope of the project was only infrastructure, without focusing on tests. During the project was made simple tests of building and installing FreeBSD, it's similar to https://jenkins.freebsd.org/ but on the bare metal infrustructure and it's possible to test all commits, not all commits from one of period of time like in jenkins.
+The scope of the project was only the infrastructure, without focusing on tests. During the project simple tests of building and installing FreeBSD were made; it is similar to https://jenkins.freebsd.org/, but on bare metal infrastructure, and it is possible to test every commit, not only the commits from one period of time as in Jenkins.
 
-Another interesting application for this project is testing drivers, for example network card drivers. Inside testing cluster it's possible build driver after any commit, test it, measure and report.
+Another interesting application for this project is testing drivers, for example network card drivers. Inside the testing cluster it is possible to build a driver after any commit, test it, measure it and report.
 
 The most important requirement during this project was to need as little manual intervention as possible.
 
@@ -69,7 +69,7 @@
 \item boot from wireless network
 \end{itemize}
 
-And the most important for this project: control the boot process with a scripts.
+And the most important one for this project: the ability to control the boot process with scripts.
 
 The first stage of the project was creating an iPXE port for FreeBSD. The port is ready for submission and has many possibilities for extensions.
 
@@ -77,7 +77,7 @@
 
 The Preboot eXecution Environment allows booting from a network interface. The host broadcasts a DHCP discover request and a DHCP server responds with a DHCP packet that includes PXE options (the name of a boot server and a boot file). The client downloads its boot file using TFTP and then executes it. In this project it is the iPXE loader and this is classical chainloading of iPXE. In the next step iPXE loads the MEMDISK kernel with the location of a modified mfsBSD ISO file as its parameter and then the nodes mount shared storage via the NFS protocol.
 
-As you can see, there is a lot of services to configure:
+As you can see, there are a lot of services to configure:
 
 \begin{itemize}
 \item DHCP server
@@ -87,8 +87,7 @@
 \item Management application
 \end{itemize}
 
-\subsection{DHCP Server}
-Firts step of booting node from the network is DHCP service. DHCP server responds with a DHCP packet that included PXE options, in this case the name of TFTP boot server and a boot file. 
+The first step of booting a node from the network is the DHCP service. The DHCP server responds with a DHCP packet that includes PXE options, in this case the name of the TFTP boot server and a boot file. 
 
 An example of the DHCP server configuration:
 
@@ -112,7 +111,7 @@
 In this case we can see that the TFTP server is located at the IP address 192.168.22.19 and that the filename differs depending on the client user-class. The iPXE image (filename "undionly.kpxe") is handed out when the DHCP request comes from a legacy PXE client. In the next step the iPXE DHCP client sends a request with user-class iPXE and the answer in the filename option is the URL of the menu.ipxe script.
 
 \subsection{TFTP Server}
-Trivial File Transfer Protocol (TFTP) is a service used for transwer iPXE image compiled from the port. Nodes download the image from the TFTP server each time that they boot. In my project I use FreeNAS and TFTP configuration screen shows default configuration and it is sufficient.
+Trivial File Transfer Protocol (TFTP) is the service used to transfer the iPXE image compiled from the port. Nodes download the image from the TFTP server each time they boot. In my project I use FreeNAS; the TFTP configuration screen shows the default configuration, which is sufficient.
 
 \begin{figure}[h]
 \begin{center}
@@ -125,10 +124,10 @@
 The HTTP server is used for serving the ISO image of the custom mfsBSD and the initial script: menu.ipxe. In my case it is Apache in a jail on the FreeNAS box.
 
 \subsection{NFS Server}
-The NFS service is provided by FreeNAS. It's a storage for source code. If node have not enough RAM memory can also save obj files there. NFS export is stored on the ZFS filesystem. The dataset have enabled deduplication. This configuration allows to have access to every revision of the source code without switching beetween revisions in repository.
+The NFS service is provided by FreeNAS. It is the storage for the source code. If a node does not have enough RAM, it can also save obj files there. The NFS export is stored on a ZFS filesystem. The dataset has deduplication enabled. This configuration gives access to every revision of the source code without switching between revisions in the repository.
 
 \subsection{Management}
-The frontend of management application is writen in python with bottle framework. Informations about nodes and revisions are saved in the sqlite database. The management is the place, where user can manage nodes and revisions and it works as http server. Application supports methods:
+The frontend of the management application is written in Python with the Bottle framework. Information about nodes and revisions is saved in an SQLite database. The management application is the place where the user can manage nodes and revisions, and it works as an HTTP server. The application supports the following methods (an illustrative sketch of one route follows the list):
 
 \begin{itemize}
 \item / to provide default ipxe script
@@ -138,7 +137,7 @@
 \item /admin/delete\_node/:id
 \item /admin/add\_task
 \item /admin/delete\_task/:id
-\item /menu/:mac to send static ipxe script which name is saved in the database
+\item /menu/:mac to send static ipxe script whose name is saved in the database
 \item /static/ to provide static files
 \item /admin/take\_task/:mac to start environment preparation
 \item /admin/change\_boot/:host/:new to change boot ipxe script
@@ -146,14 +145,14 @@
 \item /admin/change\_node\_status/:hostname/:new\_status
 \end{itemize}
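+
+For illustration only, a single Bottle route for the /menu/:mac lookup could look roughly like the sketch below. This is not the project's code: the database file name, the table \texttt{nodes} and its columns are assumptions; only the route path and the idea of returning the ipxe script name stored in the database come from the list above.
+
+{\tt \small\begin{verbatim}
+# Hypothetical sketch, not the project's code.
+# Assumes an SQLite table "nodes" with columns mac and boot_script.
+import sqlite3
+from bottle import Bottle, static_file, abort
+
+app = Bottle()
+DB = 'cluster.sqlite'
+
+@app.route('/menu/<mac>')
+def menu(mac):
+    con = sqlite3.connect(DB)
+    row = con.execute('SELECT boot_script FROM nodes WHERE mac = ?',
+                      (mac,)).fetchone()
+    con.close()
+    if row is None:
+        abort(404, 'unknown node')
+    # boot_script is e.g. wait.ipxe, cluster.ipxe or hdd.ipxe
+    return static_file(row[0], root='./static')
+
+app.run(host='0.0.0.0', port=8080)
+\end{verbatim}
+}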
 
-Example screenshot you can see below.
+An example screenshot can be seen in Figure 1.
 
 \section{Client side}
-From client side there is only one thing I have to carry on - set network card as first booting device. The iPXE uses script decides which is the next step on booting - hard drive or network.
+On the client side there is only one thing I have to take care of - setting the network card as the first boot device. iPXE uses a script to decide what the next boot step is - either the hard drive or the network.
 
 \section{mfsBSD configuration}
 
-mfsBSD configuration is very simple, because I added only this lines to mfsbsd/conf/rc.local.sample file:
+The mfsBSD configuration is very simple, because I added only these lines to the mfsbsd/conf/rc.local.sample file:
 
 {\tt \small\begin{verbatim}
 sleep 10
@@ -164,7 +163,7 @@
 \end{verbatim}
 }
 
-Node mounts storage and runs cluster script, where are other instructions.
+The node mounts the storage and runs the cluster script, which contains the remaining instructions.
 
 \section{iPXE scripts}
 
@@ -180,7 +179,7 @@
 \end{verbatim}
 }
 
-In this script node sends request to management application and tell them that it is clean and it is ready to take new task. Very important parameter is mac address of the network card. The management uses this parameter to search which is the next one ipxe script (wait, cluster or hdd). 
+In this script the node sends a request to the management application and tells it that it is clean and ready to take a new task. A very important parameter is the MAC address of the network card. The management uses this parameter to look up the next ipxe script (wait, cluster or hdd). 
 
 The second script is wait.ipxe:
 
@@ -203,7 +202,7 @@
 
 This script is an infinite loop. Every 120 seconds the node asks the management for a new ipxe script. During this time the management is preparing the environment (creating directories for the revision, copying the source tree etc.).
 
-When server finish preparing environment ipxe script for node changes to cluster.ipxe:
+When the server finishes preparing the environment, the ipxe script for the node changes to cluster.ipxe:
 
 {\tt \small\begin{verbatim}
 #!ipxe
@@ -218,7 +217,7 @@
 
 and the node boots from the mfsBSD ISO and runs the tests. 
 
-When all tests is fine cluster script change node status (and ipxe script) to hdd:
+When all tests pass, the cluster script changes the node status (and ipxe script) to hdd:
 
 {\tt \small\begin{verbatim}
 #!ipxe
@@ -239,10 +238,10 @@
 \subsection{The node}
 \begin{itemize}
 \item the node starts netbooting in the take task status
-\item in first step of PXE booting node sends DHCP request and DHCP server respond with \texttt{next-server} and \texttt{filename} options and node knows what and from to download. 
+\item in the first step of PXE booting the node sends a DHCP request and the DHCP server responds with \texttt{next-server} and \texttt{filename} options, so the node knows what to download and where from. 
 \item the node downloads the iPXE loader binary via the TFTP protocol and executes it
-\item iPXE sends DHCP request and gives an answer with different filename option - url to iPXE starting script
-\item iPXE starting script asks the management for chainloading next script and authorize itself by mac address
+\item iPXE sends a DHCP request and gets an answer with a different filename option - the URL of the iPXE starting script
+\item the iPXE starting script asks the management for the next script to chainload and authorizes itself by its MAC address
 \item the management returns the \texttt{take\_task.ipxe} file
 \item \texttt{take\_task.ipxe} runs the next chainloading and the node waits for the environment preparation; during this time the server-side script \texttt{take\_task.sh} prepares the files (updates svn, rsyncs to the new src space; see the sketch after this list)
 \item the node chainloads the \texttt{cluster.ipxe} script and starts mfsBSD
@@ -252,17 +251,21 @@
 \item the node reboots and boots from the network like in the first step
 \end{itemize}
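+
+For illustration only, the preparation step can be sketched in Python as below. The project implements it on the server side as the shell script \texttt{take\_task.sh}; the paths and the revision number in the sketch are made up, only the two actions (svn update and rsync into a new src space) come from the step above.
+
+{\tt \small\begin{verbatim}
+# Hypothetical sketch of the environment preparation; the project
+# does this in take_task.sh. Paths and revision number are made up.
+import os
+import subprocess
+
+def prepare_environment(revision, checkout='/mnt/tank/src-checkout',
+                        src_space='/mnt/tank/revisions'):
+    # update the svn working copy to the revision under test
+    subprocess.check_call(['svn', 'update', '-r', str(revision), checkout])
+    # copy the tree into a per-revision directory on the
+    # deduplicated dataset
+    target = os.path.join(src_space, 'r%d' % revision)
+    if not os.path.isdir(target):
+        os.makedirs(target)
+    subprocess.check_call(['rsync', '-a', checkout + '/', target + '/'])
+
+prepare_environment(298401)
+\end{verbatim}
+}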
 
-If any step from building, installing or booting stage fails then the node starts netbooting and takes new task.
+If any step of the building, installing or booting stage fails, the node starts netbooting and takes a new task.
+
+The diagram of node states can be seen in Figure 2.
 
 \subsection{Revision}
 
 \begin{itemize}
-\item first status of revision is NEW, in this status revision waits for free node to take task 
+\item the first status of a revision is NEW; in this status the revision waits for a free node to take the task 
 \item when a node starts netbooting and the revision is first in the queue, its status changes to preparing
-\item in next steps revision is tested by compilation, installation and boot
-\item revision is marked as success or failed and logs of every steps are available on management server
+\item in the next steps the revision is tested by compilation, installation and boot
+\item the revision is marked as success or failed and the logs of every step are available on the management server
 \end{itemize}
 
+The diagram of revision states can be seen in Figure 3.
+
 \section{URLs}
 
 \begin{itemize}
@@ -277,6 +280,7 @@
 \begin{center}
   \centering
   \includegraphics[width=1\textwidth]{mgmt.png}
+  \caption {Dashboard screenshot.}
 \end{center}
 \end{figure}
 
@@ -286,6 +290,7 @@
 \begin{center}
   \centering
   \includegraphics[width=1\textwidth]{node.png}
+  \caption{Node states diagram.}
 \end{center}
 \end{figure}
 
@@ -295,6 +300,7 @@
 \begin{center}
   \centering
   \includegraphics[width=1\textwidth]{revision.png}
+  \caption{Revision states diagram.}
 \end{center}
 \end{figure}
 


