From: Benedict Reuschling <bcr@FreeBSD.org>
Date: Sat, 23 Jun 2018 14:55:54 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-all@freebsd.org, svn-doc-head@freebsd.org
Subject: svn commit: r51904 - head/en_US.ISO8859-1/articles/linux-emulation
Message-Id: <201806231455.w5NEtstd048273@repo.freebsd.org>
X-SVN-Commit-Author: bcr
X-SVN-Commit-Revision: 51904
X-SVN-Commit-Repository: doc
Author: bcr
Date: Sat Jun 23 14:55:54 2018
New Revision: 51904
URL: https://svnweb.freebsd.org/changeset/doc/51904

Log:
  Style cleanup, purely cosmetical, no visual content changes:
  - Wrap overly long lines
  - Use two spaces after a sentence stop in a few places

Modified:
  head/en_US.ISO8859-1/articles/linux-emulation/article.xml

Modified: head/en_US.ISO8859-1/articles/linux-emulation/article.xml
==============================================================================
--- head/en_US.ISO8859-1/articles/linux-emulation/article.xml	Sat Jun 23 06:57:42 2018	(r51903)
+++ head/en_US.ISO8859-1/articles/linux-emulation/article.xml	Sat Jun 23 14:55:54 2018	(r51904)
@@ -3,13 +3,23 @@
 "http://www.FreeBSD.org/XML/share/xml/freebsd50.dtd">
-
- &linux; emulation in &os; - +
+ + &linux; emulation in &os; - RomanDivacky -
rdivacky@FreeBSD.org
-
+ + + Roman + Divacky + + +
+ rdivacky@FreeBSD.org +
+
+
&tm-attrib.adobe; @@ -28,151 +38,165 @@ $FreeBSD$ - This masters thesis deals with updating the &linux; emulation layer - (the so called Linuxulator). The task was to update the layer to match - the functionality of &linux; 2.6. As a reference implementation, the - &linux; 2.6.16 kernel was chosen. The concept is loosely based on - the NetBSD implementation. Most of the work was done in the summer - of 2006 as a part of the Google Summer of Code students program. - The focus was on bringing the NPTL (new &posix; - thread library) support into the emulation layer, including - TLS (thread local storage), + This masters thesis deals with updating the &linux; + emulation layer (the so called + Linuxulator). The task was to update + the layer to match the functionality of &linux; 2.6. As a + reference implementation, the &linux; 2.6.16 kernel was + chosen. The concept is loosely based on the NetBSD + implementation. Most of the work was done in the summer of + 2006 as a part of the Google Summer of Code students program. + The focus was on bringing the NPTL (new + &posix; thread library) support into the emulation layer, + including TLS (thread local storage), futexes (fast user space mutexes), PID mangling, and some other minor things. Many small problems were identified and fixed in the process. My work was integrated into the main &os; source - repository and will be shipped in the upcoming 7.0R release. We, - the emulation development team, are working on making the - &linux; 2.6 emulation the default emulation layer in &os;. + repository and will be shipped in the upcoming 7.0R release. + We, the emulation development team, are working on making the + &linux; 2.6 emulation the default emulation layer in + &os;.
Introduction - In the last few years the open source &unix; based operating systems - started to be widely deployed on server and client machines. Among - these operating systems I would like to point out two: &os;, for its BSD - heritage, time proven code base and many interesting features and - &linux; for its wide user base, enthusiastic open developer community - and support from large companies. &os; tends to be used on server - class machines serving heavy duty networking tasks with less usage on - desktop class machines for ordinary users. While &linux; has the same - usage on servers, but it is used much more by home based users. This - leads to a situation where there are many binary only programs available - for &linux; that lack support for &os;. + In the last few years the open source &unix; based operating + systems started to be widely deployed on server and client + machines. Among these operating systems I would like to point + out two: &os;, for its BSD heritage, time proven code base and + many interesting features and &linux; for its wide user base, + enthusiastic open developer community and support from large + companies. &os; tends to be used on server class machines + serving heavy duty networking tasks with less usage on desktop + class machines for ordinary users. While &linux; has the same + usage on servers, but it is used much more by home based users. + This leads to a situation where there are many binary only + programs available for &linux; that lack support for + &os;. - Naturally, a need for the ability to run &linux; binaries on a &os; - system arises and this is what this thesis deals with: the emulation of - the &linux; kernel in the &os; operating system. + Naturally, a need for the ability to run &linux; binaries on + a &os; system arises and this is what this thesis deals with: + the emulation of the &linux; kernel in the &os; operating + system. - During the Summer of 2006 Google Inc. 
sponsored a project which - focused on extending the &linux; emulation layer (the so called Linuxulator) - in &os; to include &linux; 2.6 facilities. This thesis is written as a - part of this project. + During the Summer of 2006 Google Inc. sponsored a project + which focused on extending the &linux; emulation layer (the so + called Linuxulator) in &os; to include &linux; 2.6 facilities. + This thesis is written as a part of this project. A look inside… - In this section we are going to describe every operating system in - question. How they deal with syscalls, trapframes etc., all the low-level - stuff. We also describe the way they understand common &unix; - primitives like what a PID is, what a thread is, etc. In the third - subsection we talk about how &unix; on &unix; emulation could be done - in general. + In this section we are going to describe every operating + system in question. How they deal with syscalls, trapframes + etc., all the low-level stuff. We also describe the way they + understand common &unix; primitives like what a PID is, what a + thread is, etc. In the third subsection we talk about how + &unix; on &unix; emulation could be done in general. What is &unix; &unix; is an operating system with a long history that has - influenced almost every other operating system currently in use. - Starting in the 1960s, its development continues to this day (although - in different projects). &unix; development soon forked into two main - ways: the BSDs and System III/V families. They mutually influenced - themselves by growing a common &unix; standard. Among the - contributions originated in BSD we can name virtual memory, TCP/IP - networking, FFS, and many others. The System V branch contributed to - SysV interprocess communication primitives, copy-on-write, etc. &unix; - itself does not exist any more but its ideas have been used by many - other operating systems world wide thus forming the so called &unix;-like - operating systems. 
These days the most influential ones are &linux;, - Solaris, and possibly (to some extent) &os;. There are in-company - &unix; derivatives (AIX, HP-UX etc.), but these have been more and - more migrated to the aforementioned systems. Let us summarize typical - &unix; characteristics. + influenced almost every other operating system currently in + use. Starting in the 1960s, its development continues to this + day (although in different projects). &unix; development soon + forked into two main ways: the BSDs and System III/V families. + They mutually influenced themselves by growing a common &unix; + standard. Among the contributions originated in BSD we can + name virtual memory, TCP/IP networking, FFS, and many others. + The System V branch contributed to SysV interprocess + communication primitives, copy-on-write, etc. &unix; itself + does not exist any more but its ideas have been used by many + other operating systems world wide thus forming the so called + &unix;-like operating systems. These days the most + influential ones are &linux;, Solaris, and possibly (to some + extent) &os;. There are in-company &unix; derivatives (AIX, + HP-UX etc.), but these have been more and more migrated to the + aforementioned systems. Let us summarize typical &unix; + characteristics. Technical details - Every running program constitutes a process that represents a state - of the computation. Running process is divided between kernel-space - and user-space. Some operations can be done only from kernel space - (dealing with hardware etc.), but the process should spend most of its - lifetime in the user space. The kernel is where the management of the - processes, hardware, and low-level details take place. The kernel - provides a standard unified &unix; API to the user space. The most - important ones are covered below. + Every running program constitutes a process that + represents a state of the computation. Running process is + divided between kernel-space and user-space. 
Some operations + can be done only from kernel space (dealing with hardware + etc.), but the process should spend most of its lifetime in + the user space. The kernel is where the management of the + processes, hardware, and low-level details take place. The + kernel provides a standard unified &unix; API to the user + space. The most important ones are covered below. - Communication between kernel and user space process + Communication between kernel and user space + process - Common &unix; API defines a syscall as a way to issue commands - from a user space process to the kernel. The most common - implementation is either by using an interrupt or specialized - instruction (think of - SYSENTER/SYSCALL instructions - for ia32). Syscalls are defined by a number. For example in &os;, - the syscall number 85 is the &man.swapon.2; syscall and the - syscall number 132 is &man.mkfifo.2;. Some syscalls need - parameters, which are passed from the user-space to the kernel-space - in various ways (implementation dependant). Syscalls are + Common &unix; API defines a syscall as a way to issue + commands from a user space process to the kernel. The most + common implementation is either by using an interrupt or + specialized instruction (think of + SYSENTER/SYSCALL + instructions for ia32). Syscalls are defined by a number. + For example in &os;, the syscall number 85 is the + &man.swapon.2; syscall and the syscall number 132 is + &man.mkfifo.2;. Some syscalls need parameters, which are + passed from the user-space to the kernel-space in various + ways (implementation dependant). Syscalls are synchronous. Another possible way to communicate is by using a - trap. Traps occur asynchronously after - some event occurs (division by zero, page fault etc.). A trap - can be transparent for a process (page fault) or can result in - a reaction like sending a signal - (division by zero). + trap. Traps occur asynchronously + after some event occurs (division by zero, page fault etc.). 
+ A trap can be transparent for a process (page fault) or can + result in a reaction like sending a + signal (division by zero). Communication between processes - There are other APIs (System V IPC, shared memory etc.) but the - single most important API is signal. Signals are sent by processes - or by the kernel and received by processes. Some signals - can be ignored or handled by a user supplied routine, some result - in a predefined action that cannot be altered or ignored. + There are other APIs (System V IPC, shared memory etc.) + but the single most important API is signal. Signals are + sent by processes or by the kernel and received by + processes. Some signals can be ignored or handled by a user + supplied routine, some result in a predefined action that + cannot be altered or ignored. Process management - Kernel instances are processed first in the system (so called - init). Every running process can create its identical copy using - the &man.fork.2; syscall. Some slightly modified versions of this - syscall were introduced but the basic semantic is the same. Every - running process can morph into some other process using the - &man.exec.3; syscall. Some modifications of this syscall were - introduced but all serve the same basic purpose. Processes end - their lives by calling the &man.exit.2; syscall. Every process is - identified by a unique number called PID. Every process has a - defined parent (identified by its PID). + Kernel instances are processed first in the system (so + called init). Every running process can create its + identical copy using the &man.fork.2; syscall. Some + slightly modified versions of this syscall were introduced + but the basic semantic is the same. Every running process + can morph into some other process using the &man.exec.3; + syscall. Some modifications of this syscall were introduced + but all serve the same basic purpose. Processes end their + lives by calling the &man.exit.2; syscall. 
Every process is + identified by a unique number called PID. Every process has + a defined parent (identified by its PID). Thread management - Traditional &unix; does not define any API nor implementation - for threading, while &posix; defines its threading API but the - implementation is undefined. Traditionally there were two ways of - implementing threads. Handling them as separate processes (1:1 - threading) or envelope the whole thread group in one process and - managing the threading in userspace (1:N threading). Comparing - main features of each approach: + Traditional &unix; does not define any API nor + implementation for threading, while &posix; defines its + threading API but the implementation is undefined. + Traditionally there were two ways of implementing threads. + Handling them as separate processes (1:1 threading) or + envelope the whole thread group in one process and managing + the threading in userspace (1:N threading). Comparing main + features of each approach: 1:1 threading @@ -199,10 +223,11 @@ + lightweight threads - + scheduling can be easily altered by the user + + scheduling can be easily altered by the + user - - syscalls must be wrapped + - syscalls must be wrapped - cannot utilize more than one CPU @@ -214,24 +239,26 @@ What is &os;? - The &os; project is one of the oldest open source operating - systems currently available for daily use. It is a direct descendant - of the genuine &unix; so it could be claimed that it is a true &unix; - although licensing issues do not permit that. The start of the project - dates back to the early 1990's when a crew of fellow BSD users patched - the 386BSD operating system. Based on this patchkit a new operating - system arose named &os; for its liberal license. Another group created - the NetBSD operating system with different goals in mind. We will - focus on &os;. + The &os; project is one of the oldest open source + operating systems currently available for daily use. 
It is a + direct descendant of the genuine &unix; so it could be claimed + that it is a true &unix; although licensing issues do not + permit that. The start of the project dates back to the early + 1990's when a crew of fellow BSD users patched the 386BSD + operating system. Based on this patchkit a new operating + system arose named &os; for its liberal license. Another + group created the NetBSD operating system with different goals + in mind. We will focus on &os;. - &os; is a modern &unix;-based operating system with all the - features of &unix;. Preemptive multitasking, multiuser facilities, - TCP/IP networking, memory protection, symmetric multiprocessing - support, virtual memory with merged VM and buffer cache, they are all - there. One of the interesting and extremely useful features is the - ability to emulate other &unix;-like operating systems. As of - December 2006 and 7-CURRENT development, the following - emulation functionalities are supported: + &os; is a modern &unix;-based operating system with all + the features of &unix;. Preemptive multitasking, multiuser + facilities, TCP/IP networking, memory protection, symmetric + multiprocessing support, virtual memory with merged VM and + buffer cache, they are all there. One of the interesting and + extremely useful features is the ability to emulate other + &unix;-like operating systems. As of December 2006 and + 7-CURRENT development, the following emulation functionalities + are supported: @@ -241,10 +268,12 @@ &os;/i386 emulation on &os;/ia64 - &linux;-emulation of &linux; operating system on &os; + &linux;-emulation of &linux; operating system on + &os; - NDIS-emulation of Windows networking drivers interface + NDIS-emulation of Windows networking drivers + interface NetBSD-emulation of NetBSD operating system @@ -257,62 +286,70 @@ - Actively developed emulations are the &linux; layer and various - &os;-on-&os; layers. Others are not supposed to work properly nor - be usable these days. 
+ Actively developed emulations are the &linux; layer and + various &os;-on-&os; layers. Others are not supposed to work + properly nor be usable these days. Technical details - &os; is traditional flavor of &unix; in the sense of dividing the - run of processes into two halves: kernel space and user space run. - There are two types of process entry to the kernel: a syscall and a - trap. There is only one way to return. In the subsequent sections - we will describe the three gates to/from the kernel. The whole - description applies to the i386 architecture as the Linuxulator - only exists there but the concept is similar on other architectures. - The information was taken from [1] and the source code. + &os; is traditional flavor of &unix; in the sense of + dividing the run of processes into two halves: kernel space + and user space run. There are two types of process entry to + the kernel: a syscall and a trap. There is only one way to + return. In the subsequent sections we will describe the + three gates to/from the kernel. The whole description + applies to the i386 architecture as the Linuxulator only + exists there but the concept is similar on other + architectures. The information was taken from [1] and the + source code. System entries - &os; has an abstraction called an execution class loader, - which is a wedge into the &man.execve.2; syscall. This employs a - structure sysentvec, which describes an - executable ABI. It contains things like errno translation table, - signal translation table, various functions to serve syscall needs - (stack fixup, coredumping, etc.). Every ABI the &os; kernel wants - to support must define this structure, as it is used later in the - syscall processing code and at some other places. System entries - are handled by trap handlers, where we can access both the - kernel-space and the user-space at once. + &os; has an abstraction called an execution class + loader, which is a wedge into the &man.execve.2; syscall. 
+ This employs a structure sysentvec, + which describes an executable ABI. It contains things + like errno translation table, signal translation table, + various functions to serve syscall needs (stack fixup, + coredumping, etc.). Every ABI the &os; kernel wants to + support must define this structure, as it is used later in + the syscall processing code and at some other places. + System entries are handled by trap handlers, where we can + access both the kernel-space and the user-space at + once. Syscalls Syscalls on &os; are issued by executing interrupt - 0x80 with register %eax set - to a desired syscall number with arguments passed on the stack. + 0x80 with register + %eax set to a desired syscall number + with arguments passed on the stack. - When a process issues an interrupt 0x80, the - int0x80 syscall trap handler is issued (defined - in sys/i386/i386/exception.s), which prepares - arguments (i.e. copies them on to the stack) for a - call to a C function &man.syscall.2; (defined in - sys/i386/i386/trap.c), which processes the - passed in trapframe. The processing consists of preparing the - syscall (depending on the sysvec entry), - determining if the syscall is 32-bit or 64-bit one (changes size - of the parameters), then the parameters are copied, including the - syscall. Next, the actual syscall function is executed with - processing of the return code (special cases for - ERESTART and EJUSTRETURN - errors). Finally an userret() is scheduled, - switching the process back to the users-pace. The parameters to - the actual syscall handler are passed in the form of - struct thread *td, - struct syscall args * arguments where the second + When a process issues an interrupt + 0x80, the int0x80 + syscall trap handler is issued (defined in + sys/i386/i386/exception.s), which + prepares arguments (i.e. copies them on to the stack) for + a call to a C function &man.syscall.2; (defined in + sys/i386/i386/trap.c), which + processes the passed in trapframe. 
The processing + consists of preparing the syscall (depending on the + sysvec entry), determining if the + syscall is 32-bit or 64-bit one (changes size of the + parameters), then the parameters are copied, including the + syscall. Next, the actual syscall function is executed + with processing of the return code (special cases for + ERESTART and + EJUSTRETURN errors). Finally an + userret() is scheduled, switching the + process back to the users-pace. The parameters to the + actual syscall handler are passed in the form of + struct thread *td, struct + syscall args * arguments where the second parameter is a pointer to the copied in structure of parameters. @@ -320,68 +357,76 @@ Traps - Handling of traps in &os; is similar to the handling of - syscalls. Whenever a trap occurs, an assembler handler is called. - It is chosen between alltraps, alltraps with regs pushed or - calltrap depending on the type of the trap. This handler prepares - arguments for a call to a C function trap() - (defined in sys/i386/i386/trap.c), which then - processes the occurred trap. After the processing it might send a - signal to the process and/or exit to userland using - userret(). + Handling of traps in &os; is similar to the handling + of syscalls. Whenever a trap occurs, an assembler handler + is called. It is chosen between alltraps, alltraps with + regs pushed or calltrap depending on the type of the trap. + This handler prepares arguments for a call to a C function + trap() (defined in + sys/i386/i386/trap.c), which then + processes the occurred trap. After the processing it + might send a signal to the process and/or exit to userland + using userret(). Exits - Exits from kernel to userspace happen using the assembler - routine doreti regardless of whether the kernel - was entered via a trap or via a syscall. This restores the program - status from the stack and returns to the userspace. 
+ Exits from kernel to userspace happen using the + assembler routine doreti regardless of + whether the kernel was entered via a trap or via a + syscall. This restores the program status from the stack + and returns to the userspace. &unix; primitives - &os; operating system adheres to the traditional &unix; scheme, - where every process has a unique identification number, the so - called PID (Process ID). PID numbers are + &os; operating system adheres to the traditional + &unix; scheme, where every process has a unique + identification number, the so called + PID (Process ID). PID numbers are allocated either linearly or randomly ranging from - 0 to PID_MAX. The allocation - of PID numbers is done using linear searching of PID space. Every - thread in a process receives the same PID number as result of the - &man.getpid.2; call. + 0 to PID_MAX. The + allocation of PID numbers is done using linear searching + of PID space. Every thread in a process receives the same + PID number as result of the &man.getpid.2; call. - There are currently two ways to implement threading in &os;. - The first way is M:N threading followed by the 1:1 threading model. - The default library used is M:N threading - (libpthread) and you can switch at runtime to - 1:1 threading (libthr). The plan is to switch - to 1:1 library by default soon. Although those two libraries use - the same kernel primitives, they are accessed through different - API(es). The M:N library uses the kse_* family - of syscalls while the 1:1 library uses the thr_* - family of syscalls. Because of this, there is no general concept - of thread ID shared between kernel and userspace. Of course, both - threading libraries implement the pthread thread ID API. Every - kernel thread (as described by struct thread) - has td tid identifier but this is not directly accessible - from userland and solely serves the kernel's needs. 
It is also - used for 1:1 threading library as pthread's thread ID but handling - of this is internal to the library and cannot be relied on. + There are currently two ways to implement threading in + &os;. The first way is M:N threading followed by the 1:1 + threading model. The default library used is M:N + threading (libpthread) and you can + switch at runtime to 1:1 threading + (libthr). The plan is to switch to 1:1 + library by default soon. Although those two libraries use + the same kernel primitives, they are accessed through + different API(es). The M:N library uses the + kse_* family of syscalls while the 1:1 + library uses the thr_* family of + syscalls. Because of this, there is no general concept of + thread ID shared between kernel and userspace. Of course, + both threading libraries implement the pthread thread ID + API. Every kernel thread (as described by struct + thread) has td tid identifier but this is not + directly accessible from userland and solely serves the + kernel's needs. It is also used for 1:1 threading library + as pthread's thread ID but handling of this is internal to + the library and cannot be relied on. - As stated previously there are two implementations of threading - in &os;. The M:N library divides the work between kernel space and - userspace. Thread is an entity that gets scheduled in the kernel - but it can represent various number of userspace threads. - M userspace threads get mapped to N kernel threads thus saving - resources while keeping the ability to exploit multiprocessor - parallelism. Further information about the implementation can be - obtained from the man page or [1]. The 1:1 library directly maps a - userland thread to a kernel thread thus greatly simplifying the - scheme. 
None of these designs implement a fairness mechanism (such - a mechanism was implemented but it was removed recently because it - caused serious slowdown and made the code more difficult to deal + As stated previously there are two implementations of + threading in &os;. The M:N library divides the work + between kernel space and userspace. Thread is an entity + that gets scheduled in the kernel but it can represent + various number of userspace threads. M userspace threads + get mapped to N kernel threads thus saving resources while + keeping the ability to exploit multiprocessor parallelism. + Further information about the implementation can be + obtained from the man page or [1]. The 1:1 library + directly maps a userland thread to a kernel thread thus + greatly simplifying the scheme. None of these designs + implement a fairness mechanism (such a mechanism was + implemented but it was removed recently because it caused + serious slowdown and made the code more difficult to deal with). @@ -390,64 +435,70 @@ What is &linux; - &linux; is a &unix;-like kernel originally developed by Linus - Torvalds, and now being contributed to by a massive crowd of - programmers all around the world. From its mere beginnings to today, - with wide support from companies such as IBM or Google, &linux; is - being associated with its fast development pace, full hardware support - and benevolent dictator model of organization. + &linux; is a &unix;-like kernel originally developed by + Linus Torvalds, and now being contributed to by a massive + crowd of programmers all around the world. From its mere + beginnings to today, with wide support from companies such as + IBM or Google, &linux; is being associated with its fast + development pace, full hardware support and benevolent + dictator model of organization. - &linux; development started in 1991 as a hobbyist project at - University of Helsinki in Finland. 
Since then it has obtained all the - features of a modern &unix;-like OS: multiprocessing, multiuser - support, virtual memory, networking, basically everything is there. - There are also highly advanced features like virtualization etc. + &linux; development started in 1991 as a hobbyist project + at University of Helsinki in Finland. Since then it has + obtained all the features of a modern &unix;-like OS: + multiprocessing, multiuser support, virtual memory, + networking, basically everything is there. There are also + highly advanced features like virtualization etc. - As of 2006 &linux; seems to be the most widely used open source - operating system with support from independent software vendors like - Oracle, RealNetworks, Adobe, etc. Most of the commercial software - distributed for &linux; can only be obtained in a binary form so - recompilation for other operating systems is impossible. + As of 2006 &linux; seems to be the most widely used open + source operating system with support from independent software + vendors like Oracle, RealNetworks, Adobe, etc. Most of the + commercial software distributed for &linux; can only be + obtained in a binary form so recompilation for other operating + systems is impossible. Most of the &linux; development happens in a Git version control system. - Git is a distributed system so there is - no central source of the &linux; code, but some branches are considered - prominent and official. The version number scheme implemented by - &linux; consists of four numbers A.B.C.D. Currently development - happens in 2.6.C.D, where C represents major version, where new - features are added or changed while D is a minor version for bugfixes - only. + Git is a distributed system so + there is no central source of the &linux; code, but some + branches are considered prominent and official. The version + number scheme implemented by &linux; consists of four numbers + A.B.C.D. 
Currently development happens in 2.6.C.D, where C + represents major version, where new features are added or + changed while D is a minor version for bugfixes only. More information can be obtained from [3]. Technical details - &linux; follows the traditional &unix; scheme of dividing the run - of a process in two halves: the kernel and user space. The kernel can - be entered in two ways: via a trap or via a syscall. The return is - handled only in one way. The further description applies to - &linux; 2.6 on the &i386; architecture. This information was - taken from [2]. + &linux; follows the traditional &unix; scheme of + dividing the run of a process in two halves: the kernel and + user space. The kernel can be entered in two ways: via a + trap or via a syscall. The return is handled only in one + way. The further description applies to &linux; 2.6 on + the &i386; architecture. This information was taken from + [2]. Syscalls Syscalls in &linux; are performed (in userspace) using - syscallX macros where X substitutes a number - representing the number of parameters of the given syscall. This - macro translates to a code that loads %eax - register with a number of the syscall and executes interrupt - 0x80. After this syscall return is called, - which translates negative return values to positive - errno values and sets res to - -1 in case of an error. Whenever the interrupt - 0x80 is called the process enters the kernel in - system call trap handler. This routine saves all registers on the - stack and calls the selected syscall entry. Note that the &linux; - calling convention expects parameters to the syscall to be passed - via registers as shown here: + syscallX macros where X substitutes a + number representing the number of parameters of the given + syscall. This macro translates to a code that loads + %eax register with a number of the + syscall and executes interrupt 0x80. 
+ After this syscall return is called, which translates + negative return values to positive + errno values and sets + res to -1 in case of + an error. Whenever the interrupt 0x80 + is called the process enters the kernel in system call + trap handler. This routine saves all registers on the + stack and calls the selected syscall entry. Note that the + &linux; calling convention expects parameters to the + syscall to be passed via registers as shown here: @@ -470,53 +521,58 @@ - There are some exceptions to this, where &linux; uses different - calling convention (most notably the clone - syscall). + There are some exceptions to this, where &linux; uses + different calling convention (most notably the + clone syscall). Traps The trap handlers are introduced in - arch/i386/kernel/traps.c and most of these - handlers live in arch/i386/kernel/entry.S, - where handling of the traps happens. + arch/i386/kernel/traps.c and most of + these handlers live in + arch/i386/kernel/entry.S, where + handling of the traps happens. Exits - Return from the syscall is managed by syscall &man.exit.3;, - which checks for the process having unfinished work, then checks - whether we used user-supplied selectors. If this happens stack - fixing is applied and finally the registers are restored from the - stack and the process returns to the userspace. + Return from the syscall is managed by syscall + &man.exit.3;, which checks for the process having + unfinished work, then checks whether we used user-supplied + selectors. If this happens stack fixing is applied and + finally the registers are restored from the stack and the + process returns to the userspace. &unix; primitives - In the 2.6 version, the &linux; operating system redefined some - of the traditional &unix; primitives, notably PID, TID and thread. - PID is defined not to be unique for every process, so for some - processes (threads) &man.getppid.2; returns the same value. Unique - identification of process is provided by TID. 
This is because - NPTL (New &posix; Thread Library) defines - threads to be normal processes (so called 1:1 threading). Spawning - a new process in &linux; 2.6 happens using the - clone syscall (fork variants are reimplemented using - it). This clone syscall defines a set of flags that affect - behavior of the cloning process regarding thread implementation. - The semantic is a bit fuzzy as there is no single flag telling the - syscall to create a thread. + In the 2.6 version, the &linux; operating system + redefined some of the traditional &unix; primitives, + notably PID, TID and thread. PID is defined not to be + unique for every process, so for some processes (threads) + &man.getppid.2; returns the same value. Unique + identification of process is provided by TID. This is + because NPTL (New &posix; Thread + Library) defines threads to be normal processes (so called + 1:1 threading). Spawning a new process in + &linux; 2.6 happens using the + clone syscall (fork variants are + reimplemented using it). This clone syscall defines a set + of flags that affect behavior of the cloning process + regarding thread implementation. The semantic is a bit + fuzzy as there is no single flag telling the syscall to + create a thread. 
Implemented clone flags are: - CLONE_VM - processes share their memory - space + CLONE_VM - processes share + their memory space CLONE_FS - share umask, cwd and @@ -527,72 +583,78 @@ files - CLONE_SIGHAND - share signal handlers - and blocked signals + CLONE_SIGHAND - share signal + handlers and blocked signals - CLONE_PARENT - share parent + CLONE_PARENT - share + parent - CLONE_THREAD - be thread (further - explanation below) + CLONE_THREAD - be thread + (further explanation below) - CLONE_NEWNS - new namespace + CLONE_NEWNS - new + namespace CLONE_SYSVSEM - share SysV undo structures - CLONE_SETTLS - setup TLS at supplied - address + CLONE_SETTLS - setup TLS at + supplied address - CLONE_PARENT_SETTID - set TID in the - parent + CLONE_PARENT_SETTID - set TID + in the parent - CLONE_CHILD_CLEARTID - clear TID in the - child + CLONE_CHILD_CLEARTID - clear + TID in the child - CLONE_CHILD_SETTID - set TID in the - child + CLONE_CHILD_SETTID - set TID in + the child - CLONE_PARENT sets the real parent to the - parent of the caller. This is useful for threads because if thread - A creates thread B we want thread B to be parented to the parent of - the whole thread group. CLONE_THREAD does - exactly the same thing as CLONE_PARENT, - CLONE_VM and CLONE_SIGHAND, - rewrites PID to be the same as PID of the caller, sets exit signal - to be none and enters the thread group. - CLONE_SETTLS sets up GDT entries for TLS - handling. The CLONE_*_*TID set of flags - sets/clears user supplied address to TID or 0. + CLONE_PARENT sets the real parent + to the parent of the caller. This is useful for threads + because if thread A creates thread B we want thread B to + be parented to the parent of the whole thread group. + CLONE_THREAD does exactly the same + thing as CLONE_PARENT, + CLONE_VM and + CLONE_SIGHAND, rewrites PID to be the + same as PID of the caller, sets exit signal to be none and + enters the thread group. CLONE_SETTLS + sets up GDT entries for TLS handling. 
The + CLONE_*_*TID set of flags sets/clears + user-supplied address to TID or 0. - As you can see the CLONE_THREAD does most - of the work and does not seem to fit the scheme very well. The - original intention is unclear (even for authors, according to - comments in the code) but I think originally there was one - threading flag, which was then parcelled among many other flags - but this separation was never fully finished. It is also unclear - what this partition is good for as glibc does not use that so only - hand-written use of the clone permits a programmer to access this - features. + As you can see the CLONE_THREAD + does most of the work and does not seem to fit the scheme + very well. The original intention is unclear (even for + authors, according to comments in the code) but I think + originally there was one threading flag, which was then + parcelled among many other flags but this separation was + never fully finished. It is also unclear what this + partition is good for as glibc does not use that so only + hand-written use of the clone permits a programmer to + access these features. - For non-threaded programs the PID and TID are the same. For - threaded programs the first thread PID and TID are the same and - every created thread shares the same PID and gets assigned a - unique TID (because CLONE_THREAD is passed in) - also parent is shared for all processes forming this threaded + For non-threaded programs the PID and TID are the + same. For threaded programs the first thread PID and TID + are the same and every created thread shares the same PID + and gets assigned a unique TID (because + CLONE_THREAD is passed in) also parent + is shared for all processes forming this threaded program. 
- The code that implements &man.pthread.create.3; in NPTL defines - the clone flags like this: + The code that implements &man.pthread.create.3; in + NPTL defines the clone flags like this: int clone_flags = (CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGNAL @@ -606,12 +668,13 @@ | 0); - The CLONE_SIGNAL is defined like + The CLONE_SIGNAL is defined + like #define CLONE_SIGNAL (CLONE_SIGHAND | CLONE_THREAD) - the last 0 means no signal is sent when any of the threads - exits. + the last 0 means no signal is sent when any of the + threads exits. @@ -619,71 +682,80 @@ What is emulation - According to a dictionary definition, emulation is the ability of - a program or device to imitate another program or device. This is - achieved by providing the same reaction to a given stimulus as the - emulated object. In practice, the software world mostly sees three - types of emulation - a program used to emulate a machine (QEMU, various - game console emulators etc.), software emulation of a hardware facility - (OpenGL emulators, floating point units emulation etc.) and operating - system emulation (either in kernel of the operating system or as a - userspace program). + According to a dictionary definition, emulation is the + ability of a program or device to imitate another program or + device. This is achieved by providing the same reaction to a + given stimulus as the emulated object. In practice, the + software world mostly sees three types of emulation - a + program used to emulate a machine (QEMU, various game console + emulators etc.), software emulation of a hardware facility + (OpenGL emulators, floating point units emulation etc.) and + operating system emulation (either in kernel of the operating + system or as a userspace program). - Emulation is usually used in a place, where using the original - component is not feasible nor possible at all. For example someone - might want to use a program developed for a different operating - system than they use. 
Then emulation comes in handy. Sometimes - there is no other way but to use emulation - e.g. when the hardware - device you try to use does not exist (yet/anymore) then there is no - other way but emulation. This happens often when porting an operating + Emulation is usually used in a place, where using the + original component is not feasible nor possible at all. For + example someone might want to use a program developed for a + different operating system than they use. Then emulation + comes in handy. Sometimes there is no other way but to use + emulation - e.g. when the hardware device you try to use does + not exist (yet/anymore) then there is no other way but + emulation. This happens often when porting an operating system to a new (non-existent) platform. Sometimes it is just cheaper to emulate. - Looking from an implementation point of view, there are two main - approaches to the implementation of emulation. You can either emulate - the whole thing - accepting possible inputs of the original object, - maintaining inner state and emitting correct output based on the state - and/or input. This kind of emulation does not require any special - conditions and basically can be implemented anywhere for any - device/program. The drawback is that implementing such emulation is - quite difficult, time-consuming and error-prone. In some cases we can - use a simpler approach. Imagine you want to emulate a printer that - prints from left to right on a printer that prints from right to left. - It is obvious that there is no need for a complex emulation layer but - simply reversing of the printed text is sufficient. Sometimes the - emulating environment is very similar to the emulated one so just a - thin layer of some translation is necessary to provide fully working - emulation! As you can see this is much less demanding to implement, - so less time-consuming and error-prone than the previous approach. 
But - the necessary condition is that the two environments must be similar *** DIFF OUTPUT TRUNCATED AT 1000 LINES ***