Date:      Mon, 17 Mar 2008 13:16:41 -0700 (PDT)
From:      Matthew Dillon <dillon@apollo.backplane.com>
To:        "Alexander Sack" <pisymbol@gmail.com>
Cc:        jgordeev@dir.bg, "Andrey V. Elsukov" <bu7cher@yandex.ru>, Robert Watson <rwatson@freebsd.org>, freebsd-hackers@freebsd.org
Subject:   Re: Re[2]: vkernel & GSoC, some questions
Message-ID:  <200803172016.m2HKGfjA020263@apollo.backplane.com>
References:  <20080316122108.S44049@fledge.watson.org> <E1JatyK-000FfY-00.shmukler-mail-ru@f8.mail.ru> <200803162313.m2GNDbvl009550@apollo.backplane.com> <3c0b01820803171243k5eb6abd3y1e1c44694c6be0f6@mail.gmail.com>


:Matt, I'm sorry I'm not trying to hijack this thread but isn't the vkernel
:approach very similar to VMWare's hosted architecture products (such as
:Fusion for the Mac and Client Workstation for windows)?
:
:As I understand it, they have a regular process like vkernel called
:vmware-vmx which provides the management of different VM contexts running
:along side the host OS.  It also does a passthrough for invalid PTEs to the
:real kernel and manages contexts in I believe the same fashion you just
:described.  There is also an I/O subsystem a long side it to reuse the
:hosted drivers to managed the virtualized filesystem and devices - not sure
:what Dragon does.
:
:I realize that their claim to fame is as you said x86 binary code
:translations but I believe VMWare's product is very close to what you are
:describing with respect to vkernels (please correct me if I'm wrong).  Its
:just that this thread has devolved slightly into a hypervisor vs. hosted
:architecture world and I believe there is room for both.
:
:Thanks!
:
:-aps

    This reminds me of XEN.  Basically instead of trying to rewrite
    instructions or do 100% hardware emulation it sounds like they are
    providing XEN-like functionality where the target OS is aware it is
    running inside a hypervisor and can make explicit 'shortcut' calls to
    the hypervisor instead of attempting to access the resource via
    emulated hardware.
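    As a toy illustration of that difference (every name here is made up;
    no real hypervisor ABI is implied), compare trapping on an emulated
    device register against one explicit hypercall-style shortcut:

```c
#include <assert.h>
#include <stdint.h>

/* Toy device model.  The "hardware" path decodes individual register
 * writes; the paravirtual path calls straight into the host service.
 * All names are hypothetical, for illustration only. */

static uint64_t disk_sector;     /* latched by the device model */
static int      host_io_count;   /* host transitions + decode work */

/* Emulated-hardware path: the guest writes a "register"; the VMM must
 * trap, decode which register was touched, and update device state. */
static void emulated_mmio_write(uint32_t reg, uint64_t val)
{
    host_io_count++;                    /* trap + decode */
    switch (reg) {
    case 0: disk_sector = val; break;   /* SECTOR register */
    case 1: host_io_count++; break;     /* GO register: start the I/O */
    }
}

/* Paravirtual path: the guest knows it is virtualized and asks the
 * host for the whole logical operation in one explicit call. */
static void hypercall_disk_io(uint64_t sector)
{
    host_io_count++;                    /* one transition, no decode */
    disk_sector = sector;
}
```

    Programming and kicking the emulated device costs three units of host
    work in this model, while the explicit shortcut costs one -- which is
    the whole point of the XEN-style approach.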

    These shortcuts are going to be considerably more efficient, resulting
    in better performance.  That efficiency is also the claim to fame of a
    vkernel architecture.  In fact, XEN is really much closer to a vkernel
    architecture than it is to a hypervisor architecture.  A vkernel can
    be thought of as the most generic and flexible implementation, with
    access to many system calls versus the fairly limited set XEN provides,
    while a hypervisor's access to the same subset is done by emulating
    hardware devices.

    In all three cases the emulated hardware -- disk and network,
    basically -- devolves down into calling read() or write() or the
    real-kernel equivalent.  A hypervisor has the most work to do since
    it is trying to emulate a hardware interface (adding another layer).
    XEN has less work to do as it is really not trying to emulate
    hardware.  A vkernel has even less work to do because it is running
    as a userland program and can simply make the appropriate system
    call to implement the back-end.

    There are more similarities than differences.  I expect VMWare is
    feeling the pressure from having to hack their code so much to support
    multiple operating systems... I mean, literally, every time Microsoft
    comes out with an update VMWare has to hack something new in.  It's
    really amazing how hard it is to emulate a complete hardware
    environment, let alone do it efficiently.

    Frankly, I would love to see something like VMWare force an industry-wide
    API for machine access which bypasses the holy hell of a mess we have
    with the BIOS, and see BIOSes then respec to a new far cleaner API.  The
    BIOS is the stinking pile of horseshit that has held back OS development
    for the last 15 years.

    For hardware emulation to really work efficiently one pretty much has to
    dedicate an entire cpu to the emulator in order to allow it to operate
    more like a coprocessor and save a larger chunk of the context switch
    overhead which is the bane of VMWare, UML/vkernel, AND XEN.  This may
    seem wasteful but when you are talking about systems with 4 or more cores
    which are more I/O and memory limited than they are cpu limited,
    dedicating a whole cpu to handle critical path operations would probably
    boost performance considerably.
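    Mechanically, dedicating a core like that is just thread affinity.
    As one possible sketch (using Linux's pthread_setaffinity_np, a GNU
    extension -- FreeBSD spells the equivalent cpuset_setaffinity(2)),
    the emulator thread pins itself to a single cpu:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <pthread.h>
#include <sched.h>

/* Sketch: confine the calling thread -- e.g. the device-model /
 * emulator thread -- to one cpu so its critical path stops migrating
 * between cores and it behaves more like a coprocessor.
 * Linux-specific; pthread_setaffinity_np is a GNU extension. */
static int pin_current_thread_to_cpu(int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```

    The host scheduler then never moves the emulator off that core, so
    guest requests handed to it avoid a full context switch on the
    critical path -- the trade-off the paragraph above describes.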

					-Matt



