Date:      Mon, 17 Jul 2023 09:08:27 -0800
From:      Rob Wing <rob.fx907@gmail.com>
To:        Elena Mihailescu <elenamihailescu22@gmail.com>
Cc:        Corvin Köhne <corvink@freebsd.org>,  "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>,  Mihai Carabas <mihai.carabas@gmail.com>, Matthew Grooms <mgrooms@shrew.net>
Subject:   Re: Warm and Live Migration Implementation for bhyve
Message-ID:  <CAF3+n_e31oOBv_cN5dtAThN3R0+ZZ0TyD5fBL5PYHpZYnF=S+A@mail.gmail.com>
In-Reply-To: <CAGOCPLi_oD5E1yQSVRGaqNAfVthmK3LWpMR+9GYzYj+CB4sdTA@mail.gmail.com>
References:  <CAGOCPLhJrNrysBM1vc87vfkX5jZLCmnyfGf+cv2wmHFF1UhC-w@mail.gmail.com> <3d7ee1f6ff98fe9aede5a85702b906fc3014b6b6.camel@FreeBSD.org> <CAGOCPLg4ZeaRLK0VeRzifteXt3dJnSqZ=YT5BJ8EtH7+wMkTfA@mail.gmail.com> <b66fb737fca369239b3953892132f7e29906564f.camel@FreeBSD.org> <CAGOCPLi_oD5E1yQSVRGaqNAfVthmK3LWpMR+9GYzYj+CB4sdTA@mail.gmail.com>


I'm curious why the stream send bits are rolled into bhyve as opposed to
using netcat/ssh to do the network transfer?

sort of how one would do a zfs send/recv between hosts
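e.g. something like: zfs send tank/vm@snap | ssh dsthost zfs recv tank/vm
(the dataset and host names here are just placeholders)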

On Monday, July 17, 2023, Elena Mihailescu <elenamihailescu22@gmail.com>
wrote:

> Hi Corvin,
>
> On Mon, 3 Jul 2023 at 09:35, Corvin Köhne <corvink@freebsd.org> wrote:
> >
> > On Tue, 2023-06-27 at 16:35 +0300, Elena Mihailescu wrote:
> > > Hi Corvin,
> > >
> > > Thank you for the questions! I'll respond to them inline.
> > >
> > > On Mon, 26 Jun 2023 at 10:16, Corvin Köhne <corvink@freebsd.org>
> > > wrote:
> > > >
> > > > Hi Elena,
> > > >
> > > > thanks for posting this proposal here.
> > > >
> > > > Some open questions from my side:
> > > >
> > > > 1. How is the data sent to the target? Does the host send a
> > > > complete dump that the target parses? Or does the target request
> > > > data one by one and the host sends it as a response?
> > > >
> > > It's not a single dump of the guest's state; it is transmitted in
> > > steps. However, some parts may be migrated as one chunk (e.g., the
> > > emulated devices' state is transmitted as the buffer generated by
> > > the snapshot functions).
> > >
> >
> > How does the receiver know which chunk relates to which device? It
> > would be nice if you could start bhyve on the receiver side without
> > parameters, e.g. `bhyve --receive=127.0.0.1:1234`. Therefore, the
> > protocol has to carry some information about the device configuration.
> >
>
> Regarding your first question, we send a chunk of data (a buffer) with
> the state: we restore the data in the same order we saved it. It relies
> on save/restore. We currently do not support migrating between
> different versions of suspend/resume or of the migration code.
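>
> Roughly, each piece of state produced by the snapshot code is written
> to the socket as an opaque, length-prefixed buffer, and the receiver
> consumes the buffers in the same order. A minimal sketch of the sending
> side (illustration only, not the actual code or wire format in the
> patches):
>
> #include <stdint.h>
> #include <unistd.h>
>
> /* Illustrative only; short writes are not handled for brevity. */
> static int
> send_state_buffer(int sock, const void *buf, uint64_t len)
> {
>     /* length first, so the receiver knows how much state follows */
>     if (write(sock, &len, sizeof(len)) != (ssize_t)sizeof(len))
>         return (-1);
>     if (write(sock, buf, len) != (ssize_t)len)
>         return (-1);
>     return (0);
> }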
>
> It would be nice to have something like `bhyve
> --receive=127.0.0.1:1234`, but I don't think it is possible at this
> point, mainly for two reasons:
> - the guest image must be shared (e.g., via NFS) between the source
> and destination hosts. If the mount points differ between the two,
> opening the disk at the destination will fail (we also have to assume
> that the user gave an absolute path, since a relative one won't work)
> - if the VM uses a network adapter, we must specify the tap interface
> on the destination host (e.g., if the VM uses `tap0` on the source
> host, `tap0` may not exist on the destination host, or may already be
> used by another VM).
>
>
> >
> > > I'll try to briefly describe the protocol we have implemented for
> > > migration; maybe it can partially answer the second and third
> > > questions.
> > >
> > > The destination host waits for the source host to connect (through a
> > > socket). After that, the source sends its system specifications
> > > (hw_machine, hw_model, hw_pagesize). If the source and destination
> > > hosts have identical hardware configurations, the migration can take
> > > place.
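> > >
> > > Conceptually, the check is something like the sketch below (the
> > > struct and function names are made up for illustration; the actual
> > > patch has its own structures, filled in from the hw.machine,
> > > hw.model and hw.pagesize sysctls):
> > >
> > > #include <stdint.h>
> > > #include <string.h>
> > >
> > > struct migration_specs {
> > >     char     hw_machine[64];    /* e.g. "amd64" */
> > >     char     hw_model[128];     /* CPU model string */
> > >     uint64_t hw_pagesize;       /* page size in bytes */
> > > };
> > >
> > > /* The source sends its specs; the destination compares them with
> > >  * its own and aborts the migration if anything differs. */
> > > static int
> > > specs_match(const struct migration_specs *a,
> > >     const struct migration_specs *b)
> > > {
> > >     return (strcmp(a->hw_machine, b->hw_machine) == 0 &&
> > >         strcmp(a->hw_model, b->hw_model) == 0 &&
> > >         a->hw_pagesize == b->hw_pagesize);
> > > }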
> > >
> > > Then, for live migration, we migrate the memory in rounds (i.e., we
> > > get a list of the pages that have the dirty bit set, send it to the
> > > destination so it knows what pages will be received, then send the
> > > pages through the socket; this process is repeated until the last
> > > round).
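> > >
> > > A toy model of that pre-copy loop (not the actual bhyve code; the
> > > real implementation gets the dirty-page information from the
> > > kernel):
> > >
> > > #include <stdbool.h>
> > > #include <stddef.h>
> > >
> > > static void
> > > precopy_rounds(bool *dirty, size_t npages, size_t stop_threshold,
> > >     void (*send_page)(size_t idx))
> > > {
> > >     size_t remaining;
> > >
> > >     do {
> > >         for (size_t i = 0; i < npages; i++) {
> > >             if (!dirty[i])
> > >                 continue;
> > >             dirty[i] = false;   /* cleared only when the page is sent */
> > >             send_page(i);
> > >             /* the guest keeps running and may dirty pages again */
> > >         }
> > >         remaining = 0;
> > >         for (size_t i = 0; i < npages; i++)
> > >             remaining += dirty[i];
> > >     } while (remaining > stop_threshold);
> > >     /* the caller then stops the vcpus and sends what is left */
> > > }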
> > >
> > > Next, we stop the guest's vcpus and send the remaining memory (for
> > > live migration) or the guest's memory starting at vmctx->baseaddr
> > > (for warm migration). Then, based on the suspend/resume feature, we
> > > get the state of the virtualized devices (the ones in kernel space)
> > > and send this buffer to the destination. We repeat this for the
> > > emulated devices as well (the ones in userspace).
> > >
> > > On the receiver host, we get the memory pages and copy them to their
> > > corresponding positions in the guest's memory, use the restore
> > > functions for the state of the devices, and start the guest's
> > > execution.
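> > >
> > > The receiving side mirrors the sender: it reads each length-prefixed
> > > buffer and hands it to the corresponding restore function in the
> > > same order it was saved. Again just a sketch, not the actual code:
> > >
> > > #include <stdint.h>
> > > #include <unistd.h>
> > >
> > > /* Illustrative only; short reads are not handled for brevity. */
> > > static int
> > > recv_state_buffer(int sock, void *buf, uint64_t maxlen, uint64_t *lenp)
> > > {
> > >     uint64_t len;
> > >
> > >     if (read(sock, &len, sizeof(len)) != (ssize_t)sizeof(len))
> > >         return (-1);
> > >     if (len > maxlen || read(sock, buf, len) != (ssize_t)len)
> > >         return (-1);
> > >     *lenp = len;
> > >     return (0);
> > > }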
> > >
> > > Excluding the guest's memory transfer, the rest is based on the
> > > suspend/resume feature. We snapshot the guest's state, but instead
> > > of saving the data locally, we send it over the network to the
> > > destination. On the destination host, we start a new virtual
> > > machine, but instead of reading the state from disk (i.e., from the
> > > snapshot files), we get it over the network from the source host.
> > >
> > > If the destination can properly resume the guest activity, it will
> > > send an "OK" to the source host so it can destroy/remove the guest
> > > from its end.
> > >
> > > Both warm and live migration are based on "cold migration". Cold
> > > migration means we suspend the guest on the source host and restore
> > > it on the destination host from the snapshot files. Warm migration
> > > does the same over a socket, while live migration also changes the
> > > way the memory is migrated.
> > >
> > > > 2. What happens if we add a new data section?
> > > >
> > > What do you mean by a new data section? Is this question related to
> > > the third one? If so, see my answer below.
> > >
> > > > 3. What happens if the bhyve version differs on host and target
> > > > machine?
> > >
> > > The two hosts must be identical for migration; that's why we check
> > > the specifications of the two hosts before migrating. They are
> > > expected to run the same version of bhyve and FreeBSD. We will add
> > > an additional check to this step to verify that both hosts run the
> > > same FreeBSD build.
> > >
> > > As long as changes in the virtual memory subsystem do not affect
> > > bhyve (and how the virtual machine sees/uses the memory), the
> > > migration constraints should only be those of suspend/resume. The
> > > state of the virtual devices is handled by the snapshot system, so
> > > if it can accommodate changes in the data structures, the migration
> > > process will not be affected.
> > >
> > > Thank you,
> > > Elena
> > >
> > > >
> > > >
> > > > --
> > > > Kind regards,
> > > > Corvin
> > > >
> > > > On Fri, 2023-06-23 at 13:00 +0300, Elena Mihailescu wrote:
> > > > > Hello,
> > > > >
> > > > > This mail presents the migration feature we have implemented for
> > > > > bhyve. Any feedback from the community is much appreciated.
> > > > >
> > > > > We have opened a stack of reviews on Phabricator
> > > > > (https://reviews.freebsd.org/D34717) that is meant to split the
> > > > > code into smaller parts so it can be reviewed more easily. A
> > > > > brief history of the implementation can be found at the bottom
> > > > > of this email.
> > > > >
> > > > > The migration mechanism we propose needs two main components in
> > > > > order to move a virtual machine from one host to another:
> > > > > 1. the guest's state (vCPUs, emulated and virtualized devices)
> > > > > 2. the guest's memory
> > > > >
> > > > > For the first part, we rely on the suspend/resume feature. We
> > > > > call the same functions as the ones used by suspend/resume, but
> > > > > instead of saving the data in files, we send it via the network.
> > > > >
> > > > > The most time-consuming aspect of migration is transmitting the
> > > > > guest memory. The UPB team has implemented two options to
> > > > > accomplish this:
> > > > > 1. Warm Migration: the guest execution is suspended on the source
> > > > > host while the memory is sent to the destination host. This
> > > > > method is less complex but may cause extended downtime.
> > > > > 2. Live Migration: the guest continues to execute on the source
> > > > > host while the memory is transmitted to the destination host.
> > > > > This method is more complex but offers reduced downtime.
> > > > >
> > > > > The proposed live migration procedure (pre-copy live migration)
> > > > > migrates the memory in rounds:
> > > > > 1. In the initial round, we migrate all the guest memory (all
> > > > > pages that are allocated)
> > > > > 2. In the subsequent rounds, we migrate only the pages that were
> > > > > modified since the previous round started
> > > > > 3. In the final round, we suspend the guest and migrate the pages
> > > > > that were modified since the previous round, together with the
> > > > > guest's internal state (vCPU, emulated and virtualized devices).
> > > > >
> > > > > To detect the pages that were modified between rounds, we
> > > > > propose an additional dirty bit (a "virtualization dirty bit")
> > > > > for each memory page. This bit would be set every time the
> > > > > page's dirty bit is set, but it is reset only when the page is
> > > > > migrated.
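> > > > >
> > > > > A toy illustration of the idea (not the actual vm_page flags or
> > > > > the code in the patches):
> > > > >
> > > > > #include <stdbool.h>
> > > > >
> > > > > struct page_track {
> > > > >     bool dirty;      /* existing dirty bit, cleared by the VM system */
> > > > >     bool virt_dirty; /* proposed bit, cleared only on migration */
> > > > > };
> > > > >
> > > > > static void
> > > > > page_written(struct page_track *p)
> > > > > {
> > > > >     p->dirty = true;
> > > > >     p->virt_dirty = true;  /* mirrors every setting of the dirty bit */
> > > > > }
> > > > >
> > > > > static bool
> > > > > page_needs_migration(struct page_track *p)
> > > > > {
> > > > >     bool need = p->virt_dirty;
> > > > >
> > > > >     p->virt_dirty = false; /* reset only when the page is migrated */
> > > > >     return (need);
> > > > > }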
> > > > >
> > > > > The proposed implementation is split in two parts:
> > > > > 1. The first one, warm migration, is just a wrapper over the
> > > > > suspend/resume feature which, instead of saving the suspended
> > > > > state on disk, sends it over the network to the destination.
> > > > > 2. The second one, live migration, uses the layer described
> > > > > above, but sends the guest's memory in rounds, as described
> > > > > earlier.
> > > > >
> > > > > The migration process works as follows:
> > > > > 1. we identify:
> > > > >  - VM_NAME - the name of the virtual machine to be migrated
> > > > >  - SRC_IP - the IP address of the source host
> > > > >  - DST_IP - the IP address of the destination host
> > > > >  - DST_PORT - the port we want to use for migration (default is
> > > > > 24983)
> > > > > 2. we start a virtual machine on the destination host that will
> > > > > wait for a migration. Here, we must specify SRC_IP (and the port
> > > > > we want to open for migration, default is 24983).
> > > > > e.g.: bhyve ... -R SRC_IP:24983 guest_vm_dst
> > > > > 3. using bhyvectl on the source host, we start the migration
> > > > > process.
> > > > > e.g.: bhyvectl --migrate=DST_IP:24983 --vm=guest_vm
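> > > > >
> > > > > Concretely, with made-up addresses (10.0.0.1 as the source and
> > > > > 10.0.0.2 as the destination):
> > > > > on the destination: bhyve ... -R 10.0.0.1:24983 guest_vm_dst
> > > > > on the source:      bhyvectl --migrate=10.0.0.2:24983 --vm=guest_vm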
> > > > >
> > > > > A full tutorial on this can be found here:
> > > > > https://github.com/FreeBSD-UPB/freebsd-src/wiki/Virtual-Machine-Migration-using-bhyve
> > > > >
> > > > > For sending the migration request to a virtual machine, we use
> > > > > the same thread/socket that is used for suspend. For receiving a
> > > > > migration request, we used a similar approach to the resume
> > > > > process.
> > > > >
> > > > > As some of you may remember seeing similar emails from us on the
> > > > > freebsd-virtualization list, I'll present a brief history of this
> > > > > project:
> > > > > The first part of the project was the suspend/resume
> > > > > implementation, which landed in bhyve in 2020 under the
> > > > > BHYVE_SNAPSHOT guard (https://reviews.freebsd.org/D19495).
> > > > > After that, we focused on two tracks:
> > > > > 1. adding various suspend/resume features (multiple device
> > > > > support - https://reviews.freebsd.org/D26387, CAPSICUM support -
> > > > > https://reviews.freebsd.org/D30471, a uniform file format -
> > > > > during the bhyve bi-weekly calls, we concluded that the JSON
> > > > > format was the most suitable at that time -
> > > > > https://reviews.freebsd.org/D29262) so we can remove the #ifdef
> > > > > BHYVE_SNAPSHOT guard.
> > > > > 2. implementing the migration feature for bhyve. Since this one
> > > > > relies on save/restore, but does not modify its behaviour, we
> > > > > considered we could pursue both tracks in parallel.
> > > > > We had various presentations in the FreeBSD community on these
> > > > > topics: AsiaBSDCon2018, AsiaBSDCon2019, BSDCan2019, BSDCan2020,
> > > > > AsiaBSDCon2023.
> > > > >
> > > > > The first patches for warm and live migration were opened in
> > > > > 2021: https://reviews.freebsd.org/D28270 and
> > > > > https://reviews.freebsd.org/D30954. However, the general feedback
> > > > > on these was that the patches were too big to review, so we
> > > > > should split them into smaller chunks (this was also true for
> > > > > some of the suspend/resume improvements). Thus, we split them
> > > > > into smaller parts. Also, as things changed in bhyve (e.g.,
> > > > > capsicum support for suspend/resume was added this year), we
> > > > > rebased and updated our reviews.
> > > > >
> > > > > Thank you,
> > > > > Elena
> > > > >
> > > >
> >
> > --
> > Kind regards,
> > Corvin
>
> Thanks,
> Elena
>
>
