Date: Mon, 17 Jul 2023 09:08:27 -0800
From: Rob Wing <rob.fx907@gmail.com>
To: Elena Mihailescu <elenamihailescu22@gmail.com>
Cc: Corvin Köhne <corvink@freebsd.org>, "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>, Mihai Carabas <mihai.carabas@gmail.com>, Matthew Grooms <mgrooms@shrew.net>
Subject: Re: Warm and Live Migration Implementation for bhyve
Message-ID: <CAF3+n_e31oOBv_cN5dtAThN3R0+ZZ0TyD5fBL5PYHpZYnF=S+A@mail.gmail.com>
In-Reply-To: <CAGOCPLi_oD5E1yQSVRGaqNAfVthmK3LWpMR+9GYzYj+CB4sdTA@mail.gmail.com>
References: <CAGOCPLhJrNrysBM1vc87vfkX5jZLCmnyfGf+cv2wmHFF1UhC-w@mail.gmail.com> <3d7ee1f6ff98fe9aede5a85702b906fc3014b6b6.camel@FreeBSD.org> <CAGOCPLg4ZeaRLK0VeRzifteXt3dJnSqZ=YT5BJ8EtH7+wMkTfA@mail.gmail.com> <b66fb737fca369239b3953892132f7e29906564f.camel@FreeBSD.org> <CAGOCPLi_oD5E1yQSVRGaqNAfVthmK3LWpMR+9GYzYj+CB4sdTA@mail.gmail.com>
[-- Attachment #1 --]

I'm curious why the stream send bits are rolled into bhyve as opposed to
using netcat/ssh to do the network transfer?

sort of how one would do a zfs send/recv between hosts

On Monday, July 17, 2023, Elena Mihailescu <elenamihailescu22@gmail.com> wrote:
> Hi Corvin,
>
> On Mon, 3 Jul 2023 at 09:35, Corvin Köhne <corvink@freebsd.org> wrote:
> >
> > On Tue, 2023-06-27 at 16:35 +0300, Elena Mihailescu wrote:
> > > Hi Corvin,
> > >
> > > Thank you for the questions! I'll respond to them inline.
> > >
> > > On Mon, 26 Jun 2023 at 10:16, Corvin Köhne <corvink@freebsd.org> wrote:
> > > >
> > > > Hi Elena,
> > > >
> > > > thanks for posting this proposal here.
> > > >
> > > > Some open questions from my side:
> > > >
> > > > 1. How is the data sent to the target? Does the host send a complete
> > > > dump and the target parses it? Or does the target request data one by
> > > > one and the host sends it as a response?
> > > >
> > > It's not a dump of the guest's state; it's transmitted in steps.
> > > However, some parts may be migrated as a chunk (e.g., the emulated
> > > devices' state is transmitted as the buffer generated from the
> > > snapshot functions).
> > >
> >
> > How does the receiver know which chunk relates to which device? It
> > would be nice if you could start bhyve on the receiver side without
> > parameters, e.g. `bhyve --receive=127.0.0.1:1234`. Therefore, the
> > protocol has to carry some information about the device configuration.
> >
>
> Regarding your first question, we send a chunk of data (a buffer) with
> the state: we restore the data in the same order we saved it. It relies
> on save/restore. We currently do not support migrating between
> different versions of the suspend/resume or migration code.
>
> It would be nice to have something like `bhyve --receive=127.0.0.1:1234`,
> but I don't think it is possible at this point, mainly for the following
> two reasons:
> - the guest image must be shared (e.g., via NFS) between the source
>   and destination hosts. If the mount points differ between the two,
>   opening the disk at the destination will fail (we must also assume
>   that the user gave an absolute path, since a relative one won't work)
> - if the VM uses a network adapter, we must specify the tap interface
>   on the destination host (e.g., if the VM uses `tap0` on the source
>   host, `tap0` may not exist on the destination host or may be used by
>   other VMs).
>
> >
> > > I'll try to describe a bit the protocol we have implemented for
> > > migration; maybe it can partially answer the second and third
> > > questions.
> > >
> > > The destination host waits for the source host to connect (through a
> > > socket). After that, the source sends its system specifications
> > > (hw_machine, hw_model, hw_pagesize). If the source and destination
> > > hosts have identical hardware configurations, the migration can take
> > > place.
> > >
> > > Then, if we have live migration, we migrate the memory in rounds
> > > (i.e., we get a list of the pages that have the dirty bit set, send
> > > it to the destination so it knows which pages will be received, then
> > > send the pages through the socket; this process is repeated until the
> > > last round).
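A minimal sketch of the round-based memory transfer described above, in C.
All helper names (vm_get_dirty_page_list(), send_page_list(), send_pages())
and the stop threshold are hypothetical placeholders, not code from the
actual patches; error handling is reduced to the bare minimum:

/*
 * Sketch of a pre-copy migration loop: each round collects the pages
 * dirtied since the previous round, announces them to the destination,
 * then sends their contents.  All helpers below are hypothetical.
 */
#include <stddef.h>
#include <stdlib.h>

#define DIRTY_PAGE_THRESHOLD 256    /* arbitrary cut-off for this sketch */

struct vmctx;                       /* opaque guest context */

extern int vm_get_dirty_page_list(struct vmctx *, size_t **, size_t *);
extern int send_page_list(int sock, const size_t *pages, size_t npages);
extern int send_pages(struct vmctx *, int sock, const size_t *pages,
    size_t npages);

static int
migrate_memory_rounds(struct vmctx *ctx, int sock, int max_rounds)
{
    size_t *pages, npages;
    int round;

    for (round = 0; round < max_rounds; round++) {
        /* Pages whose dirty bit was set since the last round. */
        if (vm_get_dirty_page_list(ctx, &pages, &npages) != 0)
            return (-1);

        /* Announce the page list, then stream the page contents. */
        if (send_page_list(sock, pages, npages) != 0 ||
            send_pages(ctx, sock, pages, npages) != 0) {
            free(pages);
            return (-1);
        }
        free(pages);

        /* Once the remaining working set is small, stop iterating and
         * let the final round run with the vCPUs stopped. */
        if (npages < DIRTY_PAGE_THRESHOLD)
            break;
    }
    return (0);
}

In the proposal, that final round runs with the vCPUs stopped, so the last
batch of pages and the device state are guaranteed to be consistent.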
> > >
> > > Next, we stop the guest's vcpus and send the remaining memory (for
> > > live migration) or the guest's memory from vmctx->baseaddr (for warm
> > > migration). Then, based on the suspend/resume feature, we get the
> > > state of the virtualized devices (the ones from kernel space) and
> > > send this buffer to the destination. We repeat this for the emulated
> > > devices as well (the ones from userspace).
> > >
> > > On the receiver host, we take the memory pages and place them at
> > > their corresponding positions in the guest's memory, use the restore
> > > functions for the state of the devices, and start the guest's
> > > execution.
> > >
> > > Excluding the guest's memory transfer, the rest is based on the
> > > suspend/resume feature. We snapshot the guest's state, but instead of
> > > saving the data locally, we send it over the network to the
> > > destination. On the destination host, we start a new virtual machine,
> > > but instead of reading the state from disk (i.e., the snapshot
> > > files), we get this state over the network from the source host.
> > >
> > > If the destination can properly resume the guest's activity, it will
> > > send an "OK" to the source host so that it can destroy/remove the
> > > guest on its end.
> > >
> > > Both warm and live migration are based on "cold migration". Cold
> > > migration means we suspend the guest on the source host and restore
> > > the guest on the destination host from the snapshot files. Warm
> > > migration does the same thing, but over a socket, while live
> > > migration also changes the way the memory is migrated.
> > >
> > > > 2. What happens if we add a new data section?
> > > >
> > > What are you referring to with a new data section? Is this question
> > > related to the third one? If so, see my answer below.
> > >
> > > > 3. What happens if the bhyve version differs on host and target
> > > > machine?
> > >
> > > The two hosts must be identical for migration; that's why we check
> > > the specifications of the two migration hosts. They are expected to
> > > have the same version of bhyve and FreeBSD. We will add an additional
> > > check to the specification check to verify that the FreeBSD build is
> > > the same.
> > >
> > > As long as changes in the virtual memory subsystem don't affect bhyve
> > > (and how the virtual machine sees/uses the memory), the migration
> > > constraints should only be those of suspend/resume. The state of the
> > > virtual devices is handled by the snapshot system, so if it is able
> > > to accommodate changes in the data structures, the migration process
> > > will not be affected.
> > >
> > > Thank you,
> > > Elena
> > >
> > > >
> > > > --
> > > > Kind regards,
> > > > Corvin
> > > >
> > > > On Fri, 2023-06-23 at 13:00 +0300, Elena Mihailescu wrote:
> > > > > Hello,
> > > > >
> > > > > This mail presents the migration feature we have implemented for
> > > > > bhyve. Any feedback from the community is much appreciated.
> > > > >
> > > > > We have opened a stack of reviews on Phabricator
> > > > > (https://reviews.freebsd.org/D34717) that is meant to split the
> > > > > code into smaller parts so it can be reviewed more easily. A
> > > > > brief history of the implementation can be found at the bottom of
> > > > > this email.
> > > > >
> > > > > The migration mechanism we propose needs two main components in
> > > > > order to move a virtual machine from one host to another:
> > > > > 1. the guest's state (vCPUs, emulated and virtualized devices)
> > > > > 2. the guest's memory
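The guest-state component above is transferred as opaque buffers produced by
the snapshot functions. A minimal sketch of how one such buffer might be
pushed over the migration socket, length-prefixed so the receiver knows how
much to read; send_state_buffer() is a hypothetical helper, not the actual
patch code, and the receiver is assumed to restore buffers in the same order
they were saved:

/*
 * Sketch: send one snapshot buffer (e.g., a device's saved state) over
 * the migration socket, length first, then the payload.  Integers are
 * sent unconverted, which is acceptable here only because both hosts
 * are required to be identical.
 */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

static int
send_state_buffer(int sock, const void *buf, uint64_t len)
{
    const uint8_t *p = buf;
    uint64_t off = 0;
    ssize_t n;

    /* Length header, so the receiver knows how many bytes follow. */
    if (send(sock, &len, sizeof(len), 0) != (ssize_t)sizeof(len))
        return (-1);

    /* Payload, looping until the whole buffer has been written. */
    while (off < len) {
        n = send(sock, p + off, len - off, 0);
        if (n <= 0)
            return (-1);
        off += (uint64_t)n;
    }
    return (0);
}

The receiving side would read the length, read exactly that many bytes, and
hand the buffer to the matching restore function.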
> > > > >
> > > > > For the first part, we rely on the suspend/resume feature. We
> > > > > call the same functions as the ones used by suspend/resume, but
> > > > > instead of saving the data in files, we send it via the network.
> > > > >
> > > > > The most time-consuming aspect of migration is transmitting the
> > > > > guest memory. The UPB team has implemented two options to
> > > > > accomplish this:
> > > > > 1. Warm Migration: The guest's execution is suspended on the
> > > > > source host while the memory is sent to the destination host.
> > > > > This method is less complex but may cause extended downtime.
> > > > > 2. Live Migration: The guest continues to execute on the source
> > > > > host while the memory is transmitted to the destination host.
> > > > > This method is more complex but offers reduced downtime.
> > > > >
> > > > > The proposed live migration procedure (pre-copy live migration)
> > > > > migrates the memory in rounds:
> > > > > 1. In the initial round, we migrate all the guest memory (all
> > > > > pages that are allocated)
> > > > > 2. In the subsequent rounds, we migrate only the pages that were
> > > > > modified since the previous round started
> > > > > 3. In the final round, we suspend the guest and migrate the
> > > > > remaining pages that were modified since the previous round,
> > > > > together with the guest's internal state (vCPU, emulated and
> > > > > virtualized devices).
> > > > >
> > > > > To detect the pages that were modified between rounds, we propose
> > > > > an additional dirty bit (a "virtualization dirty bit") for each
> > > > > memory page. This bit would be set every time the page's dirty
> > > > > bit is set, but it is reset only when the page is migrated.
> > > > >
> > > > > The proposed implementation is split into two parts:
> > > > > 1. The first one, the warm migration, is just a wrapper on the
> > > > > suspend/resume feature which, instead of saving the suspended
> > > > > state on disk, sends it via the network to the destination
> > > > > 2. The second part, the live migration, uses the layer previously
> > > > > presented, but sends the guest's memory in rounds, as described
> > > > > above.
> > > > >
> > > > > The migration process works as follows:
> > > > > 1. we identify:
> > > > > - VM_NAME - the name of the virtual machine which will be migrated
> > > > > - SRC_IP - the IP address of the source host
> > > > > - DST_IP - the IP address of the destination host
> > > > > - DST_PORT - the port we want to use for migration (default is
> > > > > 24983)
> > > > > 2. we start a virtual machine on the destination host that will
> > > > > wait for a migration. Here, we must specify SRC_IP (and the port
> > > > > we want to open for migration, default is 24983).
> > > > > e.g.: bhyve ... -R SRC_IP:24983 guest_vm_dst
> > > > > 3. using bhyvectl on the source host, we start the migration
> > > > > process.
> > > > > e.g.: bhyvectl --migrate=DST_IP:24983 --vm=guest_vm
> > > > >
> > > > > A full tutorial on this can be found here:
> > > > > https://github.com/FreeBSD-UPB/freebsd-src/wiki/Virtual-Machine-Migration-using-bhyve
> > > > >
> > > > > For sending the migration request to a virtual machine, we use
> > > > > the same thread/socket that is used for suspend.
> > > > > For receiving a migration request, we used a similar approach to
> > > > > the resume process.
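A minimal sketch of what the proposed per-page "virtualization dirty bit"
could look like as a plain bitmap: the bit is set whenever the page's
hardware dirty bit is observed set, and cleared only once that page has
been migrated. The structure and function names are hypothetical, not the
proposed kernel change:

/*
 * Sketch of the proposed "virtualization dirty bit": one bit per guest
 * page, set whenever the hardware dirty bit is seen set, and cleared
 * only when that page has actually been sent to the destination.
 */
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BITS_PER_WORD   (sizeof(uint64_t) * CHAR_BIT)

struct vdirty_map {
    uint64_t *bits;     /* one bit per guest page */
    size_t    npages;
};

/* Called when the page's dirty bit is observed set (e.g., on a PTE scan). */
static inline void
vdirty_set(struct vdirty_map *m, size_t pfn)
{
    m->bits[pfn / BITS_PER_WORD] |= 1ULL << (pfn % BITS_PER_WORD);
}

/* Called only once the page has been migrated. */
static inline void
vdirty_clear(struct vdirty_map *m, size_t pfn)
{
    m->bits[pfn / BITS_PER_WORD] &= ~(1ULL << (pfn % BITS_PER_WORD));
}

static inline bool
vdirty_test(const struct vdirty_map *m, size_t pfn)
{
    return ((m->bits[pfn / BITS_PER_WORD] >> (pfn % BITS_PER_WORD)) & 1);
}

Between rounds, the loop from the earlier sketch would walk this bitmap to
build the next page list, clearing each bit as the corresponding page is
sent.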
> > > > >
> > > > > As some of you may remember seeing similar emails from us on the
> > > > > freebsd-virtualization list, I'll present a brief history of this
> > > > > project:
> > > > > The first part of the project was the suspend/resume
> > > > > implementation, which landed in bhyve in 2020 under the
> > > > > BHYVE_SNAPSHOT guard (https://reviews.freebsd.org/D19495).
> > > > > After that, we focused on two tracks:
> > > > > 1. adding various suspend/resume features (multiple device
> > > > > support - https://reviews.freebsd.org/D26387, CAPSICUM support -
> > > > > https://reviews.freebsd.org/D30471, and a uniform file format -
> > > > > during the bhyve bi-weekly calls we concluded that JSON was the
> > > > > most suitable format at that time -
> > > > > https://reviews.freebsd.org/D29262) so we can remove the #ifdef
> > > > > BHYVE_SNAPSHOT guard.
> > > > > 2. implementing the migration feature for bhyve. Since this one
> > > > > relies on save/restore but does not modify its behaviour, we
> > > > > considered that we could pursue both tracks in parallel.
> > > > > We have given various presentations to the FreeBSD community on
> > > > > these topics: AsiaBSDCon2018, AsiaBSDCon2019, BSDCan2019,
> > > > > BSDCan2020, AsiaBSDCon2023.
> > > > >
> > > > > The first patches for warm and live migration were opened in
> > > > > 2021: https://reviews.freebsd.org/D28270,
> > > > > https://reviews.freebsd.org/D30954. However, the general feedback
> > > > > on these was that the patches were too big to review, so we
> > > > > should split them into smaller chunks (this was also true for
> > > > > some of the suspend/resume improvements). Thus, we split them
> > > > > into smaller parts. Also, as things changed in bhyve (e.g.,
> > > > > capsicum support for suspend/resume was added this year), we
> > > > > rebased and updated our reviews.
> > > > >
> > > > > Thank you,
> > > > > Elena
> > > > >
> > > >
> >
> > --
> > Kind regards,
> > Corvin
>
> Thanks,
> Elena
