Date: Fri, 20 Jan 2023 20:11:11 GMT
From: Robert Wing <rew@FreeBSD.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-main@FreeBSD.org
Subject: git: c668e8173a8f - main - vmm: take exclusive mem_segs_lock in vm_cleanup()
Message-ID: <202301202011.30KKBBCi082542@gitrepo.freebsd.org>
The branch main has been updated by rew:

URL: https://cgit.FreeBSD.org/src/commit/?id=c668e8173a8fc047b54a5c51b0fe4637e87836b6

commit c668e8173a8fc047b54a5c51b0fe4637e87836b6
Author:     Robert Wing <rew@FreeBSD.org>
AuthorDate: 2023-01-20 11:10:53 +0000
Commit:     Robert Wing <rew@FreeBSD.org>
CommitDate: 2023-01-20 11:10:53 +0000

    vmm: take exclusive mem_segs_lock in vm_cleanup()

    The consumers of vm_cleanup() are vm_reinit() and vm_destroy().

    The vm_reinit() call path is (mem_segs_lock taken in vmmdev_ioctl()):

        vmmdev_ioctl()
            vm_reinit()
                vm_cleanup(destroy=false)

    The call path for vm_destroy() is (mem_segs_lock not taken):

        sysctl_vmm_destroy()
            vmmdev_destroy()
                vm_destroy()
                    vm_cleanup(destroy=true)

    Fix this by taking mem_segs_lock in vm_cleanup() when destroy == true.

    Reviewed by:    corvink, markj, jhb
    Fixes:          67b69e76e8ee ("vmm: Use an sx lock to protect the memory map.")
    Differential Revision:  https://reviews.freebsd.org/D38071
---
 sys/amd64/vmm/vmm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/sys/amd64/vmm/vmm.c b/sys/amd64/vmm/vmm.c
index 169109e8df6e..24f97a9244f0 100644
--- a/sys/amd64/vmm/vmm.c
+++ b/sys/amd64/vmm/vmm.c
@@ -651,6 +651,9 @@ vm_cleanup(struct vm *vm, bool destroy)
 	struct mem_map *mm;
 	int i;

+	if (destroy)
+		vm_xlock_memsegs(vm);
+
 	ppt_unassign_all(vm);

 	if (vm->iommu != NULL)
@@ -690,6 +693,7 @@ vm_cleanup(struct vm *vm, bool destroy)
 	if (destroy) {
 		for (i = 0; i < VM_MAX_MEMSEGS; i++)
 			vm_free_memseg(vm, i);
+		vm_unlock_memsegs(vm);

 		vmmops_vmspace_free(vm->vmspace);
 		vm->vmspace = NULL;