Date:      Tue, 07 May 2019 08:02:58 +0000
From:      bugzilla-noreply@freebsd.org
To:        virtualization@FreeBSD.org
Subject:   [Bug 231117] I/O lockups inside bhyve vms
Message-ID:  <bug-231117-27103-Zek1iFczlH@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-231117-27103@https.bugs.freebsd.org/bugzilla/>
References:  <bug-231117-27103@https.bugs.freebsd.org/bugzilla/>

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=231117

Mateusz Kwiatkowski <kwiat3k@panic.pl> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |kwiat3k@panic.pl

--- Comment #19 from Mateusz Kwiatkowski <kwiat3k@panic.pl> ---
I have a very similar problem to the one described in this issue: I/O in guests hangs.
I've experienced this with FreeBSD 11.2 and 12.0 (both with ZFS inside) and
Ubuntu 18.04 (ext4) guests.

This started happening after migrating guests from the old hypervisor, running
12.0 on a Xeon, to the new one running CURRENT (r347183) on an AMD Epyc. On the
old hypervisor these VMs had been running stably for a couple of months.
The new hypervisor has plenty of free resources. Swap is disabled.

Mem: 3761M Active, 1636M Inact, 5802M Wired, 51G Free
ARC: 4000M Total, 487M MFU, 3322M MRU, 3456K Anon, 129M Header, 59M Other
     2228M Compressed, 3202M Uncompressed, 1.44:1 Ratio

vfs.zfs.arc_min: 8215740416
vfs.zfs.arc_max: 52582796492
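
For completeness, a minimal sketch of how these ARC limits would typically be
set as loader tunables, assuming they were configured in /boot/loader.conf
rather than left at their defaults (values taken from the sysctls above):

# /boot/loader.conf (hypothetical; values match the sysctls listed above)
vfs.zfs.arc_min="8215740416"
vfs.zfs.arc_max="52582796492"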


Procstat from the bhyve process:
root@utgard:~ # procstat -kk 95379
  PID    TID COMM                TDNAME              KSTACK
95379 101075 bhyve               mevent              mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
kqueue_kevent+0xa94 kern_kevent_fp+0x95 kern_kevent+0x9f
kern_kevent_generic+0x70 sys_kevent+0x61 amd64_syscall+0x276
fast_syscall_common+0x101
95379 101258 bhyve               blk-4:0-0           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101259 bhyve               blk-4:0-1           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101260 bhyve               blk-4:0-2           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101261 bhyve               blk-4:0-3           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101262 bhyve               blk-4:0-4           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101263 bhyve               blk-4:0-5           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101264 bhyve               blk-4:0-6           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101265 bhyve               blk-4:0-7           mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101266 bhyve               vtnet-5:0 tx        mi_switch+0x174
sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0
umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e
amd64_syscall+0x276 fast_syscall_common+0x101
95379 101267 bhyve               vcpu 0              mi_switch+0x174
sleepq_switch+0x110 sleepq_timedwait+0x4f msleep_spin_sbt+0x144 vm_run+0x970
vmmdev_ioctl+0x7ea devfs_ioctl+0xca VOP_IOCTL_APV+0x63 vn_ioctl+0x124
devfs_ioctl_f+0x1f kern_ioctl+0x28a sys_ioctl+0x15d amd64_syscall+0x276
fast_syscall_common+0x101
95379 101268 bhyve               vcpu 1              mi_switch+0x174
sleepq_switch+0x110 sleepq_timedwait+0x4f msleep_spin_sbt+0x144 vm_run+0x970
vmmdev_ioctl+0x7ea devfs_ioctl+0xca VOP_IOCTL_APV+0x63 vn_ioctl+0x124
devfs_ioctl_f+0x1f kern_ioctl+0x28a sys_ioctl+0x15d amd64_syscall+0x276
fast_syscall_common+0x101


I will be happy to provide more information to help solve this issue.

--
You are receiving this mail because:
You are the assignee for the bug.


