From owner-freebsd-virtualization@freebsd.org Tue May 7 08:03:02 2019
From: bugzilla-noreply@freebsd.org
To: virtualization@FreeBSD.org
Subject: [Bug 231117] I/O lockups inside bhyve vms
Date: Tue, 07 May 2019 08:02:58 +0000
X-Bugzilla-Product: Base System
X-Bugzilla-Component: kern
X-Bugzilla-Version: 11.2-RELEASE
X-Bugzilla-Keywords: regression
X-Bugzilla-Severity: Affects Some People
X-Bugzilla-Who: kwiat3k@panic.pl
X-Bugzilla-Status: New
X-Bugzilla-Assigned-To: virtualization@FreeBSD.org
X-Bugzilla-URL: https://bugs.freebsd.org/bugzilla/
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=231117

Mateusz Kwiatkowski changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |kwiat3k@panic.pl

--- Comment #19 from Mateusz Kwiatkowski ---
I have a very similar problem to the one described in this issue: I/O in the
guests hangs. I've experienced this with FreeBSD 11.2 and 12.0 guests (both
with ZFS inside) and with Ubuntu 18.04 (ext4) guests.

This started happening after migrating the guests from the old hypervisor,
running 12.0 on a Xeon, to the new one running CURRENT (r347183) on an AMD
Epyc. On the old hypervisor these VMs had been running stably for a couple
of months.

On the new hypervisor there are plenty of free resources. Swap is disabled.

Mem: 3761M Active, 1636M Inact, 5802M Wired, 51G Free
ARC: 4000M Total, 487M MFU, 3322M MRU, 3456K Anon, 129M Header, 59M Other
     2228M Compressed, 3202M Uncompressed, 1.44:1 Ratio

vfs.zfs.arc_min: 8215740416
vfs.zfs.arc_max: 52582796492
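For reference, the figures above were collected more or less like this
(a rough sketch; the exact flags may differ from what I actually ran):

    # one batch-mode display of top(1); the Mem:/ARC: lines are in its header
    top -b -d 1 | head -8
    # the two ZFS ARC tunables quoted above
    sysctl vfs.zfs.arc_min vfs.zfs.arc_max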
Procstat output from the bhyve process:

root@utgard:~ # procstat -kk 95379
  PID    TID COMM         TDNAME        KSTACK
95379 101075 bhyve        mevent        mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 kqueue_kevent+0xa94 kern_kevent_fp+0x95 kern_kevent+0x9f kern_kevent_generic+0x70 sys_kevent+0x61 amd64_syscall+0x276 fast_syscall_common+0x101
95379 101258 bhyve        blk-4:0-0     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101259 bhyve        blk-4:0-1     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101260 bhyve        blk-4:0-2     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101261 bhyve        blk-4:0-3     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101262 bhyve        blk-4:0-4     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101263 bhyve        blk-4:0-5     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101264 bhyve        blk-4:0-6     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101265 bhyve        blk-4:0-7     mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101266 bhyve        vtnet-5:0 tx  mi_switch+0x174 sleepq_switch+0x110 sleepq_catch_signals+0x3e7 sleepq_wait_sig+0xf _sleep+0x2d0 umtxq_sleep+0x153 do_wait+0x206 __umtx_op_wait_uint_private+0x7e amd64_syscall+0x276 fast_syscall_common+0x101
95379 101267 bhyve        vcpu 0        mi_switch+0x174 sleepq_switch+0x110 sleepq_timedwait+0x4f msleep_spin_sbt+0x144 vm_run+0x970 vmmdev_ioctl+0x7ea devfs_ioctl+0xca VOP_IOCTL_APV+0x63 vn_ioctl+0x124 devfs_ioctl_f+0x1f kern_ioctl+0x28a sys_ioctl+0x15d amd64_syscall+0x276 fast_syscall_common+0x101
95379 101268 bhyve        vcpu 1        mi_switch+0x174 sleepq_switch+0x110 sleepq_timedwait+0x4f msleep_spin_sbt+0x144 vm_run+0x970 vmmdev_ioctl+0x7ea devfs_ioctl+0xca VOP_IOCTL_APV+0x63 vn_ioctl+0x124 devfs_ioctl_f+0x1f kern_ioctl+0x28a sys_ioctl+0x15d amd64_syscall+0x276 fast_syscall_common+0x101

I will be happy to provide more information to help solve this issue.
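For example, I can capture these stacks repeatedly while a guest is hung,
along these lines (just a sketch; the PID and interval are placeholders for
this particular host):

    # dump the kernel stacks of the bhyve process every 10 seconds
    while true; do
        date
        procstat -kk 95379
        sleep 10
    done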