Date: Sat, 14 Oct 2017 06:15:32 +0000
From: bugzilla-noreply@freebsd.org
To: freebsd-virtualization@FreeBSD.org
Subject: [Bug 222916] [bhyve] Debian guest kernel panics with message "CPU#0 stuck for Xs!"
Message-ID: <bug-222916-27103-QIvdd2ETcp@https.bugs.freebsd.org/bugzilla/>
In-Reply-To: <bug-222916-27103@https.bugs.freebsd.org/bugzilla/>
References: <bug-222916-27103@https.bugs.freebsd.org/bugzilla/>
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=222916

--- Comment #5 from karihre@gmail.com ---

Thanks for that tip. I reduced the ARC size with sysctl and confirmed it to be 20GB with zfs-info (see the command sketch at the end of this message).

Thinking back to the 4 guest / 4 host CPUs: let's say the collection of guests consumes 4 CPUs and four tasks on the host consume another 4 CPUs (for a total load average of 8). Does the host scheduler not shuffle tasks around the way it would if I were running 8 CPU-intensive processes directly on the host? Or does the interaction between bhyve and the host scheduler somehow result in the virtual CPUs being set aside for tens of seconds?

I guess I'm just trying to understand. I would think one of the main motivations for using a hypervisor is exactly over-subscribing CPU cores, since guests often have "bursty" load behavior: on average the total guest+host load stays below the number of CPUs, and surely the CPU time can still be divided in a "fair" manner when the system is momentarily overloaded. Memory, I would think, is a little trickier; there it makes sense to ensure that host consumption plus guest consumption never exceeds the total host memory.

Anyhow, just trying to make sense of this. There doesn't seem to be much information available online on these topics, or perhaps I'm looking in all the wrong places.

Thank you,
Kari

-- 
You are receiving this mail because:
You are the assignee for the bug.
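For reference, a minimal sketch of the ARC cap and of one way to watch the bhyve vCPU threads during a stall. The 20 GB value and the VM name are illustrative only, and whether vfs.zfs.arc_max can be changed at runtime (as opposed to /boot/loader.conf) depends on the FreeBSD release; newer releases also spell the tunable vfs.zfs.arc.max.

    # Cap the ZFS ARC at roughly 20 GB (value in bytes, 20 * 2^30).
    sysctl vfs.zfs.arc_max=21474836480

    # To make the cap persistent across reboots, the same value can be set
    # in /boot/loader.conf instead:
    #   vfs.zfs.arc_max="21474836480"

    # To see how the host scheduler is treating the guest's vCPU threads
    # while the guest reports "CPU#0 stuck", list the threads of the bhyve
    # process (replace <vmname> with the actual guest name):
    procstat -t $(pgrep -f 'bhyve.*<vmname>')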