Date:      Tue, 23 Jun 2015 00:26:42 -0700
From:      Neel Natu <neelnatu@gmail.com>
To:        Andriy Gapon <avg@freebsd.org>
Cc:        "freebsd-virtualization@freebsd.org" <freebsd-virtualization@freebsd.org>
Subject:   Re: bhyve: centos 7.1 with multiple virtual processors
Message-ID:  <CAFgRE9E5uTDUomaibL6jmxNwGJnz2RXiGxLDNoKkQ=+RBsh69A@mail.gmail.com>
In-Reply-To: <558900A7.40609@FreeBSD.org>
References:  <5587EE05.2020001@FreeBSD.org> <CAFgRE9Hpxm7pC_ETdQJKNk7FwbGvYjd60D0bnoOC=t46aJvusQ@mail.gmail.com> <558900A7.40609@FreeBSD.org>

Hi Andriy,

On Mon, Jun 22, 2015 at 11:45 PM, Andriy Gapon <avg@freebsd.org> wrote:
> On 23/06/2015 05:37, Neel Natu wrote:
>> Hi Andriy,
>>
>> FWIW I can boot up a Centos 7.1 virtual machine with 2 and 4 vcpus
>> fine on my host with 8 physical cores.
>>
>> I have some questions about your setup inline.
>>
>> On Mon, Jun 22, 2015 at 4:14 AM, Andriy Gapon <avg@freebsd.org> wrote:
>>>
>>> If I run a CentOS 7.1 VM with more than one vCPU, more often than not it
>>> hangs on startup and bhyve starts spinning.
>>>
>>> The following are the last messages seen in the VM:
>>>
>>> Switching to clocksource hpet
>>> ------------[ cut here ]------------
>>> WARNING: at kernel/time/clockevents.c:239 clockevents_program_event+0xdb/0xf0()
>>> Modules linked in:
>>> CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.10.0-229.4.2.el7.x86_64 #1
>>> Hardware name:   BHYVE, BIOS 1.00 03/14/2014
>>>  0000000000000000 00000000cab5bdb6 ffff88003fc03e08 ffffffff81604eaa
>>>  ffff88003fc03e40 ffffffff8106e34b 80000000000f423f 80000000000f423f
>>>  ffffffff81915440 0000000000000000 0000000000000000 ffff88003fc03e50
>>> Call Trace:
>>>  <IRQ>  [<ffffffff81604eaa>] dump_stack+0x19/0x1b
>>>  [<ffffffff8106e34b>] warn_slowpath_common+0x6b/0xb0
>>>  [<ffffffff8106e49a>] warn_slowpath_null+0x1a/0x20
>>>  [<ffffffff810ce6eb>] clockevents_program_event+0xdb/0xf0
>>>  [<ffffffff810cf211>] tick_handle_periodic_broadcast+0x41/0x50
>>>  [<ffffffff81016525>] timer_interrupt+0x15/0x20
>>>  [<ffffffff8110b5ee>] handle_irq_event_percpu+0x3e/0x1e0
>>>  [<ffffffff8110b7cd>] handle_irq_event+0x3d/0x60
>>>  [<ffffffff8110e467>] handle_edge_irq+0x77/0x130
>>>  [<ffffffff81015cff>] handle_irq+0xbf/0x150
>>>  [<ffffffff81077df7>] ? irq_enter+0x17/0xa0
>>>  [<ffffffff816172af>] do_IRQ+0x4f/0xf0
>>>  [<ffffffff8160c4ad>] common_interrupt+0x6d/0x6d
>>>  <EOI>  [<ffffffff8126e359>] ? selinux_inode_alloc_security+0x59/0xa0
>>>  [<ffffffff811de58f>] ? __d_instantiate+0xbf/0x100
>>>  [<ffffffff811de56f>] ? __d_instantiate+0x9f/0x100
>>>  [<ffffffff811de60d>] d_instantiate+0x3d/0x70
>>>  [<ffffffff8124d748>] debugfs_mknod.isra.5.part.6.constprop.15+0x98/0x130
>>>  [<ffffffff8124da82>] __create_file+0x1c2/0x2c0
>>>  [<ffffffff81a6c6bf>] ? set_graph_function+0x1f/0x1f
>>>  [<ffffffff8124dbcb>] debugfs_create_dir+0x1b/0x20
>>>  [<ffffffff8112c1ce>] tracing_init_dentry_tr+0x7e/0x90
>>>  [<ffffffff8112c250>] tracing_init_dentry+0x10/0x20
>>>  [<ffffffff81a6c6d2>] ftrace_init_debugfs+0x13/0x1fd
>>>  [<ffffffff81a6c6bf>] ? set_graph_function+0x1f/0x1f
>>>  [<ffffffff810020e8>] do_one_initcall+0xb8/0x230
>>>  [<ffffffff81a45203>] kernel_init_freeable+0x18b/0x22a
>>>  [<ffffffff81a449db>] ? initcall_blacklist+0xb0/0xb0
>>>  [<ffffffff815f33f0>] ? rest_init+0x80/0x80
>>>  [<ffffffff815f33fe>] kernel_init+0xe/0xf0
>>>  [<ffffffff81614d3c>] ret_from_fork+0x7c/0xb0
>>>  [<ffffffff815f33f0>] ? rest_init+0x80/0x80
>>> ---[ end trace d5caa1cab8e7e98d ]---
>>>
>>
>> A few questions to narrow this down:
>> - Is the host very busy when the VM is started (or what was the host
>> doing when this happened)?
>
> The host typically is not heavily loaded.  There is an X server running and
> some applications.  I'd imagine that those could cause some additional
> latency but not CPU starvation.
>

Yup, I agree.

Does this ever happen with a single vcpu guest?
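
(For reference, a single-vcpu retest could look something like the sketch
below; the disk image path, tap device, slot numbers and VM name are
assumptions, and the Linux guest is assumed to have been loaded with
grub-bhyve beforehand.)

  # boot the same guest with a single vcpu to see if the hang still occurs
  bhyve -c 1 -m 2G -A -H -P \
      -s 0,hostbridge -s 3,virtio-blk,/vm/centos71.img \
      -s 4,virtio-net,tap0 -s 31,lpc \
      -l com1,stdio centos71vm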

The other mystery is the NMIs the host is receiving. I (re)verified
that bhyve/vmm.ko do not assert NMIs, so it has to be something else
on the host that's doing it ...
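
(A rough sketch of the kind of check described above: look for anything in
the bhyve/vmm sources that could assert an NMI. The paths assume a stock
source tree, and the grep is a starting point rather than an exhaustive
audit.)

  # search the hypervisor kernel module and the bhyve userland for NMI handling
  grep -rni 'nmi' /usr/src/sys/amd64/vmm /usr/src/usr.sbin/bhyve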

best
Neel

>> - How many vcpus are you giving to the VM?
>> - How many cores on the host?
>
> I tried only 2 vcpus on a 2-core host.
>
>>>
>>> At the same time sometimes there is one or more of spurious NMIs on the _host_
>>> system:
>>> NMI ISA c, EISA ff
>>> NMI ... going to debugger
>>>
>>
>> Hmm, that's interesting. Are you using hwpmc to do instruction sampling?
>
> hwpmc driver is in the kernel, but it was not used.
>
>>> bhyve seems to spin here:
>>> vmm.ko`svm_vmrun+0x894
>>> vmm.ko`vm_run+0xbb7
>>> vmm.ko`vmmdev_ioctl+0x5a4
>>> kernel`devfs_ioctl_f+0x13b
>>> kernel`kern_ioctl+0x1e1
>>> kernel`sys_ioctl+0x16a
>>> kernel`amd64_syscall+0x3ca
>>> kernel`0xffffffff8088997b
>>>
>>> (kgdb) list *svm_vmrun+0x894
>>> 0xffffffff813c9194 is in svm_vmrun
>>> (/usr/src/sys/modules/vmm/../../amd64/vmm/amd/svm.c:1895).
>>> 1890
>>> 1891    static __inline void
>>> 1892    enable_gintr(void)
>>> 1893    {
>>> 1894
>>> 1895            __asm __volatile("stgi");
>>> 1896    }
>>> 1897
>>> 1898    /*
>>> 1899     * Start vcpu with specified RIP.
>>>
>>
>> Yeah, that's not surprising, because host interrupts are blocked while
>> the cpu is executing in guest context. enable_gintr() re-enables
>> them, so it gets blamed by the interrupt-based sampling.
>>
>> In this case it just means that the cpu was in guest context when a
>> host-interrupt fired.
>
> I see.  FWIW, that was captured with DTrace.
>
> --
> Andriy Gapon
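
(For anyone trying to reproduce this: a kernel stack like the one quoted
above can be captured with something along the lines of the DTrace
one-liner below. The sampling rate is arbitrary and only kernel-mode
samples are kept, so this is a sketch, not necessarily the exact
invocation used here.)

  # sample on-CPU kernel stacks ~997 times per second; arg0 != 0 means
  # the sample was taken while the CPU was in kernel mode
  dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }'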


