From nobody Wed Dec 1 23:03:35 2021
To: "Miroslav Lachman" <000.fbsd@quip.cz>
Cc: freebsd-virtualization@freebsd.org
Subject: Re: bhyve vCPU limit
Date: Wed, 01 Dec 2021 16:03:35 -0700
In-Reply-To: 
<30e4454c-414a-833f-3829-586a450e7205@quip.cz>
References: <4E8A7FD3-B01E-4ADE-A290-360F3B04AC0F@jld3.net> <30e4454c-414a-833f-3829-586a450e7205@quip.cz>
List-Archive: https://lists.freebsd.org/archives/freebsd-virtualization
Reply-To: bsdlists@jld3.net
From: John Doherty via freebsd-virtualization

On Wed 2021-12-01 09:52 AM MST -0700, <000.fbsd@quip.cz> wrote:

> On 01/12/2021 17:17, John Doherty via freebsd-virtualization wrote:
>> That limitation appears to still exist in FreeBSD 13.0-RELEASE:
>>
>> [root@grit] # freebsd-version -k ; grep 'VM_MAXCPU' /usr/src/sys/amd64/include/vmm.h
>> 13.0-RELEASE
>> #define    VM_MAXCPU    16    /* maximum virtual cpus */
>>
>> I ran into this in May 2021 and, with some help from folks on this
>> list, was able to increase it. The simplest (if not minimalist) way
>> to do that is:
>>
>> 1. Edit /usr/src/sys/amd64/include/vmm.h to increase that value; I used 48.
>> 2. make buildworld
>> 3. make installworld
>>
>> The increased value has been working fine for me since I did that. I
>> run a couple of VMs with 24 vCPUs each and several others with
>> smaller counts all the time, and have temporarily run others with as
>> many as 48. No problems that I have seen.
>
> I am sorry for hijacking this thread, but your information is very
> interesting. I was playing with VMs in VirtualBox and bhyve and
> compared performance with increasing vCPU count.
> The more cores a VM got, the slower even a simple single-threaded
> task became, such as loading PF rules from /etc/pf.conf. I tested on
> FreeBSD 11.4 and 12.2, with both the ULE and 4BSD schedulers. Maybe
> it was somewhat hardware-related, but VMs with more than 2 vCPUs were
> always significantly slower. VMs with 6+ vCPUs were almost unusable
> (loading the PF ruleset took about 8 seconds instead of a fraction of
> a second on a single-vCPU VM).
>
> Do you have any special tuning to support such a large number of
> vCPUs without this penalty?

I did not do anything special other than the steps described above. I
did do some other stuff while sort of stumbling toward the eventual
solution, but that's neither here nor there anymore. The steps above
are what I used to build the primary system where I use bhyve.

The physical host has two Xeon E5-2690 v4 CPUs, 14 cores/28 threads
each, so 28 cores/56 threads total. I have not seen, nor have I tried
to measure, any problems like you describe.

bhyve works very well for me, and I especially like it in combination
with the vm-bhyve package, which I'm using to manage the VMs. Of the
various virtualization systems I've used or tried over the years, I
like this combination more than any other. It's simple, clean,
integrates well with ZFS, and is a pleasure to use, at least by my
lights. I haven't had any trouble either before or after increasing
the VM_MAXCPU value.
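For what it's worth, with vm-bhyve a guest like the 24-vCPU VMs mentioned above is described by a small per-guest config file. A sketch, assuming a UEFI guest; the switch name, disk name, and sizes here are illustrative, not taken from the message:

```
loader="uefi"
cpu=24
memory=64G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
```

With VM_MAXCPU still at its stock value of 16, a cpu setting above 16 is exactly where the limit discussed in this thread would bite.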
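The three steps quoted above can be sketched as a shell session. The sed edit is shown here against a scratch copy of the `#define` line so the transformation itself is easy to verify; on a real system you would edit /usr/src/sys/amd64/include/vmm.h in place (or with your editor of choice), and the value 48 is simply what was used in the quoted message:

```shell
# Scratch copy of the relevant line from vmm.h, so nothing under /usr/src is touched:
mkdir -p /tmp/vmm-demo
printf '#define\tVM_MAXCPU\t16\t/* maximum virtual cpus */\n' > /tmp/vmm-demo/vmm.h

# Step 1: raise the compile-time vCPU ceiling (16 -> 48):
sed 's/\(VM_MAXCPU[[:space:]]*\)16/\148/' /tmp/vmm-demo/vmm.h > /tmp/vmm-demo/vmm.h.new
mv /tmp/vmm-demo/vmm.h.new /tmp/vmm-demo/vmm.h
grep 'VM_MAXCPU' /tmp/vmm-demo/vmm.h

# Steps 2 and 3: rebuild and install world so the new value takes effect
# (not run here; these are the commands from the message):
# cd /usr/src && make buildworld && make installworld
```

Since VM_MAXCPU is a compile-time constant, nothing picks up the new value until the rebuilt world is installed; there is no sysctl or loader tunable for it in 13.0-RELEASE.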