From: Matthew Grooms <mgrooms@shrew.net>
To: virtualization@freebsd.org
Date: Wed, 28 Feb 2024 12:29:44 -0600
Subject: Re: bhyve disk performance issue
On 2/27/24 04:21, Vitaliy Gusev wrote:
> Hi,


>> On 23 Feb 2024, at 18:37, Matthew Grooms <mgrooms@shrew.net> wrote:

>>> ...
>> The problem occurs when an image file is used on either ZFS or UFS. The problem also occurs when the virtual disk is backed by a raw disk partition or a ZVOL. This issue isn't related to a specific underlying filesystem.


> Do I understand correctly that you ran the tests on an ext4 filesystem inside the guest VM? If so, you should be aware of the additional overhead compared to running the tests on the host.

Hi Vitaliy,

I appreciate the feedback and suggestions. I spent over a week trying as many combinations of host and guest options as possible to narrow this issue down to a specific host storage option or guest device model. Unfortunately, the problem occurred with every combination I tested while running Linux as the guest. Note that I only tested RHEL8- and RHEL9-compatible distributions (Alma & Rocky). The problem did not occur when I ran FreeBSD as the guest, nor when I ran KVM on the host with Linux as the guest.

> I would suggest running fio (or even dd) on the raw disk device inside the VM, i.e. without a filesystem at all. Just do not forget to run “echo 3 > /proc/sys/vm/drop_caches” in the Linux guest VM before you run the tests.

The two servers I was using to test with are no longer available. However, I'll have two more identical servers arriving in the next week or so, and I'll run additional tests and report back here. I used bonnie++ because it was easily installed from the package repos on all the systems I tested.
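
When they arrive, I'll also try fio against the raw disk inside the guest as you suggest. A minimal sketch of what I have in mind, assuming the test disk shows up as /dev/vdb in the guest (the device name and size here are examples, not what I ran):

# Drop the guest page cache first, as suggested.
echo 3 > /proc/sys/vm/drop_caches

# Sequential write directly against the raw disk, bypassing any filesystem.
fio --name=seqwrite --filename=/dev/vdb --direct=1 --rw=write \
    --bs=1M --size=32G --ioengine=libaio --iodepth=16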


> Could you also give more information about:

>  1. What results did you get (decode bonnie++ output)?

If you look back through this email thread, there are many examples of bonnie++ running in the guest. I first ran the tests on the host system using Linux + ext4 and FreeBSD 14 + UFS & ZFS to get a performance baseline. Then I ran bonnie++ tests using bhyve as the hypervisor with Linux and FreeBSD as guests. The combinations of host and guest storage options included the following (a rough sketch of the corresponding bhyve arguments appears after the list) ...

1) block device + virtio blk
2) block device + nvme
3) UFS disk image + virtio blk
4) UFS disk image + nvme
5) ZFS disk image + virtio blk
6) ZFS disk image + nvme
7) ZVOL + virtio blk
8) ZVOL + nvme
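
For reference, the storage halves of those combinations map to bhyve -s arguments roughly like the following (slot numbers and paths are examples, not my exact command lines):

# 1/2) raw block device (disk or partition) + virtio blk or nvme
-s 4,virtio-blk,/dev/da0p2
-s 4,nvme,/dev/da0p2

# 3-6) disk image file on a UFS or ZFS filesystem + virtio blk or nvme
-s 4,virtio-blk,/vm/linux/disk0.img
-s 4,nvme,/vm/linux/disk0.img

# 7/8) ZVOL + virtio blk or nvme
-s 4,virtio-blk,/dev/zvol/tank/vm/linux0
-s 4,nvme,/dev/zvol/tank/vm/linux0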

In every instance, the Linux guest's disk IO performed very well for some time after the guest was first booted, then dropped to a fraction of the original performance. The benchmark ran from a cron job every 5 or 10 minutes. Sometimes the guest performed well for up to an hour before the drop; most of the time it performed well for only a few cycles (10 - 30 mins) before performance fell off. The only way to restore the performance was to reboot the guest. Once I determined that the problem was not specific to a particular host or guest storage option, I switched to using only a block device as backing storage on the host to avoid hitting any host-side disk caches.

Here is the test script I used in the cron job ...

#!/bin/sh
# Run bonnie++ with its defaults and append a timestamped report to the log.
FNAME='output.txt'

echo ================================================================================ >> $FNAME
echo Begin @ `/usr/bin/date` >> $FNAME
echo >> $FNAME
# Filter out bonnie++'s progress chatter and the trailing CSV summary line.
/usr/sbin/bonnie++ 2>&1 | /usr/bin/grep -v 'done\|,' >> $FNAME
echo >> $FNAME
echo End @ `/usr/bin/date` >> $FNAME


As you can see, I'm calling bonnie++ with the system defaults. That uses a data set size that's 2x the guest RAM in an attempt to minimize the effect of filesystem cache on results. Here is an example of the output that bonnie++ produces ...

Version  2.00       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
linux-blk    63640M  694k  99  1.6g  99  737m  76  985k  99  1.3g  69 +++++ +++
Latency             11579us     535us   11889us    8597us   21819us    8238us
Version  2.00       ------Sequential Create------ --------Random Create--------
linux-blk           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency              7620us     126us    1648us     151us      15us     633us

--------------------------------- speed drop ---------------------------------

Version  2.00       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
linux-blk    63640M  676k  99  451m  99  314m  93  951k  99  402m  99 15167 530
Latency             11902us    8959us   24711us   10185us   20884us    5831us
Version  2.00       ------Sequential Create------ --------Random Create--------
linux-blk           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16     0  96 +++++ +++ +++++ +++     0  96 +++++ +++     0  75
Latency               343us     165us    1636us     113us      55us    1836us

In the example above, the benchmark test repeated about 20 times with results similar to those shown above the dotted line (~1.6g/s sequential write and ~1.3g/s sequential read). After that, performance dropped to what's shown below the dotted line, roughly a quarter of the original speed (~451m/s sequential write and ~402m/s sequential read).

>  2. What results were you expecting?

What I expect is that, when I perform the same test with the same parameters, the results stay more or less consistent over time. This is the case when KVM is used as the hypervisor on the same hardware with the same guest options. That said, I'm not worried about bhyve being consistently slower than KVM, or a FreeBSD guest being consistently slower than a Linux guest. I'm concerned that the performance drop over time is indicative of an issue with how bhyve interacts with non-FreeBSD guests.

>  3. VM configuration, virtio-blk disk size, etc.
>  4. Full command for tests (including size of test-set), bhyve, etc.

I believe this was answered above. Please let me know if you have additional questions.


>  5. Did you pass virtio-blk as 512 or 4K? If 512, you should probably try 4K.

The testing performed was not exclusively with virtio-blk.
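
That said, I can try 4K sectors on the virtio-blk runs with the new hardware. If I'm reading bhyve(8) correctly, that's the sectorsize block-device option, along these lines (the slot and path are examples):

# Expose a 4K logical sector size to the guest on a virtio-blk disk.
-s 4,virtio-blk,/dev/da0p2,sectorsize=4096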

>  6. Linux has several read-ahead options for the IO scheduler, and it could be related too.

I suppose it's possible that bhyve could somehow be causing the disk scheduler in the Linux guest to behave differently. I'll see if I can figure out how to disable that in future tests.
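
If I understand correctly, both the scheduler and the read-ahead should be tunable via sysfs inside the guest, something like the following (assuming a virtio-blk disk that appears as /dev/vda):

# Show the available and currently selected IO scheduler.
cat /sys/block/vda/queue/scheduler

# Select the "none" scheduler to take elevator behavior out of the equation.
echo none > /sys/block/vda/queue/scheduler

# Check, then disable, read-ahead.
cat /sys/block/vda/queue/read_ahead_kb
echo 0 > /sys/block/vda/queue/read_ahead_kb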

> Additionally, could you also play with the “sync=disabled” volume/zvol option? Of course it is only relevant for write testing.

The testing performed was not exclusively with zvols.
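
When I retest the ZVOL-backed configurations, I can toggle that on the host with something like the following (the dataset name is just an example):

# Disable synchronous writes for the zvol backing the guest disk ...
zfs set sync=disabled tank/vm/linux0

# ... run the write tests, then restore the default behavior.
zfs set sync=standard tank/vm/linux0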

Once I have more hardware available, I'll try to report back with more testing. It may also be interesting to see how a Windows guest performs compared to Linux and FreeBSD. I suspect this issue may only be triggered when a fast disk array is in use on the host; my tests used a 16x SSD RAID 10 array. It's also quite possible that the disk IO slowdown is only a symptom of another issue triggered by the disk IO test (please see the end of my last post regarding scheduler priority observations). All I can say for sure is that ...

1) There is a problem and it's reproducible across multiple hosts
2) It affects RHEL8 & RHEL9 guests but not FreeBSD guests
3) It is not specific to any host or guest storage option

Thanks,

-Matthew
