Date:      Sun, 18 Feb 2024 09:21:10 -0600
From:      Matthew Grooms <mgrooms@shrew.net>
To:        virtualization@freebsd.org
Subject:   Re: bhyve disk performance issue
Message-ID:  <50614ea4-f0f9-44a2-b5e6-ebb33cfffbc4@shrew.net>
In-Reply-To: <6f6b71ac-2349-4045-9eaf-5c50d42b89be@shrew.net>
References:  <6a128904-a4c1-41ec-a83d-56da56871ceb@shrew.net> <28ea168c-1211-4104-b8b4-daed0e60950d@app.fastmail.com> <0ff6f30a-b53a-4d0f-ac21-eaf701d35d00@shrew.net> <6f6b71ac-2349-4045-9eaf-5c50d42b89be@shrew.net>

On 2/17/24 15:53, Matthew Grooms wrote:
> On 2/16/24 12:00, Matthew Grooms wrote:
>> On 2/16/24 11:42, Chuck Tuffli wrote:
>>> On Fri, Feb 16, 2024, at 9:19 AM, Matthew Grooms wrote:
>>>>
>>>> Hi All,
>>>>
>>>>
>>>> I'm in the middle of a project that involves building out a handful 
>>>> of servers to host virtual Linux instances. Part of that includes 
>>>> testing bhyve to see how it performs. The intent is to compare host 
>>>> storage options such as raw vs zvol block devices and ufs vs zfs 
>>>> disk images using hardware raid vs zfs managed disks. It would also 
>>>> involve
>>>>
>>>>
>>> …
>>>>
>>>> Here is a list of a few other things I'd like to try:
>>>>
>>>>
>>>> 1) Wiring guest memory ( unlikely as it's 32G of 256G )
>>>> 2) Downgrading the host to 13.2-RELEASE
>>>
>>> FWIW we recently did a similar exercise and saw significant 
>>> performance differences on ZFS backed disk images when comparing 
>>> 14.0 and 13.2. We didn’t have time to root cause the difference, so 
>>> it could simply be some tuning difference needed for 14.
>>
>> Hi Chuck,
>>
>> That's very helpful feedback. I'll start by downgrading the host to 
>> 13.2 and report back here.
>>
>
> Unfortunately, it's the same story with 13.2. I'm going to try downgrading to
> 12.4 and see if it gets any better.
>
> ================================================================================
> Begin @ Sat Feb 17 11:00:01 CST 2024
>
> Version  2.00       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> localhost.lo 63640M  690k  99  1.5g  97  727m  78  950k  99  1.3g  68 +++++ +++
> Latency             11759us   29114us    8098us    8649us   23413us    4540us
> Version  2.00       ------Sequential Create------ --------Random Create--------
> localhost.localdoma -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
> Latency              7791us     131us    1671us     464us      15us    1811us
>
> End @ Sat Feb 17 11:03:13 CST 2024
> ================================================================================
> Begin @ Sat Feb 17 11:10:01 CST 2024
>
> Version  2.00       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> localhost.lo 63640M  667k  99  449m  99  313m  94  940k  99  398m  99 16204 563
> Latency             12147us    1079us   24470us    8795us   17853us    4384us
> Version  2.00       ------Sequential Create------ --------Random Create--------
> localhost.localdoma -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16     0  93 +++++ +++ +++++ +++     0  96 +++++ +++ +++++ +++
> Latency               118us     159us     269us     164us      54us     844us
>
> End @ Sat Feb 17 11:18:43 CST 2024
>
I wasn't able to get a working 12.4 system in place due to a lack of
packages. However, I did fire up a FreeBSD 14 VM and let it run
overnight on the same SSD array. It consistently ran at a much higher
speed across 50+ runs at 10-minute intervals ...

================================================================================
Begin @ Sun Feb 18 15:00:00 UTC 2024

Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
freebsd.shrew.l 64G  628k  99  1.6g  98  831m  60 1278k  99  1.1g  42 +++++ +++
Latency             13447us   68490us     207ms    7187us     195ms   17665us
Version  1.98       ------Sequential Create------ --------Random Create--------
freebsd.shrew.lab   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             18225us      18us      28us   18812us      18us      25us

End @ Sun Feb 18 15:03:11 UTC 2024
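
For anyone who wants to reproduce the cadence, the timed runs need nothing
fancier than a small loop like the one below (a minimal sketch; the target
directory, user, and log path are placeholders rather than my exact
invocation):

    #!/bin/sh
    # Sketch only: run bonnie++ roughly every 10 minutes and append the
    # output to a log. Directory, user, and log path are illustrative.
    while true; do
        echo "Begin @ $(date)"
        bonnie++ -d /mnt/bench -u root
        echo "End @ $(date)"
        sleep 600
    done >> /var/log/bonnie-runs.log 2>&1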

I used identical options to run both that VM and the Linux VM I've been
testing. The backing store for each VM is a 1TB partition, and the guest
disk interface is NVMe. Now I'm really scratching my head.

Chuck, were you testing disk performance in Linux VMs or only FreeBSD?

Anyone have ideas on why Linux disk performance would drop off a cliff 
over time?

Thanks,

-Matthew
