Date:      Fri, 16 Feb 2024 09:55:27 -0800
From:      "Chuck Tuffli" <chuck@tuffli.net>
To:        "Paul Procacci" <pprocacci@gmail.com>
Cc:        "Matthew Grooms" <mgrooms@shrew.net>, "FreeBSD virtualization" <freebsd-virtualization@freebsd.org>
Subject:   Re: bhyve disk performance issue
Message-ID:  <5fc8b9d9-da94-4694-9134-d0cc22df2eaf@app.fastmail.com>
In-Reply-To:  <CAFbbPuhqtUcDnt=7PdgOqspr4b2T3zyFuWF8fuhNd5G6VZN=+w@mail.gmail.com>
References:  <6a128904-a4c1-41ec-a83d-56da56871ceb@shrew.net> <28ea168c-1211-4104-b8b4-daed0e60950d@app.fastmail.com> <CAFbbPuhqtUcDnt=7PdgOqspr4b2T3zyFuWF8fuhNd5G6VZN=+w@mail.gmail.com>

On Fri, Feb 16, 2024, at 9:45 AM, Paul Procacci wrote:
>
> On Fri, Feb 16, 2024 at 12:43 PM Chuck Tuffli <chuck@tuffli.net> wrote:
>>
>> On Fri, Feb 16, 2024, at 9:19 AM, Matthew Grooms wrote:
>>> Hi All,
>>>
>>> I'm in the middle of a project that involves building out a handful
>>> of servers to host virtual Linux instances. Part of that includes
>>> testing bhyve to see how it performs. The intent is to compare host
>>> storage options such as raw vs zvol block devices and ufs vs zfs
>>> disk images using hardware raid vs zfs managed disks. It would also
>>> involve
>>>
>> …
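
[A concrete sketch of that backend comparison, in case it's useful;
pool and path names here are made up:

    # zvol block device backend
    zfs create -V 64G tank/vm/linux0

    # raw file image on a ufs filesystem
    truncate -s 64G /vm/linux0.img

Either one can then be handed to bhyve as the guest disk.]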
>>> Here is a list of a few other things I'd like to try:
>>>
>>> 1) Wiring guest memory ( unlikely as it's 32G of 256G )
>>> 2) Downgrading the host to 13.2-RELEASE
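
[On item 1: wiring is bhyve's -S flag, and if vm-bhyve manages the
guest I believe the wired_memory knob maps to the same thing. A minimal
sketch, guest name hypothetical:

    # vm-bhyve guest config
    wired_memory="yes"

    # equivalent raw bhyve flag
    bhyve -S -c 4 -m 32G ... linux0

Wiring 32G of 256G mostly rules out host paging as a variable rather
than buying throughput by itself.]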
>>
>> FWIW we recently did a similar exercise and saw significant
>> performance differences on ZFS-backed disk images when comparing 14.0
>> and 13.2. We didn't have time to root cause the difference, so it
>> could simply be some tuning difference needed for 14.
>>
>> --chuck
> I myself am actually doing something very similar.
> I was seeing atrocious disk performance until I set the disk type to
> nvme. Now it's screaming fast.
>
> disk0_type="nvme"
>
> Not sure what yours is set at, but it might be worth looking into.
Similar to Matthew, we were testing both virtio and nvme and saw
performance differences for both emulation types between 13 and 14.
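
In case it helps anyone reproducing this, the emulation type is just
the device string on the bhyve PCI slot. A rough sketch, with the slot
number and zvol path made up:

    # NVMe emulation
    bhyve ... -s 3,nvme,/dev/zvol/tank/vm/linux0 ...

    # virtio-blk emulation
    bhyve ... -s 3,virtio-blk,/dev/zvol/tank/vm/linux0 ...

Under vm-bhyve that corresponds to disk0_type="nvme" vs
disk0_type="virtio-blk" in the guest config, so both backends can be
benchmarked against the same zvol.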



