Date: Wed, 28 Feb 2024 15:31:39 -0600 From: Matthew Grooms <mgrooms@shrew.net> To: Vitaliy Gusev <gusev.vitaliy@gmail.com> Cc: virtualization@freebsd.org Subject: Re: bhyve disk performance issue Message-ID: <3738b08a-7841-4d18-9439-5f1c73a5c9e1@shrew.net> In-Reply-To: <3850080E-EBD1-4414-9C4E-DD89611C9F58@gmail.com> References: <6a128904-a4c1-41ec-a83d-56da56871ceb@shrew.net> <28ea168c-1211-4104-b8b4-daed0e60950d@app.fastmail.com> <0ff6f30a-b53a-4d0f-ac21-eaf701d35d00@shrew.net> <6f6b71ac-2349-4045-9eaf-5c50d42b89be@shrew.net> <50614ea4-f0f9-44a2-b5e6-ebb33cfffbc4@shrew.net> <6a4e7e1d-cca5-45d4-a268-1805a15d9819@shrew.net> <f01a9bca-7023-40c0-93f2-8cdbe4cd8078@tubnor.net> <edb80fff-561b-4dc5-95ee-204e0c6d95df@shrew.net> <a07d070b-4dc1-40c9-bc80-163cd59a5bfc@Duedinghausen.eu> <e45c95df-4858-48aa-a274-ba1bf8e599d5@shrew.net> <BE794E98-7B69-4626-BB66-B56F23D6A67E@gmail.com> <25ddf43d-f700-4cb5-af2a-1fe669d1e24b@shrew.net> <1DAEB435-A613-4A04-B63F-D7AF7A0B7C0A@gmail.com> <b353b39a-56d3-4757-a607-3c612944b509@shrew.net> <3850080E-EBD1-4414-9C4E-DD89611C9F58@gmail.com>
On 2/28/24 15:02, Vitaliy Gusev wrote:
>
>
>> On 28 Feb 2024, at 23:03, Matthew Grooms <mgrooms@shrew.net> wrote:
>>
>> ...
>> The virtual disks were provisioned with either a 128G disk image or a
>> 1TB raw partition, so I don't think space was an issue.
>>
>> Trim is definitely not an issue. I'm using a tiny fraction of the
>> 32TB array and have tried both a heavily under-provisioned HW RAID10
>> and a SW RAID10 using GEOM. The latter was tested after sending full
>> trim resets to all drives individually.
>>
> It could then be that if TRIM/UNMAP is not used, the zvol (for
> instance) becomes full after a while. ZFS considers all of its blocks
> to be in use, and write operations can run into trouble. I believe
> this was recently fixed.
>
> Also look at this one:
>
> GuestFS->UNMAP->bhyve->Host-FS->PhysicalDisk
>
> The problem with UNMAP is that it can cause unpredictable slowdowns
> at any time. So I would suggest checking results with UNMAP enabled
> and disabled in the guest.
>
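[For reference, an A/B test of the kind suggested above might be done by toggling TRIM inside a FreeBSD guest and re-running the benchmark. This is only a sketch; the pool name "zroot" and device name "vtbd0p2" are illustrative assumptions, not taken from this thread.]

```shell
# ZFS guest: disable automatic TRIM on the pool, run the workload,
# then re-enable and compare results.
zpool set autotrim=off zroot    # pool name "zroot" is an assumption
zpool set autotrim=on zroot

# UFS guest: toggle TRIM at the filesystem level instead.
# tunefs requires the filesystem to be unmounted (or mounted read-only).
tunefs -t disable /dev/vtbd0p2  # device name is an assumption
tunefs -t enable  /dev/vtbd0p2
```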
Yes. I'm aware of issues associated with TRIM/UNMAP, but to my 
knowledge no hardware RAID vendor implements it. I tested with both 
hardware and software RAID10, and the issue I'm reporting is present in 
both cases. I'm quite certain this has nothing to do with TRIM/UNMAP.
-Matthew
