Date:      Tue, 16 Jul 2024 16:15:24 -0400
From:      Emil Tsalapatis <freebsd-lists@etsalapatis.com>
To:        David Chisnall <theraven@freebsd.org>
Cc:        Warner Losh <imp@bsdimp.com>, Alan Somers <asomers@freebsd.org>,  FreeBSD Hackers <freebsd-hackers@freebsd.org>
Subject:   Re: Is anyone working on VirtFS (FUSE over VirtIO)
Message-ID:  <CABFh=a6Tm=2JJdrk9LDQ+M96Wndr8+r=C4c17K3RQ0mb4+N0KQ@mail.gmail.com>
In-Reply-To: <75944503-8599-43CF-84C5-0C10CA325761@freebsd.org>
References:  <CABFh=a4t=73NLyJFqBOs1pRuo8B_d8wOH_mavnD-Da9dU-3k8Q@mail.gmail.com> <75944503-8599-43CF-84C5-0C10CA325761@freebsd.org>

Hi,

On Mon, Jul 15, 2024 at 3:47 AM David Chisnall <theraven@freebsd.org> wrote:

> Hi,
>
> This looks great! Are there infrastructure problems with supporting DAX,
> or is it 'just work'? I had hoped that the extensions to the buffer
> cache that allow ARC to own pages that are delegated to the buffer cache
> would be sufficient.

After going over the Linux code, I think adding direct mapping doesn't
require any changes outside of the FUSE and virtio code. Direct mapping
mainly requires code in the driver to manage the virtiofs device's
memory region. This is a memory region shared between guest and host
that the driver uses to back FUSE inodes. The driver then includes an
allocator that maps parts of an inode into the region.
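
To make that concrete, here is a minimal sketch of the bookkeeping such
an allocator could use. All names here (dax_window, dax_chunk, and so
on) are hypothetical and only illustrate the idea; a real driver would
also have to ask the host to back each chunk, e.g. via the
FUSE_SETUPMAPPING request the Linux protocol uses for this.

/*
 * Hypothetical bookkeeping for the shared region: carve the window into
 * fixed-size chunks, each of which either backs a range of some FUSE
 * inode or sits on a free list.
 */
#include <stdint.h>
#include <stddef.h>

#define DAX_CHUNK_SIZE  (2UL * 1024 * 1024)     /* one mapping granule */

struct dax_chunk {
        uint64_t          nodeid;       /* FUSE inode backed, 0 if free */
        uint64_t          foffset;      /* file offset mapped here */
        size_t            index;        /* chunk index in the window */
        struct dax_chunk *next;         /* free-list linkage */
};

struct dax_window {
        uint8_t          *base;         /* guest mapping of the PCI BAR */
        size_t            nchunks;
        struct dax_chunk *chunks;       /* one descriptor per chunk */
        struct dax_chunk *freelist;
};

/*
 * Map a file range into the region.  A real driver would also send the
 * host a setup-mapping request so it mmap()s the file into the matching
 * part of the shared region.
 */
static struct dax_chunk *
dax_map_chunk(struct dax_window *w, uint64_t nodeid, uint64_t foffset)
{
        struct dax_chunk *c = w->freelist;

        if (c == NULL)
                return (NULL);          /* caller must reclaim, then retry */
        w->freelist = c->next;
        c->nodeid = nodeid;
        c->foffset = foffset & ~((uint64_t)DAX_CHUNK_SIZE - 1);
        /* data lives at w->base + c->index * DAX_CHUNK_SIZE */
        return (c);
}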

It should be possible to pass host-guest shared pages to ARC, with the
caveat that the virtiofs driver must be able to reclaim them at any
time. Does the code currently allow this? Virtiofs needs this because
it maps region pages to inodes, and must reclaim cold region pages
during an allocation if no free ones are available. Basically, the
region is a separate pool of device pages that is managed directly by
virtiofs.
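
For illustration, the allocation path with reclaim could look like the
sketch below, continuing the hypothetical names from the previous one.
The two helpers (dax_lru_remove_coldest, dax_invalidate_chunk) are
assumed, not real: eviction would have to shoot down any guest mappings
of the chunk and tell the host to drop its side (FUSE_REMOVEMAPPING in
the Linux protocol) before the chunk is reused.

/* assumed helpers, not part of any real API */
static struct dax_chunk *dax_lru_remove_coldest(struct dax_window *w);
static void dax_invalidate_chunk(struct dax_window *w, struct dax_chunk *c);

/*
 * Hypothetical allocate-or-reclaim path: if no chunk is free, evict the
 * coldest mapping and reuse its chunk for the new inode range.
 */
static struct dax_chunk *
dax_map_or_reclaim(struct dax_window *w, uint64_t nodeid, uint64_t foffset)
{
        struct dax_chunk *c;

        c = dax_map_chunk(w, nodeid, foffset);
        if (c != NULL)
                return (c);

        c = dax_lru_remove_coldest(w);  /* pick a cold chunk */
        if (c == NULL)
                return (NULL);          /* every chunk is in active use */
        dax_invalidate_chunk(w, c);     /* unmap guest PTEs, drop host side */
        c->nodeid = nodeid;
        c->foffset = foffset & ~((uint64_t)DAX_CHUNK_SIZE - 1);
        return (c);
}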

> If I understand the protocol correctly, the DAX mode is the same as the
> direct mmap mode in FUSE (not sure if FreeBSD's kernel fuse bits support
> this?).

Yeah, virtiofs DAX seems similar to FUSE direct mmap, but with FUSE
inodes backed by the shared region instead. I don't think FreeBSD's
FUSE supports direct mmap, but I may be wrong there.
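
For what it's worth, the transport swap David describes downthread (in
fuse_ipc.c) reduces to something like the sketch below: instead of
copying a request out through /dev/fuse, the driver posts the request
and reply buffers on a virtqueue. sglist(9) and virtqueue(9) are
FreeBSD's existing APIs; the flat request/reply buffers are a
simplification, since the real code would build the sglist from the
fuse ticket's iovecs.

#include <sys/param.h>
#include <sys/sglist.h>
#include <dev/virtio/virtio.h>
#include <dev/virtio/virtqueue.h>

/*
 * Post one FUSE request on a virtio queue: one device-readable segment
 * for the request, one device-writable segment for the reply.
 */
static int
vtfs_enqueue_request(struct virtqueue *vq, void *req, size_t reqlen,
    void *resp, size_t resplen, void *cookie)
{
        struct sglist_seg segs[2];
        struct sglist sg;
        int error;

        sglist_init(&sg, 2, segs);
        error = sglist_append(&sg, req, reqlen);
        if (error == 0)
                error = sglist_append(&sg, resp, resplen);
        if (error == 0)
                error = virtqueue_enqueue(vq, cookie, &sg, 1, 1);
        if (error == 0)
                virtqueue_notify(vq);
        return (error);
}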

Emil



> David
>
> On 14 Jul 2024, at 15:07, Emil Tsalapatis <freebsd-lists@etsalapatis.com>
> wrote:
>
> Hi David, Warner,
>
>     I'm glad you find this approach interesting! I've been meaning to
> update the virtio-dbg patch for a while but unfortunately haven't found
> the time in the last month since I uploaded it... I'll update it soon to
> address the reviews and split off the userspace device emulation code
> out of the patch to make reviewing easier (thanks Alan for the
> suggestion). If you have any questions or feedback please let me know.
>
> WRT virtiofs itself, I've been working on it too but I haven't found the
> time to clean it up and upload it. I have a messy but working
> implementation here
> <https://github.com/etsal/freebsd-src/tree/virtiofs-head>. The changes to
> FUSE itself are indeed minimal because it is enough to redirect the
> messages into a virtiofs device instead of sending them to a local FUSE
> device. The virtiofs device and the FUSE device are both simple
> bidirectional queues. Not sure how to deal with directly mapping files
> between host and guest just yet, because the Linux driver uses their DAX
> interface for that, but it should be possible.
>
> Emil
>
> On Sun, Jul 14, 2024 at 3:11 AM David Chisnall <theraven@freebsd.org>
> wrote:
>
>> Wow, that looks incredibly useful.  Not needing bhyve / qemu (nested, if
>> your main development environment is a VM) to test virtio drivers would
>> be a huge productivity win.
>>
>> David
>>
>> On 13 Jul 2024, at 23:06, Warner Losh <imp@bsdimp.com> wrote:
>>
>> Hey David,
>>
>> You might want to check out https://reviews.freebsd.org/D45370 which
>> has the testing framework as well as hints at other work that's been done
>> for virtiofs by Emil Tsalapatis. It looks quite interesting. Anything he's
>> done that's at odds with what I've said just shows where my analysis was
>> flawed :) This looks quite promising, but I've not had the time to look at
>> it in detail yet.
>>
>> Warner
>>
>> On Sat, Jul 13, 2024 at 2:44 AM David Chisnall <theraven@freebsd.org>
>> wrote:
>>
>>> On 31 Dec 2023, at 16:19, Warner Losh <imp@bsdimp.com> wrote:
>>>
>>>
>>> Yea. The FUSE protocol is going to be the challenge here. For this to be
>>> useful, the VirtioFS support on the FreeBSD side needs to be 100% in the
>>> kernel, since you can't have userland in the loop. This isn't so terrible,
>>> though, since our VFS interface provides a natural breaking point for
>>> converting the requests into FUSE requests. The trouble, I fear, is that a
>>> mismatch between FreeBSD's VFS abstraction layer and Linux's will cause
>>> issues (many years ago, the weakness of FreeBSD VFS caused problems for a
>>> company doing caching, though things have no doubt improved from those
>>> days). Second, there's a KVM tie-in for the direct mapped pages between the
>>> VM and the hypervisor. I'm not sure how that works on the client (FreeBSD)
>>> side (though the description also says it's mapped via a PCI BAR, so maybe
>>> the VM OS doesn't care).
>>>
>>>
>>> From what I can tell from a little bit of looking at the code, our FUSE
>>> implementation has a fairly cleanly abstracted layer (in fuse_ipc.c) for
>>> handling the message queue.  For VirtioFS, it would 'just' be necessary to
>>> factor out the bits here that do uio into something that talked to a VirtIO
>>> ring.  I don't know what the VFS limitations are, but since the protocol
>>> for VirtioFS is the kernel <-> userspace protocol for FUSE, it seems that
>>> any functionality that works with FUSE filesystems in userspace would work
>>> with VirtioFS filesystems.
>>>
>>> The shared buffer cache bits are nice, but are optional, so could be
>>> done in a later version once the basic functionality worked.
>>>
>>> David
>>>
>>>
>>
