Date:      Wed, 5 Jul 2023 22:05:04 GMT
From:      John Baldwin <jhb@FreeBSD.org>
To:        doc-committers@FreeBSD.org, dev-commits-doc-all@FreeBSD.org
Subject:   git: 3ba5fc40d0 - main - 2023Q2 status report for NVMe over Fabrics
Message-ID:  <202307052205.365M5422062349@gitrepo.freebsd.org>

The branch main has been updated by jhb:

URL: https://cgit.FreeBSD.org/doc/commit/?id=3ba5fc40d0f5a946ffb89268c319c8967365d5fb

commit 3ba5fc40d0f5a946ffb89268c319c8967365d5fb
Author:     John Baldwin <jhb@FreeBSD.org>
AuthorDate: 2023-07-05 22:04:31 +0000
Commit:     John Baldwin <jhb@FreeBSD.org>
CommitDate: 2023-07-05 22:04:31 +0000

    2023Q2 status report for NVMe over Fabrics
    
    Reviewed by:    Pau Amma <pauamma@gundo.com>, salvadore
    Differential Revision:  https://reviews.freebsd.org/D40792
---
 .../en/status/report-2023-04-2023-06/nvmf.adoc     | 71 ++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/website/content/en/status/report-2023-04-2023-06/nvmf.adoc b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
new file mode 100644
index 0000000000..445119c7f9
--- /dev/null
+++ b/website/content/en/status/report-2023-04-2023-06/nvmf.adoc
@@ -0,0 +1,71 @@
+=== NVMe over Fabrics
+
+Links: +
+link:https://github.com/bsdjhb/freebsd/tree/nvmf2[nvmf2 branch] URL: link:https://github.com/bsdjhb/freebsd/tree/nvmf2[]
+
+Contact: John Baldwin <jhb@FreeBSD.org>
+
+NVMe over Fabrics enables communication with a storage device using
+the NVMe protocol over a network fabric.
+This is similar to using iSCSI to export a storage device over a
+network using SCSI commands.
+
+NVMe over Fabrics currently defines network transports for
+Fibre Channel, RDMA, and TCP.
+
+The work in the nvmf2 branch includes a userland library (lib/libnvmf)
+which contains an abstraction for transports and an implementation of
+a TCP transport.
+It also includes changes to man:nvmecontrol[8] to add 'discover',
+'connect', and 'disconnect' commands to manage connections to a remote
+controller.
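+
+As an illustration, a hypothetical session against a remote TCP
+controller might look like the following (the address and subsystem
+NQN are made up, and the exact syntax and flags are an assumption;
+man:nvmecontrol[8] on the branch is authoritative):
+
+[source,shell]
+----
+# Ask a discovery controller which subsystems it exports.
+nvmecontrol discover 192.0.2.10
+
+# Create an association to one of the advertised subsystems.
+nvmecontrol connect 192.0.2.10 nqn.2001-03.com.example:storage0
+
+# Tear the association down again.
+nvmecontrol disconnect nvme0
+----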
+
+The branch also contains an in-kernel Fabrics implementation.
+[.filename]#nvmf_transport.ko# contains a transport abstraction that
+sits in between the nvmf host (initiator in SCSI terms) and the
+individual transports.
+[.filename]#nvmf_tcp.ko# contains an implementation of the TCP
+transport layer.
+[.filename]#nvmf.ko# contains an NVMe over Fabrics host (initiator)
+which connects to a remote controller and exports remote namespaces as
+disk devices.
+As with the man:nvme[4] driver for NVMe over PCI-express, namespaces
+are exported via [.filename]#/dev/nvmeXnsY# devices, which only
+support simple operations, but they are also exported as ndaX disk
+devices via CAM.
+Unlike man:nvme[4], man:nvmf[4] does not support the man:nvd[4] disk
+driver.
+nvmecontrol can be used with remote namespaces and remote controllers,
+for example to fetch log pages, display identify info, etc.
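+
+For example, a minimal sketch of bringing the host side up (the module
+names come from this branch; the device names and the assumption that
+man:kldload[8] pulls in [.filename]#nvmf_transport.ko# as a dependency
+are illustrative):
+
+[source,shell]
+----
+# Load the Fabrics host and the TCP transport; nvmf_transport.ko is
+# assumed to be loaded automatically as a dependency.
+kldload nvmf nvmf_tcp
+
+# After 'nvmecontrol connect', a remote namespace shows up both as a
+# simple /dev/nvmeXnsY device and as a CAM nda disk.
+ls /dev/nvme0ns1 /dev/nda0
+
+# Standard nvmecontrol commands work on the remote controller, e.g.
+# identify data and the SMART / health information log page (0x02).
+nvmecontrol identify nvme0
+nvmecontrol logpage -p 2 nvme0
+----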
+
+Note that man:nvmf[4] is currently a bit simple and some error cases
+are still a TODO.
+If an error occurs, the queues (and backing network connections) are
+dropped, but the devices stay around with I/O requests paused.
+'nvmecontrol reconnect' can be used to establish a new set of network
+connections and resume operation.
+Unlike iSCSI, which uses a persistent daemon (man:iscsid[8]) to
+reconnect after an error, reconnection must be done manually.
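+
+For example (the device name and address are hypothetical, and the
+exact argument syntax is an assumption; see man:nvmecontrol[8] on the
+branch):
+
+[source,shell]
+----
+# I/O on the paused devices resumes once a fresh set of queues is
+# established to the same remote controller.
+nvmecontrol reconnect nvme0 192.0.2.10
+----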
+
+The current code is very new and likely not robust.
+It is certainly not ready for production use.
+Experienced users who do not mind all their data vanishing in a puff
+of smoke after a kernel panic and who have an interest in NVMe over
+Fabrics can start testing it at their own risk.
+
+The next main task is to implement a Fabrics controller (target in
+SCSI language).
+Probably a simple one in userland first, followed by a "real" one
+that offloads the data handling to the kernel but is somewhat
+integrated with man:ctld[8], so that individual disk devices can be
+exported via iSCSI, NVMe, or both, using a single config file and a
+single daemon to manage all of that.
+This may require a fair bit of refactoring in ctld to make it less
+iSCSI-specific.
+Working on the controller side will also validate some of the
+currently under-tested API design decisions in the
+transport-independent layer.
+I think it probably does not make sense to merge any of the NVMe over
+Fabrics changes into the tree until after this step.
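+
+Purely as a sketch of that goal, and not as any existing syntax, a
+combined configuration might one day look something like the
+following (the "controller", "transport", and "namespace" keywords
+are invented for this example; only the iSCSI half reflects today's
+man:ctl.conf[5]):
+
+[source,shell]
+----
+# Hypothetical /etc/ctl.conf exporting one disk over both iSCSI and
+# NVMe over Fabrics; none of the NVMe keywords exist today.
+cat <<'EOF' > /etc/ctl.conf
+target iqn.2012-06.com.example:target0 {
+	portal-group default
+	lun 0 {
+		path /dev/zvol/tank/vol0
+	}
+}
+controller nqn.2001-03.com.example:target0 {
+	transport tcp
+	namespace 1 {
+		path /dev/zvol/tank/vol0
+	}
+}
+EOF
+----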
+
+Sponsored by: Chelsio Communications


