Date:      Tue, 8 Feb 2022 11:37:12 -0500
From:      Ken Merry <ken@freebsd.org>
To:        Alexander Motin <mav@FreeBSD.org>, John Baldwin <jhb@FreeBSD.org>
Cc:        Scott Long <scottl@samsco.org>, Edward Tomasz Napierała <trasz@freebsd.org>, "scsi@freebsd.org" <scsi@FreeBSD.org>
Subject:   Re: NVMeoF and ctl
Message-ID:  <737FE056-D58D-40FB-A374-16DD5C0E99CF@freebsd.org>
In-Reply-To: <2cf0c467-6bb6-55c0-586d-54ffec559c78@FreeBSD.org>
References:  <694eabe2-e796-ebdc-b3f1-eff8f8fc1b24@FreeBSD.org> <EB0FE720-A845-44CE-9C51-80F7C995CFE1@samsco.org> <2cf0c467-6bb6-55c0-586d-54ffec559c78@FreeBSD.org>

CTL is very SCSI specific, but of course when I wrote it in 2003, it was
the Copan Target Layer and ran on Linux, CAM only supported SCSI, and I
only had vague hopes of getting CTL into FreeBSD one day.

Scott and Alexander have some good points.

A few thoughts:

1. Most if not all HBAs that support NVMeoF will also support SCSI.
(Chelsio, QLogic, Emulex, and Mellanox support both.)  Whatever we do
(refactored multi-protocol CTL or separate stacks), we'll want to allow
users to run NVMeoF and SCSI target and initiator at the same time.  If
you go the separate target stack route, you can of course have separate
peripheral driver code to connect to CAM.  (I'm assuming you would
still want to go through CAM…)

2. From a user standpoint, it might be nice to have a single
configuration and management interface…but that could potentially make
the thing more unwieldy.  I guess whatever we do, we'll want it to be
well thought out.

3. It would be nice to have functionality like CTL that allows an
internally-visible NVMe target implementation.  We've got some NVMe
device emulation in bhyve, but this would be more generic: it could be
used to provide storage to bhyve VMs, or to test new NVMe initiator
code without extra hardware or cranking up a VM.

4. As Alexander pointed out, NVMe's ordering requirements are not as
complex as SCSI's.  See sys/cam/ctl/ctl_ser_table.c and the OOA queue
for an illustration of the SCSI complexity.  NVMe also allows for
multiple queues, and namespaces (which I suppose are like multiple SCSI
LUNs).  Performance, mainly low latency, will probably be a primary
design goal.  A separate stack might make that easier, although if you
did it through CTL, you would split SCSI and NVMe off in the peripheral
driver code (scsi_ctl.c) and the two codepaths probably wouldn't come
back together until you got to the block or ramdisk backend.

I don't think it must be done one way or the other.  There are some
tradeoffs.

I'm glad you're getting paid to work on it.  NVMe target is a feature
we need in FreeBSD, and I'm sure you'll do a good job with it. :)

Ken
—
Ken Merry
ken@FreeBSD.ORG



> On Feb 7, 2022, at 21:48, Alexander Motin <mav@FreeBSD.org> wrote:
> 
> I feel that if we subtracted SCSI out of CTL, there would not be much
> left aside from some very basic interfaces.  And those might benefit
> from taking different approaches given NVMe's multiple queues, more
> relaxed request ordering semantics, etc.  Recent NVMe specifications
> have pumped in many things to be on par with SCSI, but I am not sure
> it is similar enough to keep common code from turning into a huge
> mess.  Though I haven't looked at what Linux did on that front and
> how good an idea it was there.
>=20
> On 07.02.2022 19:31, Scott Long wrote:
>> CTL stands for “CAM Target Layer”, but yes, it's a Periph and it's
>> deeply tied to the SCSI protocol, even if it's mostly transport
>> agnostic.  I guess the answer to your question depends on the scope
>> of your contract.  It would be ideal to refactor CTL into
>> protocol-specific sub-modules, but that might take a significant
>> amount of work, and might not be all that satisfying at the end.
>> I'd probably just copy CTL into a new, independent module, start
>> replacing SCSI protocol idioms with NVMe ones, and along the way
>> look for low-hanging fruit that can be refactored into a common
>> library.
>> Scott
>>> On Feb 7, 2022, at 5:24 PM, John Baldwin <jhb@FreeBSD.org> wrote:
>>> 
>>> One of the things I will be working on in the near future is NVMe
>>> over fabrics support, and specifically over TCP, as Chelsio NICs
>>> include NVMe offload support (I think PDU encap/decap similar to
>>> the cxgbei driver for iSCSI offload).
>>> 
>>> A question I have about this is if it makes sense for NVMeoF target
>>> support to make use of ctl?  From what I can see, the code in ctl
>>> today seems to be very SCSI specific, in both the kernel and the
>>> userland ctld, unlike the Linux target code, which appears to
>>> support both NVMeoF and iSCSI in its ctld equivalent.  Is the
>>> intention for there to be a cleaner separation here, and if so do
>>> you have any thoughts on what the design would be like?  Or should
>>> NVMeoF just be its own thing separate from ctl and ctld?
>>> 
>>> -- 
>>> John Baldwin
>>> 
> 
> -- 
> Alexander Motin



