Date: Fri, 17 Apr 2020 10:17:00 -0600
From: Scott Long <scottl@samsco.org>
To: Warner Losh <imp@bsdimp.com>
Cc: Miroslav Lachman <000.fbsd@quip.cz>, FreeBSD-Current <freebsd-current@freebsd.org>
Subject: Re: PCIe NVME drives not detected on Dell R6515
Message-ID: <31E8B2BE-BED2-4084-868D-32C48CB3CD6E@samsco.org>
In-Reply-To: <9EF043C1-FF8F-4997-B59A-EC3BF7D1CEEE@samsco.org>
References: <bc00d2f4-d281-e125-3333-65f38da20817@quip.cz> <0F8BCB8C-DE60-4A34-A4D8-F1BB4B9F906A@samsco.org> <CANCZdfprct8pELBaev=Ub3sXb_JRx9xovUhzxDpSwY2rXfMtrg@mail.gmail.com> <9EF043C1-FF8F-4997-B59A-EC3BF7D1CEEE@samsco.org>
You are correct about Intel vs. AMD. Comparing the full output of pciconf from FreeBSD with the fragment of lspci from Linux suggests that there's at least one set of a PCIe switch and child devices that is not being enumerated by FreeBSD. Can you send the full output of `lspci -tvv` from Linux?

Thanks,
Scott

> On Apr 17, 2020, at 9:54 AM, Scott Long <scottl@samsco.org> wrote:
>
> Would that be the Intel VMD/VROC stuff? If so, there's a driver for FreeBSD, but it's not well tested yet. I will have to dig in further.
>
> Scott
>
>
>> On Apr 17, 2020, at 9:50 AM, Warner Losh <imp@bsdimp.com> wrote:
>>
>> On Fri, Apr 17, 2020 at 9:39 AM Scott Long <scottl@samsco.org> wrote:
>> Can you send me the output of 'pciconf -llv', either on 12-STABLE or 13-CURRENT? Also, can you send me the output of 'dmesg'?
>>
>> There was another thread that said there was a RAID card in the way... It would be cool to find a way to get it out of the way... :)
>>
>> Warner
>>
>> Thanks,
>> Scott
>>
>>
>>> On Apr 17, 2020, at 5:23 AM, Miroslav Lachman <000.fbsd@quip.cz> wrote:
>>>
>>> I already asked on stable@, but since I tried 13-CURRENT with the same result, I am asking for help here.
>>>
>>> I have a rented dedicated server, a Dell PowerEdge R6515, with iDRAC access only.
>>> There are 2 NVMe drives which I wanted to use as a ZFS root pool.
>>>
>>> They are shown in iDRAC:
>>>
>>> Device Description: PCIe SSD in Slot 1 in Bay 1
>>> Device Protocol: NVMe-MI1.0
>>> Model: Dell Express Flash NVMe P4510 1TB SFF
>>> Bus: 130
>>> Manufacturer: INTEL
>>> Product ID: a54
>>> Revision: VDV1DP23
>>> Enclosure: PCIe SSD Backplane 1
>>>
>>> pciconf -l shows many things, some of them named "noneN@pci...", but none named "nvme".
>>>
>>> Here is a screenshot (12.1, but 13-CURRENT is the same):
>>>
>>> https://ibb.co/tPnymL7
>>>
>>> But I booted the Linux SystemRescueCd and the nvme devices are visible there in /dev/:
>>> https://ibb.co/sj22Nwg
>>>
>>> Here is the verbose output of Linux lspci: https://ibb.co/dPZTwV1
>>>
>>> Linux dmesg contains:
>>> nvme nvme0: pci function 0000:81:00.0
>>> nvme nvme1: pci function 0000:82:00.0
>>> nvme nvme0: 32/0/0 default/read/poll queues
>>> nvme nvme1: 32/0/0 default/read/poll queues
>>>
>>> The machine is a Dell PowerEdge R6515 with an AMD EPYC 7302P.
>>>
>>> More details extracted from the iDRAC web interface:
>>>
>>> PCIe SSD in Slot 1 in Bay 1
>>> Bus: 82
>>> BusProtocol: PCIE
>>> Device: 0
>>> DeviceDescription: PCIe SSD in Slot 1 in Bay 1
>>> DeviceProtocol: NVMe-MI1.0
>>> DeviceType: PCIeSSD
>>> DriveFormFactor: 2.5 inch
>>> FailurePredicted: NO
>>> FQDD: Disk.Bay.1:Enclosure.Internal.0-1
>>> FreeSizeInBytes: Information Not Available
>>> Function: 0
>>> HotSpareStatus: Information Not Available
>>> InstanceID: Disk.Bay.1:Enclosure.Internal.0-1
>>> Manufacturer: INTEL
>>> MaximumCapableSpeed: 8 GT/s
>>> MediaType: Solid State Drive
>>> Model: Dell Express Flash NVMe P4510 1TB SFF
>>> NegotiatedSpeed: 8 GT/s
>>> PCIeCapableLinkWidth: x4
>>> PCIeNegotiatedLinkWidth: x4
>>> PrimaryStatus: Ok
>>> ProductID: a54
>>> RaidStatus: Information Not Available
>>> RAIDType: Unknown
>>> RemainingRatedWriteEndurance: 100 %
>>> Revision: VDV1DP23
>>> SerialNumber: PHLJxxxxxxWF1PxxxxN
>>> SizeInBytes: 1000204886016
>>> Slot: 1
>>> State: Ready
>>> SystemEraseCapability: CryptographicErasePD
>>>
>>> PCIe SSD in Slot 1 in Bay 1 - PCI Device
>>> BusNumber: 130
>>> DataBusWidth: 4x or x4
>>> Description: Express Flash NVMe 1.0 TB 2.5" U.2 (P4510)
>>> DeviceDescription: PCIe SSD in Slot 1 in Bay 1
>>> DeviceNumber: 0
>>> DeviceType: PCIDevice
>>> FQDD: Disk.Bay.1:Enclosure.Internal.0-1
>>> FunctionNumber: 0
>>> InstanceID: Disk.Bay.1:Enclosure.Internal.0-1
>>> LastSystemInventoryTime: 2020-04-17T03:21:39
>>> LastUpdateTime: 2020-03-31T13:55:06
>>> Manufacturer: Intel Corporation
>>> PCIDeviceID: 0A54
>>> PCISubDeviceID: 2003
>>> PCISubVendorID: 1028
>>> PCIVendorID: 8086
>>> SlotLength: 2.5 Inch Drive Form Factor
>>> SlotType: PCI Express Gen 3 SFF-8639
>>>
>>> Can anybody shed some light on what the real problem is?
>>>
>>> Is the hardware not properly detected, or is the driver completely missing?
>>>
>>> NVMe PCIe architecture is beyond my knowledge.
>>>
>>> I really appreciate any help.
>>>
>>> Kind regards
>>> Miroslav Lachman
>>> _______________________________________________
>>> freebsd-current@freebsd.org mailing list
>>> https://lists.freebsd.org/mailman/listinfo/freebsd-current
>>> To unsubscribe, send any mail to "freebsd-current-unsubscribe@freebsd.org"
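[Editor's note: the diagnosis above hinges on spotting PCI functions that the OS enumerates but attaches no driver to; on FreeBSD these appear in `pciconf -l` as "noneN@pci...". The sketch below is not part of the original thread. It parses pciconf-style output and flags unattached NVMe-class functions; the sample lines and field layout are assumptions modeled on typical `pciconf -l` output, with the P4510's IDs (vendor 8086, device 0a54) taken from the iDRAC dump above.]

```python
import re

# Hypothetical sample in the style of `pciconf -l` on FreeBSD.
# A function with no driver attached is listed as "noneN@...".
# PCI class code 0x010802 identifies an NVMe controller.
SAMPLE = """\
nvme0@pci0:65:0:0:  class=0x010802 card=0x20031028 chip=0x0a548086 rev=0x00 hdr=0x00
none3@pci0:130:0:0: class=0x010802 card=0x20031028 chip=0x0a548086 rev=0x00 hdr=0x00
none4@pci0:131:0:0: class=0x020000 card=0x00000000 chip=0x15371022 rev=0x00 hdr=0x00
"""

# driver + unit, then domain:bus:slot:function, then the class and
# chip fields; "chip" packs the device ID (high) and vendor ID (low).
LINE = re.compile(
    r"^(?P<driver>[a-z_]+)(?P<unit>\d+)@pci\d+:"
    r"(?P<bus>\d+):(?P<slot>\d+):(?P<func>\d+):\s+"
    r"class=0x(?P<cls>[0-9a-f]{6})\s.*"
    r"chip=0x(?P<dev>[0-9a-f]{4})(?P<ven>[0-9a-f]{4})"
)

def unattached_nvme(pciconf_output):
    """Return (bus, slot, func, vendor_id, device_id) tuples for
    NVMe-class functions that no driver claimed (driver == "none")."""
    hits = []
    for line in pciconf_output.splitlines():
        m = LINE.match(line)
        if m and m.group("cls") == "010802" and m.group("driver") == "none":
            hits.append((int(m.group("bus")), int(m.group("slot")),
                         int(m.group("func")), m.group("ven"), m.group("dev")))
    return hits

print(unattached_nvme(SAMPLE))
# The none3 entry at bus 130 matches the P4510 from the iDRAC dump
# (PCIVendorID 8086, PCIDeviceID 0A54); nvme0 already has a driver.
```

In the problem reported here, though, no "none" entry with the NVMe class shows up at all, which is what points to an enumeration gap (a switch or bridge not being walked) rather than a missing driver attachment.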