Date: Fri, 17 Apr 2020 10:15:50 -0600
From: Warner Losh <imp@bsdimp.com>
To: Scott Long <scottl@samsco.org>
Cc: Miroslav Lachman <000.fbsd@quip.cz>, FreeBSD-Current <freebsd-current@freebsd.org>
Subject: Re: PCIe NVME drives not detected on Dell R6515
Message-ID: <CANCZdfpSt-CpxUEEE8c=dB6Viydf8Ghrf54ASz22W0-MgT2Kkw@mail.gmail.com>
In-Reply-To: <9EF043C1-FF8F-4997-B59A-EC3BF7D1CEEE@samsco.org>
References: <bc00d2f4-d281-e125-3333-65f38da20817@quip.cz> <0F8BCB8C-DE60-4A34-A4D8-F1BB4B9F906A@samsco.org> <CANCZdfprct8pELBaev=Ub3sXb_JRx9xovUhzxDpSwY2rXfMtrg@mail.gmail.com> <9EF043C1-FF8F-4997-B59A-EC3BF7D1CEEE@samsco.org>
No. It was some kind of extra card (Perteq?) managing things in a Dell
server. It may be the VMD/VROC stuff, and that driver may be worth a
shot, but I'm thinking not. It looks like Doug's code for that is in
the tree, though: https://reviews.freebsd.org/rS353380. Or are other
changes needed? I've been blessed with either being able to turn this
off, or not having to deal...

Warner
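If someone wants to give vmd(4) a shot on that box, a smoke test might
look like the sketch below. It's only a sketch: it assumes the driver
from that review is built as a loadable module on the snapshot in use
(it may need "device vmd" in the kernel config instead), and I don't
know what, if anything, a VMD endpoint looks like on an EPYC machine.

  # See whether a VMD endpoint is sitting in front of the drives; VMD
  # hides the NVMe functions behind its own PCI device.
  pciconf -lv | grep -B 3 -iE 'volume|vmd'

  # Try loading the driver (assumes vmd.ko exists on this snapshot):
  kldload vmd

  # If it attaches, the NVMe drives should probe behind it:
  pciconf -l | grep -E '^(vmd|nvme)'
  dmesg | grep -iE 'vmd|nvme'

If nothing attaches, that points back at the extra-card theory rather
than VMD.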
On Fri, Apr 17, 2020 at 9:54 AM Scott Long <scottl@samsco.org> wrote:

> Would that be the Intel VMD/VROC stuff? If so, there's a driver for
> FreeBSD, but it's not well tested yet. Will have to dig in further.
>
> Scott
>
> > On Apr 17, 2020, at 9:50 AM, Warner Losh <imp@bsdimp.com> wrote:
> >
> > On Fri, Apr 17, 2020 at 9:39 AM Scott Long <scottl@samsco.org> wrote:
> > Can you send me the output of 'pciconf -llv', either in 12-STABLE
> > or 13-CURRENT? Also, can you send me the output of 'dmesg'?
> >
> > There was another thread that said there was a raid card in the
> > way... It would be cool to find a way to get it out of the way... :)
> >
> > Warner
> >
> > Thanks,
> > Scott
> >
> > > On Apr 17, 2020, at 5:23 AM, Miroslav Lachman <000.fbsd@quip.cz> wrote:
> > >
> > > I already asked on stable@, but since I tried it on 13-CURRENT
> > > with the same result, I am asking for help here.
> > >
> > > I have rented a dedicated Dell PowerEdge R6515 server with iDRAC
> > > access only. There are 2 NVMe drives which I want to use as a ZFS
> > > root pool.
> > >
> > > They are shown in iDRAC:
> > >
> > > Device Description: PCIe SSD in Slot 1 in Bay 1
> > > Device Protocol: NVMe-MI1.0
> > > Model: Dell Express Flash NVMe P4510 1TB SFF
> > > Bus: 130
> > > Manufacturer: INTEL
> > > Product ID: a54
> > > Revision: VDV1DP23
> > > Enclosure: PCIe SSD Backplane 1
> > >
> > > pciconf -l shows many things, some of them named "noneN@pci...",
> > > but none "nvme".
> > >
> > > Here is a screenshot (12.1, but 13-CURRENT is the same):
> > > https://ibb.co/tPnymL7
> > >
> > > But when I booted the Linux SystemRescueCd, the nvme devices are
> > > visible in /dev/:
> > > https://ibb.co/sj22Nwg
> > >
> > > Here is the verbose output of Linux lspci: https://ibb.co/dPZTwV1
> > >
> > > Linux dmesg contains:
> > > nvme nvme0: pci function 0000:81:00.0
> > > nvme nvme1: pci function 0000:82:00.0
> > > nvme nvme0: 32/0/0 default/read/poll queues
> > > nvme nvme1: 32/0/0 default/read/poll queues
> > >
> > > The machine is a Dell PowerEdge R6515 with an AMD EPYC 7302P.
> > >
> > > More details extracted from the iDRAC web interface:
> > >
> > > PCIe SSD in Slot 1 in Bay 1
> > > Bus: 82
> > > BusProtocol: PCIE
> > > Device: 0
> > > DeviceDescription: PCIe SSD in Slot 1 in Bay 1
> > > DeviceProtocol: NVMe-MI1.0
> > > DeviceType: PCIeSSD
> > > DriveFormFactor: 2.5 inch
> > > FailurePredicted: NO
> > > FQDD: Disk.Bay.1:Enclosure.Internal.0-1
> > > FreeSizeInBytes: Information Not Available
> > > Function: 0
> > > HotSpareStatus: Information Not Available
> > > InstanceID: Disk.Bay.1:Enclosure.Internal.0-1
> > > Manufacturer: INTEL
> > > MaximumCapableSpeed: 8 GT/s
> > > MediaType: Solid State Drive
> > > Model: Dell Express Flash NVMe P4510 1TB SFF
> > > NegotiatedSpeed: 8 GT/s
> > > PCIeCapableLinkWidth: x4
> > > PCIeNegotiatedLinkWidth: x4
> > > PrimaryStatus: Ok
> > > ProductID: a54
> > > RaidStatus: Information Not Available
> > > RAIDType: Unknown
> > > RemainingRatedWriteEndurance: 100 %
> > > Revision: VDV1DP23
> > > SerialNumber: PHLJxxxxxxWF1PxxxxN
> > > SizeInBytes: 1000204886016
> > > Slot: 1
> > > State: Ready
> > > SystemEraseCapability: CryptographicErasePD
> > >
> > > PCIe SSD in Slot 1 in Bay 1 - PCI Device
> > > BusNumber: 130
> > > DataBusWidth: 4x or x4
> > > Description: Express Flash NVMe 1.0 TB 2.5" U.2 (P4510)
> > > DeviceDescription: PCIe SSD in Slot 1 in Bay 1
> > > DeviceNumber: 0
> > > DeviceType: PCIDevice
> > > FQDD: Disk.Bay.1:Enclosure.Internal.0-1
> > > FunctionNumber: 0
> > > InstanceID: Disk.Bay.1:Enclosure.Internal.0-1
> > > LastSystemInventoryTime: 2020-04-17T03:21:39
> > > LastUpdateTime: 2020-03-31T13:55:06
> > > Manufacturer: Intel Corporation
> > > PCIDeviceID: 0A54
> > > PCISubDeviceID: 2003
> > > PCISubVendorID: 1028
> > > PCIVendorID: 8086
> > > SlotLength: 2.5 Inch Drive Form Factor
> > > SlotType: PCI Express Gen 3 SFF-8639
> > >
> > > Can anybody shed some light on what the real problem is?
> > > Is the hardware not properly detected, or is the driver completely
> > > missing?
> > >
> > > NVMe PCIe architecture is outside my knowledge. I really
> > > appreciate any help.
> > >
> > > Kind regards
> > > Miroslav Lachman
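P.S. From the iDRAC inventory above, the drive is PCI vendor 0x8086,
device 0x0a54 (Dell subvendor 0x1028, subdevice 0x2003), and Linux saw
it at 0000:81:00.0 and 0000:82:00.0. So a first check is whether
FreeBSD enumerates those functions at all, even as "none" devices. A
sketch (bus numbers here are my conversion of hex 81/82 to the decimal
that pciconf selectors use):

  # pciconf reports the IDs as chip=0x<device><vendor>, so the P4510
  # should show up as chip=0x0a548086, possibly with no driver attached:
  pciconf -l | grep -i 'chip=0x0a548086'

  # Full details for whatever sits at the locations Linux reported
  # (0x81 = 129, 0x82 = 130; selector is pci<domain>:<bus>:<slot>:<func>):
  pciconf -lv pci0:129:0:0
  pciconf -lv pci0:130:0:0

If those turn up nothing, the drives are being hidden from the PCI
probe entirely, which again smells like a card or backplane in the way.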