Date:        Fri, 12 Apr 2019 18:37:47 -0600
From:        Warner Losh <imp@bsdimp.com>
To:          "Patrick M. Hausen" <hausen@punkt.de>
Cc:          FreeBSD-STABLE Mailing List <freebsd-stable@freebsd.org>
Subject:     Re: NVME aborting outstanding i/o and controller resets
Message-ID:  <CANCZdfr_0xV=QmeTRtQbtvcp3wN0rw4C7KpgrXt3zVpmADwArg@mail.gmail.com>
In-Reply-To: <92DAD65A-9BFE-4294-9066-977F498300A3@punkt.de>
References:  <818CF16A-D71C-47C0-8A1B-35C9D8F68F4E@punkt.de>
             <CF2365AE-23EA-4F18-9520-C998216155D5@punkt.de>
             <CANCZdfoPZ9ViQzZ2k8GT5pNw5hjso3rzmYxzU=s+3K=ze+LZwg@mail.gmail.com>
             <58E4FC01-D154-42D4-BA0F-EF9A2C60DBF7@punkt.de>
             <CANCZdfpeZ-MMKB3Sh=3vhsjJcmFkGG7Jq8nW52D5S45PL3menA@mail.gmail.com>
             <45D98122-7596-4E8A-8A0D-C33E017C1109@punkt.de>
             <CANCZdfrcnRwqDPXMyT6xNKUZ5nX8x9Fj6DHbCnh+Q4mWzx0vGQ@mail.gmail.com>
             <92DAD65A-9BFE-4294-9066-977F498300A3@punkt.de>
On Fri, Apr 12, 2019, 1:22 PM Patrick M. Hausen <hausen@punkt.de> wrote:

> Hi Warner,
>
> thanks for taking the time again …
>
> > OK. This means that whatever I/O workload we've done has caused the
> > NVME card to stop responding for 30s, so we reset it.
>
> I figured as much ;-)
>
> > So it's an intel card.
>
> Yes - I already added this info several times. 6 of them, 2.5“ NVME
> „disk drives“.

Yea, it was more of a knowing sigh...

> > OK. That suggests Intel has a problem with their firmware.
>
> I came across this one:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=211713
>
> Is it more probable that Intel has got buggy firmware here than that
> „we“ are missing interrupts?

More probable bad firmware. One of the things I think is in HEAD is a
mitigation for this that looks for completed I/O on timeout before doing
a reset.

> The mainboard is the Supermicro H11SSW-NT. Two NVME drive bays share
> a connector on the mainboard:
>
>     NVMe Ports (NVMe 0~7, 10, 11, 14, 15)
>
>     The H11SSW-iN/NT has twelve (12) NVMe ports (2 ports per 1 Slim
>     SAS connector) on the motherboard.
>     These ports provide high-speed, low-latency PCI-E 3.0 x4
>     connections directly from the CPU to NVMe Solid State (SSD)
>     drives. This greatly increases SSD data-throughput performance
>     and significantly reduces PCI-E latency by simplifying
>     driver/software requirements resulting from the direct PCI-E
>     interface from the CPU to the NVMe SSD drives.
>
> Is this purely mechanical, or do two drives share PCI-E resources?
> That would explain why the problems always come in pairs (nvme6 and
> nvme7, for example).

I'm unfamiliar with this setup, but coming in pairs increases the missed
interrupt theory in my mind. Firmware issues usually don't come in pairs.

> This afternoon I set up a system with 4 drives and I was not able to
> reproduce the problem. (We just got 3 more machines which happened to
> have 4 drives each and no M.2 directly on the mainboard.)
> I will change the config to 6 drives like with the two FreeNAS systems
> in our data center.
>
> > [… nda(4) ...]
> >
> > I doubt that would have any effect. They both throw as much I/O onto
> > the card as possible in the default config.
>
> I found out - yes, just the same.

NDA drives with an iosched kernel will be able to rate limit, which may
be useful as a diagnostic tool...

> > There have been some minor improvements in -current here. Any chance
> > you could experimentally try that with this test? You won't get as
> > many I/O abort errors (since we don't print those), and we have a few
> > more workarounds for the reset path (though honestly, it's still
> > kinda stinky).
>
> HEAD or RELENG_12, too?

HEAD is preferred, but any recent snapshot will do.

Warner

> Kind regards,
> Patrick
>
> --
> punkt.de GmbH            Internet - Dienstleistungen - Beratung
> Kaiserallee 13a          Tel.: 0721 9109-0 Fax: -100
> 76133 Karlsruhe          info@punkt.de    http://punkt.de
> AG Mannheim 108285       Gf: Juergen Egeling
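
To make the mitigation Warner mentions concrete, here is a minimal,
self-contained C sketch of the idea: on a command timeout, drain the
completion queue first and only reset the controller if the timed-out
command still has not completed. Every type and function name below is
hypothetical; this is not the nvme(4) driver code, only an illustration
of the control flow under that assumption.

    /*
     * Hypothetical, standalone sketch of a "check for completed I/O
     * before resetting" timeout path.  None of these types or functions
     * are the real nvme(4) interfaces; they only model the control flow.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define NTRACKERS 8

    struct cmd {
        int  cid;    /* command identifier */
        bool done;   /* set once a completion entry is consumed */
    };

    struct queue {
        struct cmd *tracked[NTRACKERS]; /* outstanding commands */
        bool        posted[NTRACKERS];  /* completions the device wrote */
    };

    /* Drain whatever completions the device has posted so far. */
    static void
    poll_completions(struct queue *q)
    {
        for (int i = 0; i < NTRACKERS; i++) {
            if (q->tracked[i] != NULL && q->posted[i]) {
                q->tracked[i]->done = true;
                q->tracked[i] = NULL;   /* retire the tracker */
            }
        }
    }

    static void
    controller_reset(void)
    {
        printf("resetting controller\n");
    }

    /*
     * Timeout handler: before escalating to a reset, check whether the
     * command actually finished and we merely missed the interrupt.
     */
    static void
    timeout_handler(struct queue *q, struct cmd *c)
    {
        poll_completions(q);
        if (c->done) {
            printf("cid %d completed late; skipping reset "
                "(possible missed interrupt)\n", c->cid);
            return;
        }
        printf("cid %d still outstanding after timeout\n", c->cid);
        controller_reset();
    }

    int
    main(void)
    {
        struct cmd a = { .cid = 1 }, b = { .cid = 2 };
        struct queue q = { .tracked = { &a, &b } };

        q.posted[0] = true;      /* device finished cid 1, no interrupt seen */
        timeout_handler(&q, &a); /* completed late -> no reset */
        timeout_handler(&q, &b); /* genuinely stuck -> reset */
        return (0);
    }

The point of the extra poll is that a lost or missed interrupt then shows
up as a late completion that can simply be retired, instead of escalating
into a full controller reset.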