Date:      Thu, 15 Jul 2021 22:18:17 GMT
From:      Warner Losh <imp@FreeBSD.org>
To:        src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-main@FreeBSD.org
Subject:   git: fc9a08402317 - main - nvme: Enable interrupts after qpair fully constructed
Message-ID:  <202107152218.16FMIHsl020867@gitrepo.freebsd.org>

The branch main has been updated by imp:

URL: https://cgit.FreeBSD.org/src/commit/?id=fc9a0840231770bc7e7dcfe4616babdc6d4389a6

commit fc9a0840231770bc7e7dcfe4616babdc6d4389a6
Author:     Warner Losh <imp@FreeBSD.org>
AuthorDate: 2021-07-15 22:17:23 +0000
Commit:     Warner Losh <imp@FreeBSD.org>
CommitDate: 2021-07-15 22:17:23 +0000

    nvme: Enable interrupts after qpair fully constructed
    
    To guard against the ill effects of a spurious interrupt during
    construction (or one that was bogusly pending), enable interrupts after
    the qpair is completely constructed. Otherwise, we can die with null
    pointer dereferences in nvme_qpair_process_completions. This has been
    observed on at least one pre-release NVMe drive where the MSI-X
    interrupt fired while the queue was being created, before we'd
    started the NVMe controller.
    
    The alternative of enabling the interrupts only after the rest of the
    setup was tried, but it was insufficient to work around this bug and
    made the code more complicated without benefit.
    
    Reviewed by:            mav, chuck
    Sponsored by:           Netflix
    Differential Revision:  https://reviews.freebsd.org/D31182
---
 sys/dev/nvme/nvme_qpair.c | 49 ++++++++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/sys/dev/nvme/nvme_qpair.c b/sys/dev/nvme/nvme_qpair.c
index 12770f38d42e..4402d1000e67 100644
--- a/sys/dev/nvme/nvme_qpair.c
+++ b/sys/dev/nvme/nvme_qpair.c
@@ -675,30 +675,6 @@ nvme_qpair_construct(struct nvme_qpair *qpair,
 	qpair->num_trackers = num_trackers;
 	qpair->ctrlr = ctrlr;
 
-	if (ctrlr->msix_enabled) {
-		/*
-		 * MSI-X vector resource IDs start at 1, so we add one to
-		 *  the queue's vector to get the corresponding rid to use.
-		 */
-		qpair->rid = qpair->vector + 1;
-
-		qpair->res = bus_alloc_resource_any(ctrlr->dev, SYS_RES_IRQ,
-		    &qpair->rid, RF_ACTIVE);
-		if (bus_setup_intr(ctrlr->dev, qpair->res,
-		    INTR_TYPE_MISC | INTR_MPSAFE, NULL,
-		    nvme_qpair_msix_handler, qpair, &qpair->tag) != 0) {
-			nvme_printf(ctrlr, "unable to setup intx handler\n");
-			goto out;
-		}
-		if (qpair->id == 0) {
-			bus_describe_intr(ctrlr->dev, qpair->res, qpair->tag,
-			    "admin");
-		} else {
-			bus_describe_intr(ctrlr->dev, qpair->res, qpair->tag,
-			    "io%d", qpair->id - 1);
-		}
-	}
-
 	mtx_init(&qpair->lock, "nvme qpair lock", NULL, MTX_DEF);
 
 	/* Note: NVMe PRP format is restricted to 4-byte alignment. */
@@ -818,6 +794,31 @@ nvme_qpair_construct(struct nvme_qpair *qpair,
 	qpair->act_tr = malloc_domainset(sizeof(struct nvme_tracker *) *
 	    qpair->num_entries, M_NVME, DOMAINSET_PREF(qpair->domain),
 	    M_ZERO | M_WAITOK);
+
+	if (ctrlr->msix_enabled) {
+		/*
+		 * MSI-X vector resource IDs start at 1, so we add one to
+		 *  the queue's vector to get the corresponding rid to use.
+		 */
+		qpair->rid = qpair->vector + 1;
+
+		qpair->res = bus_alloc_resource_any(ctrlr->dev, SYS_RES_IRQ,
+		    &qpair->rid, RF_ACTIVE);
+		if (bus_setup_intr(ctrlr->dev, qpair->res,
+		    INTR_TYPE_MISC | INTR_MPSAFE, NULL,
+		    nvme_qpair_msix_handler, qpair, &qpair->tag) != 0) {
+			nvme_printf(ctrlr, "unable to setup intx handler\n");
+			goto out;
+		}
+		if (qpair->id == 0) {
+			bus_describe_intr(ctrlr->dev, qpair->res, qpair->tag,
+			    "admin");
+		} else {
+			bus_describe_intr(ctrlr->dev, qpair->res, qpair->tag,
+			    "io%d", qpair->id - 1);
+		}
+	}
+
 	return (0);
 
 out:
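
The change above boils down to an ordering rule: everything that
nvme_qpair_msix_handler (and nvme_qpair_process_completions behind it)
may dereference has to exist before bus_setup_intr() wires up the
vector, because a spurious or already-pending MSI-X interrupt can fire
the instant the handler is registered. Below is a minimal sketch of that
pattern; the toy_* names and struct layout are hypothetical
illustrations rather than the driver's actual types, and only the newbus
and mutex KPIs themselves (bus_alloc_resource_any, bus_setup_intr,
mtx_init) are real.

/*
 * Minimal sketch of the ordering rule above, not the driver's actual
 * code: build every structure the interrupt handler touches before
 * bus_setup_intr() is called.  The toy_* names are hypothetical; the
 * newbus/mutex KPIs used are real FreeBSD kernel interfaces.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <sys/errno.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/malloc.h>
#include <sys/rman.h>
#include <machine/resource.h>

MALLOC_DEFINE(M_TOYQ, "toyqpair", "toy qpair sketch");

struct toy_qpair {
	struct mtx	lock;		/* protects act_tr */
	void		**act_tr;	/* state the handler dereferences */
	int		num_entries;
	struct resource	*res;		/* MSI-X IRQ resource */
	void		*tag;		/* interrupt cookie */
	int		rid;
};

static void
toy_qpair_intr(void *arg)
{
	struct toy_qpair *qp = arg;

	/* By the time this can run, the lock and act_tr already exist. */
	mtx_lock(&qp->lock);
	/* ... walk qp->act_tr and complete outstanding trackers ... */
	mtx_unlock(&qp->lock);
}

static int
toy_qpair_construct(device_t dev, struct toy_qpair *qp, uint32_t vector)
{
	/* 1. Construct everything the handler needs first. */
	mtx_init(&qp->lock, "toy qpair lock", NULL, MTX_DEF);
	qp->num_entries = 128;
	qp->act_tr = malloc(sizeof(void *) * qp->num_entries, M_TOYQ,
	    M_ZERO | M_WAITOK);

	/* 2. Only now hook up the vector (MSI-X rids start at 1). */
	qp->rid = vector + 1;
	qp->res = bus_alloc_resource_any(dev, SYS_RES_IRQ, &qp->rid,
	    RF_ACTIVE);
	if (qp->res == NULL)
		return (ENXIO);
	if (bus_setup_intr(dev, qp->res, INTR_TYPE_MISC | INTR_MPSAFE,
	    NULL, toy_qpair_intr, qp, &qp->tag) != 0)
		return (ENXIO);
	return (0);
}

Teardown would naturally mirror this in reverse: tear down the interrupt
first, then free act_tr and destroy the lock, so the handler can never
run against freed state.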


