Date:      Mon, 15 Apr 2019 15:35:42 -0000
From:      Alexander Motin <mav@FreeBSD.org>
To:        src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org, svn-src-stable-11@freebsd.org
Subject:   svn commit: r346238 - stable/11/sys/dev/nvme
Message-ID:  <201904151535.x3FFZhP3011475@repo.freebsd.org>

Author: mav
Date: Mon Apr 15 15:35:42 2019
New Revision: 346238
URL: https://svnweb.freebsd.org/changeset/base/346238

Log:
  MFC r337273 (by jhibbits):
  nvme(4): Add bus_dmamap_sync() at the end of the request path
  
  Summary:
  Some architectures, in this case powerpc64, need explicit synchronization
  barriers with respect to device accesses.
  
  Prior to this change, when running 'make buildworld -j72' on an 18-core
  (72-thread) POWER9, I would often see controller resets.  With this change
  I no longer see those reset messages, though another tester still does, for
  reasons yet to be determined, so this may not be a complete fix.
  Additionally, I see a ~5-10% speedup in buildworld times, likely because
  the controller no longer needs to be reset.
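
For context, the busdma discipline this change applies is the standard
FreeBSD pattern: sync with the PRE* flags before handing memory to the
device, and with the POST* flags after the device has finished, before the
CPU reads the results.  A minimal sketch follows; the tag, map, and
function name are illustrative stand-ins, not the driver's actual fields:

	#include <sys/param.h>
	#include <sys/bus.h>
	#include <machine/bus.h>

	/* Hypothetical round trip through one DMA-mapped buffer. */
	static void
	dma_roundtrip_sketch(bus_dma_tag_t tag, bus_dmamap_t map)
	{
		/* CPU stores are done; make them visible to the device. */
		bus_dmamap_sync(tag, map,
		    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);

		/* ... device DMA happens here, e.g. after a doorbell ... */

		/* Device is done; make its writes visible to the CPU. */
		bus_dmamap_sync(tag, map,
		    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
		bus_dmamap_unload(tag, map);
	}

On architectures with coherent DMA these syncs may cost little or nothing;
on powerpc64 they emit the heavyweight sync the hardware requires, which is
why the missing calls only showed up there.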

Modified:
  stable/11/sys/dev/nvme/nvme_qpair.c
Directory Properties:
  stable/11/   (props changed)

Modified: stable/11/sys/dev/nvme/nvme_qpair.c
==============================================================================
--- stable/11/sys/dev/nvme/nvme_qpair.c	Mon Apr 15 15:09:25 2019	(r346237)
+++ stable/11/sys/dev/nvme/nvme_qpair.c	Mon Apr 15 15:35:42 2019	(r346238)
@@ -321,9 +321,13 @@ nvme_qpair_complete_tracker(struct nvme_qpair *qpair, 
 		req->retries++;
 		nvme_qpair_submit_tracker(qpair, tr);
 	} else {
-		if (req->type != NVME_REQUEST_NULL)
+		if (req->type != NVME_REQUEST_NULL) {
+			bus_dmamap_sync(qpair->dma_tag_payload,
+			    tr->payload_dma_map,
+			    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
 			bus_dmamap_unload(qpair->dma_tag_payload,
 			    tr->payload_dma_map);
+		}
 
 		nvme_free_request(req);
 		tr->req = NULL;
@@ -407,6 +411,8 @@ nvme_qpair_process_completions(struct nvme_qpair *qpai
 		 */
 		return (false);
 
+	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
+	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
 	while (1) {
 		cpl = &qpair->cpl[qpair->cq_head];
 
@@ -749,7 +755,16 @@ nvme_qpair_submit_tracker(struct nvme_qpair *qpair, st
 	if (++qpair->sq_tail == qpair->num_entries)
 		qpair->sq_tail = 0;
 
+	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
+	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
+#ifndef __powerpc__
+	/*
+	 * powerpc's bus_dmamap_sync() already includes a heavyweight sync, but
+	 * no other archs do.
+	 */
 	wmb();
+#endif
+
 	nvme_mmio_write_4(qpair->ctrlr, doorbell[qpair->id].sq_tdbl,
 	    qpair->sq_tail);
 
@@ -800,6 +815,8 @@ nvme_payload_map(void *arg, bus_dma_segment_t *seg, in
 		tr->req->cmd.prp2 = 0;
 	}
 
+	bus_dmamap_sync(tr->qpair->dma_tag_payload, tr->payload_dma_map,
+	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
 	nvme_qpair_submit_tracker(tr->qpair, tr);
 }
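
The ordering in nvme_qpair_submit_tracker() above is the critical piece:
the new submission queue entry must be visible in DMA memory before the
doorbell write tells the controller to fetch it.  Condensed from the hunk
above, with comments of my own, the non-powerpc sequence is:

	/* Flush the new SQ entry out to DMA-visible memory. */
	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
	wmb();		/* order the entry's stores before the doorbell */
	nvme_mmio_write_4(qpair->ctrlr, doorbell[qpair->id].sq_tdbl,
	    qpair->sq_tail);

On powerpc the explicit wmb() is skipped, since bus_dmamap_sync() already
issues a heavyweight sync there, as the comment in the hunk notes.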
 