Date: Sat, 31 Jul 2021 00:21:27 GMT
From: Warner Losh <imp@FreeBSD.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-branches@FreeBSD.org
Subject: git: 696065e4f21a - stable/12 - NVME: Multiple busdma related fixes.
Message-ID: <202107310021.16V0LRjv052018@gitrepo.freebsd.org>
The branch stable/12 has been updated by imp:

URL: https://cgit.FreeBSD.org/src/commit/?id=696065e4f21ad9bb52def38726c2f4ed593a2602

commit 696065e4f21ad9bb52def38726c2f4ed593a2602
Author:     Michal Meloun <mmel@FreeBSD.org>
AuthorDate: 2020-12-02 16:54:24 +0000
Commit:     Warner Losh <imp@FreeBSD.org>
CommitDate: 2021-07-31 00:02:52 +0000

    NVME: Multiple busdma related fixes.

    - In nvme_qpair_process_completions(), do the DMA sync before the
      completion buffer is used.
    - In nvme_qpair_submit_tracker(), don't do the explicit wmb() on arm
      and arm64 either.  bus_dmamap_sync() on these architectures is
      sufficient to ensure that all CPU stores are visible to external
      (including DMA) observers.
    - Allocate the completion buffer as BUS_DMA_COHERENT.  On
      non-DMA-coherent systems, buffers continuously owned (and accessed)
      by DMA must be allocated with this flag.  Note that BUS_DMA_COHERENT
      is a no-op on DMA-coherent systems (and on coherent buses in mixed
      systems).

    MFC after:              4 weeks
    Reviewed by:            mav, imp
    Differential Revision:  https://reviews.freebsd.org/D27446

    (cherry picked from commit 8f9d5a8dbf4ea69c5f9a1e3a36e23732ffaa5c75)
---
 sys/dev/nvme/nvme_qpair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/sys/dev/nvme/nvme_qpair.c b/sys/dev/nvme/nvme_qpair.c
index 9be4fc67b923..e43e8285e9bb 100644
--- a/sys/dev/nvme/nvme_qpair.c
+++ b/sys/dev/nvme/nvme_qpair.c
@@ -550,6 +550,8 @@ nvme_qpair_process_completions(struct nvme_qpair *qpair)
 	if (!qpair->is_enabled)
 		return (false);
 
+	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
+	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
 	/*
 	 * A panic can stop the CPU this routine is running on at any point.  If
 	 * we're called during a panic, complete the sq_head wrap protocol for
@@ -583,8 +585,6 @@ nvme_qpair_process_completions(struct nvme_qpair *qpair)
 		}
 	}
 
-	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
-	    BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
 	while (1) {
 		cpl = qpair->cpl[qpair->cq_head];
 
@@ -701,7 +701,7 @@ nvme_qpair_construct(struct nvme_qpair *qpair,
 		bus_dma_tag_set_domain(qpair->dma_tag, qpair->domain);
 
 	if (bus_dmamem_alloc(qpair->dma_tag, (void **)&queuemem,
-	    BUS_DMA_NOWAIT, &qpair->queuemem_map)) {
+	    BUS_DMA_COHERENT | BUS_DMA_NOWAIT, &qpair->queuemem_map)) {
 		nvme_printf(ctrlr, "failed to alloc qpair memory\n");
 		goto out;
 	}
@@ -987,7 +987,7 @@ nvme_qpair_submit_tracker(struct nvme_qpair *qpair, struct nvme_tracker *tr)
 	bus_dmamap_sync(qpair->dma_tag, qpair->queuemem_map,
 	    BUS_DMASYNC_PREREAD | BUS_DMASYNC_PREWRITE);
 
-#ifndef __powerpc__
+#if !defined( __powerpc__) && !defined( __aarch64__) && !defined( __arm__)
 	/*
 	 * powerpc's bus_dmamap_sync() already includes a heavyweight sync, but
 	 * no other archs do.
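For context, the busdma idiom this change relies on looks roughly like the
sketch below.  It is illustrative only: the toy_queue structure and the toy_*
functions are invented here and are not the nvme driver's actual code.  The
pattern is the one the commit message describes: allocate device-owned queue
memory with BUS_DMA_COHERENT, sync POSTREAD before the CPU reads completions,
and sync PREWRITE before telling the device about new submissions, relying on
bus_dmamap_sync() rather than an explicit wmb().

/* Hypothetical sketch; not the nvme driver's structures or functions. */
#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>

struct toy_queue {
	bus_dma_tag_t	tag;
	bus_dmamap_t	map;
	void		*vaddr;		/* queue memory shared with the device */
	bus_addr_t	paddr;		/* bus address handed to the device */
};

static void
toy_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
	if (error == 0 && nseg == 1)
		*(bus_addr_t *)arg = segs[0].ds_addr;
}

static int
toy_queue_alloc(device_t dev, struct toy_queue *q, bus_size_t size)
{
	int error;

	error = bus_dma_tag_create(bus_get_dma_tag(dev),
	    PAGE_SIZE, 0,			/* alignment, boundary */
	    BUS_SPACE_MAXADDR, BUS_SPACE_MAXADDR,
	    NULL, NULL, size, 1, size, 0, NULL, NULL, &q->tag);
	if (error != 0)
		return (error);

	/*
	 * The queue memory is continuously owned and accessed by the device,
	 * so it is allocated BUS_DMA_COHERENT; the flag is a no-op on
	 * DMA-coherent systems.
	 */
	error = bus_dmamem_alloc(q->tag, &q->vaddr,
	    BUS_DMA_COHERENT | BUS_DMA_NOWAIT | BUS_DMA_ZERO, &q->map);
	if (error != 0)
		return (error);

	error = bus_dmamap_load(q->tag, q->map, q->vaddr, size,
	    toy_load_cb, &q->paddr, BUS_DMA_NOWAIT);
	return (error);
}

static void
toy_queue_poll(struct toy_queue *q)
{
	/* Make the device's writes visible before reading completions. */
	bus_dmamap_sync(q->tag, q->map, BUS_DMASYNC_POSTREAD);
	/* ... read completion entries from q->vaddr here ... */
}

static void
toy_queue_submit(struct toy_queue *q)
{
	/* ... write a submission entry into q->vaddr here ... */

	/*
	 * Publish the CPU's stores before ringing the doorbell.  On powerpc,
	 * arm, and arm64 this sync already orders the stores, so no extra
	 * wmb() is needed there.
	 */
	bus_dmamap_sync(q->tag, q->map, BUS_DMASYNC_PREWRITE);
	/* ... write the doorbell register on the device's BAR ... */
}

The sketch mirrors the three points of the commit: the POSTREAD sync happens
before the completion memory is read, the coherent allocation covers buffers
the device owns for their whole lifetime, and the PREWRITE sync takes the
place of an architecture-specific barrier on the architectures listed above.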