From owner-svn-src-all@freebsd.org Mon Nov  2 08:26:20 2020
From: Michal Meloun <mmel@FreeBSD.org>
Date: Mon, 2 Nov 2020 08:26:20 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org,
        svn-src-head@freebsd.org
Subject: svn commit: r367268 - head/sys/arm64/arm64
Message-Id: <202011020826.0A28QKga014118@repo.freebsd.org>

Author: mmel
Date: Mon Nov  2 08:26:19 2020
New Revision: 367268
URL: https://svnweb.freebsd.org/changeset/base/367268

Log:
  Improve loading of multipage-aligned buffers.

  Multi-page alignment requests are incompatible with many aspects of the
  current busdma code, mainly with partially bounced buffer segments and
  with the per-page loop in bus_dmamap_load_buffer(). Because a proper
  implementation would require a major restructuring of the code, fix
  only the already known uses and add KASSERTs for all other cases.

  For this reason, bus_dmamap_load_buffer() now takes memory allocated by
  bus_dmamem_alloc() as a single segment, bypassing the per-page
  segmentation. We can do this because such memory is guaranteed to be
  physically contiguous.
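To make the fixed case concrete, here is a minimal, hypothetical driver
fragment (not part of this commit; the function and variable names are
invented for illustration): it creates a tag whose alignment exceeds
PAGE_SIZE and backs it with bus_dmamem_alloc(), which is the one pattern
the changed code loads as a single contiguous segment. Loading any other
buffer through such a tag is exactly what the added KASSERTs catch.

/*
 * Hypothetical consumer: a tag with 16 KB alignment (> the 4 KB
 * PAGE_SIZE on arm64) whose DMA memory comes from bus_dmamem_alloc().
 * The resulting map is marked DMAMAP_FROM_DMAMEM, so
 * bounce_bus_dmamap_load_buffer() treats it as one contiguous segment.
 */
#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>

static int
example_alloc_ring(device_t dev, bus_dma_tag_t *tagp, bus_dmamap_t *mapp,
    void **ringp)
{
        int error;

        error = bus_dma_tag_create(bus_get_dma_tag(dev),
            4 * PAGE_SIZE,              /* alignment, multi-page */
            0,                          /* no boundary */
            BUS_SPACE_MAXADDR,          /* lowaddr */
            BUS_SPACE_MAXADDR,          /* highaddr */
            NULL, NULL,                 /* filter, filterarg */
            4 * PAGE_SIZE,              /* maxsize */
            1,                          /* nsegments */
            4 * PAGE_SIZE,              /* maxsegsz */
            0,                          /* flags */
            NULL, NULL,                 /* lockfunc, lockfuncarg */
            tagp);
        if (error != 0)
                return (error);

        /* Physically contiguous by contract; safe to load as one segment. */
        return (bus_dmamem_alloc(*tagp, ringp,
            BUS_DMA_WAITOK | BUS_DMA_ZERO, mapp));
}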
  Reviewed by:	bz
  Tested by:	imp, mv, daniel.engberg.lists_pyret.net, kjopek_gmail.com
  Differential Revision:	https://reviews.freebsd.org/D26735

Modified:
  head/sys/arm64/arm64/busdma_bounce.c

Modified: head/sys/arm64/arm64/busdma_bounce.c
==============================================================================
--- head/sys/arm64/arm64/busdma_bounce.c	Mon Nov  2 06:16:11 2020	(r367267)
+++ head/sys/arm64/arm64/busdma_bounce.c	Mon Nov  2 08:26:19 2020	(r367268)
@@ -501,13 +501,6 @@ static int
 bounce_bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags,
     bus_dmamap_t *mapp)
 {
-	/*
-	 * XXX ARM64TODO:
-	 * This bus_dma implementation requires IO-Coherent architecutre.
-	 * If IO-Coherency is not guaranteed, the BUS_DMA_COHERENT flag has
-	 * to be implented using non-cacheable memory.
-	 */
-
 	vm_memattr_t attr;
 	int mflags;
 
@@ -830,7 +823,19 @@ bounce_bus_dmamap_load_phys(bus_dma_tag_t dmat, bus_dm
 		sgsize = MIN(buflen, dmat->common.maxsegsz);
 		if (map->pagesneeded != 0 &&
 		    must_bounce(dmat, map, curaddr, sgsize)) {
-			sgsize = MIN(sgsize, PAGE_SIZE - (curaddr & PAGE_MASK));
+			/*
+			 * The attempt to split a physically continuous buffer
+			 * seems very controversial, it's unclear whether we
+			 * can do this in all cases. Also, memory for bounced
+			 * buffers is allocated as pages, so we cannot
+			 * guarantee multipage alignment.
+			 */
+			KASSERT(dmat->common.alignment <= PAGE_SIZE,
+			    ("bounced buffer cannot have alignment bigger "
+			    "than PAGE_SIZE: %lu", dmat->common.alignment));
+			sgsize = PAGE_SIZE - (curaddr & PAGE_MASK);
+			sgsize = roundup2(sgsize, dmat->common.alignment);
+			sgsize = MIN(sgsize, dmat->common.maxsegsz);
 			curaddr = add_bounce_page(dmat, map, 0, curaddr,
 			    sgsize);
 		} else if ((map->flags & DMAMAP_COHERENT) == 0) {
@@ -843,11 +848,11 @@ bounce_bus_dmamap_load_phys(bus_dma_tag_t dmat, bus_dm
 				sl++;
 				sl->vaddr = 0;
 				sl->paddr = curaddr;
-				sl->datacount = sgsize;
 				sl->pages = PHYS_TO_VM_PAGE(curaddr);
 				KASSERT(sl->pages != NULL,
 				    ("%s: page at PA:0x%08lx is not in "
 				    "vm_page_array", __func__, curaddr));
+				sl->datacount = sgsize;
 			} else
 				sl->datacount += sgsize;
 		}
@@ -880,6 +885,11 @@ bounce_bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_
 	vm_offset_t kvaddr, vaddr, sl_vend;
 	int error;
 
+	KASSERT((map->flags & DMAMAP_FROM_DMAMEM) != 0 ||
+	    dmat->common.alignment <= PAGE_SIZE,
+	    ("loading user buffer with alignment bigger than PAGE_SIZE is not "
+	    "supported"));
+
 	if (segs == NULL)
 		segs = dmat->segments;
 
@@ -895,6 +905,11 @@ bounce_bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_
 		}
 	}
 
+	/*
+	 * XXX Optimally we should parse input buffer for physically
+	 * continuous segments first and then pass these segment into
+	 * load loop.
+	 */
 	sl = map->slist + map->sync_count - 1;
 	vaddr = (vm_offset_t)buf;
 	sl_pend = 0;
@@ -916,15 +931,25 @@ bounce_bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_
 		 * Compute the segment size, and adjust counts.
 		 */
 		max_sgsize = MIN(buflen, dmat->common.maxsegsz);
-		sgsize = PAGE_SIZE - (curaddr & PAGE_MASK);
+		if ((map->flags & DMAMAP_FROM_DMAMEM) != 0) {
+			sgsize = max_sgsize;
+		} else {
+			sgsize = PAGE_SIZE - (curaddr & PAGE_MASK);
+			sgsize = MIN(sgsize, max_sgsize);
+		}
+
 		if (map->pagesneeded != 0 &&
 		    must_bounce(dmat, map, curaddr, sgsize)) {
+			/* See comment in bounce_bus_dmamap_load_phys */
+			KASSERT(dmat->common.alignment <= PAGE_SIZE,
+			    ("bounced buffer cannot have alignment bigger "
+			    "than PAGE_SIZE: %lu", dmat->common.alignment));
+			sgsize = PAGE_SIZE - (curaddr & PAGE_MASK);
 			sgsize = roundup2(sgsize, dmat->common.alignment);
 			sgsize = MIN(sgsize, max_sgsize);
 			curaddr = add_bounce_page(dmat, map, kvaddr, curaddr,
 			    sgsize);
 		} else if ((map->flags & DMAMAP_COHERENT) == 0) {
-			sgsize = MIN(sgsize, max_sgsize);
 			if (map->sync_count > 0) {
 				sl_pend = sl->paddr + sl->datacount;
 				sl_vend = sl->vaddr + sl->datacount;
@@ -934,7 +959,7 @@ bounce_bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_
 			    (kvaddr != 0 && kvaddr != sl_vend) ||
 			    (curaddr != sl_pend)) {
 				if (++map->sync_count > dmat->common.nsegments)
-					goto cleanup;
+					break;
 				sl++;
 				sl->vaddr = kvaddr;
 				sl->paddr = curaddr;
@@ -950,8 +975,6 @@ bounce_bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_
 				sl->datacount = sgsize;
 			} else
 				sl->datacount += sgsize;
-		} else {
-			sgsize = MIN(sgsize, max_sgsize);
 		}
 		sgsize = _bus_dmamap_addseg(dmat, map, curaddr, sgsize,
 		    segs, segp);
@@ -961,7 +984,6 @@ bounce_bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_
 		buflen -= sgsize;
 	}
 
-cleanup:
 	/*
 	 * Did we fit?
 	 */
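The bounce-path sizing that both KASSERTs guard can also be read in
isolation. The sketch below is an illustration only (it assumes 4 KB
pages; the EX_* macros and ex_roundup2() are local stand-ins so the
fragment builds outside the kernel) and reproduces the added arithmetic:
take the remainder of the current page, round it up to the tag's
alignment, then clamp to maxsegsz.

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define	EX_PAGE_SIZE	4096UL			/* assumption: 4 KB pages */
#define	EX_PAGE_MASK	(EX_PAGE_SIZE - 1)
/* Same definition as the kernel's roundup2(); y must be a power of two. */
#define	ex_roundup2(x, y)	(((x) + ((y) - 1)) & ~((y) - 1))

static uint64_t
bounce_sgsize(uint64_t curaddr, uint64_t alignment, uint64_t maxsegsz)
{
        uint64_t sgsize;

        /* Mirrors the KASSERT: bounce pages cannot give > page alignment. */
        assert(alignment <= EX_PAGE_SIZE);

        sgsize = EX_PAGE_SIZE - (curaddr & EX_PAGE_MASK);
        sgsize = ex_roundup2(sgsize, alignment);
        if (sgsize > maxsegsz)
                sgsize = maxsegsz;
        return (sgsize);
}

int
main(void)
{
        /* Buffer 0x100 bytes into a page, 512-byte alignment: 3840 -> 4096. */
        printf("%" PRIu64 "\n", bounce_sgsize(0x10100, 512, 65536));
        return (0);
}

Note that rounding up can push the segment past the page remainder; the
kernel code accepts that too, since the copy targets a freshly allocated
bounce page rather than the original buffer.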