From: Ruslan Bukin <br@FreeBSD.org> Date: Tue, 4 Jun 2019 15:32:56 +0000 (UTC) To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-stable@freebsd.org, svn-src-stable-12@freebsd.org Subject: svn commit: r348621 - in stable/12/sys: conf riscv/include riscv/riscv Message-Id: <201906041532.x54FWuGN038194@repo.freebsd.org> Author: br Date: Tue Jun 4 15:32:56 2019 New Revision: 348621 URL: https://svnweb.freebsd.org/changeset/base/348621 Log: MFC r347225: Provide a template for busdma code for RISC-V. The RISC-V ISA specifies no cache-management instructions, so leave the cache operations in cpufunc.h as no-ops for now. Note that some new hardware comes with its own memory-mapped cache-management controller. Tested on a HiFive Unleashed board with cgem(4). Sponsored by: DARPA, AFRL
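For orientation before the diff: the busdma KPI this template fills in is driven by peripheral drivers such as cgem(4) roughly as sketched below. The fragment is illustrative only, not part of the commit; the softc layout and function names are hypothetical.

/*
 * Hypothetical driver fragment: allocate a 4 KB, page-aligned,
 * single-segment DMA ring and learn the bus address the device uses.
 */
#include <sys/param.h>
#include <sys/bus.h>
#include <machine/bus.h>

struct ring_softc {
        bus_dma_tag_t   tag;
        bus_dmamap_t    map;
        void            *ring;          /* KVA of the ring */
        bus_addr_t      busaddr;        /* address the device sees */
};

static void
ring_load_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
{
        if (error == 0)
                *(bus_addr_t *)arg = segs[0].ds_addr;
}

static int
ring_dma_alloc(device_t dev, struct ring_softc *sc)
{
        int error;

        error = bus_dma_tag_create(bus_get_dma_tag(dev),
            PAGE_SIZE, 0,               /* alignment, boundary */
            BUS_SPACE_MAXADDR,          /* lowaddr */
            BUS_SPACE_MAXADDR,          /* highaddr */
            NULL, NULL,                 /* filter, filterarg */
            PAGE_SIZE, 1, PAGE_SIZE,    /* maxsize, nsegments, maxsegsz */
            0, NULL, NULL, &sc->tag);
        if (error != 0)
                return (error);
        error = bus_dmamem_alloc(sc->tag, &sc->ring,
            BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_COHERENT, &sc->map);
        if (error != 0)
                return (error);
        return (bus_dmamap_load(sc->tag, sc->map, sc->ring, PAGE_SIZE,
            ring_load_cb, &sc->busaddr, BUS_DMA_NOWAIT));
}

Because the cache hooks added below are no-ops, bus_dmamap_sync() on this port reduces to bounce-buffer copying; coherent hardware needs nothing more, and hardware with a memory-mapped cache controller would presumably hook in behind the same macros later.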
Added: stable/12/sys/riscv/include/bus_dma_impl.h (contents, props changed) stable/12/sys/riscv/riscv/busdma_bounce.c (contents, props changed) Modified: stable/12/sys/conf/files.riscv stable/12/sys/riscv/include/bus_dma.h stable/12/sys/riscv/include/cpufunc.h stable/12/sys/riscv/riscv/busdma_machdep.c stable/12/sys/riscv/riscv/machdep.c Modified: stable/12/sys/conf/files.riscv ============================================================================== --- stable/12/sys/conf/files.riscv Tue Jun 4 15:30:46 2019 (r348620) +++ stable/12/sys/conf/files.riscv Tue Jun 4 15:32:56 2019 (r348621) @@ -28,6 +28,7 @@ libkern/memset.c standard riscv/riscv/autoconf.c standard riscv/riscv/bus_machdep.c standard riscv/riscv/bus_space_asm.S standard +riscv/riscv/busdma_bounce.c standard riscv/riscv/busdma_machdep.c standard riscv/riscv/clock.c standard riscv/riscv/copyinout.S standard Modified: stable/12/sys/riscv/include/bus_dma.h ============================================================================== --- stable/12/sys/riscv/include/bus_dma.h Tue Jun 4 15:30:46 2019 (r348620) +++ stable/12/sys/riscv/include/bus_dma.h Tue Jun 4 15:32:56 2019 (r348621) @@ -3,7 +3,139 @@ #ifndef _MACHINE_BUS_DMA_H_ #define _MACHINE_BUS_DMA_H_ +#define WANT_INLINE_DMAMAP #include <sys/bus_dma.h> -#include <machine/bus.h> + +#include <machine/bus_dma_impl.h> + +/* + * Allocate a handle for mapping from kva/uva/physical + * address space into bus device space. + */ +static inline int +bus_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + return (tc->impl->map_create(dmat, flags, mapp)); +} + +/* + * Destroy a handle for mapping from kva/uva/physical + * address space into bus device space. + */ +static inline int +bus_dmamap_destroy(bus_dma_tag_t dmat, bus_dmamap_t map) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + return (tc->impl->map_destroy(dmat, map)); } + +/* + * Allocate a piece of memory that can be efficiently mapped into + * bus device space based on the constraints listed in the dma tag. + * A dmamap for use with dmamap_load is also allocated. + */ +static inline int +bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags, + bus_dmamap_t *mapp) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + return (tc->impl->mem_alloc(dmat, vaddr, flags, mapp)); +} + +/* + * Free a piece of memory and its associated dmamap that were allocated + * via bus_dmamem_alloc. Make the same choice for free/contigfree. + */ +static inline void +bus_dmamem_free(bus_dma_tag_t dmat, void *vaddr, bus_dmamap_t map) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + tc->impl->mem_free(dmat, vaddr, map); +} + +/* + * Release the mapping held by map.
+ */ +static inline void +bus_dmamap_unload(bus_dma_tag_t dmat, bus_dmamap_t map) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + tc->impl->map_unload(dmat, map); +} + +static inline void +bus_dmamap_sync(bus_dma_tag_t dmat, bus_dmamap_t map, bus_dmasync_op_t op) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + tc->impl->map_sync(dmat, map, op); +} + +static inline int +_bus_dmamap_load_phys(bus_dma_tag_t dmat, bus_dmamap_t map, vm_paddr_t buf, + bus_size_t buflen, int flags, bus_dma_segment_t *segs, int *segp) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + return (tc->impl->load_phys(dmat, map, buf, buflen, flags, segs, + segp)); +} + +static inline int +_bus_dmamap_load_ma(bus_dma_tag_t dmat, bus_dmamap_t map, struct vm_page **ma, + bus_size_t tlen, int ma_offs, int flags, bus_dma_segment_t *segs, + int *segp) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + return (tc->impl->load_ma(dmat, map, ma, tlen, ma_offs, flags, + segs, segp)); +} + +static inline int +_bus_dmamap_load_buffer(bus_dma_tag_t dmat, bus_dmamap_t map, void *buf, + bus_size_t buflen, struct pmap *pmap, int flags, bus_dma_segment_t *segs, + int *segp) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + return (tc->impl->load_buffer(dmat, map, buf, buflen, pmap, flags, segs, + segp)); +} + +static inline void +_bus_dmamap_waitok(bus_dma_tag_t dmat, bus_dmamap_t map, + struct memdesc *mem, bus_dmamap_callback_t *callback, void *callback_arg) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + tc->impl->map_waitok(dmat, map, mem, callback, callback_arg); +} + +static inline bus_dma_segment_t * +_bus_dmamap_complete(bus_dma_tag_t dmat, bus_dmamap_t map, + bus_dma_segment_t *segs, int nsegs, int error) +{ + struct bus_dma_tag_common *tc; + + tc = (struct bus_dma_tag_common *)dmat; + return (tc->impl->map_complete(dmat, map, segs, nsegs, error)); +} #endif /* !_MACHINE_BUS_DMA_H_ */ Added: stable/12/sys/riscv/include/bus_dma_impl.h ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ stable/12/sys/riscv/include/bus_dma_impl.h Tue Jun 4 15:32:56 2019 (r348621) @@ -0,0 +1,96 @@ +/*- + * Copyright (c) 2013 The FreeBSD Foundation + * All rights reserved. + * + * This software was developed by Konstantin Belousov + * under sponsorship from the FreeBSD Foundation. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + * + * $FreeBSD$ + */ + +#ifndef _MACHINE_BUS_DMA_IMPL_H_ +#define _MACHINE_BUS_DMA_IMPL_H_ + +struct bus_dma_tag_common { + struct bus_dma_impl *impl; + struct bus_dma_tag_common *parent; + bus_size_t alignment; + bus_addr_t boundary; + bus_addr_t lowaddr; + bus_addr_t highaddr; + bus_dma_filter_t *filter; + void *filterarg; + bus_size_t maxsize; + u_int nsegments; + bus_size_t maxsegsz; + int flags; + bus_dma_lock_t *lockfunc; + void *lockfuncarg; + int ref_count; +}; + +struct bus_dma_impl { + int (*tag_create)(bus_dma_tag_t parent, + bus_size_t alignment, bus_addr_t boundary, bus_addr_t lowaddr, + bus_addr_t highaddr, bus_dma_filter_t *filter, + void *filterarg, bus_size_t maxsize, int nsegments, + bus_size_t maxsegsz, int flags, bus_dma_lock_t *lockfunc, + void *lockfuncarg, bus_dma_tag_t *dmat); + int (*tag_destroy)(bus_dma_tag_t dmat); + int (*map_create)(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp); + int (*map_destroy)(bus_dma_tag_t dmat, bus_dmamap_t map); + int (*mem_alloc)(bus_dma_tag_t dmat, void** vaddr, int flags, + bus_dmamap_t *mapp); + void (*mem_free)(bus_dma_tag_t dmat, void *vaddr, bus_dmamap_t map); + int (*load_ma)(bus_dma_tag_t dmat, bus_dmamap_t map, + struct vm_page **ma, bus_size_t tlen, int ma_offs, int flags, + bus_dma_segment_t *segs, int *segp); + int (*load_phys)(bus_dma_tag_t dmat, bus_dmamap_t map, + vm_paddr_t buf, bus_size_t buflen, int flags, + bus_dma_segment_t *segs, int *segp); + int (*load_buffer)(bus_dma_tag_t dmat, bus_dmamap_t map, + void *buf, bus_size_t buflen, struct pmap *pmap, int flags, + bus_dma_segment_t *segs, int *segp); + void (*map_waitok)(bus_dma_tag_t dmat, bus_dmamap_t map, + struct memdesc *mem, bus_dmamap_callback_t *callback, + void *callback_arg); + bus_dma_segment_t *(*map_complete)(bus_dma_tag_t dmat, bus_dmamap_t map, + bus_dma_segment_t *segs, int nsegs, int error); + void (*map_unload)(bus_dma_tag_t dmat, bus_dmamap_t map); + void (*map_sync)(bus_dma_tag_t dmat, bus_dmamap_t map, + bus_dmasync_op_t op); +}; + +void bus_dma_dflt_lock(void *arg, bus_dma_lock_op_t op); +int bus_dma_run_filter(struct bus_dma_tag_common *dmat, bus_addr_t paddr); +int common_bus_dma_tag_create(struct bus_dma_tag_common *parent, + bus_size_t alignment, + bus_addr_t boundary, bus_addr_t lowaddr, bus_addr_t highaddr, + bus_dma_filter_t *filter, void *filterarg, bus_size_t maxsize, + int nsegments, bus_size_t maxsegsz, int flags, bus_dma_lock_t *lockfunc, + void *lockfuncarg, size_t sz, void **dmat); + +extern struct bus_dma_impl bus_dma_bounce_impl; + +#endif Modified: stable/12/sys/riscv/include/cpufunc.h ============================================================================== --- stable/12/sys/riscv/include/cpufunc.h Tue Jun 4 15:30:46 2019 (r348620) +++ stable/12/sys/riscv/include/cpufunc.h Tue Jun 4 15:32:56 2019 (r348621) @@ -109,6 +109,17 @@ sfence_vma_page(uintptr_t addr) #define rdinstret() csr_read64(instret) #define rdhpmcounter(n) csr_read64(hpmcounter##n) +extern int64_t dcache_line_size; +extern 
int64_t icache_line_size; + +#define cpu_dcache_wbinv_range(a, s) +#define cpu_dcache_inv_range(a, s) +#define cpu_dcache_wb_range(a, s) + +#define cpu_idcache_wbinv_range(a, s) +#define cpu_icache_sync_range(a, s) +#define cpu_icache_sync_range_checked(a, s) + static __inline void load_satp(uint64_t val) { Added: stable/12/sys/riscv/riscv/busdma_bounce.c ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ stable/12/sys/riscv/riscv/busdma_bounce.c Tue Jun 4 15:32:56 2019 (r348621) @@ -0,0 +1,1330 @@ +/*- + * Copyright (c) 1997, 1998 Justin T. Gibbs. + * Copyright (c) 2015-2016 The FreeBSD Foundation + * All rights reserved. + * + * Portions of this software were developed by Andrew Turner + * under sponsorship of the FreeBSD Foundation. + * + * Portions of this software were developed by Semihalf + * under sponsorship of the FreeBSD Foundation. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, this list of conditions, and the following disclaimer, + * without modification, immediately at the beginning of the file. + * 2. The name of the author may not be used to endorse or promote products + * derived from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR + * ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. 
+ */ + +#include <sys/cdefs.h> +__FBSDID("$FreeBSD$"); + +#include <sys/param.h> +#include <sys/systm.h> +#include <sys/malloc.h> +#include <sys/bus.h> +#include <sys/interrupt.h> +#include <sys/kernel.h> +#include <sys/ktr.h> +#include <sys/lock.h> +#include <sys/memdesc.h> +#include <sys/mutex.h> +#include <sys/proc.h> +#include <sys/sysctl.h> +#include <sys/uio.h> + +#include <vm/vm.h> +#include <vm/vm_extern.h> +#include <vm/vm_kern.h> +#include <vm/vm_page.h> +#include <vm/vm_map.h> + +#include <machine/atomic.h> +#include <machine/bus.h> +#include <machine/md_var.h> +#include <machine/bus_dma_impl.h> + +#define MAX_BPAGES 4096 + +enum { + BF_COULD_BOUNCE = 0x01, + BF_MIN_ALLOC_COMP = 0x02, + BF_KMEM_ALLOC = 0x04, + BF_COHERENT = 0x10, +}; + +struct bounce_zone; + +struct bus_dma_tag { + struct bus_dma_tag_common common; + int map_count; + int bounce_flags; + bus_dma_segment_t *segments; + struct bounce_zone *bounce_zone; +}; + +struct bounce_page { + vm_offset_t vaddr; /* kva of bounce buffer */ + bus_addr_t busaddr; /* Physical address */ + vm_offset_t datavaddr; /* kva of client data */ + vm_page_t datapage; /* physical page of client data */ + vm_offset_t dataoffs; /* page offset of client data */ + bus_size_t datacount; /* client data count */ + STAILQ_ENTRY(bounce_page) links; +}; + +int busdma_swi_pending; + +struct bounce_zone { + STAILQ_ENTRY(bounce_zone) links; + STAILQ_HEAD(bp_list, bounce_page) bounce_page_list; + int total_bpages; + int free_bpages; + int reserved_bpages; + int active_bpages; + int total_bounced; + int total_deferred; + int map_count; + bus_size_t alignment; + bus_addr_t lowaddr; + char zoneid[8]; + char lowaddrid[20]; + struct sysctl_ctx_list sysctl_tree; + struct sysctl_oid *sysctl_tree_top; +}; + +static struct mtx bounce_lock; +static int total_bpages; +static int busdma_zonecount; +static STAILQ_HEAD(, bounce_zone) bounce_zone_list; + +static SYSCTL_NODE(_hw, OID_AUTO, busdma, CTLFLAG_RD, 0, "Busdma parameters"); +SYSCTL_INT(_hw_busdma, OID_AUTO, total_bpages, CTLFLAG_RD, &total_bpages, 0, + "Total bounce pages"); + +struct sync_list { + vm_offset_t vaddr; /* kva of client data */ + bus_addr_t paddr; /* physical address */ + vm_page_t pages; /* starting page of client data */ + bus_size_t datacount; /* client data count */ +}; + +struct bus_dmamap { + struct bp_list bpages; + int pagesneeded; + int pagesreserved; + bus_dma_tag_t dmat; + struct memdesc mem; + bus_dmamap_callback_t *callback; + void *callback_arg; + STAILQ_ENTRY(bus_dmamap) links; + u_int flags; +#define DMAMAP_COULD_BOUNCE (1 << 0) +#define DMAMAP_FROM_DMAMEM (1 << 1) + int sync_count; + struct sync_list slist[]; +}; + +static STAILQ_HEAD(, bus_dmamap) bounce_map_waitinglist; +static STAILQ_HEAD(, bus_dmamap) bounce_map_callbacklist; + +static void init_bounce_pages(void *dummy); +static int alloc_bounce_zone(bus_dma_tag_t dmat); +static int alloc_bounce_pages(bus_dma_tag_t dmat, u_int numpages); +static int reserve_bounce_pages(bus_dma_tag_t dmat, bus_dmamap_t map, + int commit); +static bus_addr_t add_bounce_page(bus_dma_tag_t dmat, bus_dmamap_t map, + vm_offset_t vaddr, bus_addr_t addr, bus_size_t size); +static void free_bounce_page(bus_dma_tag_t dmat, struct bounce_page *bpage); +int run_filter(bus_dma_tag_t dmat, bus_addr_t paddr); +static void _bus_dmamap_count_pages(bus_dma_tag_t dmat, bus_dmamap_t map, + pmap_t pmap, void *buf, bus_size_t buflen, int flags); +static void _bus_dmamap_count_phys(bus_dma_tag_t dmat, bus_dmamap_t map, + vm_paddr_t buf, bus_size_t buflen, int flags); +static int _bus_dmamap_reserve_pages(bus_dma_tag_t dmat, bus_dmamap_t map, + int flags); + +/* + * Allocate a device specific dma_tag.
+ */ +static int +bounce_bus_dma_tag_create(bus_dma_tag_t parent, bus_size_t alignment, + bus_addr_t boundary, bus_addr_t lowaddr, bus_addr_t highaddr, + bus_dma_filter_t *filter, void *filterarg, bus_size_t maxsize, + int nsegments, bus_size_t maxsegsz, int flags, bus_dma_lock_t *lockfunc, + void *lockfuncarg, bus_dma_tag_t *dmat) +{ + bus_dma_tag_t newtag; + int error; + + *dmat = NULL; + error = common_bus_dma_tag_create(parent != NULL ? &parent->common : + NULL, alignment, boundary, lowaddr, highaddr, filter, filterarg, + maxsize, nsegments, maxsegsz, flags, lockfunc, lockfuncarg, + sizeof (struct bus_dma_tag), (void **)&newtag); + if (error != 0) + return (error); + + newtag->common.impl = &bus_dma_bounce_impl; + newtag->map_count = 0; + newtag->segments = NULL; + + if ((flags & BUS_DMA_COHERENT) != 0) + newtag->bounce_flags |= BF_COHERENT; + + if (parent != NULL) { + if ((newtag->common.filter != NULL || + (parent->bounce_flags & BF_COULD_BOUNCE) != 0)) + newtag->bounce_flags |= BF_COULD_BOUNCE; + + /* Copy some flags from the parent */ + newtag->bounce_flags |= parent->bounce_flags & BF_COHERENT; + } + + if (newtag->common.lowaddr < ptoa((vm_paddr_t)Maxmem) || + newtag->common.alignment > 1) + newtag->bounce_flags |= BF_COULD_BOUNCE; + + if (((newtag->bounce_flags & BF_COULD_BOUNCE) != 0) && + (flags & BUS_DMA_ALLOCNOW) != 0) { + struct bounce_zone *bz; + + /* Must bounce */ + if ((error = alloc_bounce_zone(newtag)) != 0) { + free(newtag, M_DEVBUF); + return (error); + } + bz = newtag->bounce_zone; + + if (ptoa(bz->total_bpages) < maxsize) { + int pages; + + pages = atop(maxsize) - bz->total_bpages; + + /* Add pages to our bounce pool */ + if (alloc_bounce_pages(newtag, pages) < pages) + error = ENOMEM; + } + /* Performed initial allocation */ + newtag->bounce_flags |= BF_MIN_ALLOC_COMP; + } else + error = 0; + + if (error != 0) + free(newtag, M_DEVBUF); + else + *dmat = newtag; + CTR4(KTR_BUSDMA, "%s returned tag %p tag flags 0x%x error %d", + __func__, newtag, (newtag != NULL ? newtag->common.flags : 0), + error); + return (error); +} + +static int +bounce_bus_dma_tag_destroy(bus_dma_tag_t dmat) +{ + bus_dma_tag_t dmat_copy, parent; + int error; + + error = 0; + dmat_copy = dmat; + + if (dmat != NULL) { + if (dmat->map_count != 0) { + error = EBUSY; + goto out; + } + while (dmat != NULL) { + parent = (bus_dma_tag_t)dmat->common.parent; + atomic_subtract_int(&dmat->common.ref_count, 1); + if (dmat->common.ref_count == 0) { + if (dmat->segments != NULL) + free(dmat->segments, M_DEVBUF); + free(dmat, M_DEVBUF); + /* + * Last reference count, so + * release our reference + * count on our parent. + */ + dmat = parent; + } else + dmat = NULL; + } + } +out: + CTR3(KTR_BUSDMA, "%s tag %p error %d", __func__, dmat_copy, error); + return (error); +} + +static bus_dmamap_t +alloc_dmamap(bus_dma_tag_t dmat, int flags) +{ + u_long mapsize; + bus_dmamap_t map; + + mapsize = sizeof(*map); + mapsize += sizeof(struct sync_list) * dmat->common.nsegments; + map = malloc(mapsize, M_DEVBUF, flags | M_ZERO); + if (map == NULL) + return (NULL); + + /* Initialize the new map */ + STAILQ_INIT(&map->bpages); + + return (map); +} + +/* + * Allocate a handle for mapping from kva/uva/physical + * address space into bus device space. 
+ */ +static int +bounce_bus_dmamap_create(bus_dma_tag_t dmat, int flags, bus_dmamap_t *mapp) +{ + struct bounce_zone *bz; + int error, maxpages, pages; + + error = 0; + + if (dmat->segments == NULL) { + dmat->segments = (bus_dma_segment_t *)malloc( + sizeof(bus_dma_segment_t) * dmat->common.nsegments, + M_DEVBUF, M_NOWAIT); + if (dmat->segments == NULL) { + CTR3(KTR_BUSDMA, "%s: tag %p error %d", + __func__, dmat, ENOMEM); + return (ENOMEM); + } + } + + *mapp = alloc_dmamap(dmat, M_NOWAIT); + if (*mapp == NULL) { + CTR3(KTR_BUSDMA, "%s: tag %p error %d", + __func__, dmat, ENOMEM); + return (ENOMEM); + } + + /* + * Bouncing might be required if the driver asks for an active + * exclusion region, a data alignment that is stricter than 1, and/or + * an active address boundary. + */ + if (dmat->bounce_flags & BF_COULD_BOUNCE) { + /* Must bounce */ + if (dmat->bounce_zone == NULL) { + if ((error = alloc_bounce_zone(dmat)) != 0) { + free(*mapp, M_DEVBUF); + return (error); + } + } + bz = dmat->bounce_zone; + + (*mapp)->flags = DMAMAP_COULD_BOUNCE; + + /* + * Attempt to add pages to our pool on a per-instance + * basis up to a sane limit. + */ + if (dmat->common.alignment > 1) + maxpages = MAX_BPAGES; + else + maxpages = MIN(MAX_BPAGES, Maxmem - + atop(dmat->common.lowaddr)); + if ((dmat->bounce_flags & BF_MIN_ALLOC_COMP) == 0 || + (bz->map_count > 0 && bz->total_bpages < maxpages)) { + pages = MAX(atop(dmat->common.maxsize), 1); + pages = MIN(maxpages - bz->total_bpages, pages); + pages = MAX(pages, 1); + if (alloc_bounce_pages(dmat, pages) < pages) + error = ENOMEM; + if ((dmat->bounce_flags & BF_MIN_ALLOC_COMP) + == 0) { + if (error == 0) { + dmat->bounce_flags |= + BF_MIN_ALLOC_COMP; + } + } else + error = 0; + } + bz->map_count++; + } + if (error == 0) + dmat->map_count++; + else + free(*mapp, M_DEVBUF); + CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d", + __func__, dmat, dmat->common.flags, error); + return (error); +} + +/* + * Destroy a handle for mapping from kva/uva/physical + * address space into bus device space. + */ +static int +bounce_bus_dmamap_destroy(bus_dma_tag_t dmat, bus_dmamap_t map) +{ + + /* Check we are destroying the correct map type */ + if ((map->flags & DMAMAP_FROM_DMAMEM) != 0) + panic("bounce_bus_dmamap_destroy: Invalid map freed\n"); + + if (STAILQ_FIRST(&map->bpages) != NULL || map->sync_count != 0) { + CTR3(KTR_BUSDMA, "%s: tag %p error %d", __func__, dmat, EBUSY); + return (EBUSY); + } + if (dmat->bounce_zone) { + KASSERT((map->flags & DMAMAP_COULD_BOUNCE) != 0, + ("%s: Bounce zone when cannot bounce", __func__)); + dmat->bounce_zone->map_count--; + } + free(map, M_DEVBUF); + dmat->map_count--; + CTR2(KTR_BUSDMA, "%s: tag %p error 0", __func__, dmat); + return (0); +} + +
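To make the bounce-pool sizing in bounce_bus_dmamap_create() concrete, consider a hypothetical tag for a device that can only address the low 4 GB, created on a board with memory above that line (names and sizes assumed, not from the diff):

/* lowaddr below ptoa(Maxmem) marks the tag BF_COULD_BOUNCE. */
static int
example_rx_tag(device_t dev, bus_dma_tag_t *tagp)
{

        return (bus_dma_tag_create(bus_get_dma_tag(dev), 1, 0,
            BUS_SPACE_MAXADDR_32BIT,    /* lowaddr: device limit */
            BUS_SPACE_MAXADDR,          /* highaddr */
            NULL, NULL,                 /* filter, filterarg */
            MCLBYTES, 1, MCLBYTES,      /* maxsize, nsegments, maxsegsz */
            0, NULL, NULL, tagp));
}

With 4 KB pages, the first bus_dmamap_create() against such a tag reserves MAX(atop(MCLBYTES), 1) = 1 bounce page, and later map creations can grow the zone up to MAX_BPAGES (4096 pages, i.e. 16 MB).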
+/* + * Allocate a piece of memory that can be efficiently mapped into + * bus device space based on the constraints listed in the dma tag. + * A dmamap for use with dmamap_load is also allocated. + */ +static int +bounce_bus_dmamem_alloc(bus_dma_tag_t dmat, void** vaddr, int flags, + bus_dmamap_t *mapp) +{ + /* + * XXX ARM64TODO: + * This bus_dma implementation requires an IO-coherent architecture. + * If IO-Coherency is not guaranteed, the BUS_DMA_COHERENT flag has + * to be implemented using non-cacheable memory. + */ + + vm_memattr_t attr; + int mflags; + + if (flags & BUS_DMA_NOWAIT) + mflags = M_NOWAIT; + else + mflags = M_WAITOK; + + if (dmat->segments == NULL) { + dmat->segments = (bus_dma_segment_t *)malloc( + sizeof(bus_dma_segment_t) * dmat->common.nsegments, + M_DEVBUF, mflags); + if (dmat->segments == NULL) { + CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d", + __func__, dmat, dmat->common.flags, ENOMEM); + return (ENOMEM); + } + } + if (flags & BUS_DMA_ZERO) + mflags |= M_ZERO; + if (flags & BUS_DMA_NOCACHE) + attr = VM_MEMATTR_UNCACHEABLE; + else if ((flags & BUS_DMA_COHERENT) != 0 && + (dmat->bounce_flags & BF_COHERENT) == 0) + /* + * If we have a non-coherent tag, and are trying to allocate + * a coherent block of memory it needs to be uncached. + */ + attr = VM_MEMATTR_UNCACHEABLE; + else + attr = VM_MEMATTR_DEFAULT; + + /* + * Create the map, but don't set the could bounce flag as + * this allocation should never bounce. + */ + *mapp = alloc_dmamap(dmat, mflags); + if (*mapp == NULL) { + CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d", + __func__, dmat, dmat->common.flags, ENOMEM); + return (ENOMEM); + } + (*mapp)->flags = DMAMAP_FROM_DMAMEM; + + /* + * Allocate the buffer from the malloc(9) allocator if... + * - It's small enough to fit into a single power of two sized bucket. + * - The alignment is less than or equal to the maximum size + * - The low address requirement is fulfilled. + * else allocate non-contiguous pages if... + * - The page count that could get allocated doesn't exceed + * nsegments also when the maximum segment size is less + * than PAGE_SIZE. + * - The alignment constraint isn't larger than a page boundary. + * - There are no boundary-crossing constraints. + * else allocate a block of contiguous pages because one or more of the + * constraints is something that only the contig allocator can fulfill. + * + * NOTE: The (dmat->common.alignment <= dmat->maxsize) check + * below is just a quick hack. The exact alignment guarantees + * of malloc(9) need to be nailed down, and the code below + * should be rewritten to take that into account. + * + * In the meantime warn the user if malloc gets it wrong. + */ + if ((dmat->common.maxsize <= PAGE_SIZE) && + (dmat->common.alignment <= dmat->common.maxsize) && + dmat->common.lowaddr >= ptoa((vm_paddr_t)Maxmem) && + attr == VM_MEMATTR_DEFAULT) { + *vaddr = malloc(dmat->common.maxsize, M_DEVBUF, mflags); + } else if (dmat->common.nsegments >= + howmany(dmat->common.maxsize, MIN(dmat->common.maxsegsz, PAGE_SIZE)) && + dmat->common.alignment <= PAGE_SIZE && + (dmat->common.boundary % PAGE_SIZE) == 0) { + /* Page-based multi-segment allocations allowed */ + *vaddr = (void *)kmem_alloc_attr(dmat->common.maxsize, mflags, + 0ul, dmat->common.lowaddr, attr); + dmat->bounce_flags |= BF_KMEM_ALLOC; + } else { + *vaddr = (void *)kmem_alloc_contig(dmat->common.maxsize, mflags, + 0ul, dmat->common.lowaddr, dmat->common.alignment != 0 ? + dmat->common.alignment : 1ul, dmat->common.boundary, attr); + dmat->bounce_flags |= BF_KMEM_ALLOC; + } + if (*vaddr == NULL) { + CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d", + __func__, dmat, dmat->common.flags, ENOMEM); + free(*mapp, M_DEVBUF); + return (ENOMEM); + } else if (vtophys(*vaddr) & (dmat->common.alignment - 1)) { + printf("bus_dmamem_alloc failed to align memory properly.\n"); + } + dmat->map_count++; + CTR4(KTR_BUSDMA, "%s: tag %p tag flags 0x%x error %d", + __func__, dmat, dmat->common.flags, 0); + return (0); +}
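For example, assuming 4 KB pages and VM_MEMATTR_DEFAULT (values hypothetical): a tag with maxsize = 2 KB, alignment = 256 and no low-address restriction is served by malloc(9); a 64 KB tag that allows 16 segments of at least PAGE_SIZE each takes the kmem_alloc_attr() path; and a 64 KB tag that insists on a single physically contiguous segment falls through to kmem_alloc_contig().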
+ +/* + * Free a piece of memory and its associated dmamap that were allocated + * via bus_dmamem_alloc. Make the same choice for free/contigfree. + */ +static void +bounce_bus_dmamem_free(bus_dma_tag_t dmat, void *vaddr, bus_dmamap_t map) +{ + + /* + * Check the map came from bounce_bus_dmamem_alloc, so the map + * should be NULL and the BF_KMEM_ALLOC flag cleared if malloc() + * was used and set if kmem_alloc_contig() was used. + */ + if ((map->flags & DMAMAP_FROM_DMAMEM) == 0) + panic("bus_dmamem_free: Invalid map freed\n"); + if ((dmat->bounce_flags & BF_KMEM_ALLOC) == 0) + free(vaddr, M_DEVBUF); + else + kmem_free((vm_offset_t)vaddr, dmat->common.maxsize); + free(map, M_DEVBUF); + dmat->map_count--; + CTR3(KTR_BUSDMA, "%s: tag %p flags 0x%x", __func__, dmat, + dmat->bounce_flags); +} + +static void +_bus_dmamap_count_phys(bus_dma_tag_t dmat, bus_dmamap_t map, vm_paddr_t buf, + bus_size_t buflen, int flags) +{ + bus_addr_t curaddr; + bus_size_t sgsize; + + if ((map->flags & DMAMAP_COULD_BOUNCE) != 0 && map->pagesneeded == 0) { + /* + * Count the number of bounce pages + * needed in order to complete this transfer + */ + curaddr = buf; + while (buflen != 0) { + sgsize = MIN(buflen, dmat->common.maxsegsz); + if (bus_dma_run_filter(&dmat->common, curaddr)) { + sgsize = MIN(sgsize, + PAGE_SIZE - (curaddr & PAGE_MASK)); + map->pagesneeded++; + } + curaddr += sgsize; + buflen -= sgsize; + } + CTR1(KTR_BUSDMA, "pagesneeded= %d\n", map->pagesneeded); + } +} + +static void +_bus_dmamap_count_pages(bus_dma_tag_t dmat, bus_dmamap_t map, pmap_t pmap, + void *buf, bus_size_t buflen, int flags) +{ + vm_offset_t vaddr; + vm_offset_t vendaddr; + bus_addr_t paddr; + bus_size_t sg_len; + + if ((map->flags & DMAMAP_COULD_BOUNCE) != 0 && map->pagesneeded == 0) { + CTR4(KTR_BUSDMA, "lowaddr= %d Maxmem= %d, boundary= %d, " + "alignment= %d", dmat->common.lowaddr, + ptoa((vm_paddr_t)Maxmem), + dmat->common.boundary, dmat->common.alignment); + CTR2(KTR_BUSDMA, "map= %p, pagesneeded= %d", map, + map->pagesneeded); + /* + * Count the number of bounce pages + * needed in order to complete this transfer + */ + vaddr = (vm_offset_t)buf; + vendaddr = (vm_offset_t)buf + buflen; + + while (vaddr < vendaddr) { + sg_len = PAGE_SIZE - ((vm_offset_t)vaddr & PAGE_MASK); + if (pmap == kernel_pmap) + paddr = pmap_kextract(vaddr); + else + paddr = pmap_extract(pmap, vaddr); + if (bus_dma_run_filter(&dmat->common, paddr) != 0) { + sg_len = roundup2(sg_len, + dmat->common.alignment); + map->pagesneeded++; + } + vaddr += sg_len; + } + CTR1(KTR_BUSDMA, "pagesneeded= %d\n", map->pagesneeded); + } +}
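A worked example of the counting above, assuming 4 KB pages, alignment = 1, and a filter that rejects every page: a 9 KB buffer starting 512 bytes into a page spans three pages, so the loop advances vaddr by 3584, 4096 and 4096 bytes and leaves map->pagesneeded = 3.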
+ +static int +_bus_dmamap_reserve_pages(bus_dma_tag_t dmat, bus_dmamap_t map, int flags) +{ + + /* Reserve Necessary Bounce Pages */ + mtx_lock(&bounce_lock); + if (flags & BUS_DMA_NOWAIT) { + if (reserve_bounce_pages(dmat, map, 0) != 0) { + mtx_unlock(&bounce_lock); + return (ENOMEM); + } + } else { + if (reserve_bounce_pages(dmat, map, 1) != 0) { + /* Queue us for resources */ + STAILQ_INSERT_TAIL(&bounce_map_waitinglist, map, links); + mtx_unlock(&bounce_lock); + return (EINPROGRESS); + } + } + mtx_unlock(&bounce_lock); + + return (0); +} + +/* + * Add a single contiguous physical range to the segment list. + */ +static int +_bus_dmamap_addseg(bus_dma_tag_t dmat, bus_dmamap_t map, bus_addr_t curaddr, + bus_size_t sgsize, bus_dma_segment_t *segs, int *segp) +{ + bus_addr_t baddr, bmask; + int seg; + + /* + * Make sure we don't cross any boundaries. + */ + bmask = ~(dmat->common.boundary - 1); + if (dmat->common.boundary > 0) { + baddr = (curaddr + dmat->common.boundary) & bmask; + if (sgsize > (baddr - curaddr)) + sgsize = (baddr - curaddr); + } + + /* + * Insert chunk into a segment, coalescing with + * previous segment if possible. + */ + seg = *segp; + if (seg == -1) { + seg = 0; + segs[seg].ds_addr = curaddr; + segs[seg].ds_len = sgsize; + } else { + if (curaddr == segs[seg].ds_addr + segs[seg].ds_len && + (segs[seg].ds_len + sgsize) <= dmat->common.maxsegsz && + (dmat->common.boundary == 0 || + (segs[seg].ds_addr & bmask) == (curaddr & bmask))) + segs[seg].ds_len += sgsize; + else { + if (++seg >= dmat->common.nsegments) + return (0); + segs[seg].ds_addr = curaddr; + segs[seg].ds_len = sgsize; + } + } + *segp = seg; + return (sgsize); +} + +/* + * Utility function to load a physical buffer. segp contains + * the starting segment on entrance, and the ending segment on exit. + */ +static int +bounce_bus_dmamap_load_phys(bus_dma_tag_t dmat, bus_dmamap_t map, + vm_paddr_t buf, bus_size_t buflen, int flags, bus_dma_segment_t *segs, + int *segp) +{ + struct sync_list *sl; + bus_size_t sgsize; + bus_addr_t curaddr, sl_end; + int error; + + if (segs == NULL) + segs = dmat->segments; + + if ((dmat->bounce_flags & BF_COULD_BOUNCE) != 0) { + _bus_dmamap_count_phys(dmat, map, buf, buflen, flags); + if (map->pagesneeded != 0) { + error = _bus_dmamap_reserve_pages(dmat, map, flags); + if (error) + return (error); + } + } + + sl = map->slist + map->sync_count - 1; + sl_end = 0; + + while (buflen > 0) { + curaddr = buf; + sgsize = MIN(buflen, dmat->common.maxsegsz); + if (((dmat->bounce_flags & BF_COULD_BOUNCE) != 0) && + map->pagesneeded != 0 && + bus_dma_run_filter(&dmat->common, curaddr)) { + sgsize = MIN(sgsize, PAGE_SIZE - (curaddr & PAGE_MASK)); + curaddr = add_bounce_page(dmat, map, 0, curaddr, + sgsize); + } else if ((dmat->bounce_flags & BF_COHERENT) == 0) { + if (map->sync_count > 0) + sl_end = sl->paddr + sl->datacount; + + if (map->sync_count == 0 || curaddr != sl_end) { + if (++map->sync_count > dmat->common.nsegments) + break; + sl++; + sl->vaddr = 0; *** DIFF OUTPUT TRUNCATED AT 1000 LINES ***
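The list software truncates the diff here; the remainder of busdma_bounce.c (the buffer-load path, the unload and sync hooks, and the bounce-page pool routines declared above) can be read in full at the changeset URL. From the driver side, the sync hook that performs the bouncing is exercised as in this hypothetical fragment (not from the diff):

/* Transmit: the device reads a buffer the CPU just filled. */
bus_dmamap_sync(sc->tag, sc->map, BUS_DMA_PREWRITE);
/* ... start the DMA engine, wait for the completion interrupt ... */
bus_dmamap_sync(sc->tag, sc->map, BUS_DMA_POSTWRITE);

/* Receive: the CPU reads a buffer the device just wrote. */
bus_dmamap_sync(sc->tag, sc->map, BUS_DMA_PREREAD);
/* ... DMA completes ... */
bus_dmamap_sync(sc->tag, sc->map, BUS_DMA_POSTREAD);
bus_dmamap_unload(sc->tag, sc->map);

On a bouncing map, PREWRITE copies client data into the bounce pages and POSTREAD copies it back out; with the cache operations in cpufunc.h defined to nothing, that copying is effectively all the sync hook does on this port.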