From: Mark Johnston <markj@FreeBSD.org>
Date: Tue, 14 Jul 2020 14:09:29 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r363180 - in head: share/man/man4 sys/arm64/conf sys/conf sys/dev/safexcel sys/modules sys/modules/safexcel
Message-Id: <202007141409.06EE9TiD055966@repo.freebsd.org>
X-SVN-Group: head
X-SVN-Commit-Author: markj
X-SVN-Commit-Paths: in head: share/man/man4 sys/arm64/conf sys/conf sys/dev/safexcel sys/modules sys/modules/safexcel
X-SVN-Commit-Revision: 363180
X-SVN-Commit-Repository: base

Author: markj
Date: Tue Jul 14 14:09:29 2020
New Revision: 363180
URL: https://svnweb.freebsd.org/changeset/base/363180

Log:
  Add a driver for the SafeXcel EIP-97.

  The EIP-97 is a packet processing module found on the ESPRESSObin.  This
  commit adds a crypto(9) driver for the crypto and hash engine in this
  device.

  An initial skeleton driver that could attach and submit requests was
  written by loos and others at Netgate, and the driver was finished by me.
  Support for separate AAD and output buffers will be added in a separate
  commit, to simplify merging to stable/12 (where those features don't
  exist).
  Reviewed by:    gnn, jhb
  Feedback from:  andrew, cem, manu
  MFC after:      1 week
  Sponsored by:   Rubicon Communications, LLC (Netgate)
  Differential Revision:  https://reviews.freebsd.org/D25417

Added:
  head/share/man/man4/safexcel.4   (contents, props changed)
  head/sys/dev/safexcel/
  head/sys/dev/safexcel/safexcel.c   (contents, props changed)
  head/sys/dev/safexcel/safexcel_reg.h   (contents, props changed)
  head/sys/dev/safexcel/safexcel_var.h   (contents, props changed)
  head/sys/modules/safexcel/
  head/sys/modules/safexcel/Makefile   (contents, props changed)
Modified:
  head/share/man/man4/Makefile
  head/sys/arm64/conf/GENERIC
  head/sys/conf/files.arm64
  head/sys/modules/Makefile

Modified: head/share/man/man4/Makefile
==============================================================================
--- head/share/man/man4/Makefile	Tue Jul 14 13:21:09 2020	(r363179)
+++ head/share/man/man4/Makefile	Tue Jul 14 14:09:29 2020	(r363180)
@@ -449,6 +449,7 @@ MAN=	aac.4 \
 	rue.4 \
 	sa.4 \
 	safe.4 \
+	safexcel.4 \
 	sbp.4 \
 	sbp_targ.4 \
 	scc.4 \

Added: head/share/man/man4/safexcel.4
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/share/man/man4/safexcel.4	Tue Jul 14 14:09:29 2020	(r363180)
@@ -0,0 +1,84 @@
+.\"-
+.\" Copyright (c) 2020 Rubicon Communications, LLC (Netgate)
+.\"
+.\" Redistribution and use in source and binary forms, with or without
+.\" modification, are permitted provided that the following conditions
+.\" are met:
+.\" 1. Redistributions of source code must retain the above copyright
+.\"    notice, this list of conditions and the following disclaimer.
+.\" 2. Redistributions in binary form must reproduce the above copyright
+.\"    notice, this list of conditions and the following disclaimer in the
+.\"    documentation and/or other materials provided with the distribution.
+.\"
+.\" THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+.\" ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+.\" ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+.\" FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+.\" LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+.\" OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+.\" SUCH DAMAGE.
+.\"
+.\" $FreeBSD$
+.\"
+.Dd June 23, 2020
+.Dt SAFEXCEL 4
+.Os
+.Sh NAME
+.Nm safexcel
+.Nd Inside Secure SafeXcel-IP-97 security packet engine
+.Sh SYNOPSIS
+To compile this driver into the kernel,
+place the following lines in your
+kernel configuration file:
+.Bd -ragged -offset indent
+.Cd "device crypto"
+.Cd "device cryptodev"
+.Cd "device safexcel"
+.Ed
+.Pp
+Alternatively, to load the driver as a
+module at boot time, place the following line in
+.Xr loader.conf 5 :
+.Bd -literal -offset indent
+safexcel_load="YES"
+.Ed
+.Sh DESCRIPTION
+The
+.Nm
+driver implements
+.Xr crypto 4
+support for the cryptographic acceleration functions of the EIP-97 device
+found on some Marvell systems-on-chip.
+The driver can accelerate the following AES modes:
+.Pp
+.Bl -bullet -compact
+.It
+AES-CBC
+.It
+AES-CTR
+.It
+AES-XTS
+.It
+AES-GCM
+.It
+AES-CCM
+.El
+.Pp
+.Nm
+also implements SHA1 and SHA2 transforms, and can combine AES-CBC and AES-CTR
+with SHA1-HMAC and SHA2-HMAC for encrypt-then-authenticate operations.
+.Sh SEE ALSO
+.Xr crypto 4 ,
+.Xr ipsec 4 ,
+.Xr random 4 ,
+.Xr geli 8 ,
+.Xr crypto 9
+.Sh HISTORY
+The
+.Nm
+driver first appeared in
+.Fx 13.0 .

Modified: head/sys/arm64/conf/GENERIC
==============================================================================
--- head/sys/arm64/conf/GENERIC	Tue Jul 14 13:21:09 2020	(r363179)
+++ head/sys/arm64/conf/GENERIC	Tue Jul 14 14:09:29 2020	(r363180)
@@ -283,6 +283,9 @@ device		mv_ap806_sei		# Marvell AP806 SEI
 device		aw_rtc			# Allwinner Real-time Clock
 device		mv_rtc			# Marvell Real-time Clock

+# Crypto accelerators
+device		safexcel		# Inside Secure EIP-97
+
 # Watchdog controllers
 device		aw_wdog			# Allwinner Watchdog

Modified: head/sys/conf/files.arm64
==============================================================================
--- head/sys/conf/files.arm64	Tue Jul 14 13:21:09 2020	(r363179)
+++ head/sys/conf/files.arm64	Tue Jul 14 14:09:29 2020	(r363180)
@@ -324,6 +324,7 @@ dev/pci/pci_dw_if.m		optional pci fdt
 dev/psci/psci.c			standard
 dev/psci/smccc_arm64.S		standard
 dev/psci/smccc.c		standard
+dev/safexcel/safexcel.c		optional safexcel fdt
 dev/sdhci/sdhci_xenon.c		optional sdhci_xenon sdhci fdt
 dev/uart/uart_cpu_arm64.c	optional uart
 dev/uart/uart_dev_mu.c		optional uart uart_mu

Added: head/sys/dev/safexcel/safexcel.c
==============================================================================
--- /dev/null	00:00:00 1970	(empty, because file is newly added)
+++ head/sys/dev/safexcel/safexcel.c	Tue Jul 14 14:09:29 2020	(r363180)
@@ -0,0 +1,2637 @@
+/*-
+ * SPDX-License-Identifier: BSD-2-Clause-FreeBSD
+ *
+ * Copyright (c) 2020 Rubicon Communications, LLC (Netgate)
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+ * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+ * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */ + +#include +__FBSDID("$FreeBSD$"); + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#include +#include +#include + +#include +#include + +#include "cryptodev_if.h" + +#include "safexcel_reg.h" +#include "safexcel_var.h" + +static MALLOC_DEFINE(M_SAFEXCEL, "safexcel_req", "safexcel request buffers"); + +/* + * We only support the EIP97 for now. + */ +static struct ofw_compat_data safexcel_compat[] = { + { "inside-secure,safexcel-eip97ies", (uintptr_t)97 }, + { "inside-secure,safexcel-eip97", (uintptr_t)97 }, + { NULL, 0 } +}; + +const struct safexcel_reg_offsets eip97_regs_offset = { + .hia_aic = SAFEXCEL_EIP97_HIA_AIC_BASE, + .hia_aic_g = SAFEXCEL_EIP97_HIA_AIC_G_BASE, + .hia_aic_r = SAFEXCEL_EIP97_HIA_AIC_R_BASE, + .hia_aic_xdr = SAFEXCEL_EIP97_HIA_AIC_xDR_BASE, + .hia_dfe = SAFEXCEL_EIP97_HIA_DFE_BASE, + .hia_dfe_thr = SAFEXCEL_EIP97_HIA_DFE_THR_BASE, + .hia_dse = SAFEXCEL_EIP97_HIA_DSE_BASE, + .hia_dse_thr = SAFEXCEL_EIP97_HIA_DSE_THR_BASE, + .hia_gen_cfg = SAFEXCEL_EIP97_HIA_GEN_CFG_BASE, + .pe = SAFEXCEL_EIP97_PE_BASE, +}; + +const struct safexcel_reg_offsets eip197_regs_offset = { + .hia_aic = SAFEXCEL_EIP197_HIA_AIC_BASE, + .hia_aic_g = SAFEXCEL_EIP197_HIA_AIC_G_BASE, + .hia_aic_r = SAFEXCEL_EIP197_HIA_AIC_R_BASE, + .hia_aic_xdr = SAFEXCEL_EIP197_HIA_AIC_xDR_BASE, + .hia_dfe = SAFEXCEL_EIP197_HIA_DFE_BASE, + .hia_dfe_thr = SAFEXCEL_EIP197_HIA_DFE_THR_BASE, + .hia_dse = SAFEXCEL_EIP197_HIA_DSE_BASE, + .hia_dse_thr = SAFEXCEL_EIP197_HIA_DSE_THR_BASE, + .hia_gen_cfg = SAFEXCEL_EIP197_HIA_GEN_CFG_BASE, + .pe = SAFEXCEL_EIP197_PE_BASE, +}; + +static struct safexcel_cmd_descr * +safexcel_cmd_descr_next(struct safexcel_cmd_descr_ring *ring) +{ + struct safexcel_cmd_descr *cdesc; + + if (ring->write == ring->read) + return (NULL); + cdesc = &ring->desc[ring->read]; + ring->read = (ring->read + 1) % SAFEXCEL_RING_SIZE; + return (cdesc); +} + +static struct safexcel_res_descr * +safexcel_res_descr_next(struct safexcel_res_descr_ring *ring) +{ + struct safexcel_res_descr *rdesc; + + if (ring->write == ring->read) + return (NULL); + rdesc = &ring->desc[ring->read]; + ring->read = (ring->read + 1) % SAFEXCEL_RING_SIZE; + return (rdesc); +} + +static struct safexcel_request * +safexcel_alloc_request(struct safexcel_softc *sc, struct safexcel_ring *ring) +{ + struct safexcel_request *req; + + mtx_assert(&ring->mtx, MA_OWNED); + + if ((req = STAILQ_FIRST(&ring->free_requests)) != NULL) + STAILQ_REMOVE_HEAD(&ring->free_requests, link); + return (req); +} + +static void +safexcel_free_request(struct safexcel_ring *ring, struct safexcel_request *req) +{ + struct safexcel_context_record *ctx; + + mtx_assert(&ring->mtx, MA_OWNED); + + if (req->dmap_loaded) { + bus_dmamap_unload(ring->data_dtag, req->dmap); + req->dmap_loaded = false; + } + ctx = (struct safexcel_context_record *)req->ctx.vaddr; + explicit_bzero(ctx->data, sizeof(ctx->data)); + explicit_bzero(req->iv, sizeof(req->iv)); + STAILQ_INSERT_TAIL(&ring->free_requests, req, link); +} + +static void +safexcel_enqueue_request(struct safexcel_softc *sc, struct safexcel_ring *ring, + struct safexcel_request *req) +{ + mtx_assert(&ring->mtx, MA_OWNED); + + STAILQ_INSERT_TAIL(&ring->ready_requests, req, link); +} + +static void +safexcel_rdr_intr(struct safexcel_softc *sc, int ringidx) +{ + struct safexcel_cmd_descr *cdesc; + struct safexcel_res_descr *rdesc; + struct safexcel_request *req; + struct safexcel_ring *ring; + uint32_t error, i, ncdescs, nrdescs, 
nreqs; + + ring = &sc->sc_ring[ringidx]; + + mtx_lock(&ring->mtx); + nreqs = SAFEXCEL_READ(sc, + SAFEXCEL_HIA_RDR(sc, ringidx) + SAFEXCEL_HIA_xDR_PROC_COUNT); + nreqs >>= SAFEXCEL_xDR_PROC_xD_PKT_OFFSET; + nreqs &= SAFEXCEL_xDR_PROC_xD_PKT_MASK; + if (nreqs == 0) { + SAFEXCEL_DPRINTF(sc, 1, + "zero pending requests on ring %d\n", ringidx); + goto out; + } + + ring = &sc->sc_ring[ringidx]; + bus_dmamap_sync(ring->rdr.dma.tag, ring->rdr.dma.map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + bus_dmamap_sync(ring->cdr.dma.tag, ring->cdr.dma.map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + bus_dmamap_sync(ring->dma_atok.tag, ring->dma_atok.map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + ncdescs = nrdescs = 0; + for (i = 0; i < nreqs; i++) { + req = STAILQ_FIRST(&ring->queued_requests); + KASSERT(req != NULL, ("%s: expected %d pending requests", + __func__, nreqs)); + STAILQ_REMOVE_HEAD(&ring->queued_requests, link); + mtx_unlock(&ring->mtx); + + bus_dmamap_sync(req->ctx.tag, req->ctx.map, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + bus_dmamap_sync(ring->data_dtag, req->dmap, + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); + + ncdescs += req->cdescs; + while (req->cdescs-- > 0) { + cdesc = safexcel_cmd_descr_next(&ring->cdr); + KASSERT(cdesc != NULL, + ("%s: missing control descriptor", __func__)); + if (req->cdescs == 0) + KASSERT(cdesc->last_seg, + ("%s: chain is not terminated", __func__)); + } + nrdescs += req->rdescs; + while (req->rdescs-- > 0) { + rdesc = safexcel_res_descr_next(&ring->rdr); + error = rdesc->result_data.error_code; + if (error != 0) { + if (error == SAFEXCEL_RESULT_ERR_AUTH_FAILED && + req->crp->crp_etype == 0) { + req->crp->crp_etype = EBADMSG; + } else { + SAFEXCEL_DPRINTF(sc, 1, + "error code %#x\n", error); + req->crp->crp_etype = EIO; + } + } + } + + crypto_done(req->crp); + mtx_lock(&ring->mtx); + safexcel_free_request(ring, req); + } + + if (nreqs != 0) { + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, ringidx) + SAFEXCEL_HIA_xDR_PROC_COUNT, + SAFEXCEL_xDR_PROC_xD_PKT(nreqs) | + (sc->sc_config.rd_offset * nrdescs * sizeof(uint32_t))); + } +out: + if (!STAILQ_EMPTY(&ring->queued_requests)) { + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, ringidx) + SAFEXCEL_HIA_xDR_THRESH, + SAFEXCEL_HIA_CDR_THRESH_PKT_MODE | 1); + } + mtx_unlock(&ring->mtx); +} + +static void +safexcel_ring_intr(void *arg) +{ + struct safexcel_softc *sc; + struct safexcel_intr_handle *ih; + uint32_t status, stat; + int ring; + bool blocked, rdrpending; + + ih = arg; + sc = ih->sc; + ring = ih->ring; + + status = SAFEXCEL_READ(sc, SAFEXCEL_HIA_AIC_R(sc) + + SAFEXCEL_HIA_AIC_R_ENABLED_STAT(ring)); + /* CDR interrupts */ + if (status & SAFEXCEL_CDR_IRQ(ring)) { + stat = SAFEXCEL_READ(sc, + SAFEXCEL_HIA_CDR(sc, ring) + SAFEXCEL_HIA_xDR_STAT); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, ring) + SAFEXCEL_HIA_xDR_STAT, + stat & SAFEXCEL_CDR_INTR_MASK); + } + /* RDR interrupts */ + rdrpending = false; + if (status & SAFEXCEL_RDR_IRQ(ring)) { + stat = SAFEXCEL_READ(sc, + SAFEXCEL_HIA_RDR(sc, ring) + SAFEXCEL_HIA_xDR_STAT); + if ((stat & SAFEXCEL_xDR_ERR) == 0) + rdrpending = true; + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, ring) + SAFEXCEL_HIA_xDR_STAT, + stat & SAFEXCEL_RDR_INTR_MASK); + } + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_AIC_R(sc) + SAFEXCEL_HIA_AIC_R_ACK(ring), + status); + + if (rdrpending) + safexcel_rdr_intr(sc, ring); + + mtx_lock(&sc->sc_mtx); + blocked = sc->sc_blocked; + sc->sc_blocked = 0; + mtx_unlock(&sc->sc_mtx); + + if (blocked) + crypto_unblock(sc->sc_cid, 
blocked); +} + +static int +safexcel_configure(struct safexcel_softc *sc) +{ + uint32_t i, mask, pemask, reg; + device_t dev; + + if (sc->sc_type == 197) { + sc->sc_offsets = eip197_regs_offset; + pemask = SAFEXCEL_N_PES_MASK; + } else { + sc->sc_offsets = eip97_regs_offset; + pemask = EIP97_N_PES_MASK; + } + + dev = sc->sc_dev; + + /* Scan for valid ring interrupt controllers. */ + for (i = 0; i < SAFEXCEL_MAX_RING_AIC; i++) { + reg = SAFEXCEL_READ(sc, SAFEXCEL_HIA_AIC_R(sc) + + SAFEXCEL_HIA_AIC_R_VERSION(i)); + if (SAFEXCEL_REG_LO16(reg) != EIP201_VERSION_LE) + break; + } + sc->sc_config.aic_rings = i; + if (sc->sc_config.aic_rings == 0) + return (-1); + + reg = SAFEXCEL_READ(sc, SAFEXCEL_HIA_AIC_G(sc) + SAFEXCEL_HIA_OPTIONS); + /* Check for 64bit addressing. */ + if ((reg & SAFEXCEL_OPT_ADDR_64) == 0) + return (-1); + /* Check alignment constraints (which we do not support). */ + if (((reg & SAFEXCEL_OPT_TGT_ALIGN_MASK) >> + SAFEXCEL_OPT_TGT_ALIGN_OFFSET) != 0) + return (-1); + + sc->sc_config.hdw = + (reg & SAFEXCEL_xDR_HDW_MASK) >> SAFEXCEL_xDR_HDW_OFFSET; + mask = (1 << sc->sc_config.hdw) - 1; + + sc->sc_config.rings = reg & SAFEXCEL_N_RINGS_MASK; + /* Limit the number of rings to the number of the AIC Rings. */ + sc->sc_config.rings = MIN(sc->sc_config.rings, sc->sc_config.aic_rings); + + sc->sc_config.pes = (reg & pemask) >> SAFEXCEL_N_PES_OFFSET; + + sc->sc_config.cd_size = + sizeof(struct safexcel_cmd_descr) / sizeof(uint32_t); + sc->sc_config.cd_offset = (sc->sc_config.cd_size + mask) & ~mask; + + sc->sc_config.rd_size = + sizeof(struct safexcel_res_descr) / sizeof(uint32_t); + sc->sc_config.rd_offset = (sc->sc_config.rd_size + mask) & ~mask; + + sc->sc_config.atok_offset = + (SAFEXCEL_MAX_ATOKENS * sizeof(struct safexcel_instr) + mask) & + ~mask; + + return (0); +} + +static void +safexcel_init_hia_bus_access(struct safexcel_softc *sc) +{ + uint32_t version, val; + + /* Determine endianness and configure byte swap. */ + version = SAFEXCEL_READ(sc, + SAFEXCEL_HIA_AIC(sc) + SAFEXCEL_HIA_VERSION); + val = SAFEXCEL_READ(sc, SAFEXCEL_HIA_AIC(sc) + SAFEXCEL_HIA_MST_CTRL); + if (SAFEXCEL_REG_HI16(version) == SAFEXCEL_HIA_VERSION_BE) { + val = SAFEXCEL_READ(sc, + SAFEXCEL_HIA_AIC(sc) + SAFEXCEL_HIA_MST_CTRL); + val = val ^ (SAFEXCEL_MST_CTRL_NO_BYTE_SWAP >> 24); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_AIC(sc) + SAFEXCEL_HIA_MST_CTRL, + val); + } + + /* Configure wr/rd cache values. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_GEN_CFG(sc) + SAFEXCEL_HIA_MST_CTRL, + SAFEXCEL_MST_CTRL_RD_CACHE(RD_CACHE_4BITS) | + SAFEXCEL_MST_CTRL_WD_CACHE(WR_CACHE_4BITS)); +} + +static void +safexcel_disable_global_interrupts(struct safexcel_softc *sc) +{ + /* Disable and clear pending interrupts. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_AIC_G(sc) + SAFEXCEL_HIA_AIC_G_ENABLE_CTRL, 0); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_AIC_G(sc) + SAFEXCEL_HIA_AIC_G_ACK, + SAFEXCEL_AIC_G_ACK_ALL_MASK); +} + +/* + * Configure the data fetch engine. This component parses command descriptors + * and sets up DMA transfers from host memory to the corresponding processing + * engine. + */ +static void +safexcel_configure_dfe_engine(struct safexcel_softc *sc, int pe) +{ + /* Reset all DFE threads. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_DFE_THR(sc) + SAFEXCEL_HIA_DFE_THR_CTRL(pe), + SAFEXCEL_DxE_THR_CTRL_RESET_PE); + + /* Deassert the DFE reset. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_DFE_THR(sc) + SAFEXCEL_HIA_DFE_THR_CTRL(pe), 0); + + /* DMA transfer size to use. 
*/ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_DFE(sc) + SAFEXCEL_HIA_DFE_CFG(pe), + SAFEXCEL_HIA_DFE_CFG_DIS_DEBUG | + SAFEXCEL_HIA_DxE_CFG_MIN_DATA_SIZE(6) | + SAFEXCEL_HIA_DxE_CFG_MAX_DATA_SIZE(9) | + SAFEXCEL_HIA_DxE_CFG_MIN_CTRL_SIZE(6) | + SAFEXCEL_HIA_DxE_CFG_MAX_CTRL_SIZE(7) | + SAFEXCEL_HIA_DxE_CFG_DATA_CACHE_CTRL(RD_CACHE_3BITS) | + SAFEXCEL_HIA_DxE_CFG_CTRL_CACHE_CTRL(RD_CACHE_3BITS)); + + /* Configure the PE DMA transfer thresholds. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_PE(sc) + SAFEXCEL_PE_IN_DBUF_THRES(pe), + SAFEXCEL_PE_IN_xBUF_THRES_MIN(6) | + SAFEXCEL_PE_IN_xBUF_THRES_MAX(9)); + SAFEXCEL_WRITE(sc, SAFEXCEL_PE(sc) + SAFEXCEL_PE_IN_TBUF_THRES(pe), + SAFEXCEL_PE_IN_xBUF_THRES_MIN(6) | + SAFEXCEL_PE_IN_xBUF_THRES_MAX(7)); +} + +/* + * Configure the data store engine. This component parses result descriptors + * and sets up DMA transfers from the processing engine to host memory. + */ +static int +safexcel_configure_dse(struct safexcel_softc *sc, int pe) +{ + uint32_t val; + int count; + + /* Disable and reset all DSE threads. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_DSE_THR(sc) + SAFEXCEL_HIA_DSE_THR_CTRL(pe), + SAFEXCEL_DxE_THR_CTRL_RESET_PE); + + /* Wait for a second for threads to go idle. */ + for (count = 0;;) { + val = SAFEXCEL_READ(sc, + SAFEXCEL_HIA_DSE_THR(sc) + SAFEXCEL_HIA_DSE_THR_STAT(pe)); + if ((val & SAFEXCEL_DSE_THR_RDR_ID_MASK) == + SAFEXCEL_DSE_THR_RDR_ID_MASK) + break; + if (count++ > 10000) { + device_printf(sc->sc_dev, "DSE reset timeout\n"); + return (-1); + } + DELAY(100); + } + + /* Exit the reset state. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_DSE_THR(sc) + SAFEXCEL_HIA_DSE_THR_CTRL(pe), 0); + + /* DMA transfer size to use */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_DSE(sc) + SAFEXCEL_HIA_DSE_CFG(pe), + SAFEXCEL_HIA_DSE_CFG_DIS_DEBUG | + SAFEXCEL_HIA_DxE_CFG_MIN_DATA_SIZE(7) | + SAFEXCEL_HIA_DxE_CFG_MAX_DATA_SIZE(8) | + SAFEXCEL_HIA_DxE_CFG_DATA_CACHE_CTRL(WR_CACHE_3BITS) | + SAFEXCEL_HIA_DSE_CFG_ALLWAYS_BUFFERABLE); + + /* Configure the procesing engine thresholds */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_PE(sc) + SAFEXCEL_PE_OUT_DBUF_THRES(pe), + SAFEXCEL_PE_OUT_DBUF_THRES_MIN(7) | + SAFEXCEL_PE_OUT_DBUF_THRES_MAX(8)); + + return (0); +} + +static void +safexcel_hw_prepare_rings(struct safexcel_softc *sc) +{ + int i; + + for (i = 0; i < sc->sc_config.rings; i++) { + /* + * Command descriptors. + */ + + /* Clear interrupts for this ring. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_AIC_R(sc) + SAFEXCEL_HIA_AIC_R_ENABLE_CLR(i), + SAFEXCEL_HIA_AIC_R_ENABLE_CLR_ALL_MASK); + + /* Disable external triggering. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_CFG, 0); + + /* Clear the pending prepared counter. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_COUNT, + SAFEXCEL_xDR_PREP_CLR_COUNT); + + /* Clear the pending processed counter. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_COUNT, + SAFEXCEL_xDR_PROC_CLR_COUNT); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_PNTR, 0); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_PNTR, 0); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_RING_SIZE, + SAFEXCEL_RING_SIZE * sc->sc_config.cd_offset * + sizeof(uint32_t)); + + /* + * Result descriptors. + */ + + /* Disable external triggering. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_CFG, 0); + + /* Clear the pending prepared counter. 
*/ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_COUNT, + SAFEXCEL_xDR_PREP_CLR_COUNT); + + /* Clear the pending processed counter. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_COUNT, + SAFEXCEL_xDR_PROC_CLR_COUNT); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_PNTR, 0); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_PNTR, 0); + + /* Ring size. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_RING_SIZE, + SAFEXCEL_RING_SIZE * sc->sc_config.rd_offset * + sizeof(uint32_t)); + } +} + +static void +safexcel_hw_setup_rings(struct safexcel_softc *sc) +{ + struct safexcel_ring *ring; + uint32_t cd_size_rnd, mask, rd_size_rnd, val; + int i; + + mask = (1 << sc->sc_config.hdw) - 1; + cd_size_rnd = (sc->sc_config.cd_size + mask) >> sc->sc_config.hdw; + val = (sizeof(struct safexcel_res_descr) - + sizeof(struct safexcel_res_data)) / sizeof(uint32_t); + rd_size_rnd = (val + mask) >> sc->sc_config.hdw; + + for (i = 0; i < sc->sc_config.rings; i++) { + ring = &sc->sc_ring[i]; + + /* + * Command descriptors. + */ + + /* Ring base address. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_CDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_LO, + SAFEXCEL_ADDR_LO(ring->cdr.dma.paddr)); + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_CDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_HI, + SAFEXCEL_ADDR_HI(ring->cdr.dma.paddr)); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_DESC_SIZE, + SAFEXCEL_xDR_DESC_MODE_64BIT | SAFEXCEL_CDR_DESC_MODE_ADCP | + (sc->sc_config.cd_offset << SAFEXCEL_xDR_DESC_xD_OFFSET) | + sc->sc_config.cd_size); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_CFG, + ((SAFEXCEL_FETCH_COUNT * (cd_size_rnd << sc->sc_config.hdw)) << + SAFEXCEL_xDR_xD_FETCH_THRESH) | + (SAFEXCEL_FETCH_COUNT * sc->sc_config.cd_offset)); + + /* Configure DMA tx control. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_DMA_CFG, + SAFEXCEL_HIA_xDR_CFG_WR_CACHE(WR_CACHE_3BITS) | + SAFEXCEL_HIA_xDR_CFG_RD_CACHE(RD_CACHE_3BITS)); + + /* Clear any pending interrupt. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_STAT, + SAFEXCEL_CDR_INTR_MASK); + + /* + * Result descriptors. + */ + + /* Ring base address. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_RDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_LO, + SAFEXCEL_ADDR_LO(ring->rdr.dma.paddr)); + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_RDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_HI, + SAFEXCEL_ADDR_HI(ring->rdr.dma.paddr)); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_DESC_SIZE, + SAFEXCEL_xDR_DESC_MODE_64BIT | + (sc->sc_config.rd_offset << SAFEXCEL_xDR_DESC_xD_OFFSET) | + sc->sc_config.rd_size); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_CFG, + ((SAFEXCEL_FETCH_COUNT * (rd_size_rnd << sc->sc_config.hdw)) << + SAFEXCEL_xDR_xD_FETCH_THRESH) | + (SAFEXCEL_FETCH_COUNT * sc->sc_config.rd_offset)); + + /* Configure DMA tx control. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_DMA_CFG, + SAFEXCEL_HIA_xDR_CFG_WR_CACHE(WR_CACHE_3BITS) | + SAFEXCEL_HIA_xDR_CFG_RD_CACHE(RD_CACHE_3BITS) | + SAFEXCEL_HIA_xDR_WR_RES_BUF | SAFEXCEL_HIA_xDR_WR_CTRL_BUF); + + /* Clear any pending interrupt. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_STAT, + SAFEXCEL_RDR_INTR_MASK); + + /* Enable ring interrupt. 
*/ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_AIC_R(sc) + SAFEXCEL_HIA_AIC_R_ENABLE_CTRL(i), + SAFEXCEL_RDR_IRQ(i)); + } +} + +/* Reset the command and result descriptor rings. */ +static void +safexcel_hw_reset_rings(struct safexcel_softc *sc) +{ + int i; + + for (i = 0; i < sc->sc_config.rings; i++) { + /* + * Result descriptor ring operations. + */ + + /* Reset ring base address. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_RDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_LO, 0); + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_RDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_HI, 0); + + /* Clear the pending prepared counter. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_COUNT, + SAFEXCEL_xDR_PREP_CLR_COUNT); + + /* Clear the pending processed counter. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_COUNT, + SAFEXCEL_xDR_PROC_CLR_COUNT); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_PNTR, 0); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_PNTR, 0); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_RING_SIZE, 0); + + /* Clear any pending interrupt. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, i) + SAFEXCEL_HIA_xDR_STAT, + SAFEXCEL_RDR_INTR_MASK); + + /* Disable ring interrupt. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_AIC_R(sc) + SAFEXCEL_HIA_AIC_R_ENABLE_CLR(i), + SAFEXCEL_RDR_IRQ(i)); + + /* + * Command descriptor ring operations. + */ + + /* Reset ring base address. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_CDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_LO, 0); + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_CDR(sc, i) + + SAFEXCEL_HIA_xDR_RING_BASE_ADDR_HI, 0); + + /* Clear the pending prepared counter. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_COUNT, + SAFEXCEL_xDR_PREP_CLR_COUNT); + + /* Clear the pending processed counter. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_COUNT, + SAFEXCEL_xDR_PROC_CLR_COUNT); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PREP_PNTR, 0); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_PROC_PNTR, 0); + + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_RING_SIZE, 0); + + /* Clear any pending interrupt. */ + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, i) + SAFEXCEL_HIA_xDR_STAT, + SAFEXCEL_CDR_INTR_MASK); + } +} + +static void +safexcel_enable_pe_engine(struct safexcel_softc *sc, int pe) +{ + int i, ring_mask; + + for (ring_mask = 0, i = 0; i < sc->sc_config.rings; i++) { + ring_mask <<= 1; + ring_mask |= 1; + } + + /* Enable command descriptor rings. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_DFE_THR(sc) + SAFEXCEL_HIA_DFE_THR_CTRL(pe), + SAFEXCEL_DxE_THR_CTRL_EN | ring_mask); + + /* Enable result descriptor rings. */ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_DSE_THR(sc) + SAFEXCEL_HIA_DSE_THR_CTRL(pe), + SAFEXCEL_DxE_THR_CTRL_EN | ring_mask); + + /* Clear any HIA interrupt. 
*/ + SAFEXCEL_WRITE(sc, SAFEXCEL_HIA_AIC_G(sc) + SAFEXCEL_HIA_AIC_G_ACK, + SAFEXCEL_AIC_G_ACK_HIA_MASK); +} + +static void +safexcel_execute(struct safexcel_softc *sc, struct safexcel_ring *ring, + struct safexcel_request *req) +{ + uint32_t ncdescs, nrdescs, nreqs; + int ringidx; + bool busy; + + mtx_assert(&ring->mtx, MA_OWNED); + + ringidx = req->sess->ringidx; + if (STAILQ_EMPTY(&ring->ready_requests)) + return; + busy = !STAILQ_EMPTY(&ring->queued_requests); + ncdescs = nrdescs = nreqs = 0; + while ((req = STAILQ_FIRST(&ring->ready_requests)) != NULL && + req->cdescs + ncdescs <= SAFEXCEL_MAX_BATCH_SIZE && + req->rdescs + nrdescs <= SAFEXCEL_MAX_BATCH_SIZE) { + STAILQ_REMOVE_HEAD(&ring->ready_requests, link); + STAILQ_INSERT_TAIL(&ring->queued_requests, req, link); + ncdescs += req->cdescs; + nrdescs += req->rdescs; + nreqs++; + } + + if (!busy) { + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, ringidx) + SAFEXCEL_HIA_xDR_THRESH, + SAFEXCEL_HIA_CDR_THRESH_PKT_MODE | nreqs); + } + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_RDR(sc, ringidx) + SAFEXCEL_HIA_xDR_PREP_COUNT, + nrdescs * sc->sc_config.rd_offset * sizeof(uint32_t)); + SAFEXCEL_WRITE(sc, + SAFEXCEL_HIA_CDR(sc, ringidx) + SAFEXCEL_HIA_xDR_PREP_COUNT, + ncdescs * sc->sc_config.cd_offset * sizeof(uint32_t)); +} + +static void +safexcel_init_rings(struct safexcel_softc *sc) +{ + struct safexcel_cmd_descr *cdesc; + struct safexcel_ring *ring; + char buf[32]; + uint64_t atok; + int i, j; + + for (i = 0; i < sc->sc_config.rings; i++) { + ring = &sc->sc_ring[i]; + + snprintf(buf, sizeof(buf), "safexcel_ring%d", i); + mtx_init(&ring->mtx, buf, NULL, MTX_DEF); + STAILQ_INIT(&ring->free_requests); + STAILQ_INIT(&ring->ready_requests); + STAILQ_INIT(&ring->queued_requests); + + ring->cdr.read = ring->cdr.write = 0; + ring->rdr.read = ring->rdr.write = 0; + for (j = 0; j < SAFEXCEL_RING_SIZE; j++) { + cdesc = &ring->cdr.desc[j]; + atok = ring->dma_atok.paddr + + sc->sc_config.atok_offset * j; + cdesc->atok_lo = SAFEXCEL_ADDR_LO(atok); + cdesc->atok_hi = SAFEXCEL_ADDR_HI(atok); + } + } +} + +static void +safexcel_dma_alloc_mem_cb(void *arg, bus_dma_segment_t *segs, int nseg, + int error) +{ + struct safexcel_dma_mem *sdm; + + if (error != 0) + return; + + KASSERT(nseg == 1, ("%s: nsegs is %d", __func__, nseg)); + sdm = arg; + sdm->paddr = segs->ds_addr; +} + +static int +safexcel_dma_alloc_mem(struct safexcel_softc *sc, struct safexcel_dma_mem *sdm, + bus_size_t size) +{ + int error; + + KASSERT(sdm->vaddr == NULL, + ("%s: DMA memory descriptor in use.", __func__)); + + error = bus_dma_tag_create(bus_get_dma_tag(sc->sc_dev), /* parent */ + PAGE_SIZE, 0, /* alignment, boundary */ + BUS_SPACE_MAXADDR_32BIT, /* lowaddr */ + BUS_SPACE_MAXADDR, /* highaddr */ + NULL, NULL, /* filtfunc, filtfuncarg */ + size, 1, /* maxsize, nsegments */ + size, BUS_DMA_COHERENT, /* maxsegsz, flags */ + NULL, NULL, /* lockfunc, lockfuncarg */ + &sdm->tag); /* dmat */ + if (error != 0) { + device_printf(sc->sc_dev, + "failed to allocate busdma tag, error %d\n", error); + goto err1; + } + + error = bus_dmamem_alloc(sdm->tag, (void **)&sdm->vaddr, + BUS_DMA_WAITOK | BUS_DMA_ZERO | BUS_DMA_COHERENT, &sdm->map); + if (error != 0) { + device_printf(sc->sc_dev, + "failed to allocate DMA safe memory, error %d\n", error); + goto err2; + } + + error = bus_dmamap_load(sdm->tag, sdm->map, sdm->vaddr, size, + safexcel_dma_alloc_mem_cb, sdm, BUS_DMA_NOWAIT); + if (error != 0) { + device_printf(sc->sc_dev, + "cannot get address of the DMA memory, error %d\n", error); + goto err3; + } + + 
return (0); +err3: + bus_dmamem_free(sdm->tag, sdm->vaddr, sdm->map); +err2: + bus_dma_tag_destroy(sdm->tag); +err1: + sdm->vaddr = NULL; + + return (error); +} + +static void +safexcel_dma_free_mem(struct safexcel_dma_mem *sdm) +{ *** DIFF OUTPUT TRUNCATED AT 1000 LINES ***
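
As the new manual page notes, the engine is reached through the normal
crypto(4) path rather than a device-specific API, so a quick way to exercise
it once safexcel.ko is loaded is a small /dev/crypto consumer.  The sketch
below is illustrative only and is not part of the commit above: it assumes
the stock cryptodev ioctls (CIOCGSESSION, CIOCCRYPT, CIOCFSESSION), a
throwaway all-zero AES-128 key and IV, and that the kernel happens to
dispatch the session to safexcel(4) rather than another driver or the
software fallback.  On older releases a CRIOGET clone ioctl may be needed
before creating sessions.

#include <sys/types.h>
#include <sys/ioctl.h>
#include <crypto/cryptodev.h>

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct session_op sop;
	struct crypt_op cop;
	uint8_t key[16] = { 0 };	/* throwaway all-zero key, for illustration only */
	uint8_t iv[16] = { 0 };		/* AES-CBC IV */
	uint8_t buf[64];		/* length must be a multiple of the AES block size */
	int fd;

	fd = open("/dev/crypto", O_RDWR);
	if (fd < 0)
		err(1, "open(/dev/crypto)");

	/* Create an AES-CBC session; the kernel selects the backing driver. */
	memset(&sop, 0, sizeof(sop));
	sop.cipher = CRYPTO_AES_CBC;
	sop.keylen = sizeof(key);
	sop.key = (void *)key;
	if (ioctl(fd, CIOCGSESSION, &sop) == -1)
		err(1, "CIOCGSESSION");

	/* Encrypt one buffer in place. */
	memset(buf, 'A', sizeof(buf));
	memset(&cop, 0, sizeof(cop));
	cop.ses = sop.ses;
	cop.op = COP_ENCRYPT;
	cop.len = sizeof(buf);
	cop.src = (void *)buf;
	cop.dst = (void *)buf;
	cop.iv = (void *)iv;
	if (ioctl(fd, CIOCCRYPT, &cop) == -1)
		err(1, "CIOCCRYPT");

	/* Tear down the session. */
	if (ioctl(fd, CIOCFSESSION, &sop.ses) == -1)
		err(1, "CIOCFSESSION");
	close(fd);
	return (0);
}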