From: Navdeep Parhar <np@FreeBSD.org>
Date: Thu, 3 Dec 2020 08:30:30 +0000 (UTC)
To: src-committers@freebsd.org, svn-src-all@freebsd.org, svn-src-head@freebsd.org
Subject: svn commit: r368296 - in head/sys/dev/cxgbe: . common

Author: np
Date: Thu Dec  3 08:30:29 2020
New Revision: 368296
URL: https://svnweb.freebsd.org/changeset/base/368296

Log:
  cxgbe(4): Stop but don't free netmap queues when netmap is switched off.

  It is common for freelists to be starving when a netmap application
  stops.  Mailbox commands to free queues can hang in such a situation.
  Avoid that by not freeing the queues when netmap is switched off.
  Instead, use an alternate method to stop the queues without releasing
  the context ids.  If netmap is enabled again later then the same queue
  is reinitialized for use.

  Move alloc_nm_rxq and txq to t4_netmap.c while here.
  MFC after:	1 week
  Sponsored by:	Chelsio Communications

Modified:
  head/sys/dev/cxgbe/adapter.h
  head/sys/dev/cxgbe/common/common.h
  head/sys/dev/cxgbe/common/t4_hw.c
  head/sys/dev/cxgbe/t4_netmap.c
  head/sys/dev/cxgbe/t4_sge.c

Modified: head/sys/dev/cxgbe/adapter.h
==============================================================================
--- head/sys/dev/cxgbe/adapter.h	Thu Dec  3 05:56:42 2020	(r368295)
+++ head/sys/dev/cxgbe/adapter.h	Thu Dec  3 08:30:29 2020	(r368296)
@@ -1247,6 +1247,12 @@ struct sge_nm_rxq;
 void cxgbe_nm_attach(struct vi_info *);
 void cxgbe_nm_detach(struct vi_info *);
 void service_nm_rxq(struct sge_nm_rxq *);
+int alloc_nm_rxq(struct vi_info *, struct sge_nm_rxq *, int, int,
+    struct sysctl_oid *);
+int free_nm_rxq(struct vi_info *, struct sge_nm_rxq *);
+int alloc_nm_txq(struct vi_info *, struct sge_nm_txq *, int, int,
+    struct sysctl_oid *);
+int free_nm_txq(struct vi_info *, struct sge_nm_txq *);
 #endif
 
 /* t4_sge.c */
@@ -1259,6 +1265,11 @@ int t4_create_dma_tag(struct adapter *);
 void t4_sge_sysctls(struct adapter *, struct sysctl_ctx_list *,
     struct sysctl_oid_list *);
 int t4_destroy_dma_tag(struct adapter *);
+int alloc_ring(struct adapter *, size_t, bus_dma_tag_t *, bus_dmamap_t *,
+    bus_addr_t *, void **);
+int free_ring(struct adapter *, bus_dma_tag_t, bus_dmamap_t, bus_addr_t,
+    void *);
+int sysctl_uint16(SYSCTL_HANDLER_ARGS);
 int t4_setup_adapter_queues(struct adapter *);
 int t4_teardown_adapter_queues(struct adapter *);
 int t4_setup_vi_queues(struct vi_info *);

Modified: head/sys/dev/cxgbe/common/common.h
==============================================================================
--- head/sys/dev/cxgbe/common/common.h	Thu Dec  3 05:56:42 2020	(r368295)
+++ head/sys/dev/cxgbe/common/common.h	Thu Dec  3 08:30:29 2020	(r368296)
@@ -840,6 +840,8 @@ int t4_iq_stop(struct adapter *adap, unsigned int mbox
 int t4_iq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
     unsigned int vf, unsigned int iqtype, unsigned int iqid,
     unsigned int fl0id, unsigned int fl1id);
+int t4_eth_eq_stop(struct adapter *adap, unsigned int mbox, unsigned int pf,
+    unsigned int vf, unsigned int eqid);
 int t4_eth_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
     unsigned int vf, unsigned int eqid);
 int t4_ctrl_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,

Modified: head/sys/dev/cxgbe/common/t4_hw.c
==============================================================================
--- head/sys/dev/cxgbe/common/t4_hw.c	Thu Dec  3 05:56:42 2020	(r368295)
+++ head/sys/dev/cxgbe/common/t4_hw.c	Thu Dec  3 08:30:29 2020	(r368296)
@@ -8620,6 +8620,32 @@ int t4_iq_free(struct adapter *adap, unsigned int mbox
 }
 
 /**
+ * t4_eth_eq_stop - stop an Ethernet egress queue
+ * @adap: the adapter
+ * @mbox: mailbox to use for the FW command
+ * @pf: the PF owning the queues
+ * @vf: the VF owning the queues
+ * @eqid: egress queue id
+ *
+ * Stops an Ethernet egress queue.  The queue can be reinitialized or
+ * freed but is not otherwise functional after this call.
+ */
+int t4_eth_eq_stop(struct adapter *adap, unsigned int mbox, unsigned int pf,
+                   unsigned int vf, unsigned int eqid)
+{
+        struct fw_eq_eth_cmd c;
+
+        memset(&c, 0, sizeof(c));
+        c.op_to_vfn = cpu_to_be32(V_FW_CMD_OP(FW_EQ_ETH_CMD) |
+            F_FW_CMD_REQUEST | F_FW_CMD_EXEC |
+            V_FW_EQ_ETH_CMD_PFN(pf) |
+            V_FW_EQ_ETH_CMD_VFN(vf));
+        c.alloc_to_len16 = cpu_to_be32(F_FW_EQ_ETH_CMD_EQSTOP | FW_LEN16(c));
+        c.eqid_pkd = cpu_to_be32(V_FW_EQ_ETH_CMD_EQID(eqid));
+        return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
+}
+
+/**
  * t4_eth_eq_free - free an Ethernet egress queue
  * @adap: the adapter
  * @mbox: mailbox to use for the FW command

Modified: head/sys/dev/cxgbe/t4_netmap.c
==============================================================================
--- head/sys/dev/cxgbe/t4_netmap.c	Thu Dec  3 05:56:42 2020	(r368295)
+++ head/sys/dev/cxgbe/t4_netmap.c	Thu Dec  3 08:30:29 2020	(r368296)
@@ -120,6 +120,166 @@ static int nm_txcsum = 0;
 SYSCTL_INT(_hw_cxgbe, OID_AUTO, nm_txcsum, CTLFLAG_RWTUN, &nm_txcsum, 0,
     "Enable transmit checksum offloading.");
 
+static int free_nm_rxq_hwq(struct vi_info *, struct sge_nm_rxq *);
+static int free_nm_txq_hwq(struct vi_info *, struct sge_nm_txq *);
+
+int
+alloc_nm_rxq(struct vi_info *vi, struct sge_nm_rxq *nm_rxq, int intr_idx,
+    int idx, struct sysctl_oid *oid)
+{
+        int rc;
+        struct sysctl_oid_list *children;
+        struct sysctl_ctx_list *ctx;
+        char name[16];
+        size_t len;
+        struct adapter *sc = vi->adapter;
+        struct netmap_adapter *na = NA(vi->ifp);
+
+        MPASS(na != NULL);
+
+        len = vi->qsize_rxq * IQ_ESIZE;
+        rc = alloc_ring(sc, len, &nm_rxq->iq_desc_tag, &nm_rxq->iq_desc_map,
+            &nm_rxq->iq_ba, (void **)&nm_rxq->iq_desc);
+        if (rc != 0)
+                return (rc);
+
+        len = na->num_rx_desc * EQ_ESIZE + sc->params.sge.spg_len;
+        rc = alloc_ring(sc, len, &nm_rxq->fl_desc_tag, &nm_rxq->fl_desc_map,
+            &nm_rxq->fl_ba, (void **)&nm_rxq->fl_desc);
+        if (rc != 0)
+                return (rc);
+
+        nm_rxq->vi = vi;
+        nm_rxq->nid = idx;
+        nm_rxq->iq_cidx = 0;
+        nm_rxq->iq_sidx = vi->qsize_rxq - sc->params.sge.spg_len / IQ_ESIZE;
+        nm_rxq->iq_gen = F_RSPD_GEN;
+        nm_rxq->fl_pidx = nm_rxq->fl_cidx = 0;
+        nm_rxq->fl_sidx = na->num_rx_desc;
+        nm_rxq->fl_sidx2 = nm_rxq->fl_sidx;	/* copy for rxsync cacheline */
+        nm_rxq->intr_idx = intr_idx;
+        nm_rxq->iq_cntxt_id = INVALID_NM_RXQ_CNTXT_ID;
+
+        ctx = &vi->ctx;
+        children = SYSCTL_CHILDREN(oid);
+
+        snprintf(name, sizeof(name), "%d", idx);
+        oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, name,
+            CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "rx queue");
+        children = SYSCTL_CHILDREN(oid);
+
+        SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "abs_id",
+            CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->iq_abs_id,
+            0, sysctl_uint16, "I", "absolute id of the queue");
+        SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cntxt_id",
+            CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->iq_cntxt_id,
+            0, sysctl_uint16, "I", "SGE context id of the queue");
+        SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cidx",
+            CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->iq_cidx, 0,
+            sysctl_uint16, "I", "consumer index");
+
+        children = SYSCTL_CHILDREN(oid);
+        oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, "fl",
+            CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "freelist");
+        children = SYSCTL_CHILDREN(oid);
+
+        SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cntxt_id",
+            CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->fl_cntxt_id,
+            0, sysctl_uint16, "I", "SGE context id of the freelist");
+        SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "cidx", CTLFLAG_RD,
+            &nm_rxq->fl_cidx, 0, "consumer index");
+        SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "pidx", CTLFLAG_RD,
+            &nm_rxq->fl_pidx, 0, "producer index");
+
+        return (rc);
+}
+
+int
+free_nm_rxq(struct vi_info *vi, struct sge_nm_rxq *nm_rxq)
+{
+        struct adapter *sc = vi->adapter;
+
+        if (!(vi->flags & VI_INIT_DONE))
+                return (0);
+
+        if (nm_rxq->iq_cntxt_id != INVALID_NM_RXQ_CNTXT_ID)
+                free_nm_rxq_hwq(vi, nm_rxq);
+        MPASS(nm_rxq->iq_cntxt_id == INVALID_NM_RXQ_CNTXT_ID);
+
+        free_ring(sc, nm_rxq->iq_desc_tag, nm_rxq->iq_desc_map, nm_rxq->iq_ba,
+            nm_rxq->iq_desc);
+        free_ring(sc, nm_rxq->fl_desc_tag, nm_rxq->fl_desc_map, nm_rxq->fl_ba,
+            nm_rxq->fl_desc);
+
+        return (0);
+}
+
+int
+alloc_nm_txq(struct vi_info *vi, struct sge_nm_txq *nm_txq, int iqidx, int idx,
+    struct sysctl_oid *oid)
+{
+        int rc;
+        size_t len;
+        struct port_info *pi = vi->pi;
+        struct adapter *sc = pi->adapter;
+        struct netmap_adapter *na = NA(vi->ifp);
+        char name[16];
+        struct sysctl_oid_list *children = SYSCTL_CHILDREN(oid);
+
+        len = na->num_tx_desc * EQ_ESIZE + sc->params.sge.spg_len;
+        rc = alloc_ring(sc, len, &nm_txq->desc_tag, &nm_txq->desc_map,
+            &nm_txq->ba, (void **)&nm_txq->desc);
+        if (rc)
+                return (rc);
+
+        nm_txq->pidx = nm_txq->cidx = 0;
+        nm_txq->sidx = na->num_tx_desc;
+        nm_txq->nid = idx;
+        nm_txq->iqidx = iqidx;
+        nm_txq->cpl_ctrl0 = htobe32(V_TXPKT_OPCODE(CPL_TX_PKT) |
+            V_TXPKT_INTF(pi->tx_chan) | V_TXPKT_PF(sc->pf) |
+            V_TXPKT_VF(vi->vin) | V_TXPKT_VF_VLD(vi->vfvld));
+        if (sc->params.fw_vers >= FW_VERSION32(1, 24, 11, 0))
+                nm_txq->op_pkd = htobe32(V_FW_WR_OP(FW_ETH_TX_PKTS2_WR));
+        else
+                nm_txq->op_pkd = htobe32(V_FW_WR_OP(FW_ETH_TX_PKTS_WR));
+        nm_txq->cntxt_id = INVALID_NM_TXQ_CNTXT_ID;
+
+        snprintf(name, sizeof(name), "%d", idx);
+        oid = SYSCTL_ADD_NODE(&vi->ctx, children, OID_AUTO, name,
+            CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "netmap tx queue");
+        children = SYSCTL_CHILDREN(oid);
+
+        SYSCTL_ADD_UINT(&vi->ctx, children, OID_AUTO, "cntxt_id", CTLFLAG_RD,
+            &nm_txq->cntxt_id, 0, "SGE context id of the queue");
+        SYSCTL_ADD_PROC(&vi->ctx, children, OID_AUTO, "cidx",
+            CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_txq->cidx, 0,
+            sysctl_uint16, "I", "consumer index");
+        SYSCTL_ADD_PROC(&vi->ctx, children, OID_AUTO, "pidx",
+            CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_txq->pidx, 0,
+            sysctl_uint16, "I", "producer index");
+
+        return (rc);
+}
+
+int
+free_nm_txq(struct vi_info *vi, struct sge_nm_txq *nm_txq)
+{
+        struct adapter *sc = vi->adapter;
+
+        if (!(vi->flags & VI_INIT_DONE))
+                return (0);
+
+        if (nm_txq->cntxt_id != INVALID_NM_TXQ_CNTXT_ID)
+                free_nm_txq_hwq(vi, nm_txq);
+        MPASS(nm_txq->cntxt_id == INVALID_NM_TXQ_CNTXT_ID);
+
+        free_ring(sc, nm_txq->desc_tag, nm_txq->desc_map, nm_txq->ba,
+            nm_txq->desc);
+
+        return (0);
+}
+
 static int
 alloc_nm_rxq_hwq(struct vi_info *vi, struct sge_nm_rxq *nm_rxq, int cong)
 {
@@ -141,8 +301,15 @@ alloc_nm_rxq_hwq(struct vi_info *vi, struct sge_nm_rxq
 	c.op_to_vfn = htobe32(V_FW_CMD_OP(FW_IQ_CMD) | F_FW_CMD_REQUEST |
 	    F_FW_CMD_WRITE | F_FW_CMD_EXEC | V_FW_IQ_CMD_PFN(sc->pf) |
 	    V_FW_IQ_CMD_VFN(0));
-	c.alloc_to_len16 = htobe32(F_FW_IQ_CMD_ALLOC | F_FW_IQ_CMD_IQSTART |
-	    FW_LEN16(c));
+	c.alloc_to_len16 = htobe32(F_FW_IQ_CMD_IQSTART | FW_LEN16(c));
+	if (nm_rxq->iq_cntxt_id == INVALID_NM_RXQ_CNTXT_ID)
+		c.alloc_to_len16 |= htobe32(F_FW_IQ_CMD_ALLOC);
+	else {
+		c.iqid = htobe16(nm_rxq->iq_cntxt_id);
+		c.fl0id = htobe16(nm_rxq->fl_cntxt_id);
+		c.fl1id = htobe16(0xffff);
+		c.physiqid = htobe16(nm_rxq->iq_abs_id);
+	}
 	MPASS(!forwarding_intr_to_fwq(sc));
 	KASSERT(nm_rxq->intr_idx < sc->intr_count,
 	    ("%s: invalid direct intr_idx %d", __func__, nm_rxq->intr_idx));
@@ -276,8 +443,11 @@ alloc_nm_txq_hwq(struct vi_info *vi, struct sge_nm_txq
 	c.op_to_vfn = htobe32(V_FW_CMD_OP(FW_EQ_ETH_CMD) | F_FW_CMD_REQUEST |
 	    F_FW_CMD_WRITE | F_FW_CMD_EXEC | V_FW_EQ_ETH_CMD_PFN(sc->pf) |
 	    V_FW_EQ_ETH_CMD_VFN(0));
-	c.alloc_to_len16 = htobe32(F_FW_EQ_ETH_CMD_ALLOC |
-	    F_FW_EQ_ETH_CMD_EQSTART | FW_LEN16(c));
+	c.alloc_to_len16 = htobe32(F_FW_EQ_ETH_CMD_EQSTART | FW_LEN16(c));
+	if (nm_txq->cntxt_id == INVALID_NM_TXQ_CNTXT_ID)
+		c.alloc_to_len16 |= htobe32(F_FW_EQ_ETH_CMD_ALLOC);
+	else
+		c.eqid_pkd = htobe32(V_FW_EQ_ETH_CMD_EQID(nm_txq->cntxt_id));
 	c.autoequiqe_to_viid = htobe32(F_FW_EQ_ETH_CMD_AUTOEQUIQE |
 	    F_FW_EQ_ETH_CMD_AUTOEQUEQE | V_FW_EQ_ETH_CMD_VIID(vi->viid));
 	c.fetchszm_to_iqid =
@@ -580,8 +750,7 @@ cxgbe_netmap_on(struct adapter *sc, struct vi_info *vi
 
 	for_each_nm_rxq(vi, i, nm_rxq) {
 		kring = na->rx_rings[nm_rxq->nid];
-		if (!nm_kring_pending_on(kring) ||
-		    nm_rxq->iq_cntxt_id != INVALID_NM_RXQ_CNTXT_ID)
+		if (!nm_kring_pending_on(kring))
 			continue;
 
 		alloc_nm_rxq_hwq(vi, nm_rxq, tnl_cong(vi->pi, nm_cong_drop));
@@ -611,8 +780,7 @@ cxgbe_netmap_on(struct adapter *sc, struct vi_info *vi
 
 	for_each_nm_txq(vi, i, nm_txq) {
 		kring = na->tx_rings[nm_txq->nid];
-		if (!nm_kring_pending_on(kring) ||
-		    nm_txq->cntxt_id != INVALID_NM_TXQ_CNTXT_ID)
+		if (!nm_kring_pending_on(kring))
 			continue;
 
 		alloc_nm_txq_hwq(vi, nm_txq);
@@ -653,23 +821,18 @@ cxgbe_netmap_off(struct adapter *sc, struct vi_info *v
 		return (rc);	/* error message logged already. */
 
 	for_each_nm_txq(vi, i, nm_txq) {
-		struct sge_qstat *spg = (void *)&nm_txq->desc[nm_txq->sidx];
-
 		kring = na->tx_rings[nm_txq->nid];
-		if (!nm_kring_pending_off(kring) ||
-		    nm_txq->cntxt_id == INVALID_NM_TXQ_CNTXT_ID)
+		if (!nm_kring_pending_off(kring))
 			continue;
+		MPASS(nm_txq->cntxt_id != INVALID_NM_TXQ_CNTXT_ID);
 
-		/* Wait for hw pidx to catch up ... */
-		while (be16toh(nm_txq->pidx) != spg->pidx)
-			pause("nmpidx", 1);
+		rc = -t4_eth_eq_stop(sc, sc->mbox, sc->pf, 0, nm_txq->cntxt_id);
+		if (rc != 0) {
+			device_printf(vi->dev,
+			    "failed to stop nm_txq[%d]: %d.\n", i, rc);
+			return (rc);
+		}
 
-		/* ... and then for the cidx. */
-		while (spg->pidx != spg->cidx)
-			pause("nmcidx", 1);
-
-		free_nm_txq_hwq(vi, nm_txq);
-
 		/* XXX: netmap, not the driver, should do this. */
 		kring->rhead = kring->rcur = kring->nr_hwcur = 0;
 		kring->rtail = kring->nr_hwtail = kring->nkr_num_slots - 1;
@@ -680,14 +843,21 @@ cxgbe_netmap_off(struct adapter *sc, struct vi_info *v
 		kring = na->rx_rings[nm_rxq->nid];
 		if (nm_state != NM_OFF && !nm_kring_pending_off(kring))
 			nactive++;
-		if (nm_state == NM_OFF || !nm_kring_pending_off(kring))
+		if (!nm_kring_pending_off(kring))
 			continue;
-
+		MPASS(nm_state != NM_OFF);
 		MPASS(nm_rxq->iq_cntxt_id != INVALID_NM_RXQ_CNTXT_ID);
+
+		rc = -t4_iq_stop(sc, sc->mbox, sc->pf, 0, FW_IQ_TYPE_FL_INT_CAP,
+		    nm_rxq->iq_cntxt_id, nm_rxq->fl_cntxt_id, 0xffff);
+		if (rc != 0) {
+			device_printf(vi->dev,
+			    "failed to stop nm_rxq[%d]: %d.\n", i, rc);
+			return (rc);
+		}
+
 		while (!atomic_cmpset_int(&nm_rxq->nm_state, NM_ON, NM_OFF))
 			pause("nmst", 1);
-
-		free_nm_rxq_hwq(vi, nm_rxq);
 
 		/* XXX: netmap, not the driver, should do this. */
 		kring->rhead = kring->rcur = kring->nr_hwcur = 0;

Modified: head/sys/dev/cxgbe/t4_sge.c
==============================================================================
--- head/sys/dev/cxgbe/t4_sge.c	Thu Dec  3 05:56:42 2020	(r368295)
+++ head/sys/dev/cxgbe/t4_sge.c	Thu Dec  3 08:30:29 2020	(r368296)
@@ -222,10 +222,6 @@ static inline void init_iq(struct sge_iq *, struct ada
 static inline void init_fl(struct adapter *, struct sge_fl *, int, int, char *);
 static inline void init_eq(struct adapter *, struct sge_eq *, int, int,
     uint8_t, uint16_t, char *);
-static int alloc_ring(struct adapter *, size_t, bus_dma_tag_t *, bus_dmamap_t *,
-    bus_addr_t *, void **);
-static int free_ring(struct adapter *, bus_dma_tag_t, bus_dmamap_t, bus_addr_t,
-    void *);
 static int alloc_iq_fl(struct vi_info *, struct sge_iq *, struct sge_fl *, int,
     int);
 static int free_iq_fl(struct vi_info *, struct sge_iq *, struct sge_fl *);
@@ -245,14 +241,6 @@ static int alloc_ofld_rxq(struct vi_info *, struct sge
     struct sysctl_oid *);
 static int free_ofld_rxq(struct vi_info *, struct sge_ofld_rxq *);
 #endif
-#ifdef DEV_NETMAP
-static int alloc_nm_rxq(struct vi_info *, struct sge_nm_rxq *, int, int,
-    struct sysctl_oid *);
-static int free_nm_rxq(struct vi_info *, struct sge_nm_rxq *);
-static int alloc_nm_txq(struct vi_info *, struct sge_nm_txq *, int, int,
-    struct sysctl_oid *);
-static int free_nm_txq(struct vi_info *, struct sge_nm_txq *);
-#endif
 static int ctrl_eq_alloc(struct adapter *, struct sge_eq *);
 static int eth_eq_alloc(struct adapter *, struct vi_info *, struct sge_eq *);
 #if defined(TCP_OFFLOAD) || defined(RATELIMIT)
@@ -309,7 +297,6 @@ static int t4_handle_wrerr_rpl(struct adapter *, const
 static void wrq_tx_drain(void *, int);
 static void drain_wrq_wr_list(struct adapter *, struct sge_wrq *);
 
-static int sysctl_uint16(SYSCTL_HANDLER_ARGS);
 static int sysctl_bufsizes(SYSCTL_HANDLER_ARGS);
 #ifdef RATELIMIT
 static inline u_int txpkt_eo_len16(u_int, u_int, u_int);
@@ -3392,7 +3379,7 @@ init_eq(struct adapter *sc, struct sge_eq *eq, int eqt
 	strlcpy(eq->lockname, name, sizeof(eq->lockname));
 }
 
-static int
+int
 alloc_ring(struct adapter *sc, size_t len, bus_dma_tag_t *tag,
     bus_dmamap_t *map, bus_addr_t *pa, void **va)
 {
@@ -3424,7 +3411,7 @@ done:
 	return (rc);
 }
 
-static int
+int
 free_ring(struct adapter *sc, bus_dma_tag_t tag, bus_dmamap_t map,
     bus_addr_t pa, void *va)
 {
@@ -3941,162 +3928,6 @@ free_ofld_rxq(struct vi_info *vi, struct sge_ofld_rxq
 }
 #endif
 
-#ifdef DEV_NETMAP
-static int
-alloc_nm_rxq(struct vi_info *vi, struct sge_nm_rxq *nm_rxq, int intr_idx,
-    int idx, struct sysctl_oid *oid)
-{
-	int rc;
-	struct sysctl_oid_list *children;
-	struct sysctl_ctx_list *ctx;
-	char name[16];
-	size_t len;
-	struct adapter *sc = vi->adapter;
-	struct netmap_adapter *na = NA(vi->ifp);
-
-	MPASS(na != NULL);
-
-	len = vi->qsize_rxq * IQ_ESIZE;
-	rc = alloc_ring(sc, len, &nm_rxq->iq_desc_tag, &nm_rxq->iq_desc_map,
-	    &nm_rxq->iq_ba, (void **)&nm_rxq->iq_desc);
-	if (rc != 0)
-		return (rc);
-
-	len = na->num_rx_desc * EQ_ESIZE + sc->params.sge.spg_len;
-	rc = alloc_ring(sc, len, &nm_rxq->fl_desc_tag, &nm_rxq->fl_desc_map,
-	    &nm_rxq->fl_ba, (void **)&nm_rxq->fl_desc);
-	if (rc != 0)
-		return (rc);
-
-	nm_rxq->vi = vi;
-	nm_rxq->nid = idx;
-	nm_rxq->iq_cidx = 0;
-	nm_rxq->iq_sidx = vi->qsize_rxq - sc->params.sge.spg_len / IQ_ESIZE;
-	nm_rxq->iq_gen = F_RSPD_GEN;
-	nm_rxq->fl_pidx = nm_rxq->fl_cidx = 0;
-	nm_rxq->fl_sidx = na->num_rx_desc;
-	nm_rxq->fl_sidx2 = nm_rxq->fl_sidx;	/* copy for rxsync cacheline */
-	nm_rxq->intr_idx = intr_idx;
-	nm_rxq->iq_cntxt_id = INVALID_NM_RXQ_CNTXT_ID;
-
-	ctx = &vi->ctx;
-	children = SYSCTL_CHILDREN(oid);
-
-	snprintf(name, sizeof(name), "%d", idx);
-	oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, name,
-	    CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "rx queue");
-	children = SYSCTL_CHILDREN(oid);
-
-	SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "abs_id",
-	    CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->iq_abs_id,
-	    0, sysctl_uint16, "I", "absolute id of the queue");
-	SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cntxt_id",
-	    CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->iq_cntxt_id,
-	    0, sysctl_uint16, "I", "SGE context id of the queue");
-	SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cidx",
-	    CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->iq_cidx, 0,
-	    sysctl_uint16, "I", "consumer index");
-
-	children = SYSCTL_CHILDREN(oid);
-	oid = SYSCTL_ADD_NODE(ctx, children, OID_AUTO, "fl",
-	    CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "freelist");
-	children = SYSCTL_CHILDREN(oid);
-
-	SYSCTL_ADD_PROC(ctx, children, OID_AUTO, "cntxt_id",
-	    CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_rxq->fl_cntxt_id,
-	    0, sysctl_uint16, "I", "SGE context id of the freelist");
-	SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "cidx", CTLFLAG_RD,
-	    &nm_rxq->fl_cidx, 0, "consumer index");
-	SYSCTL_ADD_UINT(ctx, children, OID_AUTO, "pidx", CTLFLAG_RD,
-	    &nm_rxq->fl_pidx, 0, "producer index");
-
-	return (rc);
-}
-
-
-static int
-free_nm_rxq(struct vi_info *vi, struct sge_nm_rxq *nm_rxq)
-{
-	struct adapter *sc = vi->adapter;
-
-	if (vi->flags & VI_INIT_DONE)
-		MPASS(nm_rxq->iq_cntxt_id == INVALID_NM_RXQ_CNTXT_ID);
-	else
-		MPASS(nm_rxq->iq_cntxt_id == 0);
-
-	free_ring(sc, nm_rxq->iq_desc_tag, nm_rxq->iq_desc_map, nm_rxq->iq_ba,
-	    nm_rxq->iq_desc);
-	free_ring(sc, nm_rxq->fl_desc_tag, nm_rxq->fl_desc_map, nm_rxq->fl_ba,
-	    nm_rxq->fl_desc);
-
-	return (0);
-}
-
-static int
-alloc_nm_txq(struct vi_info *vi, struct sge_nm_txq *nm_txq, int iqidx, int idx,
-    struct sysctl_oid *oid)
-{
-	int rc;
-	size_t len;
-	struct port_info *pi = vi->pi;
-	struct adapter *sc = pi->adapter;
-	struct netmap_adapter *na = NA(vi->ifp);
-	char name[16];
-	struct sysctl_oid_list *children = SYSCTL_CHILDREN(oid);
-
-	len = na->num_tx_desc * EQ_ESIZE + sc->params.sge.spg_len;
-	rc = alloc_ring(sc, len, &nm_txq->desc_tag, &nm_txq->desc_map,
-	    &nm_txq->ba, (void **)&nm_txq->desc);
-	if (rc)
-		return (rc);
-
-	nm_txq->pidx = nm_txq->cidx = 0;
-	nm_txq->sidx = na->num_tx_desc;
-	nm_txq->nid = idx;
-	nm_txq->iqidx = iqidx;
-	nm_txq->cpl_ctrl0 = htobe32(V_TXPKT_OPCODE(CPL_TX_PKT) |
-	    V_TXPKT_INTF(pi->tx_chan) | V_TXPKT_PF(sc->pf) |
-	    V_TXPKT_VF(vi->vin) | V_TXPKT_VF_VLD(vi->vfvld));
-	if (sc->params.fw_vers >= FW_VERSION32(1, 24, 11, 0))
-		nm_txq->op_pkd = htobe32(V_FW_WR_OP(FW_ETH_TX_PKTS2_WR));
-	else
-		nm_txq->op_pkd = htobe32(V_FW_WR_OP(FW_ETH_TX_PKTS_WR));
-	nm_txq->cntxt_id = INVALID_NM_TXQ_CNTXT_ID;
-
-	snprintf(name, sizeof(name), "%d", idx);
-	oid = SYSCTL_ADD_NODE(&vi->ctx, children, OID_AUTO, name,
-	    CTLFLAG_RD | CTLFLAG_MPSAFE, NULL, "netmap tx queue");
-	children = SYSCTL_CHILDREN(oid);
-
-	SYSCTL_ADD_UINT(&vi->ctx, children, OID_AUTO, "cntxt_id", CTLFLAG_RD,
-	    &nm_txq->cntxt_id, 0, "SGE context id of the queue");
-	SYSCTL_ADD_PROC(&vi->ctx, children, OID_AUTO, "cidx",
-	    CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_txq->cidx, 0,
-	    sysctl_uint16, "I", "consumer index");
-	SYSCTL_ADD_PROC(&vi->ctx, children, OID_AUTO, "pidx",
-	    CTLTYPE_INT | CTLFLAG_RD | CTLFLAG_MPSAFE, &nm_txq->pidx, 0,
-	    sysctl_uint16, "I", "producer index");
-
-	return (rc);
-}
-
-static int
-free_nm_txq(struct vi_info *vi, struct sge_nm_txq *nm_txq)
-{
-	struct adapter *sc = vi->adapter;
-
-	if (vi->flags & VI_INIT_DONE)
-		MPASS(nm_txq->cntxt_id == INVALID_NM_TXQ_CNTXT_ID);
-	else
-		MPASS(nm_txq->cntxt_id == 0);
-
-	free_ring(sc, nm_txq->desc_tag, nm_txq->desc_map, nm_txq->ba,
-	    nm_txq->desc);
-
-	return (0);
-}
-#endif
-
 /*
  * Returns a reasonable automatic cidx flush threshold for a given queue size.
  */
@@ -6146,7 +5977,7 @@ t4_handle_wrerr_rpl(struct adapter *adap, const __be64
 	return (0);
 }
 
-static int
+int
 sysctl_uint16(SYSCTL_HANDLER_ARGS)
 {
 	uint16_t *id = arg1;