Date: Tue, 8 Mar 2022 00:21:32 GMT
Message-Id: <202203080021.2280LWmM072590@gitrepo.freebsd.org>
To: src-committers@FreeBSD.org, dev-commits-src-all@FreeBSD.org, dev-commits-src-branches@FreeBSD.org
From: Eric Joyner
Subject: git: 41423f3a62e3 - stable/13 - iavf(4): Split source and update to 3.0.26-k
List-Id: Commits to the stable branches of the FreeBSD src repository
List-Archive: https://lists.freebsd.org/archives/dev-commits-src-branches
X-Git-Committer: erj
X-Git-Repository: src
X-Git-Refname: refs/heads/stable/13
X-Git-Reftype: branch
X-Git-Commit: 41423f3a62e3f21f15afec6a0046c95c91c45b60

The branch stable/13 has been updated by erj:

URL: https://cgit.FreeBSD.org/src/commit/?id=41423f3a62e3f21f15afec6a0046c95c91c45b60

commit 41423f3a62e3f21f15afec6a0046c95c91c45b60
Author:     Eric Joyner
AuthorDate: 2021-02-12 21:28:18 +0000
Commit:     Eric Joyner
CommitDate: 2022-03-08 00:00:08 +0000

    iavf(4): Split source and update to 3.0.26-k

    The iavf(4) driver now uses a different source base from ixl(4), since
    it will be the standard VF driver for new Intel Ethernet products going
    forward, including ice(4). It continues to use the iflib framework for
    network drivers.

    Since it now uses a different source code base, this commit adds a new
    sys/dev/iavf entry, but it re-uses the existing module name so no
    configuration changes are necessary.

    Signed-off-by: Eric Joyner
    Reviewed by:    kbowling@
    Tested by:      lukasz.szczepaniak@intel.com
    Sponsored by:   Intel Corporation
    Differential Revision: https://reviews.freebsd.org/D28636

    (cherry picked from commit ca853dee3b8f26f53d48d685f32ec0b8396369e8)
---
 sys/conf/files.amd64               | 34 +-
 sys/dev/iavf/iavf_adminq.c         | 990 +++++++++++++++++
 sys/dev/iavf/iavf_adminq.h         | 122 ++
 sys/dev/iavf/iavf_adminq_cmd.h     | 678 ++++++++++++
 sys/dev/iavf/iavf_alloc.h          | 64 ++
 sys/dev/iavf/iavf_common.c         | 1053 ++++++++++++++++++
 sys/dev/iavf/iavf_debug.h          | 131 +++
 sys/dev/iavf/iavf_devids.h         | 45 +
 sys/dev/iavf/iavf_drv_info.h       | 78 ++
 sys/dev/iavf/iavf_iflib.h          | 407 +++++++
 sys/dev/iavf/iavf_lib.c            | 1531 ++++++++++++++++++++++++++
 sys/dev/iavf/iavf_lib.h            | 512 +++++++++
 sys/dev/iavf/iavf_opts.h           | 48 +
 sys/dev/iavf/iavf_osdep.c          | 405 +++++++
 sys/dev/iavf/iavf_osdep.h          | 250 +++++
 sys/dev/iavf/iavf_prototype.h      | 122 ++
 sys/dev/iavf/iavf_register.h       | 121 ++
 sys/dev/iavf/iavf_status.h         | 107 ++
 sys/dev/iavf/iavf_sysctls_common.h | 158 +++
 sys/dev/iavf/iavf_sysctls_iflib.h  | 46 +
 sys/dev/iavf/iavf_txrx_common.h    | 93 ++
 sys/dev/iavf/iavf_txrx_iflib.c     | 789 +++++++++++++
 sys/dev/iavf/iavf_type.h           | 1037 +++++++++++++++
 sys/dev/iavf/iavf_vc_common.c      | 1330 ++++++++++++++++++++
 sys/dev/iavf/iavf_vc_common.h      | 83 ++
 sys/dev/iavf/iavf_vc_iflib.c       | 178 +++
 sys/dev/iavf/if_iavf_iflib.c       | 2138 ++++++++++++++++++++++++++++++++++++
 sys/dev/iavf/virtchnl.h            | 991 +++++++++++++++++
 sys/modules/iavf/Makefile          | 11 +-
 29 files changed, 13536 insertions(+), 16 deletions(-)
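Because the existing module name is kept, current setups should keep working with no configuration changes. For reference only, a minimal sketch of how iavf(4) is typically enabled on FreeBSD (standard loader and kernel-config knobs; not part of this commit):

    # /boot/loader.conf -- load the VF driver module at boot
    if_iavf_load="YES"

    # or, in a custom kernel configuration file
    device  pci
    device  iavf

The files.amd64 hunks below are what tie "device iavf" to the new sys/dev/iavf sources instead of the old sys/dev/ixl ones.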
diff --git a/sys/conf/files.amd64 b/sys/conf/files.amd64
index ddc95b478ea4..cf9548d50418 100644
--- a/sys/conf/files.amd64
+++ b/sys/conf/files.amd64
@@ -171,6 +171,22 @@ dev/axgbe/xgbe-i2c.c	optional	axp
 dev/axgbe/xgbe-phy-v2.c	optional	axp
 dev/hyperv/vmbus/amd64/hyperv_machdep.c	optional	hyperv
 dev/hyperv/vmbus/amd64/vmbus_vector.S	optional	hyperv
+dev/iavf/if_iavf_iflib.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
+dev/iavf/iavf_lib.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
+dev/iavf/iavf_osdep.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
+dev/iavf/iavf_txrx_iflib.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
+dev/iavf/iavf_common.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
+dev/iavf/iavf_adminq.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
+dev/iavf/iavf_vc_common.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
+dev/iavf/iavf_vc_iflib.c	optional	iavf pci \
+	compile-with "${NORMAL_C} -I$S/dev/iavf"
 dev/ice/if_ice_iflib.c	optional	ice pci \
 	compile-with "${NORMAL_C} -I$S/dev/ice"
 dev/ice/ice_lib.c	optional	ice pci \
@@ -233,23 +249,19 @@ dev/ixl/ixl_pf_iov.c	optional	ixl pci	pci_iov \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
 dev/ixl/ixl_pf_i2c.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/if_iavf.c	optional	iavf pci \
+dev/ixl/ixl_txrx.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/iavf_vc.c	optional	iavf pci \
+dev/ixl/i40e_osdep.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/ixl_txrx.c	optional	ixl pci | iavf pci \
+dev/ixl/i40e_lan_hmc.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/i40e_osdep.c	optional	ixl pci | iavf pci \
+dev/ixl/i40e_hmc.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/i40e_lan_hmc.c	optional	ixl pci | iavf pci \
+dev/ixl/i40e_common.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/i40e_hmc.c	optional	ixl pci | iavf pci \
+dev/ixl/i40e_nvm.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/i40e_common.c	optional	ixl pci | iavf pci \
-	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/i40e_nvm.c	optional	ixl pci | iavf pci \
-	compile-with "${NORMAL_C} -I$S/dev/ixl"
-dev/ixl/i40e_adminq.c	optional	ixl pci | iavf pci \
+dev/ixl/i40e_adminq.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
 dev/ixl/i40e_dcb.c	optional	ixl pci \
 	compile-with "${NORMAL_C} -I$S/dev/ixl"
diff --git a/sys/dev/iavf/iavf_adminq.c b/sys/dev/iavf/iavf_adminq.c
new file mode 100644
index 000000000000..75b9c508cc63
--- /dev/null
+++ b/sys/dev/iavf/iavf_adminq.c
@@ -0,0 +1,990 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+/* Copyright (c) 2021, Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ *    this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * 3. Neither the name of the Intel Corporation nor the names of its
+ *    contributors may be used to endorse or promote products derived from
+ *    this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+/*$FreeBSD$*/
+
+#include "iavf_status.h"
+#include "iavf_type.h"
+#include "iavf_register.h"
+#include "iavf_adminq.h"
+#include "iavf_prototype.h"
+
+/**
+ * iavf_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void iavf_adminq_init_regs(struct iavf_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	hw->aq.asq.tail = IAVF_VF_ATQT1;
+	hw->aq.asq.head = IAVF_VF_ATQH1;
+	hw->aq.asq.len = IAVF_VF_ATQLEN1;
+	hw->aq.asq.bal = IAVF_VF_ATQBAL1;
+	hw->aq.asq.bah = IAVF_VF_ATQBAH1;
+	hw->aq.arq.tail = IAVF_VF_ARQT1;
+	hw->aq.arq.head = IAVF_VF_ARQH1;
+	hw->aq.arq.len = IAVF_VF_ARQLEN1;
+	hw->aq.arq.bal = IAVF_VF_ARQBAL1;
+	hw->aq.arq.bah = IAVF_VF_ARQBAH1;
+}
+
+/**
+ * iavf_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ * @hw: pointer to the hardware structure
+ **/
+enum iavf_status iavf_alloc_adminq_asq_ring(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code;
+
+	ret_code = iavf_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 iavf_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct iavf_aq_desc)),
+					 IAVF_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = iavf_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct iavf_asq_cmd_details)));
+	if (ret_code) {
+		iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ * iavf_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ * @hw: pointer to the hardware structure
+ **/
+enum iavf_status iavf_alloc_adminq_arq_ring(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code;
+
+	ret_code = iavf_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 iavf_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct iavf_aq_desc)),
+					 IAVF_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ * iavf_free_adminq_asq - Free Admin Queue send rings
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the posted send buffers have already been cleaned
+ * and de-allocated
+ **/
+void iavf_free_adminq_asq(struct iavf_hw *hw)
+{
+	iavf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+	iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ * iavf_free_adminq_arq - Free Admin Queue receive rings
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the posted receive buffers have already been cleaned
+ * and de-allocated
+ **/
+void iavf_free_adminq_arq(struct iavf_hw *hw)
+{
+	iavf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ * iavf_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ * @hw: pointer to the hardware structure
+ **/
+STATIC enum iavf_status iavf_alloc_arq_bufs(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code;
+	struct iavf_aq_desc *desc;
+	struct iavf_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = iavf_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct iavf_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct iavf_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = iavf_allocate_dma_mem(hw, bi,
+						 iavf_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 IAVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = IAVF_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(IAVF_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > IAVF_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(IAVF_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(IAVF_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(IAVF_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ * iavf_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ * @hw: pointer to the hardware structure
+ **/
+STATIC enum iavf_status iavf_alloc_asq_bufs(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code;
+	struct iavf_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = iavf_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct iavf_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct iavf_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = iavf_allocate_dma_mem(hw, bi,
+						 iavf_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 IAVF_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ * iavf_free_arq_bufs - Free receive queue buffer info elements
+ * @hw: pointer to the hardware structure
+ **/
+STATIC void iavf_free_arq_bufs(struct iavf_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		iavf_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	iavf_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	iavf_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ * iavf_free_asq_bufs - Free send queue buffer info elements
+ * @hw: pointer to the hardware structure
+ **/
+STATIC void iavf_free_asq_bufs(struct iavf_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			iavf_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	iavf_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	iavf_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	iavf_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ * iavf_config_asq_regs - configure ASQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the transmit queue
+ **/
+STATIC enum iavf_status iavf_config_asq_regs(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code = IAVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  IAVF_VF_ATQLEN1_ATQENABLE_MASK));
+	wr32(hw, hw->aq.asq.bal, IAVF_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, IAVF_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != IAVF_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ * iavf_config_arq_regs - ARQ register configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum iavf_status iavf_config_arq_regs(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code = IAVF_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  IAVF_VF_ARQLEN1_ARQENABLE_MASK));
+	wr32(hw, hw->aq.arq.bal, IAVF_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, IAVF_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != IAVF_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = IAVF_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ * iavf_init_asq - main initialization routine for ASQ
+ * @hw: pointer to the hardware structure
+ *
+ * This is the main initialization routine for the Admin Send Queue
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the hw->aq structure:
+ *  - hw->aq.num_asq_entries
+ *  - hw->aq.arq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ **/
+enum iavf_status iavf_init_asq(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code = IAVF_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = IAVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = IAVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = iavf_alloc_adminq_asq_ring(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = iavf_alloc_asq_bufs(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = iavf_config_asq_regs(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_config_regs;
+
+	/* success! */
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	iavf_free_adminq_asq(hw);
+	return ret_code;
+
+init_config_regs:
+	iavf_free_asq_bufs(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ * iavf_init_arq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the hw->aq structure:
+ *  - hw->aq.num_asq_entries
+ *  - hw->aq.arq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ **/
+enum iavf_status iavf_init_arq(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code = IAVF_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = IAVF_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = IAVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	/* allocate the ring memory */
+	ret_code = iavf_alloc_adminq_arq_ring(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = iavf_alloc_arq_bufs(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = iavf_config_arq_regs(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	iavf_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ * iavf_shutdown_asq - shutdown the ASQ
+ * @hw: pointer to the hardware structure
+ *
+ * The main shutdown routine for the Admin Send Queue
+ **/
+enum iavf_status iavf_shutdown_asq(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code = IAVF_SUCCESS;
+
+	iavf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (hw->aq.asq.count == 0) {
+		ret_code = IAVF_ERR_NOT_READY;
+		goto shutdown_asq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	iavf_free_asq_bufs(hw);
+
+shutdown_asq_out:
+	iavf_release_spinlock(&hw->aq.asq_spinlock);
+	return ret_code;
+}
+
+/**
+ * iavf_shutdown_arq - shutdown ARQ
+ * @hw: pointer to the hardware structure
+ *
+ * The main shutdown routine for the Admin Receive Queue
+ **/
+enum iavf_status iavf_shutdown_arq(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code = IAVF_SUCCESS;
+
+	iavf_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	if (hw->aq.arq.count == 0) {
+		ret_code = IAVF_ERR_NOT_READY;
+		goto shutdown_arq_out;
+	}
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	iavf_free_arq_bufs(hw);
+
+shutdown_arq_out:
+	iavf_release_spinlock(&hw->aq.arq_spinlock);
+	return ret_code;
+}
+
+/**
+ * iavf_init_adminq - main initialization routine for Admin Queue
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the hw->aq structure:
+ *  - hw->aq.num_asq_entries
+ *  - hw->aq.num_arq_entries
+ *  - hw->aq.arq_buf_size
+ *  - hw->aq.asq_buf_size
+ **/
+enum iavf_status iavf_init_adminq(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code;
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = IAVF_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+	iavf_init_spinlock(&hw->aq.asq_spinlock);
+	iavf_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	iavf_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = IAVF_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = iavf_init_asq(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = iavf_init_arq(hw);
+	if (ret_code != IAVF_SUCCESS)
+		goto init_adminq_free_asq;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_asq:
+	iavf_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	iavf_destroy_spinlock(&hw->aq.asq_spinlock);
+	iavf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ * iavf_shutdown_adminq - shutdown routine for the Admin Queue
+ * @hw: pointer to the hardware structure
+ **/
+enum iavf_status iavf_shutdown_adminq(struct iavf_hw *hw)
+{
+	enum iavf_status ret_code = IAVF_SUCCESS;
+
+	if (iavf_check_asq_alive(hw))
+		iavf_aq_queue_shutdown(hw, true);
+
+	iavf_shutdown_asq(hw);
+	iavf_shutdown_arq(hw);
+	iavf_destroy_spinlock(&hw->aq.asq_spinlock);
+	iavf_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
+
+/**
+ * iavf_clean_asq - cleans Admin send queue
+ * @hw: pointer to the hardware structure
+ *
+ * returns the number of free desc
+ **/
+u16 iavf_clean_asq(struct iavf_hw *hw)
+{
+	struct iavf_adminq_ring *asq = &(hw->aq.asq);
+	struct iavf_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct iavf_aq_desc desc_cb;
+	struct iavf_aq_desc *desc;
+
+	desc = IAVF_ADMINQ_DESC(*asq, ntc);
+	details = IAVF_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
+			   "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			IAVF_ADMINQ_CALLBACK cb_func =
+				(IAVF_ADMINQ_CALLBACK)details->callback;
+			iavf_memcpy(&desc_cb, desc, sizeof(struct iavf_aq_desc),
+				    IAVF_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		iavf_memset(desc, 0, sizeof(*desc), IAVF_DMA_MEM);
+		iavf_memset(details, 0, sizeof(*details), IAVF_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = IAVF_ADMINQ_DESC(*asq, ntc);
+		details = IAVF_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return IAVF_DESC_UNUSED(asq);
+}
+
+/**
+ * iavf_asq_done - check if FW has processed the Admin Send Queue
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ **/
+bool iavf_asq_done(struct iavf_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ * iavf_asq_send_command - send command to Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buff: buffer to use for indirect commands
+ * @buff_size: size of buffer for indirect commands
+ * @cmd_details: pointer to command details structure
+ *
+ * This is the main send command driver routine for the Admin Queue send
+ * queue.  It runs the queue, cleans the queue, etc
+ **/
+enum iavf_status iavf_asq_send_command(struct iavf_hw *hw,
+				       struct iavf_aq_desc *desc,
+				       void *buff, /* can be NULL */
+				       u16 buff_size,
+				       struct iavf_asq_cmd_details *cmd_details)
+{
+	enum iavf_status status = IAVF_SUCCESS;
+	struct iavf_dma_mem *dma_buff = NULL;
+	struct iavf_asq_cmd_details *details;
+	struct iavf_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16 retval = 0;
+	u32 val = 0;
+
+	iavf_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq_last_status = IAVF_AQ_RC_OK;
+
+	if (hw->aq.asq.count == 0) {
+		iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = IAVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = IAVF_ERR_QUEUE_EMPTY;
+		goto asq_send_command_error;
+	}
+
+	details = IAVF_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		iavf_memcpy(details,
+			    cmd_details,
+			    sizeof(struct iavf_asq_cmd_details),
+			    IAVF_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie.  The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(IAVF_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(IAVF_LO_DWORD(details->cookie));
+		}
+	} else {
+		iavf_memset(details, 0,
+			    sizeof(struct iavf_asq_cmd_details),
+			    IAVF_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		iavf_debug(hw,
+			   IAVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = IAVF_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		iavf_debug(hw,
+			   IAVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = IAVF_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (iavf_clean_asq(hw) == 0) {
+		iavf_debug(hw,
+			   IAVF_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = IAVF_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = IAVF_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	iavf_memcpy(desc_on_ring, desc, sizeof(struct iavf_aq_desc),
+		    IAVF_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		iavf_memcpy(dma_buff->va, buff, buff_size,
+			    IAVF_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+			CPU_TO_LE32(IAVF_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+			CPU_TO_LE32(IAVF_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (iavf_asq_done(hw))
+				break;
+			iavf_usec_delay(50);
+			total_delay += 50;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (iavf_asq_done(hw)) {
+		iavf_memcpy(desc, desc_on_ring, sizeof(struct iavf_aq_desc),
+			    IAVF_DMA_TO_NONDMA);
+		if (buff != NULL)
+			iavf_memcpy(buff, dma_buff->va, buff_size,
+				    IAVF_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			iavf_debug(hw,
+				   IAVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum iavf_admin_queue_err)retval == IAVF_AQ_RC_OK)
+			status = IAVF_SUCCESS;
+		else if ((enum iavf_admin_queue_err)retval == IAVF_AQ_RC_EBUSY)
+			status = IAVF_ERR_NOT_READY;
+		else
+			status = IAVF_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum iavf_admin_queue_err)retval;
+	}
+
+	iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	iavf_debug_aq(hw, IAVF_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* save writeback aq if requested */
+	if (details->wb_desc)
+		iavf_memcpy(details->wb_desc, desc_on_ring,
+			    sizeof(struct iavf_aq_desc), IAVF_DMA_TO_NONDMA);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		if (rd32(hw, hw->aq.asq.len) & IAVF_VF_ATQLEN1_ATQCRIT_MASK) {
+			iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: AQ Critical error.\n");
+			status = IAVF_ERR_ADMIN_QUEUE_CRITICAL_ERROR;
+		} else {
+			iavf_debug(hw, IAVF_DEBUG_AQ_MESSAGE,
+				   "AQTX: Writeback timeout.\n");
+			status = IAVF_ERR_ADMIN_QUEUE_TIMEOUT;
+		}
+	}

*** 12829 LINES SKIPPED ***
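Only the beginning of the new iavf_adminq.c appears above; the archive truncates the rest of the patch. As a sketch of one way to inspect the full change locally (assuming a clone of the FreeBSD src repository at /usr/src with the stable/13 branch fetched):

    # show the complete commit, including the files omitted above
    git -C /usr/src show 41423f3a62e3f21f15afec6a0046c95c91c45b60

    # or just the list of files it touches
    git -C /usr/src show --stat 41423f3a62e3f21f15afec6a0046c95c91c45b60

The cgit URL near the top of this message shows the same commit in a browser.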