From: Bruce Richardson <bruce.richardson@intel.com>
To: dev@dpdk.org
Date: Fri, 15 May 2015 16:56:52 +0100
Message-Id: <1431705423-16134-9-git-send-email-bruce.richardson@intel.com>
X-Mailer: git-send-email 1.7.4.1
In-Reply-To: <1431705423-16134-1-git-send-email-bruce.richardson@intel.com>
References: <1431450315-13179-1-git-send-email-bruce.richardson@intel.com>
 <1431705423-16134-1-git-send-email-bruce.richardson@intel.com>
Subject: [dpdk-dev] [PATCH v2 08/19] i40e: move i40e PMD to drivers/net directory

Move the i40e PMD to the drivers/net directory. As part of the move, rename
the directory containing the "base driver" code from "i40e" to "base".
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 drivers/net/Makefile                         |    2 +-
 drivers/net/i40e/Makefile                    |  105 +
 drivers/net/i40e/base/i40e_adminq.c          | 1084 +++++
 drivers/net/i40e/base/i40e_adminq.h          |  157 +
 drivers/net/i40e/base/i40e_adminq_cmd.h      | 2179 ++++++++++
 drivers/net/i40e/base/i40e_alloc.h           |   65 +
 drivers/net/i40e/base/i40e_common.c          | 4793 +++++++++++++++++++++
 drivers/net/i40e/base/i40e_dcb.c             |  479 +++
 drivers/net/i40e/base/i40e_dcb.h             |  161 +
 drivers/net/i40e/base/i40e_diag.c            |  178 +
 drivers/net/i40e/base/i40e_diag.h            |   61 +
 drivers/net/i40e/base/i40e_hmc.c             |  373 ++
 drivers/net/i40e/base/i40e_hmc.h             |  243 ++
 drivers/net/i40e/base/i40e_lan_hmc.c         | 1417 +++++++
 drivers/net/i40e/base/i40e_lan_hmc.h         |  200 +
 drivers/net/i40e/base/i40e_nvm.c             |  940 +++++
 drivers/net/i40e/base/i40e_osdep.h           |  197 +
 drivers/net/i40e/base/i40e_prototype.h       |  430 ++
 drivers/net/i40e/base/i40e_register.h        | 3377 +++++++++++++++
 drivers/net/i40e/base/i40e_status.h          |  107 +
 drivers/net/i40e/base/i40e_type.h            | 1425 +++++++
 drivers/net/i40e/base/i40e_virtchnl.h        |  373 ++
 drivers/net/i40e/i40e_ethdev.c               | 5716 ++++++++++++++++++++++++++
 drivers/net/i40e/i40e_ethdev.h               |  567 +++
 drivers/net/i40e/i40e_ethdev_vf.c            | 1893 +++++++++
 drivers/net/i40e/i40e_fdir.c                 | 1361 ++++++
 drivers/net/i40e/i40e_logs.h                 |   78 +
 drivers/net/i40e/i40e_pf.c                   | 1063 +++++
 drivers/net/i40e/i40e_pf.h                   |  127 +
 drivers/net/i40e/i40e_rxtx.c                 | 2709 ++++++++++++
 drivers/net/i40e/i40e_rxtx.h                 |  211 +
 drivers/net/i40e/rte_pmd_i40e_version.map    |    4 +
 lib/Makefile                                 |    1 -
 lib/librte_pmd_i40e/Makefile                 |  105 -
 lib/librte_pmd_i40e/i40e/i40e_adminq.c       | 1084 -----
 lib/librte_pmd_i40e/i40e/i40e_adminq.h       |  157 -
 lib/librte_pmd_i40e/i40e/i40e_adminq_cmd.h   | 2179 ----------
 lib/librte_pmd_i40e/i40e/i40e_alloc.h        |   65 -
 lib/librte_pmd_i40e/i40e/i40e_common.c       | 4793 ---------------------
 lib/librte_pmd_i40e/i40e/i40e_dcb.c          |  479 ---
 lib/librte_pmd_i40e/i40e/i40e_dcb.h          |  161 -
 lib/librte_pmd_i40e/i40e/i40e_diag.c         |  178 -
 lib/librte_pmd_i40e/i40e/i40e_diag.h         |   61 -
 lib/librte_pmd_i40e/i40e/i40e_hmc.c          |  373 --
 lib/librte_pmd_i40e/i40e/i40e_hmc.h          |  243 --
 lib/librte_pmd_i40e/i40e/i40e_lan_hmc.c      | 1417 -------
 lib/librte_pmd_i40e/i40e/i40e_lan_hmc.h      |  200 -
 lib/librte_pmd_i40e/i40e/i40e_nvm.c          |  940 -----
 lib/librte_pmd_i40e/i40e/i40e_osdep.h        |  197 -
 lib/librte_pmd_i40e/i40e/i40e_prototype.h    |  430 --
 lib/librte_pmd_i40e/i40e/i40e_register.h     | 3377 ---------------
 lib/librte_pmd_i40e/i40e/i40e_status.h       |  107 -
 lib/librte_pmd_i40e/i40e/i40e_type.h         | 1425 -------
 lib/librte_pmd_i40e/i40e/i40e_virtchnl.h     |  373 --
 lib/librte_pmd_i40e/i40e_ethdev.c            | 5716 --------------------------
 lib/librte_pmd_i40e/i40e_ethdev.h            |  567 ---
 lib/librte_pmd_i40e/i40e_ethdev_vf.c         | 1893 ---------
 lib/librte_pmd_i40e/i40e_fdir.c              | 1361 ------
 lib/librte_pmd_i40e/i40e_logs.h              |   78 -
 lib/librte_pmd_i40e/i40e_pf.c                | 1063 -----
 lib/librte_pmd_i40e/i40e_pf.h                |  127 -
 lib/librte_pmd_i40e/i40e_rxtx.c              | 2709 ------------
 lib/librte_pmd_i40e/i40e_rxtx.h              |  211 -
 lib/librte_pmd_i40e/rte_pmd_i40e_version.map |    4 -
 64 files changed, 32074 insertions(+), 32075 deletions(-)
 create mode 100644 drivers/net/i40e/Makefile
 create mode 100644 drivers/net/i40e/base/i40e_adminq.c
 create mode 100644 drivers/net/i40e/base/i40e_adminq.h
 create mode 100644 drivers/net/i40e/base/i40e_adminq_cmd.h
 create mode 100644 drivers/net/i40e/base/i40e_alloc.h
 create mode 100644 drivers/net/i40e/base/i40e_common.c
 create mode 100644 drivers/net/i40e/base/i40e_dcb.c
 create mode 100644 drivers/net/i40e/base/i40e_dcb.h
 create mode 100644 drivers/net/i40e/base/i40e_diag.c
 create mode 100644 drivers/net/i40e/base/i40e_diag.h
 create mode 100644 drivers/net/i40e/base/i40e_hmc.c
 create mode 100644 drivers/net/i40e/base/i40e_hmc.h
 create mode 100644 drivers/net/i40e/base/i40e_lan_hmc.c
 create mode 100644 drivers/net/i40e/base/i40e_lan_hmc.h
 create mode 100644 drivers/net/i40e/base/i40e_nvm.c
 create mode 100644 drivers/net/i40e/base/i40e_osdep.h
 create mode 100644 drivers/net/i40e/base/i40e_prototype.h
 create mode 100644 drivers/net/i40e/base/i40e_register.h
 create mode 100644 drivers/net/i40e/base/i40e_status.h
 create mode 100644 drivers/net/i40e/base/i40e_type.h
 create mode 100644 drivers/net/i40e/base/i40e_virtchnl.h
 create mode 100644 drivers/net/i40e/i40e_ethdev.c
 create mode 100644 drivers/net/i40e/i40e_ethdev.h
 create mode 100644 drivers/net/i40e/i40e_ethdev_vf.c
 create mode 100644 drivers/net/i40e/i40e_fdir.c
 create mode 100644 drivers/net/i40e/i40e_logs.h
 create mode 100644 drivers/net/i40e/i40e_pf.c
 create mode 100644 drivers/net/i40e/i40e_pf.h
 create mode 100644 drivers/net/i40e/i40e_rxtx.c
 create mode 100644 drivers/net/i40e/i40e_rxtx.h
 create mode 100644 drivers/net/i40e/rte_pmd_i40e_version.map
 delete mode 100644 lib/librte_pmd_i40e/Makefile
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_adminq.c
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_adminq.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_adminq_cmd.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_alloc.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_common.c
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_dcb.c
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_dcb.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_diag.c
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_diag.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_hmc.c
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_hmc.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_lan_hmc.c
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_lan_hmc.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_nvm.c
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_osdep.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_prototype.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_register.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_status.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_type.h
 delete mode 100644 lib/librte_pmd_i40e/i40e/i40e_virtchnl.h
 delete mode 100644 lib/librte_pmd_i40e/i40e_ethdev.c
 delete mode 100644 lib/librte_pmd_i40e/i40e_ethdev.h
 delete mode 100644 lib/librte_pmd_i40e/i40e_ethdev_vf.c
 delete mode 100644 lib/librte_pmd_i40e/i40e_fdir.c
 delete mode 100644 lib/librte_pmd_i40e/i40e_logs.h
 delete mode 100644 lib/librte_pmd_i40e/i40e_pf.c
 delete mode 100644 lib/librte_pmd_i40e/i40e_pf.h
 delete mode 100644 lib/librte_pmd_i40e/i40e_rxtx.c
 delete mode 100644 lib/librte_pmd_i40e/i40e_rxtx.h
 delete mode 100644 lib/librte_pmd_i40e/rte_pmd_i40e_version.map

diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 6760034..7ce6685 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -36,8 +36,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += bonding
 DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += e1000
 DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic
 DIRS-$(CONFIG_RTE_LIBRTE_FM10K_PMD) += fm10k
+DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e
 #DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe
-#DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e
 #DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += librte_pmd_mlx4
 #DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring
 #DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += librte_pmd_pcap

diff --git a/drivers/net/i40e/Makefile b/drivers/net/i40e/Makefile
new file mode 100644
index 0000000..37a1c33
--- /dev/null
+++ b/drivers/net/i40e/Makefile
@@ -0,0 +1,105 @@
+# BSD LICENSE
+#
+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_i40e.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_i40e_version.map
+
+LIBABIVER := 1
+
+#
+# Add extra flags for base driver files (also known as shared code)
+# to disable warnings
+#
+ifeq ($(CC), icc)
+CFLAGS_BASE_DRIVER = -wd593
+else ifeq ($(CC), clang)
+CFLAGS_BASE_DRIVER += -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+else
+CFLAGS_BASE_DRIVER = -Wno-sign-compare
+CFLAGS_BASE_DRIVER += -Wno-unused-value
+CFLAGS_BASE_DRIVER += -Wno-unused-parameter
+CFLAGS_BASE_DRIVER += -Wno-strict-aliasing
+CFLAGS_BASE_DRIVER += -Wno-format
+CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers
+CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast
+CFLAGS_BASE_DRIVER += -Wno-format-nonliteral
+CFLAGS_BASE_DRIVER += -Wno-format-security
+
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
+CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
+
+CFLAGS_i40e_lan_hmc.o += -Wno-error
+endif
+OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/base/*.c)))
+$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))
+
+VPATH += $(SRCDIR)/base
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_adminq.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_common.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_diag.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_hmc.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_lan_hmc.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_nvm.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_dcb.c
+
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_ethdev_vf.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_pf.c
+SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_fdir.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_net lib/librte_malloc
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/i40e/base/i40e_adminq.c b/drivers/net/i40e/base/i40e_adminq.c
new file mode 100644
index 0000000..e098ed6
--- /dev/null
+++ b/drivers/net/i40e/base/i40e_adminq.c
@@ -0,0 +1,1084 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2014, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#include "i40e_status.h"
+#include "i40e_type.h"
+#include "i40e_register.h"
+#include "i40e_adminq.h"
+#include "i40e_prototype.h"
+
+#ifndef VF_DRIVER
+/**
+ * i40e_is_nvm_update_op - return true if this is an NVM update operation
+ * @desc: API request descriptor
+ **/
+STATIC INLINE bool i40e_is_nvm_update_op(struct i40e_aq_desc *desc)
+{
+	return (desc->opcode == CPU_TO_LE16(i40e_aqc_opc_nvm_erase) ||
+		desc->opcode == CPU_TO_LE16(i40e_aqc_opc_nvm_update));
+}
+
+#endif /* VF_DRIVER */
+/**
+ * i40e_adminq_init_regs - Initialize AdminQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the alloc_asq and alloc_arq functions have already been called
+ **/
+STATIC void i40e_adminq_init_regs(struct i40e_hw *hw)
+{
+	/* set head and tail registers in our local struct */
+	if (hw->mac.type == I40E_MAC_VF) {
+		hw->aq.asq.tail = I40E_VF_ATQT1;
+		hw->aq.asq.head = I40E_VF_ATQH1;
+		hw->aq.asq.len  = I40E_VF_ATQLEN1;
+		hw->aq.asq.bal  = I40E_VF_ATQBAL1;
+		hw->aq.asq.bah  = I40E_VF_ATQBAH1;
+		hw->aq.arq.tail = I40E_VF_ARQT1;
+		hw->aq.arq.head = I40E_VF_ARQH1;
+		hw->aq.arq.len  = I40E_VF_ARQLEN1;
+		hw->aq.arq.bal  = I40E_VF_ARQBAL1;
+		hw->aq.arq.bah  = I40E_VF_ARQBAH1;
+	} else {
+		hw->aq.asq.tail = I40E_PF_ATQT;
+		hw->aq.asq.head = I40E_PF_ATQH;
+		hw->aq.asq.len  = I40E_PF_ATQLEN;
+		hw->aq.asq.bal  = I40E_PF_ATQBAL;
+		hw->aq.asq.bah  = I40E_PF_ATQBAH;
+		hw->aq.arq.tail = I40E_PF_ARQT;
+		hw->aq.arq.head = I40E_PF_ARQH;
+		hw->aq.arq.len  = I40E_PF_ARQLEN;
+		hw->aq.arq.bal  = I40E_PF_ARQBAL;
+		hw->aq.arq.bah  = I40E_PF_ARQBAH;
+	}
+}
+
+/**
+ * i40e_alloc_adminq_asq_ring - Allocate Admin Queue send rings
+ * @hw: pointer to the hardware structure
+ **/
+enum i40e_status_code i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code;
+
+	ret_code = i40e_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
+					 i40e_mem_atq_ring,
+					 (hw->aq.num_asq_entries *
+					 sizeof(struct i40e_aq_desc)),
+					 I40E_ADMINQ_DESC_ALIGNMENT);
+	if (ret_code)
+		return ret_code;
+
+	ret_code = i40e_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf,
+					  (hw->aq.num_asq_entries *
+					  sizeof(struct i40e_asq_cmd_details)));
+	if (ret_code) {
+		i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+		return ret_code;
+	}
+
+	return ret_code;
+}
+
+/**
+ * i40e_alloc_adminq_arq_ring - Allocate Admin Queue receive rings
+ * @hw: pointer to the hardware structure
+ **/
+enum i40e_status_code i40e_alloc_adminq_arq_ring(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code;
+
+	ret_code = i40e_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
+					 i40e_mem_arq_ring,
+					 (hw->aq.num_arq_entries *
+					 sizeof(struct i40e_aq_desc)),
+					 I40E_ADMINQ_DESC_ALIGNMENT);
+
+	return ret_code;
+}
+
+/**
+ * i40e_free_adminq_asq - Free Admin Queue send rings
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the posted send buffers have already been cleaned
+ * and de-allocated
+ **/
+void i40e_free_adminq_asq(struct i40e_hw *hw)
+{
+	i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+}
+
+/**
+ * i40e_free_adminq_arq - Free Admin Queue receive rings
+ * @hw: pointer to the hardware structure
+ *
+ * This assumes the posted receive buffers have already been cleaned
+ * and de-allocated
+ **/
+void i40e_free_adminq_arq(struct i40e_hw *hw)
+{
+	i40e_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+}
+
+/**
+ * i40e_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue
+ * @hw: pointer to the hardware structure
+ **/
+STATIC enum i40e_status_code i40e_alloc_arq_bufs(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code;
+	struct i40e_aq_desc *desc;
+	struct i40e_dma_mem *bi;
+	int i;
+
+	/* We'll be allocating the buffer info memory first, then we can
+	 * allocate the mapped buffers for the event processing
+	 */
+
+	/* buffer_info structures do not need alignment */
+	ret_code = i40e_allocate_virt_mem(hw, &hw->aq.arq.dma_head,
+		(hw->aq.num_arq_entries * sizeof(struct i40e_dma_mem)));
+	if (ret_code)
+		goto alloc_arq_bufs;
+	hw->aq.arq.r.arq_bi = (struct i40e_dma_mem *)hw->aq.arq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_arq_entries; i++) {
+		bi = &hw->aq.arq.r.arq_bi[i];
+		ret_code = i40e_allocate_dma_mem(hw, bi,
+						 i40e_mem_arq_buf,
+						 hw->aq.arq_buf_size,
+						 I40E_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_arq_bufs;
+
+		/* now configure the descriptors for use */
+		desc = I40E_ADMINQ_DESC(hw->aq.arq, i);
+
+		desc->flags = CPU_TO_LE16(I40E_AQ_FLAG_BUF);
+		if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF)
+			desc->flags |= CPU_TO_LE16(I40E_AQ_FLAG_LB);
+		desc->opcode = 0;
+		/* This is in accordance with Admin queue design, there is no
+		 * register for buffer size configuration
+		 */
+		desc->datalen = CPU_TO_LE16((u16)bi->size);
+		desc->retval = 0;
+		desc->cookie_high = 0;
+		desc->cookie_low = 0;
+		desc->params.external.addr_high =
+			CPU_TO_LE32(I40E_HI_DWORD(bi->pa));
+		desc->params.external.addr_low =
+			CPU_TO_LE32(I40E_LO_DWORD(bi->pa));
+		desc->params.external.param0 = 0;
+		desc->params.external.param1 = 0;
+	}
+
+alloc_arq_bufs:
+	return ret_code;
+
+unwind_alloc_arq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+	i40e_free_virt_mem(hw, &hw->aq.arq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ * i40e_alloc_asq_bufs - Allocate empty buffer structs for the send queue
+ * @hw: pointer to the hardware structure
+ **/
+STATIC enum i40e_status_code i40e_alloc_asq_bufs(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code;
+	struct i40e_dma_mem *bi;
+	int i;
+
+	/* No mapped memory needed yet, just the buffer info structures */
+	ret_code = i40e_allocate_virt_mem(hw, &hw->aq.asq.dma_head,
+		(hw->aq.num_asq_entries * sizeof(struct i40e_dma_mem)));
+	if (ret_code)
+		goto alloc_asq_bufs;
+	hw->aq.asq.r.asq_bi = (struct i40e_dma_mem *)hw->aq.asq.dma_head.va;
+
+	/* allocate the mapped buffers */
+	for (i = 0; i < hw->aq.num_asq_entries; i++) {
+		bi = &hw->aq.asq.r.asq_bi[i];
+		ret_code = i40e_allocate_dma_mem(hw, bi,
+						 i40e_mem_asq_buf,
+						 hw->aq.asq_buf_size,
+						 I40E_ADMINQ_DESC_ALIGNMENT);
+		if (ret_code)
+			goto unwind_alloc_asq_bufs;
+	}
+alloc_asq_bufs:
+	return ret_code;
+
+unwind_alloc_asq_bufs:
+	/* don't try to free the one that failed... */
+	i--;
+	for (; i >= 0; i--)
+		i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+	i40e_free_virt_mem(hw, &hw->aq.asq.dma_head);
+
+	return ret_code;
+}
+
+/**
+ * i40e_free_arq_bufs - Free receive queue buffer info elements
+ * @hw: pointer to the hardware structure
+ **/
+STATIC void i40e_free_arq_bufs(struct i40e_hw *hw)
+{
+	int i;
+
+	/* free descriptors */
+	for (i = 0; i < hw->aq.num_arq_entries; i++)
+		i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]);
+
+	/* free the descriptor memory */
+	i40e_free_dma_mem(hw, &hw->aq.arq.desc_buf);
+
+	/* free the dma header */
+	i40e_free_virt_mem(hw, &hw->aq.arq.dma_head);
+}
+
+/**
+ * i40e_free_asq_bufs - Free send queue buffer info elements
+ * @hw: pointer to the hardware structure
+ **/
+STATIC void i40e_free_asq_bufs(struct i40e_hw *hw)
+{
+	int i;
+
+	/* only unmap if the address is non-NULL */
+	for (i = 0; i < hw->aq.num_asq_entries; i++)
+		if (hw->aq.asq.r.asq_bi[i].pa)
+			i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]);
+
+	/* free the buffer info list */
+	i40e_free_virt_mem(hw, &hw->aq.asq.cmd_buf);
+
+	/* free the descriptor memory */
+	i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf);
+
+	/* free the dma header */
+	i40e_free_virt_mem(hw, &hw->aq.asq.dma_head);
+}
+
+/**
+ * i40e_config_asq_regs - configure ASQ registers
+ * @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the transmit queue
+ **/
+STATIC enum i40e_status_code i40e_config_asq_regs(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+
+	/* set starting point */
+	wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries |
+				  I40E_PF_ATQLEN_ATQENABLE_MASK));
+	wr32(hw, hw->aq.asq.bal, I40E_LO_DWORD(hw->aq.asq.desc_buf.pa));
+	wr32(hw, hw->aq.asq.bah, I40E_HI_DWORD(hw->aq.asq.desc_buf.pa));
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.asq.bal);
+	if (reg != I40E_LO_DWORD(hw->aq.asq.desc_buf.pa))
+		ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ * i40e_config_arq_regs - ARQ register configuration
+ * @hw: pointer to the hardware structure
+ *
+ * Configure base address and length registers for the receive (event queue)
+ **/
+STATIC enum i40e_status_code i40e_config_arq_regs(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+	u32 reg = 0;
+
+	/* Clear Head and Tail */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+
+	/* set starting point */
+	wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries |
+				  I40E_PF_ARQLEN_ARQENABLE_MASK));
+	wr32(hw, hw->aq.arq.bal, I40E_LO_DWORD(hw->aq.arq.desc_buf.pa));
+	wr32(hw, hw->aq.arq.bah, I40E_HI_DWORD(hw->aq.arq.desc_buf.pa));
+
+	/* Update tail in the HW to post pre-allocated buffers */
+	wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1);
+
+	/* Check one register to verify that config was applied */
+	reg = rd32(hw, hw->aq.arq.bal);
+	if (reg != I40E_LO_DWORD(hw->aq.arq.desc_buf.pa))
+		ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
+
+	return ret_code;
+}
+
+/**
+ * i40e_init_asq - main initialization routine for ASQ
+ * @hw: pointer to the hardware structure
+ *
+ * This is the main initialization routine for the Admin Send Queue
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ **/
+enum i40e_status_code i40e_init_asq(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+
+	if (hw->aq.asq.count > 0) {
+		/* queue already initialized */
+		ret_code = I40E_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = I40E_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+	hw->aq.asq.count = hw->aq.num_asq_entries;
+
+	/* allocate the ring memory */
+	ret_code = i40e_alloc_adminq_asq_ring(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = i40e_alloc_asq_bufs(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = i40e_config_asq_regs(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	i40e_free_adminq_asq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ * i40e_init_arq - initialize ARQ
+ * @hw: pointer to the hardware structure
+ *
+ * The main initialization routine for the Admin Receive (Event) Queue.
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.arq_buf_size
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ **/
+enum i40e_status_code i40e_init_arq(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+
+	if (hw->aq.arq.count > 0) {
+		/* queue already initialized */
+		ret_code = I40E_ERR_NOT_READY;
+		goto init_adminq_exit;
+	}
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0)) {
+		ret_code = I40E_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+	hw->aq.arq.count = hw->aq.num_arq_entries;
+
+	/* allocate the ring memory */
+	ret_code = i40e_alloc_adminq_arq_ring(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_exit;
+
+	/* allocate buffers in the rings */
+	ret_code = i40e_alloc_arq_bufs(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* initialize base registers */
+	ret_code = i40e_config_arq_regs(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_free_rings;
+
+	/* success! */
+	goto init_adminq_exit;
+
+init_adminq_free_rings:
+	i40e_free_adminq_arq(hw);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ * i40e_shutdown_asq - shutdown the ASQ
+ * @hw: pointer to the hardware structure
+ *
+ * The main shutdown routine for the Admin Send Queue
+ **/
+enum i40e_status_code i40e_shutdown_asq(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+
+	if (hw->aq.asq.count == 0)
+		return I40E_ERR_NOT_READY;
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.asq.head, 0);
+	wr32(hw, hw->aq.asq.tail, 0);
+	wr32(hw, hw->aq.asq.len, 0);
+	wr32(hw, hw->aq.asq.bal, 0);
+	wr32(hw, hw->aq.asq.bah, 0);
+
+	/* make sure spinlock is available */
+	i40e_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	hw->aq.asq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	i40e_free_asq_bufs(hw);
+
+	i40e_release_spinlock(&hw->aq.asq_spinlock);
+
+	return ret_code;
+}
+
+/**
+ * i40e_shutdown_arq - shutdown ARQ
+ * @hw: pointer to the hardware structure
+ *
+ * The main shutdown routine for the Admin Receive Queue
+ **/
+enum i40e_status_code i40e_shutdown_arq(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+
+	if (hw->aq.arq.count == 0)
+		return I40E_ERR_NOT_READY;
+
+	/* Stop firmware AdminQ processing */
+	wr32(hw, hw->aq.arq.head, 0);
+	wr32(hw, hw->aq.arq.tail, 0);
+	wr32(hw, hw->aq.arq.len, 0);
+	wr32(hw, hw->aq.arq.bal, 0);
+	wr32(hw, hw->aq.arq.bah, 0);
+
+	/* make sure spinlock is available */
+	i40e_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	hw->aq.arq.count = 0; /* to indicate uninitialized queue */
+
+	/* free ring buffers */
+	i40e_free_arq_bufs(hw);
+
+	i40e_release_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
+
+/**
+ * i40e_init_adminq - main initialization routine for Admin Queue
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, drivers *MUST* set the following fields
+ * in the hw->aq structure:
+ *     - hw->aq.num_asq_entries
+ *     - hw->aq.num_arq_entries
+ *     - hw->aq.arq_buf_size
+ *     - hw->aq.asq_buf_size
+ **/
+enum i40e_status_code i40e_init_adminq(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code;
+#ifndef VF_DRIVER
+	u16 eetrack_lo, eetrack_hi;
+	int retry = 0;
+#endif
+
+	/* verify input for valid configuration */
+	if ((hw->aq.num_arq_entries == 0) ||
+	    (hw->aq.num_asq_entries == 0) ||
+	    (hw->aq.arq_buf_size == 0) ||
+	    (hw->aq.asq_buf_size == 0)) {
+		ret_code = I40E_ERR_CONFIG;
+		goto init_adminq_exit;
+	}
+
+	/* initialize spin locks */
+	i40e_init_spinlock(&hw->aq.asq_spinlock);
+	i40e_init_spinlock(&hw->aq.arq_spinlock);
+
+	/* Set up register offsets */
+	i40e_adminq_init_regs(hw);
+
+	/* setup ASQ command write back timeout */
+	hw->aq.asq_cmd_timeout = I40E_ASQ_CMD_TIMEOUT;
+
+	/* allocate the ASQ */
+	ret_code = i40e_init_asq(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_destroy_spinlocks;
+
+	/* allocate the ARQ */
+	ret_code = i40e_init_arq(hw);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_free_asq;
+
+#ifndef VF_DRIVER
+	/* There are some cases where the firmware may not be quite ready
+	 * for AdminQ operations, so we retry the AdminQ setup a few times
+	 * if we see timeouts in this first AQ call.
+	 */
+	do {
+		ret_code = i40e_aq_get_firmware_version(hw,
+							&hw->aq.fw_maj_ver,
+							&hw->aq.fw_min_ver,
+							&hw->aq.api_maj_ver,
+							&hw->aq.api_min_ver,
+							NULL);
+		if (ret_code != I40E_ERR_ADMIN_QUEUE_TIMEOUT)
+			break;
+		retry++;
+		i40e_msec_delay(100);
+		i40e_resume_aq(hw);
+	} while (retry < 10);
+	if (ret_code != I40E_SUCCESS)
+		goto init_adminq_free_arq;
+
+	/* get the NVM version info */
+	i40e_read_nvm_word(hw, I40E_SR_NVM_IMAGE_VERSION, &hw->nvm.version);
+	i40e_read_nvm_word(hw, I40E_SR_NVM_EETRACK_LO, &eetrack_lo);
+	i40e_read_nvm_word(hw, I40E_SR_NVM_EETRACK_HI, &eetrack_hi);
+	hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo;
+
+	if (hw->aq.api_maj_ver > I40E_FW_API_VERSION_MAJOR) {
+		ret_code = I40E_ERR_FIRMWARE_API_VERSION;
+		goto init_adminq_free_arq;
+	}
+
+	/* pre-emptive resource lock release */
+	i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0, NULL);
+	hw->aq.nvm_busy = false;
+
+	ret_code = i40e_aq_set_hmc_resource_profile(hw,
+						    I40E_HMC_PROFILE_DEFAULT,
+						    0,
+						    NULL);
+	ret_code = I40E_SUCCESS;
+
+#endif /* VF_DRIVER */
+	/* success! */
+	goto init_adminq_exit;
+
+#ifndef VF_DRIVER
+init_adminq_free_arq:
+	i40e_shutdown_arq(hw);
+#endif
+init_adminq_free_asq:
+	i40e_shutdown_asq(hw);
+init_adminq_destroy_spinlocks:
+	i40e_destroy_spinlock(&hw->aq.asq_spinlock);
+	i40e_destroy_spinlock(&hw->aq.arq_spinlock);
+
+init_adminq_exit:
+	return ret_code;
+}
+
+/**
+ * i40e_shutdown_adminq - shutdown routine for the Admin Queue
+ * @hw: pointer to the hardware structure
+ **/
+enum i40e_status_code i40e_shutdown_adminq(struct i40e_hw *hw)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+
+	if (i40e_check_asq_alive(hw))
+		i40e_aq_queue_shutdown(hw, true);
+
+	i40e_shutdown_asq(hw);
+	i40e_shutdown_arq(hw);
+
+	/* destroy the spinlocks */
+	i40e_destroy_spinlock(&hw->aq.asq_spinlock);
+	i40e_destroy_spinlock(&hw->aq.arq_spinlock);
+
+	return ret_code;
+}
+
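[For orientation only; this sketch is editorial and not part of the diff. The
init path above is driven by the ethdev code elsewhere in this patch; a
minimal caller, assuming only the four mandatory hw->aq fields that the
i40e_init_adminq() comment requires, could look like the following. The
function name and the sizing values are illustrative, not defaults taken
from this code.]

/* Illustrative bring-up/teardown of the admin queue pair. Entry counts
 * and buffer sizes are example choices, not values mandated by the patch.
 */
static enum i40e_status_code example_aq_bringup(struct i40e_hw *hw)
{
	enum i40e_status_code ret;

	/* i40e_init_adminq() returns I40E_ERR_CONFIG if any of these
	 * four fields is left at zero, so set them first.
	 */
	hw->aq.num_asq_entries = 128;
	hw->aq.num_arq_entries = 128;
	hw->aq.asq_buf_size = I40E_AQ_LARGE_BUF;
	hw->aq.arq_buf_size = I40E_AQ_LARGE_BUF;

	ret = i40e_init_adminq(hw);
	if (ret != I40E_SUCCESS)
		return ret;

	/* ... issue commands, process events ... */

	/* sends a queue_shutdown request first if the ASQ is still alive */
	return i40e_shutdown_adminq(hw);
}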
+/**
+ * i40e_clean_asq - cleans Admin send queue
+ * @hw: pointer to the hardware structure
+ *
+ * returns the number of free desc
+ **/
+u16 i40e_clean_asq(struct i40e_hw *hw)
+{
+	struct i40e_adminq_ring *asq = &(hw->aq.asq);
+	struct i40e_asq_cmd_details *details;
+	u16 ntc = asq->next_to_clean;
+	struct i40e_aq_desc desc_cb;
+	struct i40e_aq_desc *desc;
+
+	desc = I40E_ADMINQ_DESC(*asq, ntc);
+	details = I40E_ADMINQ_DETAILS(*asq, ntc);
+	while (rd32(hw, hw->aq.asq.head) != ntc) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+			   "%s: ntc %d head %d.\n", __FUNCTION__, ntc,
+			   rd32(hw, hw->aq.asq.head));
+
+		if (details->callback) {
+			I40E_ADMINQ_CALLBACK cb_func =
+				(I40E_ADMINQ_CALLBACK)details->callback;
+			i40e_memcpy(&desc_cb, desc,
+				    sizeof(struct i40e_aq_desc), I40E_DMA_TO_DMA);
+			cb_func(hw, &desc_cb);
+		}
+		i40e_memset(desc, 0, sizeof(*desc), I40E_DMA_MEM);
+		i40e_memset(details, 0, sizeof(*details), I40E_NONDMA_MEM);
+		ntc++;
+		if (ntc == asq->count)
+			ntc = 0;
+		desc = I40E_ADMINQ_DESC(*asq, ntc);
+		details = I40E_ADMINQ_DETAILS(*asq, ntc);
+	}
+
+	asq->next_to_clean = ntc;
+
+	return I40E_DESC_UNUSED(asq);
+}
+
+/**
+ * i40e_asq_done - check if FW has processed the Admin Send Queue
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ **/
+bool i40e_asq_done(struct i40e_hw *hw)
+{
+	/* AQ designers suggest use of head for better
+	 * timing reliability than DD bit
+	 */
+	return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use;
+
+}
+
+/**
+ * i40e_asq_send_command - send command to Admin Queue
+ * @hw: pointer to the hw struct
+ * @desc: prefilled descriptor describing the command (non DMA mem)
+ * @buff: buffer to use for indirect commands
+ * @buff_size: size of buffer for indirect commands
+ * @cmd_details: pointer to command details structure
+ *
+ * This is the main send command driver routine for the Admin Queue send
+ * queue. It runs the queue, cleans the queue, etc
+ **/
+enum i40e_status_code i40e_asq_send_command(struct i40e_hw *hw,
+				struct i40e_aq_desc *desc,
+				void *buff, /* can be NULL */
+				u16  buff_size,
+				struct i40e_asq_cmd_details *cmd_details)
+{
+	enum i40e_status_code status = I40E_SUCCESS;
+	struct i40e_dma_mem *dma_buff = NULL;
+	struct i40e_asq_cmd_details *details;
+	struct i40e_aq_desc *desc_on_ring;
+	bool cmd_completed = false;
+	u16  retval = 0;
+	u32  val = 0;
+
+	val = rd32(hw, hw->aq.asq.head);
+	if (val >= hw->aq.num_asq_entries) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+			   "AQTX: head overrun at %d\n", val);
+		status = I40E_ERR_QUEUE_EMPTY;
+		goto asq_send_command_exit;
+	}
+
+	if (hw->aq.asq.count == 0) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+			   "AQTX: Admin queue not initialized.\n");
+		status = I40E_ERR_QUEUE_EMPTY;
+		goto asq_send_command_exit;
+	}
+
+#ifndef VF_DRIVER
+	if (i40e_is_nvm_update_op(desc) && hw->aq.nvm_busy) {
+		i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: NVM busy.\n");
+		status = I40E_ERR_NVM;
+		goto asq_send_command_exit;
+	}
+
+#endif
+	details = I40E_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
+	if (cmd_details) {
+		i40e_memcpy(details,
+			    cmd_details,
+			    sizeof(struct i40e_asq_cmd_details),
+			    I40E_NONDMA_TO_NONDMA);
+
+		/* If the cmd_details are defined copy the cookie. The
+		 * CPU_TO_LE32 is not needed here because the data is ignored
+		 * by the FW, only used by the driver
+		 */
+		if (details->cookie) {
+			desc->cookie_high =
+				CPU_TO_LE32(I40E_HI_DWORD(details->cookie));
+			desc->cookie_low =
+				CPU_TO_LE32(I40E_LO_DWORD(details->cookie));
+		}
+	} else {
+		i40e_memset(details, 0,
+			    sizeof(struct i40e_asq_cmd_details),
+			    I40E_NONDMA_MEM);
+	}
+
+	/* clear requested flags and then set additional flags if defined */
+	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
+	desc->flags |= CPU_TO_LE16(details->flags_ena);
+
+	i40e_acquire_spinlock(&hw->aq.asq_spinlock);
+
+	if (buff_size > hw->aq.asq_buf_size) {
+		i40e_debug(hw,
+			   I40E_DEBUG_AQ_MESSAGE,
+			   "AQTX: Invalid buffer size: %d.\n",
+			   buff_size);
+		status = I40E_ERR_INVALID_SIZE;
+		goto asq_send_command_error;
+	}
+
+	if (details->postpone && !details->async) {
+		i40e_debug(hw,
+			   I40E_DEBUG_AQ_MESSAGE,
+			   "AQTX: Async flag not set along with postpone flag");
+		status = I40E_ERR_PARAM;
+		goto asq_send_command_error;
+	}
+
+	/* call clean and check queue available function to reclaim the
+	 * descriptors that were processed by FW, the function returns the
+	 * number of desc available
+	 */
+	/* the clean function called here could be called in a separate thread
+	 * in case of asynchronous completions
+	 */
+	if (i40e_clean_asq(hw) == 0) {
+		i40e_debug(hw,
+			   I40E_DEBUG_AQ_MESSAGE,
+			   "AQTX: Error queue is full.\n");
+		status = I40E_ERR_ADMIN_QUEUE_FULL;
+		goto asq_send_command_error;
+	}
+
+	/* initialize the temp desc pointer with the right desc */
+	desc_on_ring = I40E_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
+
+	/* if the desc is available copy the temp desc to the right place */
+	i40e_memcpy(desc_on_ring, desc, sizeof(struct i40e_aq_desc),
+		    I40E_NONDMA_TO_DMA);
+
+	/* if buff is not NULL assume indirect command */
+	if (buff != NULL) {
+		dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]);
+		/* copy the user buff into the respective DMA buff */
+		i40e_memcpy(dma_buff->va, buff, buff_size,
+			    I40E_NONDMA_TO_DMA);
+		desc_on_ring->datalen = CPU_TO_LE16(buff_size);
+
+		/* Update the address values in the desc with the pa value
+		 * for respective buffer
+		 */
+		desc_on_ring->params.external.addr_high =
+				CPU_TO_LE32(I40E_HI_DWORD(dma_buff->pa));
+		desc_on_ring->params.external.addr_low =
+				CPU_TO_LE32(I40E_LO_DWORD(dma_buff->pa));
+	}
+
+	/* bump the tail */
+	i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n");
+	i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring,
+		      buff, buff_size);
+	(hw->aq.asq.next_to_use)++;
+	if (hw->aq.asq.next_to_use == hw->aq.asq.count)
+		hw->aq.asq.next_to_use = 0;
+	if (!details->postpone)
+		wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use);
+
+	/* if cmd_details are not defined or async flag is not set,
+	 * we need to wait for desc write back
+	 */
+	if (!details->async && !details->postpone) {
+		u32 total_delay = 0;
+
+		do {
+			/* AQ designers suggest use of head for better
+			 * timing reliability than DD bit
+			 */
+			if (i40e_asq_done(hw))
+				break;
+			/* ugh! delay while spin_lock */
+			i40e_msec_delay(1);
+			total_delay++;
+		} while (total_delay < hw->aq.asq_cmd_timeout);
+	}
+
+	/* if ready, copy the desc back to temp */
+	if (i40e_asq_done(hw)) {
+		i40e_memcpy(desc, desc_on_ring, sizeof(struct i40e_aq_desc),
+			    I40E_DMA_TO_NONDMA);
+		if (buff != NULL)
+			i40e_memcpy(buff, dma_buff->va, buff_size,
+				    I40E_DMA_TO_NONDMA);
+		retval = LE16_TO_CPU(desc->retval);
+		if (retval != 0) {
+			i40e_debug(hw,
+				   I40E_DEBUG_AQ_MESSAGE,
+				   "AQTX: Command completed with error 0x%X.\n",
+				   retval);
+
+			/* strip off FW internal code */
+			retval &= 0xff;
+		}
+		cmd_completed = true;
+		if ((enum i40e_admin_queue_err)retval == I40E_AQ_RC_OK)
+			status = I40E_SUCCESS;
+		else
+			status = I40E_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.asq_last_status = (enum i40e_admin_queue_err)retval;
+	}
+
+	i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE,
+		   "AQTX: desc and buffer writeback:\n");
+	i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size);
+
+	/* update the error if time out occurred */
+	if ((!cmd_completed) &&
+	    (!details->async && !details->postpone)) {
+		i40e_debug(hw,
+			   I40E_DEBUG_AQ_MESSAGE,
+			   "AQTX: Writeback timeout.\n");
+		status = I40E_ERR_ADMIN_QUEUE_TIMEOUT;
+	}
+
+#ifndef VF_DRIVER
+	if (!status && i40e_is_nvm_update_op(desc))
+		hw->aq.nvm_busy = true;
+
+#endif /* VF_DRIVER */
+asq_send_command_error:
+	i40e_release_spinlock(&hw->aq.asq_spinlock);
+asq_send_command_exit:
+	return status;
+}
+
+/**
+ * i40e_fill_default_direct_cmd_desc - AQ descriptor helper function
+ * @desc: pointer to the temp descriptor (non DMA mem)
+ * @opcode: the opcode can be used to decide which flags to turn off or on
+ *
+ * Fill the desc with default values
+ **/
+void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
+				       u16 opcode)
+{
+	/* zero out the desc */
+	i40e_memset((void *)desc, 0, sizeof(struct i40e_aq_desc),
+		    I40E_NONDMA_MEM);
+	desc->opcode = CPU_TO_LE16(opcode);
+	desc->flags = CPU_TO_LE16(I40E_AQ_FLAG_SI);
+}
+
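[An illustrative sketch of the send path just defined; it is editorial, not
code from this patch. A direct command is built with
i40e_fill_default_direct_cmd_desc() and handed to i40e_asq_send_command()
with a NULL buffer, since a direct command carries its payload in the
16-byte params area. The function name below is hypothetical; the command
struct and flag come from i40e_adminq_cmd.h later in this patch.]

/* Sketch: issue the direct "queue shutdown" command (opcode 0x0003).
 * The command struct overlays desc.params.raw, per the convention
 * documented in i40e_adminq_cmd.h; no indirect buffer is needed.
 */
static enum i40e_status_code example_send_queue_shutdown(struct i40e_hw *hw)
{
	struct i40e_aq_desc desc;
	struct i40e_aqc_queue_shutdown *cmd =
		(struct i40e_aqc_queue_shutdown *)&desc.params.raw;

	i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_queue_shutdown);
	cmd->driver_unloading = CPU_TO_LE32(I40E_AQ_DRIVER_UNLOADING);

	/* NULL buff and zero size mean a direct command; passing NULL
	 * cmd_details makes the call synchronous (waits for writeback).
	 */
	return i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
}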
+/**
+ * i40e_clean_arq_element
+ * @hw: pointer to the hw struct
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'
+ **/
+enum i40e_status_code i40e_clean_arq_element(struct i40e_hw *hw,
+					     struct i40e_arq_event_info *e,
+					     u16 *pending)
+{
+	enum i40e_status_code ret_code = I40E_SUCCESS;
+	u16 ntc = hw->aq.arq.next_to_clean;
+	struct i40e_aq_desc *desc;
+	struct i40e_dma_mem *bi;
+	u16 desc_idx;
+	u16 datalen;
+	u16 flags;
+	u16 ntu;
+
+	/* take the lock before we start messing with the ring */
+	i40e_acquire_spinlock(&hw->aq.arq_spinlock);
+
+	/* set next_to_use to head */
+	ntu = (rd32(hw, hw->aq.arq.head) & I40E_PF_ARQH_ARQH_MASK);
+	if (ntu == ntc) {
+		/* nothing to do - shouldn't need to update ring's values */
+		i40e_debug(hw,
+			   I40E_DEBUG_AQ_MESSAGE,
+			   "AQRX: Queue is empty.\n");
+		ret_code = I40E_ERR_ADMIN_QUEUE_NO_WORK;
+		goto clean_arq_element_out;
+	}
+
+	/* now clean the next descriptor */
+	desc = I40E_ADMINQ_DESC(hw->aq.arq, ntc);
+	desc_idx = ntc;
+
+	flags = LE16_TO_CPU(desc->flags);
+	if (flags & I40E_AQ_FLAG_ERR) {
+		ret_code = I40E_ERR_ADMIN_QUEUE_ERROR;
+		hw->aq.arq_last_status =
+			(enum i40e_admin_queue_err)LE16_TO_CPU(desc->retval);
+		i40e_debug(hw,
+			   I40E_DEBUG_AQ_MESSAGE,
+			   "AQRX: Event received with error 0x%X.\n",
+			   hw->aq.arq_last_status);
+	}
+
+	i40e_memcpy(&e->desc, desc, sizeof(struct i40e_aq_desc),
+		    I40E_DMA_TO_NONDMA);
+	datalen = LE16_TO_CPU(desc->datalen);
+	e->msg_len = min(datalen, e->buf_len);
+	if (e->msg_buf != NULL && (e->msg_len != 0))
+		i40e_memcpy(e->msg_buf,
+			    hw->aq.arq.r.arq_bi[desc_idx].va,
+			    e->msg_len, I40E_DMA_TO_NONDMA);
+
+	i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n");
+	i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf,
+		      hw->aq.arq_buf_size);
+
+	/* Restore the original datalen and buffer address in the desc,
+	 * FW updates datalen to indicate the event message
+	 * size
+	 */
+	bi = &hw->aq.arq.r.arq_bi[ntc];
+	i40e_memset((void *)desc, 0, sizeof(struct i40e_aq_desc), I40E_DMA_MEM);
+
+	desc->flags = CPU_TO_LE16(I40E_AQ_FLAG_BUF);
+	if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF)
+		desc->flags |= CPU_TO_LE16(I40E_AQ_FLAG_LB);
+	desc->datalen = CPU_TO_LE16((u16)bi->size);
+	desc->params.external.addr_high = CPU_TO_LE32(I40E_HI_DWORD(bi->pa));
+	desc->params.external.addr_low = CPU_TO_LE32(I40E_LO_DWORD(bi->pa));
+
+	/* set tail = the last cleaned desc index. */
+	wr32(hw, hw->aq.arq.tail, ntc);
+	/* ntc is updated to tail + 1 */
+	ntc++;
+	if (ntc == hw->aq.num_arq_entries)
+		ntc = 0;
+	hw->aq.arq.next_to_clean = ntc;
+	hw->aq.arq.next_to_use = ntu;
+
+clean_arq_element_out:
+	/* Set pending if needed, unlock and return */
+	if (pending != NULL)
+		*pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc);
+	i40e_release_spinlock(&hw->aq.arq_spinlock);
+
+#ifndef VF_DRIVER
+	if (i40e_is_nvm_update_op(&e->desc)) {
+		hw->aq.nvm_busy = false;
+		if (hw->aq.nvm_release_on_done) {
+			i40e_release_nvm(hw);
+			hw->aq.nvm_release_on_done = false;
+		}
+	}
+
+#endif
+	return ret_code;
+}
+
+void i40e_resume_aq(struct i40e_hw *hw)
+{
+	/* Registers are reset after PF reset */
+	hw->aq.asq.next_to_use = 0;
+	hw->aq.asq.next_to_clean = 0;
+
+#if (I40E_VF_ATQLEN_ATQENABLE_MASK != I40E_PF_ATQLEN_ATQENABLE_MASK)
+#error I40E_VF_ATQLEN_ATQENABLE_MASK != I40E_PF_ATQLEN_ATQENABLE_MASK
+#endif
+	i40e_config_asq_regs(hw);
+
+	hw->aq.arq.next_to_use = 0;
+	hw->aq.arq.next_to_clean = 0;
+
+	i40e_config_arq_regs(hw);
+}
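[Tying the receive side together with an illustrative sketch; again
editorial, not part of the diff. Events are drained by calling
i40e_clean_arq_element() until it reports I40E_ERR_ADMIN_QUEUE_NO_WORK,
with the caller supplying the message buffer. The function name below is
hypothetical.]

/* Sketch: drain all pending ARQ events. The event buffer is caller-
 * allocated; 'pending' reports how many descriptors remain after each
 * call, so it also bounds the loop.
 */
static void example_drain_arq(struct i40e_hw *hw, u8 *buf, u16 buf_len)
{
	struct i40e_arq_event_info event;
	u16 pending;

	event.buf_len = buf_len;
	event.msg_buf = buf;

	do {
		if (i40e_clean_arq_element(hw, &event, &pending) ==
		    I40E_ERR_ADMIN_QUEUE_NO_WORK)
			break;	/* ring is empty */
		/* event.desc and event.msg_buf (event.msg_len bytes)
		 * now hold one completed event; a real caller would
		 * dispatch on LE16_TO_CPU(event.desc.opcode) here.
		 */
	} while (pending > 0);
}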

diff --git a/drivers/net/i40e/base/i40e_adminq.h b/drivers/net/i40e/base/i40e_adminq.h
new file mode 100644
index 0000000..ea611bd
--- /dev/null
+++ b/drivers/net/i40e/base/i40e_adminq.h
@@ -0,0 +1,157 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2014, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _I40E_ADMINQ_H_
+#define _I40E_ADMINQ_H_
+
+#include "i40e_osdep.h"
+#include "i40e_adminq_cmd.h"
+
+#define I40E_ADMINQ_DESC(R, i)   \
+	(&(((struct i40e_aq_desc *)((R).desc_buf.va))[i]))
+
+#define I40E_ADMINQ_DESC_ALIGNMENT 4096
+
+struct i40e_adminq_ring {
+	struct i40e_virt_mem dma_head;	/* space for dma structures */
+	struct i40e_dma_mem desc_buf;	/* descriptor ring memory */
+	struct i40e_virt_mem cmd_buf;	/* command buffer memory */
+
+	union {
+		struct i40e_dma_mem *asq_bi;
+		struct i40e_dma_mem *arq_bi;
+	} r;
+
+	u16 count;		/* Number of descriptors */
+	u16 rx_buf_len;		/* Admin Receive Queue buffer length */
+
+	/* used for interrupt processing */
+	u16 next_to_use;
+	u16 next_to_clean;
+
+	/* used for queue tracking */
+	u32 head;
+	u32 tail;
+	u32 len;
+	u32 bah;
+	u32 bal;
+};
+
+/* ASQ transaction details */
+struct i40e_asq_cmd_details {
+	void *callback; /* cast from type I40E_ADMINQ_CALLBACK */
+	u64 cookie;
+	u16 flags_ena;
+	u16 flags_dis;
+	bool async;
+	bool postpone;
+};
+
+#define I40E_ADMINQ_DETAILS(R, i)   \
+	(&(((struct i40e_asq_cmd_details *)((R).cmd_buf.va))[i]))
+
+/* ARQ event information */
+struct i40e_arq_event_info {
+	struct i40e_aq_desc desc;
+	u16 msg_len;
+	u16 buf_len;
+	u8 *msg_buf;
+};
+
+/* Admin Queue information */
+struct i40e_adminq_info {
+	struct i40e_adminq_ring arq;    /* receive queue */
+	struct i40e_adminq_ring asq;    /* send queue */
+	u32 asq_cmd_timeout;            /* send queue cmd write back timeout*/
+	u16 num_arq_entries;            /* receive queue depth */
+	u16 num_asq_entries;            /* send queue depth */
+	u16 arq_buf_size;               /* receive queue buffer size */
+	u16 asq_buf_size;               /* send queue buffer size */
+	u16 fw_maj_ver;                 /* firmware major version */
+	u16 fw_min_ver;                 /* firmware minor version */
+	u16 api_maj_ver;                /* api major version */
+	u16 api_min_ver;                /* api minor version */
+	bool nvm_busy;
+	bool nvm_release_on_done;
+
+	struct i40e_spinlock asq_spinlock; /* Send queue spinlock */
+	struct i40e_spinlock arq_spinlock; /* Receive queue spinlock */
+
+	/* last status values on send and receive queues */
+	enum i40e_admin_queue_err asq_last_status;
+	enum i40e_admin_queue_err arq_last_status;
+};
+
+/**
+ * i40e_aq_rc_to_posix - convert errors to user-land codes
+ * aq_rc: AdminQ error code to convert
+ **/
+STATIC inline int i40e_aq_rc_to_posix(u16 aq_rc)
+{
+	int aq_to_posix[] = {
+		0,           /* I40E_AQ_RC_OK */
+		-EPERM,      /* I40E_AQ_RC_EPERM */
+		-ENOENT,     /* I40E_AQ_RC_ENOENT */
+		-ESRCH,      /* I40E_AQ_RC_ESRCH */
+		-EINTR,      /* I40E_AQ_RC_EINTR */
+		-EIO,        /* I40E_AQ_RC_EIO */
+		-ENXIO,      /* I40E_AQ_RC_ENXIO */
+		-E2BIG,      /* I40E_AQ_RC_E2BIG */
+		-EAGAIN,     /* I40E_AQ_RC_EAGAIN */
+		-ENOMEM,     /* I40E_AQ_RC_ENOMEM */
+		-EACCES,     /* I40E_AQ_RC_EACCES */
+		-EFAULT,     /* I40E_AQ_RC_EFAULT */
+		-EBUSY,      /* I40E_AQ_RC_EBUSY */
+		-EEXIST,     /* I40E_AQ_RC_EEXIST */
+		-EINVAL,     /* I40E_AQ_RC_EINVAL */
+		-ENOTTY,     /* I40E_AQ_RC_ENOTTY */
+		-ENOSPC,     /* I40E_AQ_RC_ENOSPC */
+		-ENOSYS,     /* I40E_AQ_RC_ENOSYS */
+		-ERANGE,     /* I40E_AQ_RC_ERANGE */
+		-EPIPE,      /* I40E_AQ_RC_EFLUSHED */
+		-ESPIPE,     /* I40E_AQ_RC_BAD_ADDR */
+		-EROFS,      /* I40E_AQ_RC_EMODE */
+		-EFBIG,      /* I40E_AQ_RC_EFBIG */
+	};
+
+	return aq_to_posix[aq_rc];
+}
+
+/* general information */
+#define I40E_AQ_LARGE_BUF	512
+#define I40E_ASQ_CMD_TIMEOUT	100 /* msecs */
+
+void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
+				       u16 opcode);
+
+#endif /* _I40E_ADMINQ_H_ */
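[A short usage note on i40e_aq_rc_to_posix() above, as an illustrative
sketch (not part of the diff): it maps the firmware's last AdminQ return
value to a negative errno, which is how a caller can surface an AdminQ
failure to errno-oriented code. The helper name is hypothetical.]

/* Sketch: translate the last ASQ completion status into an errno-style
 * return for upper layers.
 */
static int example_last_asq_errno(struct i40e_hw *hw)
{
	if (hw->aq.asq_last_status == I40E_AQ_RC_OK)
		return 0;
	return i40e_aq_rc_to_posix(hw->aq.asq_last_status);
}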

diff --git a/drivers/net/i40e/base/i40e_adminq_cmd.h b/drivers/net/i40e/base/i40e_adminq_cmd.h
new file mode 100644
index 0000000..5ea9b7d
--- /dev/null
+++ b/drivers/net/i40e/base/i40e_adminq_cmd.h
@@ -0,0 +1,2179 @@
+/*******************************************************************************
+
+Copyright (c) 2013 - 2014, Intel Corporation
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ 1. Redistributions of source code must retain the above copyright notice,
+    this list of conditions and the following disclaimer.
+
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+
+ 3. Neither the name of the Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived from
+    this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGE.
+
+***************************************************************************/
+
+#ifndef _I40E_ADMINQ_CMD_H_
+#define _I40E_ADMINQ_CMD_H_
+
+/* This header file defines the i40e Admin Queue commands and is shared between
+ * i40e Firmware and Software.
+ *
+ * This file needs to comply with the Linux Kernel coding style.
+ */
+
+#define I40E_FW_API_VERSION_MAJOR	0x0001
+#define I40E_FW_API_VERSION_MINOR	0x0002
+
+struct i40e_aq_desc {
+	__le16 flags;
+	__le16 opcode;
+	__le16 datalen;
+	__le16 retval;
+	__le32 cookie_high;
+	__le32 cookie_low;
+	union {
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 param2;
+			__le32 param3;
+		} internal;
+		struct {
+			__le32 param0;
+			__le32 param1;
+			__le32 addr_high;
+			__le32 addr_low;
+		} external;
+		u8 raw[16];
+	} params;
+};
+
+/* Flags sub-structure
+ * |0  |1  |2  |3  |4  |5  |6  |7  |8  |9  |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR|VFE| * *  RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE |
+ */
+
+/* command flags and offsets*/
+#define I40E_AQ_FLAG_DD_SHIFT	0
+#define I40E_AQ_FLAG_CMP_SHIFT	1
+#define I40E_AQ_FLAG_ERR_SHIFT	2
+#define I40E_AQ_FLAG_VFE_SHIFT	3
+#define I40E_AQ_FLAG_LB_SHIFT	9
+#define I40E_AQ_FLAG_RD_SHIFT	10
+#define I40E_AQ_FLAG_VFC_SHIFT	11
+#define I40E_AQ_FLAG_BUF_SHIFT	12
+#define I40E_AQ_FLAG_SI_SHIFT	13
+#define I40E_AQ_FLAG_EI_SHIFT	14
+#define I40E_AQ_FLAG_FE_SHIFT	15
+
+#define I40E_AQ_FLAG_DD		(1 << I40E_AQ_FLAG_DD_SHIFT)  /* 0x1    */
+#define I40E_AQ_FLAG_CMP	(1 << I40E_AQ_FLAG_CMP_SHIFT) /* 0x2    */
+#define I40E_AQ_FLAG_ERR	(1 << I40E_AQ_FLAG_ERR_SHIFT) /* 0x4    */
+#define I40E_AQ_FLAG_VFE	(1 << I40E_AQ_FLAG_VFE_SHIFT) /* 0x8    */
+#define I40E_AQ_FLAG_LB		(1 << I40E_AQ_FLAG_LB_SHIFT)  /* 0x200  */
+#define I40E_AQ_FLAG_RD		(1 << I40E_AQ_FLAG_RD_SHIFT)  /* 0x400  */
+#define I40E_AQ_FLAG_VFC	(1 << I40E_AQ_FLAG_VFC_SHIFT) /* 0x800  */
+#define I40E_AQ_FLAG_BUF	(1 << I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define I40E_AQ_FLAG_SI		(1 << I40E_AQ_FLAG_SI_SHIFT)  /* 0x2000 */
+#define I40E_AQ_FLAG_EI		(1 << I40E_AQ_FLAG_EI_SHIFT)  /* 0x4000 */
+#define I40E_AQ_FLAG_FE		(1 << I40E_AQ_FLAG_FE_SHIFT)  /* 0x8000 */
+
+/* error codes */
+enum i40e_admin_queue_err {
+	I40E_AQ_RC_OK       = 0,  /* success */
+	I40E_AQ_RC_EPERM    = 1,  /* Operation not permitted */
+	I40E_AQ_RC_ENOENT   = 2,  /* No such element */
+	I40E_AQ_RC_ESRCH    = 3,  /* Bad opcode */
+	I40E_AQ_RC_EINTR    = 4,  /* operation interrupted */
+	I40E_AQ_RC_EIO      = 5,  /* I/O error */
+	I40E_AQ_RC_ENXIO    = 6,  /* No such resource */
+	I40E_AQ_RC_E2BIG    = 7,  /* Arg too long */
+	I40E_AQ_RC_EAGAIN   = 8,  /* Try again */
+	I40E_AQ_RC_ENOMEM   = 9,  /* Out of memory */
+	I40E_AQ_RC_EACCES   = 10, /* Permission denied */
+	I40E_AQ_RC_EFAULT   = 11, /* Bad address */
+	I40E_AQ_RC_EBUSY    = 12, /* Device or resource busy */
+	I40E_AQ_RC_EEXIST   = 13, /* object already exists */
+	I40E_AQ_RC_EINVAL   = 14, /* Invalid argument */
+	I40E_AQ_RC_ENOTTY   = 15, /* Not a typewriter */
+	I40E_AQ_RC_ENOSPC   = 16, /* No space left or alloc failure */
+	I40E_AQ_RC_ENOSYS   = 17, /* Function not implemented */
+	I40E_AQ_RC_ERANGE   = 18, /* Parameter out of range */
+	I40E_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */
+	I40E_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */
+	I40E_AQ_RC_EMODE    = 21, /* Op not allowed in current dev mode */
+	I40E_AQ_RC_EFBIG    = 22, /* File too large */
+};
+
+/* Admin Queue command opcodes */
+enum i40e_admin_queue_opc {
+	/* aq commands */
+	i40e_aqc_opc_get_version	= 0x0001,
+	i40e_aqc_opc_driver_version	= 0x0002,
+	i40e_aqc_opc_queue_shutdown	= 0x0003,
+	i40e_aqc_opc_set_pf_context	= 0x0004,
+
+	/* resource ownership */
+	i40e_aqc_opc_request_resource	= 0x0008,
+	i40e_aqc_opc_release_resource	= 0x0009,
+
+	i40e_aqc_opc_list_func_capabilities	= 0x000A,
+	i40e_aqc_opc_list_dev_capabilities	= 0x000B,
+
+	i40e_aqc_opc_set_cppm_configuration	= 0x0103,
+	i40e_aqc_opc_set_arp_proxy_entry	= 0x0104,
+	i40e_aqc_opc_set_ns_proxy_entry		= 0x0105,
+
+	/* LAA */
+	i40e_aqc_opc_mng_laa		= 0x0106,   /* AQ obsolete */
+	i40e_aqc_opc_mac_address_read	= 0x0107,
+	i40e_aqc_opc_mac_address_write	= 0x0108,
+
+	/* PXE */
+	i40e_aqc_opc_clear_pxe_mode	= 0x0110,
+
+	/* internal switch commands */
+	i40e_aqc_opc_get_switch_config		= 0x0200,
+	i40e_aqc_opc_add_statistics		= 0x0201,
+	i40e_aqc_opc_remove_statistics		= 0x0202,
+	i40e_aqc_opc_set_port_parameters	= 0x0203,
+	i40e_aqc_opc_get_switch_resource_alloc	= 0x0204,
+
+	i40e_aqc_opc_add_vsi			= 0x0210,
+	i40e_aqc_opc_update_vsi_parameters	= 0x0211,
+	i40e_aqc_opc_get_vsi_parameters		= 0x0212,
+
+	i40e_aqc_opc_add_pv			= 0x0220,
+	i40e_aqc_opc_update_pv_parameters	= 0x0221,
+	i40e_aqc_opc_get_pv_parameters		= 0x0222,
+
+	i40e_aqc_opc_add_veb			= 0x0230,
+	i40e_aqc_opc_update_veb_parameters	= 0x0231,
+	i40e_aqc_opc_get_veb_parameters		= 0x0232,
+
+	i40e_aqc_opc_delete_element		= 0x0243,
+
+	i40e_aqc_opc_add_macvlan		= 0x0250,
+	i40e_aqc_opc_remove_macvlan		= 0x0251,
+	i40e_aqc_opc_add_vlan			= 0x0252,
+	i40e_aqc_opc_remove_vlan		= 0x0253,
+	i40e_aqc_opc_set_vsi_promiscuous_modes	= 0x0254,
+	i40e_aqc_opc_add_tag			= 0x0255,
+	i40e_aqc_opc_remove_tag			= 0x0256,
+	i40e_aqc_opc_add_multicast_etag		= 0x0257,
+	i40e_aqc_opc_remove_multicast_etag	= 0x0258,
+	i40e_aqc_opc_update_tag			= 0x0259,
+	i40e_aqc_opc_add_control_packet_filter	= 0x025A,
+	i40e_aqc_opc_remove_control_packet_filter	= 0x025B,
+	i40e_aqc_opc_add_cloud_filters		= 0x025C,
+	i40e_aqc_opc_remove_cloud_filters	= 0x025D,
+
+	i40e_aqc_opc_add_mirror_rule	= 0x0260,
+	i40e_aqc_opc_delete_mirror_rule	= 0x0261,
+
+	/* DCB commands */
+	i40e_aqc_opc_dcb_ignore_pfc	= 0x0301,
+	i40e_aqc_opc_dcb_updated	= 0x0302,
+
+	/* TX scheduler */
+	i40e_aqc_opc_configure_vsi_bw_limit		= 0x0400,
+	i40e_aqc_opc_configure_vsi_ets_sla_bw_limit	= 0x0406,
+	i40e_aqc_opc_configure_vsi_tc_bw		= 0x0407,
+	i40e_aqc_opc_query_vsi_bw_config		= 0x0408,
+	i40e_aqc_opc_query_vsi_ets_sla_config		= 0x040A,
+	i40e_aqc_opc_configure_switching_comp_bw_limit	= 0x0410,
+
+	i40e_aqc_opc_enable_switching_comp_ets			= 0x0413,
+	i40e_aqc_opc_modify_switching_comp_ets			= 0x0414,
+	i40e_aqc_opc_disable_switching_comp_ets			= 0x0415,
+	i40e_aqc_opc_configure_switching_comp_ets_bw_limit	= 0x0416,
+	i40e_aqc_opc_configure_switching_comp_bw_config		= 0x0417,
+	i40e_aqc_opc_query_switching_comp_ets_config		= 0x0418,
+	i40e_aqc_opc_query_port_ets_config			= 0x0419,
+	i40e_aqc_opc_query_switching_comp_bw_config		= 0x041A,
+	i40e_aqc_opc_suspend_port_tx				= 0x041B,
+	i40e_aqc_opc_resume_port_tx				= 0x041C,
+	i40e_aqc_opc_configure_partition_bw			= 0x041D,
+
+	/* hmc */
+	i40e_aqc_opc_query_hmc_resource_profile	= 0x0500,
+	i40e_aqc_opc_set_hmc_resource_profile	= 0x0501,
+
+	/* phy commands*/
+	i40e_aqc_opc_get_phy_abilities		= 0x0600,
+	i40e_aqc_opc_set_phy_config		= 0x0601,
+	i40e_aqc_opc_set_mac_config		= 0x0603,
+	i40e_aqc_opc_set_link_restart_an	= 0x0605,
+	i40e_aqc_opc_get_link_status		= 0x0607,
+	i40e_aqc_opc_set_phy_int_mask		= 0x0613,
+	i40e_aqc_opc_get_local_advt_reg		= 0x0614,
+	i40e_aqc_opc_set_local_advt_reg		= 0x0615,
+	i40e_aqc_opc_get_partner_advt		= 0x0616,
+	i40e_aqc_opc_set_lb_modes		= 0x0618,
+	i40e_aqc_opc_get_phy_wol_caps		= 0x0621,
+	i40e_aqc_opc_set_phy_debug		= 0x0622,
+	i40e_aqc_opc_upload_ext_phy_fm		= 0x0625,
+
+	/* NVM commands */
+	i40e_aqc_opc_nvm_read		= 0x0701,
+	i40e_aqc_opc_nvm_erase		= 0x0702,
+	i40e_aqc_opc_nvm_update		= 0x0703,
+	i40e_aqc_opc_nvm_config_read	= 0x0704,
+	i40e_aqc_opc_nvm_config_write	= 0x0705,
+
+	/* virtualization commands */
+	i40e_aqc_opc_send_msg_to_pf	= 0x0801,
+	i40e_aqc_opc_send_msg_to_vf	= 0x0802,
+	i40e_aqc_opc_send_msg_to_peer	= 0x0803,
+
+	/* alternate structure */
+	i40e_aqc_opc_alternate_write		= 0x0900,
+	i40e_aqc_opc_alternate_write_indirect	= 0x0901,
+	i40e_aqc_opc_alternate_read		= 0x0902,
+	i40e_aqc_opc_alternate_read_indirect	= 0x0903,
+	i40e_aqc_opc_alternate_write_done	= 0x0904,
+	i40e_aqc_opc_alternate_set_mode		= 0x0905,
+	i40e_aqc_opc_alternate_clear_port	= 0x0906,
+
+	/* LLDP commands */
+	i40e_aqc_opc_lldp_get_mib	= 0x0A00,
+	i40e_aqc_opc_lldp_update_mib	= 0x0A01,
+	i40e_aqc_opc_lldp_add_tlv	= 0x0A02,
+	i40e_aqc_opc_lldp_update_tlv	= 0x0A03,
+	i40e_aqc_opc_lldp_delete_tlv	= 0x0A04,
+	i40e_aqc_opc_lldp_stop		= 0x0A05,
+	i40e_aqc_opc_lldp_start		= 0x0A06,
+
+	/* Tunnel commands */
+	i40e_aqc_opc_add_udp_tunnel	= 0x0B00,
+	i40e_aqc_opc_del_udp_tunnel	= 0x0B01,
+	i40e_aqc_opc_tunnel_key_structure	= 0x0B10,
+
+	/* Async Events */
+	i40e_aqc_opc_event_lan_overflow	= 0x1001,
+
+	/* OEM commands */
+	i40e_aqc_opc_oem_parameter_change	= 0xFE00,
+	i40e_aqc_opc_oem_device_status_change	= 0xFE01,
+
+	/* debug commands */
+	i40e_aqc_opc_debug_get_deviceid		= 0xFF00,
+	i40e_aqc_opc_debug_set_mode		= 0xFF01,
+	i40e_aqc_opc_debug_read_reg		= 0xFF03,
+	i40e_aqc_opc_debug_write_reg		= 0xFF04,
+	i40e_aqc_opc_debug_modify_reg		= 0xFF07,
+	i40e_aqc_opc_debug_dump_internals	= 0xFF08,
+	i40e_aqc_opc_debug_modify_internals	= 0xFF09,
+};
+
+/* command structures and indirect data structures */
+
+/* Structure naming conventions:
+ * - no suffix for direct command descriptor structures
+ * - _data for indirect sent data
+ * - _resp for indirect return data (data which is both will use _data)
+ * - _completion for direct return data
+ * - _element_ for repeated elements (may also be _data or _resp)
+ *
+ * Command structures are expected to overlay the params.raw member of the basic
+ * descriptor, and as such cannot exceed 16 bytes in length.
+ */
+
+ */ +#define I40E_CHECK_CMD_LENGTH(X) I40E_CHECK_STRUCT_LEN(16, X) + +/* internal (0x00XX) commands */ + +/* Get version (direct 0x0001) */ +struct i40e_aqc_get_version { + __le32 rom_ver; + __le32 fw_build; + __le16 fw_major; + __le16 fw_minor; + __le16 api_major; + __le16 api_minor; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_get_version); + +/* Send driver version (indirect 0x0002) */ +struct i40e_aqc_driver_version { + u8 driver_major_ver; + u8 driver_minor_ver; + u8 driver_build_ver; + u8 driver_subbuild_ver; + u8 reserved[4]; + __le32 address_high; + __le32 address_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_driver_version); + +/* Queue Shutdown (direct 0x0003) */ +struct i40e_aqc_queue_shutdown { + __le32 driver_unloading; +#define I40E_AQ_DRIVER_UNLOADING 0x1 + u8 reserved[12]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_queue_shutdown); + +/* Set PF context (0x0004, direct) */ +struct i40e_aqc_set_pf_context { + u8 pf_id; + u8 reserved[15]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_set_pf_context); + +/* Request resource ownership (direct 0x0008) + * Release resource ownership (direct 0x0009) + */ +#define I40E_AQ_RESOURCE_NVM 1 +#define I40E_AQ_RESOURCE_SDP 2 +#define I40E_AQ_RESOURCE_ACCESS_READ 1 +#define I40E_AQ_RESOURCE_ACCESS_WRITE 2 +#define I40E_AQ_RESOURCE_NVM_READ_TIMEOUT 3000 +#define I40E_AQ_RESOURCE_NVM_WRITE_TIMEOUT 180000 + +struct i40e_aqc_request_resource { + __le16 resource_id; + __le16 access_type; + __le32 timeout; + __le32 resource_number; + u8 reserved[4]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_request_resource); + +/* Get function capabilities (indirect 0x000A) + * Get device capabilities (indirect 0x000B) + */ +struct i40e_aqc_list_capabilites { + u8 command_flags; +#define I40E_AQ_LIST_CAP_PF_INDEX_EN 1 + u8 pf_index; + u8 reserved[2]; + __le32 count; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_list_capabilites); + +struct i40e_aqc_list_capabilities_element_resp { + __le16 id; + u8 major_rev; + u8 minor_rev; + __le32 number; + __le32 logical_id; + __le32 phys_id; + u8 reserved[16]; +}; + +/* list of caps */ + +#define I40E_AQ_CAP_ID_SWITCH_MODE 0x0001 +#define I40E_AQ_CAP_ID_MNG_MODE 0x0002 +#define I40E_AQ_CAP_ID_NPAR_ACTIVE 0x0003 +#define I40E_AQ_CAP_ID_OS2BMC_CAP 0x0004 +#define I40E_AQ_CAP_ID_FUNCTIONS_VALID 0x0005 +#define I40E_AQ_CAP_ID_ALTERNATE_RAM 0x0006 +#define I40E_AQ_CAP_ID_SRIOV 0x0012 +#define I40E_AQ_CAP_ID_VF 0x0013 +#define I40E_AQ_CAP_ID_VMDQ 0x0014 +#define I40E_AQ_CAP_ID_8021QBG 0x0015 +#define I40E_AQ_CAP_ID_8021QBR 0x0016 +#define I40E_AQ_CAP_ID_VSI 0x0017 +#define I40E_AQ_CAP_ID_DCB 0x0018 +#define I40E_AQ_CAP_ID_FCOE 0x0021 +#define I40E_AQ_CAP_ID_RSS 0x0040 +#define I40E_AQ_CAP_ID_RXQ 0x0041 +#define I40E_AQ_CAP_ID_TXQ 0x0042 +#define I40E_AQ_CAP_ID_MSIX 0x0043 +#define I40E_AQ_CAP_ID_VF_MSIX 0x0044 +#define I40E_AQ_CAP_ID_FLOW_DIRECTOR 0x0045 +#define I40E_AQ_CAP_ID_1588 0x0046 +#define I40E_AQ_CAP_ID_IWARP 0x0051 +#define I40E_AQ_CAP_ID_LED 0x0061 +#define I40E_AQ_CAP_ID_SDP 0x0062 +#define I40E_AQ_CAP_ID_MDIO 0x0063 +#define I40E_AQ_CAP_ID_FLEX10 0x00F1 +#define I40E_AQ_CAP_ID_CEM 0x00F2 + +/* Set CPPM Configuration (direct 0x0103) */ +struct i40e_aqc_cppm_configuration { + __le16 command_flags; +#define I40E_AQ_CPPM_EN_LTRC 0x0800 +#define I40E_AQ_CPPM_EN_DMCTH 0x1000 +#define I40E_AQ_CPPM_EN_DMCTLX 0x2000 +#define I40E_AQ_CPPM_EN_HPTC 0x4000 +#define I40E_AQ_CPPM_EN_DMARC 0x8000 + __le16 ttlx; + __le32 dmacr; + __le16 dmcth; + u8 hptc; + u8 reserved; + __le32 pfltrc; +}; + 
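/* Reviewer annotation, not part of the original patch: the
 * I40E_CHECK_CMD_LENGTH() invocations that follow each command struct are
 * the compile-time size assertion defined above. As a minimal sketch of how
 * it fires, using a hypothetical struct "demo" (not in this file):
 *
 *     struct demo { __le32 a; __le32 b; };   // sizeof(struct demo) == 8
 *     I40E_CHECK_STRUCT_LEN(8, demo);
 *     // expands to:
 *     // enum i40e_static_assert_enum_demo
 *     //     { i40e_static_assert_demo = (8)/((sizeof(struct demo) == (8)) ? 1 : 0) };
 *
 * A wrong size makes the divisor zero, so compilation fails; a correct size
 * leaves behind only a harmless, unused enum.
 */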
+I40E_CHECK_CMD_LENGTH(i40e_aqc_cppm_configuration); + +/* Set ARP Proxy command / response (indirect 0x0104) */ +struct i40e_aqc_arp_proxy_data { + __le16 command_flags; +#define I40E_AQ_ARP_INIT_IPV4 0x0008 +#define I40E_AQ_ARP_UNSUP_CTL 0x0010 +#define I40E_AQ_ARP_ENA 0x0020 +#define I40E_AQ_ARP_ADD_IPV4 0x0040 +#define I40E_AQ_ARP_DEL_IPV4 0x0080 + __le16 table_id; + __le32 pfpm_proxyfc; + __le32 ip_addr; + u8 mac_addr[6]; +}; + +/* Set NS Proxy Table Entry Command (indirect 0x0105) */ +struct i40e_aqc_ns_proxy_data { + __le16 table_idx_mac_addr_0; + __le16 table_idx_mac_addr_1; + __le16 table_idx_ipv6_0; + __le16 table_idx_ipv6_1; + __le16 control; +#define I40E_AQ_NS_PROXY_ADD_0 0x0100 +#define I40E_AQ_NS_PROXY_DEL_0 0x0200 +#define I40E_AQ_NS_PROXY_ADD_1 0x0400 +#define I40E_AQ_NS_PROXY_DEL_1 0x0800 +#define I40E_AQ_NS_PROXY_ADD_IPV6_0 0x1000 +#define I40E_AQ_NS_PROXY_DEL_IPV6_0 0x2000 +#define I40E_AQ_NS_PROXY_ADD_IPV6_1 0x4000 +#define I40E_AQ_NS_PROXY_DEL_IPV6_1 0x8000 +#define I40E_AQ_NS_PROXY_COMMAND_SEQ 0x0001 +#define I40E_AQ_NS_PROXY_INIT_IPV6_TBL 0x0002 +#define I40E_AQ_NS_PROXY_INIT_MAC_TBL 0x0004 + u8 mac_addr_0[6]; + u8 mac_addr_1[6]; + u8 local_mac_addr[6]; + u8 ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */ + u8 ipv6_addr_1[16]; +}; + +/* Manage LAA Command (0x0106) - obsolete */ +struct i40e_aqc_mng_laa { + __le16 command_flags; +#define I40E_AQ_LAA_FLAG_WR 0x8000 + u8 reserved[2]; + __le32 sal; + __le16 sah; + u8 reserved2[6]; +}; + +/* Manage MAC Address Read Command (indirect 0x0107) */ +struct i40e_aqc_mac_address_read { + __le16 command_flags; +#define I40E_AQC_LAN_ADDR_VALID 0x10 +#define I40E_AQC_SAN_ADDR_VALID 0x20 +#define I40E_AQC_PORT_ADDR_VALID 0x40 +#define I40E_AQC_WOL_ADDR_VALID 0x80 +#define I40E_AQC_ADDR_VALID_MASK 0xf0 + u8 reserved[6]; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_mac_address_read); + +struct i40e_aqc_mac_address_read_data { + u8 pf_lan_mac[6]; + u8 pf_san_mac[6]; + u8 port_mac[6]; + u8 pf_wol_mac[6]; +}; + +I40E_CHECK_STRUCT_LEN(24, i40e_aqc_mac_address_read_data); + +/* Manage MAC Address Write Command (0x0108) */ +struct i40e_aqc_mac_address_write { + __le16 command_flags; +#define I40E_AQC_WRITE_TYPE_LAA_ONLY 0x0000 +#define I40E_AQC_WRITE_TYPE_LAA_WOL 0x4000 +#define I40E_AQC_WRITE_TYPE_PORT 0x8000 +#define I40E_AQC_WRITE_TYPE_MASK 0xc000 + __le16 mac_sah; + __le32 mac_sal; + u8 reserved[8]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_mac_address_write); + +/* PXE commands (0x011x) */ + +/* Clear PXE Command and response (direct 0x0110) */ +struct i40e_aqc_clear_pxe { + u8 rx_cnt; + u8 reserved[15]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_clear_pxe); + +/* Switch configuration commands (0x02xx) */ + +/* Used by many indirect commands that only pass an seid and a buffer in the + * command + */ +struct i40e_aqc_switch_seid { + __le16 seid; + u8 reserved[6]; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_switch_seid); + +/* Get Switch Configuration command (indirect 0x0200) + * uses i40e_aqc_switch_seid for the descriptor + */ +struct i40e_aqc_get_switch_config_header_resp { + __le16 num_reported; + __le16 num_total; + u8 reserved[12]; +}; + +struct i40e_aqc_switch_config_element_resp { + u8 element_type; +#define I40E_AQ_SW_ELEM_TYPE_MAC 1 +#define I40E_AQ_SW_ELEM_TYPE_PF 2 +#define I40E_AQ_SW_ELEM_TYPE_VF 3 +#define I40E_AQ_SW_ELEM_TYPE_EMP 4 +#define I40E_AQ_SW_ELEM_TYPE_BMC 5 +#define I40E_AQ_SW_ELEM_TYPE_PV 16 +#define I40E_AQ_SW_ELEM_TYPE_VEB 17 +#define 
I40E_AQ_SW_ELEM_TYPE_PA 18
+#define I40E_AQ_SW_ELEM_TYPE_VSI 19
+ u8 revision;
+#define I40E_AQ_SW_ELEM_REV_1 1
+ __le16 seid;
+ __le16 uplink_seid;
+ __le16 downlink_seid;
+ u8 reserved[3];
+ u8 connection_type;
+#define I40E_AQ_CONN_TYPE_REGULAR 0x1
+#define I40E_AQ_CONN_TYPE_DEFAULT 0x2
+#define I40E_AQ_CONN_TYPE_CASCADED 0x3
+ __le16 scheduler_id;
+ __le16 element_info;
+};
+
+/* Get Switch Configuration (indirect 0x0200)
+ * an array of elements is returned in the response buffer
+ * the first in the array is the header, remainder are elements
+ */
+struct i40e_aqc_get_switch_config_resp {
+ struct i40e_aqc_get_switch_config_header_resp header;
+ struct i40e_aqc_switch_config_element_resp element[1];
+};
+
+/* Add Statistics (direct 0x0201)
+ * Remove Statistics (direct 0x0202)
+ */
+struct i40e_aqc_add_remove_statistics {
+ __le16 seid;
+ __le16 vlan;
+ __le16 stat_index;
+ u8 reserved[10];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_statistics);
+
+/* Set Port Parameters command (direct 0x0203) */
+struct i40e_aqc_set_port_parameters {
+ __le16 command_flags;
+#define I40E_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS 1
+#define I40E_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS 2 /* must set! */
+#define I40E_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA 4
+ __le16 bad_frame_vsi;
+ __le16 default_seid; /* reserved for command */
+ u8 reserved[10];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_set_port_parameters);
+
+/* Get Switch Resource Allocation (indirect 0x0204) */
+struct i40e_aqc_get_switch_resource_alloc {
+ u8 num_entries; /* reserved for command */
+ u8 reserved[7];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_get_switch_resource_alloc);
+
+/* expect an array of these structs in the response buffer */
+struct i40e_aqc_switch_resource_alloc_element_resp {
+ u8 resource_type;
+#define I40E_AQ_RESOURCE_TYPE_VEB 0x0
+#define I40E_AQ_RESOURCE_TYPE_VSI 0x1
+#define I40E_AQ_RESOURCE_TYPE_MACADDR 0x2
+#define I40E_AQ_RESOURCE_TYPE_STAG 0x3
+#define I40E_AQ_RESOURCE_TYPE_ETAG 0x4
+#define I40E_AQ_RESOURCE_TYPE_MULTICAST_HASH 0x5
+#define I40E_AQ_RESOURCE_TYPE_UNICAST_HASH 0x6
+#define I40E_AQ_RESOURCE_TYPE_VLAN 0x7
+#define I40E_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY 0x8
+#define I40E_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY 0x9
+#define I40E_AQ_RESOURCE_TYPE_VLAN_STAT_POOL 0xA
+#define I40E_AQ_RESOURCE_TYPE_MIRROR_RULE 0xB
+#define I40E_AQ_RESOURCE_TYPE_QUEUE_SETS 0xC
+#define I40E_AQ_RESOURCE_TYPE_VLAN_FILTERS 0xD
+#define I40E_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS 0xF
+#define I40E_AQ_RESOURCE_TYPE_IP_FILTERS 0x10
+#define I40E_AQ_RESOURCE_TYPE_GRE_VN_KEYS 0x11
+#define I40E_AQ_RESOURCE_TYPE_VN2_KEYS 0x12
+#define I40E_AQ_RESOURCE_TYPE_TUNNEL_PORTS 0x13
+ u8 reserved1;
+ __le16 guaranteed;
+ __le16 total;
+ __le16 used;
+ __le16 total_unalloced;
+ u8 reserved2[6];
+};
+
+/* Add VSI (indirect 0x0210)
+ * this indirect command uses struct i40e_aqc_vsi_properties_data
+ * as the indirect buffer (128 bytes)
+ *
+ * Update VSI (indirect 0x0211)
+ * uses the same data structure as Add VSI
+ *
+ * Get VSI (indirect 0x0212)
+ * uses the same completion and data structure as Add VSI
+ */
+struct i40e_aqc_add_get_update_vsi {
+ __le16 uplink_seid;
+ u8 connection_type;
+#define I40E_AQ_VSI_CONN_TYPE_NORMAL 0x1
+#define I40E_AQ_VSI_CONN_TYPE_DEFAULT 0x2
+#define I40E_AQ_VSI_CONN_TYPE_CASCADED 0x3
+ u8 reserved1;
+ u8 vf_id;
+ u8 reserved2;
+ __le16 vsi_flags;
+#define I40E_AQ_VSI_TYPE_SHIFT 0x0
+#define I40E_AQ_VSI_TYPE_MASK (0x3 << I40E_AQ_VSI_TYPE_SHIFT)
+#define I40E_AQ_VSI_TYPE_VF 0x0
+#define
I40E_AQ_VSI_TYPE_VMDQ2 0x1 +#define I40E_AQ_VSI_TYPE_PF 0x2 +#define I40E_AQ_VSI_TYPE_EMP_MNG 0x3 +#define I40E_AQ_VSI_FLAG_CASCADED_PV 0x4 + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_get_update_vsi); + +struct i40e_aqc_add_get_update_vsi_completion { + __le16 seid; + __le16 vsi_number; + __le16 vsi_used; + __le16 vsi_free; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_get_update_vsi_completion); + +struct i40e_aqc_vsi_properties_data { + /* first 96 byte are written by SW */ + __le16 valid_sections; +#define I40E_AQ_VSI_PROP_SWITCH_VALID 0x0001 +#define I40E_AQ_VSI_PROP_SECURITY_VALID 0x0002 +#define I40E_AQ_VSI_PROP_VLAN_VALID 0x0004 +#define I40E_AQ_VSI_PROP_CAS_PV_VALID 0x0008 +#define I40E_AQ_VSI_PROP_INGRESS_UP_VALID 0x0010 +#define I40E_AQ_VSI_PROP_EGRESS_UP_VALID 0x0020 +#define I40E_AQ_VSI_PROP_QUEUE_MAP_VALID 0x0040 +#define I40E_AQ_VSI_PROP_QUEUE_OPT_VALID 0x0080 +#define I40E_AQ_VSI_PROP_OUTER_UP_VALID 0x0100 +#define I40E_AQ_VSI_PROP_SCHED_VALID 0x0200 + /* switch section */ + __le16 switch_id; /* 12bit id combined with flags below */ +#define I40E_AQ_VSI_SW_ID_SHIFT 0x0000 +#define I40E_AQ_VSI_SW_ID_MASK (0xFFF << I40E_AQ_VSI_SW_ID_SHIFT) +#define I40E_AQ_VSI_SW_ID_FLAG_NOT_STAG 0x1000 +#define I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB 0x2000 +#define I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB 0x4000 + u8 sw_reserved[2]; + /* security section */ + u8 sec_flags; +#define I40E_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD 0x01 +#define I40E_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK 0x02 +#define I40E_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK 0x04 + u8 sec_reserved; + /* VLAN section */ + __le16 pvid; /* VLANS include priority bits */ + __le16 fcoe_pvid; + u8 port_vlan_flags; +#define I40E_AQ_VSI_PVLAN_MODE_SHIFT 0x00 +#define I40E_AQ_VSI_PVLAN_MODE_MASK (0x03 << \ + I40E_AQ_VSI_PVLAN_MODE_SHIFT) +#define I40E_AQ_VSI_PVLAN_MODE_TAGGED 0x01 +#define I40E_AQ_VSI_PVLAN_MODE_UNTAGGED 0x02 +#define I40E_AQ_VSI_PVLAN_MODE_ALL 0x03 +#define I40E_AQ_VSI_PVLAN_INSERT_PVID 0x04 +#define I40E_AQ_VSI_PVLAN_EMOD_SHIFT 0x03 +#define I40E_AQ_VSI_PVLAN_EMOD_MASK (0x3 << \ + I40E_AQ_VSI_PVLAN_EMOD_SHIFT) +#define I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH 0x0 +#define I40E_AQ_VSI_PVLAN_EMOD_STR_UP 0x08 +#define I40E_AQ_VSI_PVLAN_EMOD_STR 0x10 +#define I40E_AQ_VSI_PVLAN_EMOD_NOTHING 0x18 + u8 pvlan_reserved[3]; + /* ingress egress up sections */ + __le32 ingress_table; /* bitmap, 3 bits per up */ +#define I40E_AQ_VSI_UP_TABLE_UP0_SHIFT 0 +#define I40E_AQ_VSI_UP_TABLE_UP0_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP0_SHIFT) +#define I40E_AQ_VSI_UP_TABLE_UP1_SHIFT 3 +#define I40E_AQ_VSI_UP_TABLE_UP1_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP1_SHIFT) +#define I40E_AQ_VSI_UP_TABLE_UP2_SHIFT 6 +#define I40E_AQ_VSI_UP_TABLE_UP2_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP2_SHIFT) +#define I40E_AQ_VSI_UP_TABLE_UP3_SHIFT 9 +#define I40E_AQ_VSI_UP_TABLE_UP3_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP3_SHIFT) +#define I40E_AQ_VSI_UP_TABLE_UP4_SHIFT 12 +#define I40E_AQ_VSI_UP_TABLE_UP4_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP4_SHIFT) +#define I40E_AQ_VSI_UP_TABLE_UP5_SHIFT 15 +#define I40E_AQ_VSI_UP_TABLE_UP5_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP5_SHIFT) +#define I40E_AQ_VSI_UP_TABLE_UP6_SHIFT 18 +#define I40E_AQ_VSI_UP_TABLE_UP6_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP6_SHIFT) +#define I40E_AQ_VSI_UP_TABLE_UP7_SHIFT 21 +#define I40E_AQ_VSI_UP_TABLE_UP7_MASK (0x7 << \ + I40E_AQ_VSI_UP_TABLE_UP7_SHIFT) + __le32 egress_table; /* same defines as for ingress table */ + /* cascaded PV section */ + __le16 cas_pv_tag; 
+ u8 cas_pv_flags; +#define I40E_AQ_VSI_CAS_PV_TAGX_SHIFT 0x00 +#define I40E_AQ_VSI_CAS_PV_TAGX_MASK (0x03 << \ + I40E_AQ_VSI_CAS_PV_TAGX_SHIFT) +#define I40E_AQ_VSI_CAS_PV_TAGX_LEAVE 0x00 +#define I40E_AQ_VSI_CAS_PV_TAGX_REMOVE 0x01 +#define I40E_AQ_VSI_CAS_PV_TAGX_COPY 0x02 +#define I40E_AQ_VSI_CAS_PV_INSERT_TAG 0x10 +#define I40E_AQ_VSI_CAS_PV_ETAG_PRUNE 0x20 +#define I40E_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG 0x40 + u8 cas_pv_reserved; + /* queue mapping section */ + __le16 mapping_flags; +#define I40E_AQ_VSI_QUE_MAP_CONTIG 0x0 +#define I40E_AQ_VSI_QUE_MAP_NONCONTIG 0x1 + __le16 queue_mapping[16]; +#define I40E_AQ_VSI_QUEUE_SHIFT 0x0 +#define I40E_AQ_VSI_QUEUE_MASK (0x7FF << I40E_AQ_VSI_QUEUE_SHIFT) + __le16 tc_mapping[8]; +#define I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT 0 +#define I40E_AQ_VSI_TC_QUE_OFFSET_MASK (0x1FF << \ + I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) +#define I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT 9 +#define I40E_AQ_VSI_TC_QUE_NUMBER_MASK (0x7 << \ + I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT) + /* queueing option section */ + u8 queueing_opt_flags; +#define I40E_AQ_VSI_QUE_OPT_TCP_ENA 0x10 +#define I40E_AQ_VSI_QUE_OPT_FCOE_ENA 0x20 + u8 queueing_opt_reserved[3]; + /* scheduler section */ + u8 up_enable_bits; + u8 sched_reserved; + /* outer up section */ + __le32 outer_up_table; /* same structure and defines as ingress table */ + u8 cmd_reserved[8]; + /* last 32 bytes are written by FW */ + __le16 qs_handle[8]; +#define I40E_AQ_VSI_QS_HANDLE_INVALID 0xFFFF + __le16 stat_counter_idx; + __le16 sched_id; + u8 resp_reserved[12]; +}; + +I40E_CHECK_STRUCT_LEN(128, i40e_aqc_vsi_properties_data); + +/* Add Port Virtualizer (direct 0x0220) + * also used for update PV (direct 0x0221) but only flags are used + * (IS_CTRL_PORT only works on add PV) + */ +struct i40e_aqc_add_update_pv { + __le16 command_flags; +#define I40E_AQC_PV_FLAG_PV_TYPE 0x1 +#define I40E_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN 0x2 +#define I40E_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN 0x4 +#define I40E_AQC_PV_FLAG_IS_CTRL_PORT 0x8 + __le16 uplink_seid; + __le16 connected_seid; + u8 reserved[10]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_update_pv); + +struct i40e_aqc_add_update_pv_completion { + /* reserved for update; for add also encodes error if rc == ENOSPC */ + __le16 pv_seid; +#define I40E_AQC_PV_ERR_FLAG_NO_PV 0x1 +#define I40E_AQC_PV_ERR_FLAG_NO_SCHED 0x2 +#define I40E_AQC_PV_ERR_FLAG_NO_COUNTER 0x4 +#define I40E_AQC_PV_ERR_FLAG_NO_ENTRY 0x8 + u8 reserved[14]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_update_pv_completion); + +/* Get PV Params (direct 0x0222) + * uses i40e_aqc_switch_seid for the descriptor + */ + +struct i40e_aqc_get_pv_params_completion { + __le16 seid; + __le16 default_stag; + __le16 pv_flags; /* same flags as add_pv */ +#define I40E_AQC_GET_PV_PV_TYPE 0x1 +#define I40E_AQC_GET_PV_FRWD_UNKNOWN_STAG 0x2 +#define I40E_AQC_GET_PV_FRWD_UNKNOWN_ETAG 0x4 + u8 reserved[8]; + __le16 default_port_seid; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_get_pv_params_completion); + +/* Add VEB (direct 0x0230) */ +struct i40e_aqc_add_veb { + __le16 uplink_seid; + __le16 downlink_seid; + __le16 veb_flags; +#define I40E_AQC_ADD_VEB_FLOATING 0x1 +#define I40E_AQC_ADD_VEB_PORT_TYPE_SHIFT 1 +#define I40E_AQC_ADD_VEB_PORT_TYPE_MASK (0x3 << \ + I40E_AQC_ADD_VEB_PORT_TYPE_SHIFT) +#define I40E_AQC_ADD_VEB_PORT_TYPE_DEFAULT 0x2 +#define I40E_AQC_ADD_VEB_PORT_TYPE_DATA 0x4 +#define I40E_AQC_ADD_VEB_ENABLE_L2_FILTER 0x8 + u8 enable_tcs; + u8 reserved[9]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_veb); + +struct i40e_aqc_add_veb_completion { + u8 reserved[6]; + __le16 
switch_seid; + /* also encodes error if rc == ENOSPC; codes are the same as add_pv */ + __le16 veb_seid; +#define I40E_AQC_VEB_ERR_FLAG_NO_VEB 0x1 +#define I40E_AQC_VEB_ERR_FLAG_NO_SCHED 0x2 +#define I40E_AQC_VEB_ERR_FLAG_NO_COUNTER 0x4 +#define I40E_AQC_VEB_ERR_FLAG_NO_ENTRY 0x8 + __le16 statistic_index; + __le16 vebs_used; + __le16 vebs_free; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_veb_completion); + +/* Get VEB Parameters (direct 0x0232) + * uses i40e_aqc_switch_seid for the descriptor + */ +struct i40e_aqc_get_veb_parameters_completion { + __le16 seid; + __le16 switch_id; + __le16 veb_flags; /* only the first/last flags from 0x0230 is valid */ + __le16 statistic_index; + __le16 vebs_used; + __le16 vebs_free; + u8 reserved[4]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_get_veb_parameters_completion); + +/* Delete Element (direct 0x0243) + * uses the generic i40e_aqc_switch_seid + */ + +/* Add MAC-VLAN (indirect 0x0250) */ + +/* used for the command for most vlan commands */ +struct i40e_aqc_macvlan { + __le16 num_addresses; + __le16 seid[3]; +#define I40E_AQC_MACVLAN_CMD_SEID_NUM_SHIFT 0 +#define I40E_AQC_MACVLAN_CMD_SEID_NUM_MASK (0x3FF << \ + I40E_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) +#define I40E_AQC_MACVLAN_CMD_SEID_VALID 0x8000 + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_macvlan); + +/* indirect data for command and response */ +struct i40e_aqc_add_macvlan_element_data { + u8 mac_addr[6]; + __le16 vlan_tag; + __le16 flags; +#define I40E_AQC_MACVLAN_ADD_PERFECT_MATCH 0x0001 +#define I40E_AQC_MACVLAN_ADD_HASH_MATCH 0x0002 +#define I40E_AQC_MACVLAN_ADD_IGNORE_VLAN 0x0004 +#define I40E_AQC_MACVLAN_ADD_TO_QUEUE 0x0008 + __le16 queue_number; +#define I40E_AQC_MACVLAN_CMD_QUEUE_SHIFT 0 +#define I40E_AQC_MACVLAN_CMD_QUEUE_MASK (0x7FF << \ + I40E_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) + /* response section */ + u8 match_method; +#define I40E_AQC_MM_PERFECT_MATCH 0x01 +#define I40E_AQC_MM_HASH_MATCH 0x02 +#define I40E_AQC_MM_ERR_NO_RES 0xFF + u8 reserved1[3]; +}; + +struct i40e_aqc_add_remove_macvlan_completion { + __le16 perfect_mac_used; + __le16 perfect_mac_free; + __le16 unicast_hash_free; + __le16 multicast_hash_free; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_macvlan_completion); + +/* Remove MAC-VLAN (indirect 0x0251) + * uses i40e_aqc_macvlan for the descriptor + * data points to an array of num_addresses of elements + */ + +struct i40e_aqc_remove_macvlan_element_data { + u8 mac_addr[6]; + __le16 vlan_tag; + u8 flags; +#define I40E_AQC_MACVLAN_DEL_PERFECT_MATCH 0x01 +#define I40E_AQC_MACVLAN_DEL_HASH_MATCH 0x02 +#define I40E_AQC_MACVLAN_DEL_IGNORE_VLAN 0x08 +#define I40E_AQC_MACVLAN_DEL_ALL_VSIS 0x10 + u8 reserved[3]; + /* reply section */ + u8 error_code; +#define I40E_AQC_REMOVE_MACVLAN_SUCCESS 0x0 +#define I40E_AQC_REMOVE_MACVLAN_FAIL 0xFF + u8 reply_reserved[3]; +}; + +/* Add VLAN (indirect 0x0252) + * Remove VLAN (indirect 0x0253) + * use the generic i40e_aqc_macvlan for the command + */ +struct i40e_aqc_add_remove_vlan_element_data { + __le16 vlan_tag; + u8 vlan_flags; +/* flags for add VLAN */ +#define I40E_AQC_ADD_VLAN_LOCAL 0x1 +#define I40E_AQC_ADD_PVLAN_TYPE_SHIFT 1 +#define I40E_AQC_ADD_PVLAN_TYPE_MASK (0x3 << I40E_AQC_ADD_PVLAN_TYPE_SHIFT) +#define I40E_AQC_ADD_PVLAN_TYPE_REGULAR 0x0 +#define I40E_AQC_ADD_PVLAN_TYPE_PRIMARY 0x2 +#define I40E_AQC_ADD_PVLAN_TYPE_SECONDARY 0x4 +#define I40E_AQC_VLAN_PTYPE_SHIFT 3 +#define I40E_AQC_VLAN_PTYPE_MASK (0x3 << I40E_AQC_VLAN_PTYPE_SHIFT) +#define 
I40E_AQC_VLAN_PTYPE_REGULAR_VSI 0x0 +#define I40E_AQC_VLAN_PTYPE_PROMISC_VSI 0x8 +#define I40E_AQC_VLAN_PTYPE_COMMUNITY_VSI 0x10 +#define I40E_AQC_VLAN_PTYPE_ISOLATED_VSI 0x18 +/* flags for remove VLAN */ +#define I40E_AQC_REMOVE_VLAN_ALL 0x1 + u8 reserved; + u8 result; +/* flags for add VLAN */ +#define I40E_AQC_ADD_VLAN_SUCCESS 0x0 +#define I40E_AQC_ADD_VLAN_FAIL_REQUEST 0xFE +#define I40E_AQC_ADD_VLAN_FAIL_RESOURCE 0xFF +/* flags for remove VLAN */ +#define I40E_AQC_REMOVE_VLAN_SUCCESS 0x0 +#define I40E_AQC_REMOVE_VLAN_FAIL 0xFF + u8 reserved1[3]; +}; + +struct i40e_aqc_add_remove_vlan_completion { + u8 reserved[4]; + __le16 vlans_used; + __le16 vlans_free; + __le32 addr_high; + __le32 addr_low; +}; + +/* Set VSI Promiscuous Modes (direct 0x0254) */ +struct i40e_aqc_set_vsi_promiscuous_modes { + __le16 promiscuous_flags; + __le16 valid_flags; +/* flags used for both fields above */ +#define I40E_AQC_SET_VSI_PROMISC_UNICAST 0x01 +#define I40E_AQC_SET_VSI_PROMISC_MULTICAST 0x02 +#define I40E_AQC_SET_VSI_PROMISC_BROADCAST 0x04 +#define I40E_AQC_SET_VSI_DEFAULT 0x08 +#define I40E_AQC_SET_VSI_PROMISC_VLAN 0x10 + __le16 seid; +#define I40E_AQC_VSI_PROM_CMD_SEID_MASK 0x3FF + __le16 vlan_tag; +#define I40E_AQC_SET_VSI_VLAN_VALID 0x8000 + u8 reserved[8]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_set_vsi_promiscuous_modes); + +/* Add S/E-tag command (direct 0x0255) + * Uses generic i40e_aqc_add_remove_tag_completion for completion + */ +struct i40e_aqc_add_tag { + __le16 flags; +#define I40E_AQC_ADD_TAG_FLAG_TO_QUEUE 0x0001 + __le16 seid; +#define I40E_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT 0 +#define I40E_AQC_ADD_TAG_CMD_SEID_NUM_MASK (0x3FF << \ + I40E_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT) + __le16 tag; + __le16 queue_number; + u8 reserved[8]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_tag); + +struct i40e_aqc_add_remove_tag_completion { + u8 reserved[12]; + __le16 tags_used; + __le16 tags_free; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_tag_completion); + +/* Remove S/E-tag command (direct 0x0256) + * Uses generic i40e_aqc_add_remove_tag_completion for completion + */ +struct i40e_aqc_remove_tag { + __le16 seid; +#define I40E_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT 0 +#define I40E_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ + I40E_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT) + __le16 tag; + u8 reserved[12]; +}; + +/* Add multicast E-Tag (direct 0x0257) + * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields + * and no external data + */ +struct i40e_aqc_add_remove_mcast_etag { + __le16 pv_seid; + __le16 etag; + u8 num_unicast_etags; + u8 reserved[3]; + __le32 addr_high; /* address of array of 2-byte s-tags */ + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_mcast_etag); + +struct i40e_aqc_add_remove_mcast_etag_completion { + u8 reserved[4]; + __le16 mcast_etags_used; + __le16 mcast_etags_free; + __le32 addr_high; + __le32 addr_low; + +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_mcast_etag_completion); + +/* Update S/E-Tag (direct 0x0259) */ +struct i40e_aqc_update_tag { + __le16 seid; +#define I40E_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT 0 +#define I40E_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ + I40E_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT) + __le16 old_tag; + __le16 new_tag; + u8 reserved[10]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_update_tag); + +struct i40e_aqc_update_tag_completion { + u8 reserved[12]; + __le16 tags_used; + __le16 tags_free; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_update_tag_completion); + +/* Add Control Packet filter (direct 0x025A) + * Remove Control Packet filter 
(direct 0x025B) + * uses the i40e_aqc_add_oveb_cloud, + * and the generic direct completion structure + */ +struct i40e_aqc_add_remove_control_packet_filter { + u8 mac[6]; + __le16 etype; + __le16 flags; +#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC 0x0001 +#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP 0x0002 +#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE 0x0004 +#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX 0x0008 +#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_RX 0x0000 + __le16 seid; +#define I40E_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT 0 +#define I40E_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK (0x3FF << \ + I40E_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT) + __le16 queue; + u8 reserved[2]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_control_packet_filter); + +struct i40e_aqc_add_remove_control_packet_filter_completion { + __le16 mac_etype_used; + __le16 etype_used; + __le16 mac_etype_free; + __le16 etype_free; + u8 reserved[8]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_control_packet_filter_completion); + +/* Add Cloud filters (indirect 0x025C) + * Remove Cloud filters (indirect 0x025D) + * uses the i40e_aqc_add_remove_cloud_filters, + * and the generic indirect completion structure + */ +struct i40e_aqc_add_remove_cloud_filters { + u8 num_filters; + u8 reserved; + __le16 seid; +#define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT 0 +#define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK (0x3FF << \ + I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT) + u8 reserved2[4]; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_cloud_filters); + +struct i40e_aqc_add_remove_cloud_filters_element_data { + u8 outer_mac[6]; + u8 inner_mac[6]; + __le16 inner_vlan; + union { + struct { + u8 reserved[12]; + u8 data[4]; + } v4; + struct { + u8 data[16]; + } v6; + } ipaddr; + __le16 flags; +#define I40E_AQC_ADD_CLOUD_FILTER_SHIFT 0 +#define I40E_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \ + I40E_AQC_ADD_CLOUD_FILTER_SHIFT) +/* 0x0000 reserved */ +#define I40E_AQC_ADD_CLOUD_FILTER_OIP 0x0001 +/* 0x0002 reserved */ +#define I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN 0x0003 +#define I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID 0x0004 +/* 0x0005 reserved */ +#define I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID 0x0006 +/* 0x0007 reserved */ +/* 0x0008 reserved */ +#define I40E_AQC_ADD_CLOUD_FILTER_OMAC 0x0009 +#define I40E_AQC_ADD_CLOUD_FILTER_IMAC 0x000A +#define I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC 0x000B +#define I40E_AQC_ADD_CLOUD_FILTER_IIP 0x000C + +#define I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE 0x0080 +#define I40E_AQC_ADD_CLOUD_VNK_SHIFT 6 +#define I40E_AQC_ADD_CLOUD_VNK_MASK 0x00C0 +#define I40E_AQC_ADD_CLOUD_FLAGS_IPV4 0 +#define I40E_AQC_ADD_CLOUD_FLAGS_IPV6 0x0100 + +#define I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT 9 +#define I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK 0x1E00 +#define I40E_AQC_ADD_CLOUD_TNL_TYPE_XVLAN 0 +#define I40E_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC 1 +#define I40E_AQC_ADD_CLOUD_TNL_TYPE_NGE 2 +#define I40E_AQC_ADD_CLOUD_TNL_TYPE_IP 3 + + __le32 tenant_id; + u8 reserved[4]; + __le16 queue_number; +#define I40E_AQC_ADD_CLOUD_QUEUE_SHIFT 0 +#define I40E_AQC_ADD_CLOUD_QUEUE_MASK (0x3F << \ + I40E_AQC_ADD_CLOUD_QUEUE_SHIFT) + u8 reserved2[14]; + /* response section */ + u8 allocation_result; +#define I40E_AQC_ADD_CLOUD_FILTER_SUCCESS 0x0 +#define I40E_AQC_ADD_CLOUD_FILTER_FAIL 0xFF + u8 response_reserved[7]; +}; + +struct i40e_aqc_remove_cloud_filters_completion { + __le16 perfect_ovlan_used; + __le16 perfect_ovlan_free; + __le16 vlan_used; + __le16 vlan_free; + __le32 addr_high; + __le32 
addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_cloud_filters_completion);
+
+/* Add Mirror Rule (indirect or direct 0x0260)
+ * Delete Mirror Rule (indirect or direct 0x0261)
+ * note: some rule types (4,5) do not use an external buffer.
+ * take care to set the flags correctly.
+ */
+struct i40e_aqc_add_delete_mirror_rule {
+ __le16 seid;
+ __le16 rule_type;
+#define I40E_AQC_MIRROR_RULE_TYPE_SHIFT 0
+#define I40E_AQC_MIRROR_RULE_TYPE_MASK (0x7 << \
+ I40E_AQC_MIRROR_RULE_TYPE_SHIFT)
+#define I40E_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS 1
+#define I40E_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS 2
+#define I40E_AQC_MIRROR_RULE_TYPE_VLAN 3
+#define I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS 4
+#define I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS 5
+ __le16 num_entries;
+ __le16 destination; /* VSI for add, rule id for delete */
+ __le32 addr_high; /* address of array of 2-byte VSI or VLAN ids */
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_add_delete_mirror_rule);
+
+struct i40e_aqc_add_delete_mirror_rule_completion {
+ u8 reserved[2];
+ __le16 rule_id; /* only used on add */
+ __le16 mirror_rules_used;
+ __le16 mirror_rules_free;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_add_delete_mirror_rule_completion);
+
+/* DCB 0x03xx */
+
+/* PFC Ignore (direct 0x0301)
+ * the command and response use the same descriptor structure
+ */
+struct i40e_aqc_pfc_ignore {
+ u8 tc_bitmap;
+ u8 command_flags; /* unused on response */
+#define I40E_AQC_PFC_IGNORE_SET 0x80
+#define I40E_AQC_PFC_IGNORE_CLEAR 0x0
+ u8 reserved[14];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_pfc_ignore);
+
+/* DCB Update (direct 0x0302) uses the i40e_aq_desc structure
+ * with no parameters
+ */
+
+/* TX scheduler 0x04xx */
+
+/* Almost all the indirect commands use
+ * this generic struct to pass the SEID in param0
+ */
+struct i40e_aqc_tx_sched_ind {
+ __le16 vsi_seid;
+ u8 reserved[6];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_tx_sched_ind);
+
+/* Several commands respond with a set of queue set handles */
+struct i40e_aqc_qs_handles_resp {
+ __le16 qs_handles[8];
+};
+
+/* Configure VSI BW limits (direct 0x0400) */
+struct i40e_aqc_configure_vsi_bw_limit {
+ __le16 vsi_seid;
+ u8 reserved[2];
+ __le16 credit;
+ u8 reserved1[2];
+ u8 max_credit; /* 0-3, limit = 2^max */
+ u8 reserved2[7];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_configure_vsi_bw_limit);
+
+/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406)
+ * responds with i40e_aqc_qs_handles_resp
+ */
+struct i40e_aqc_configure_vsi_ets_sla_bw_data {
+ u8 tc_valid_bits;
+ u8 reserved[15];
+ __le16 tc_bw_credits[8]; /* FW writes back QS handles here */
+
+ /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */
+ __le16 tc_bw_max[2];
+ u8 reserved1[28];
+};
+
+/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
+ * responds with i40e_aqc_qs_handles_resp
+ */
+struct i40e_aqc_configure_vsi_tc_bw_data {
+ u8 tc_valid_bits;
+ u8 reserved[3];
+ u8 tc_bw_credits[8];
+ u8 reserved1[4];
+ __le16 qs_handles[8];
+};
+
+/* Query VSI BW configuration (indirect 0x0408) */
+struct i40e_aqc_query_vsi_bw_config_resp {
+ u8 tc_valid_bits;
+ u8 tc_suspended_bits;
+ u8 reserved[14];
+ __le16 qs_handles[8];
+ u8 reserved1[4];
+ __le16 port_bw_limit;
+ u8 reserved2[2];
+ u8 max_bw; /* 0-3, limit = 2^max */
+ u8 reserved3[23];
+};
+
+/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
+struct i40e_aqc_query_vsi_ets_sla_config_resp {
+ u8 tc_valid_bits;
+ u8 reserved[3];
+ u8
share_credits[8]; + __le16 credits[8]; + + /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */ + __le16 tc_bw_max[2]; +}; + +/* Configure Switching Component Bandwidth Limit (direct 0x0410) */ +struct i40e_aqc_configure_switching_comp_bw_limit { + __le16 seid; + u8 reserved[2]; + __le16 credit; + u8 reserved1[2]; + u8 max_bw; /* 0-3, limit = 2^max */ + u8 reserved2[7]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_configure_switching_comp_bw_limit); + +/* Enable Physical Port ETS (indirect 0x0413) + * Modify Physical Port ETS (indirect 0x0414) + * Disable Physical Port ETS (indirect 0x0415) + */ +struct i40e_aqc_configure_switching_comp_ets_data { + u8 reserved[4]; + u8 tc_valid_bits; + u8 seepage; +#define I40E_AQ_ETS_SEEPAGE_EN_MASK 0x1 + u8 tc_strict_priority_flags; + u8 reserved1[17]; + u8 tc_bw_share_credits[8]; + u8 reserved2[96]; +}; + +/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */ +struct i40e_aqc_configure_switching_comp_ets_bw_limit_data { + u8 tc_valid_bits; + u8 reserved[15]; + __le16 tc_bw_credit[8]; + + /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */ + __le16 tc_bw_max[2]; + u8 reserved1[28]; +}; + +/* Configure Switching Component Bandwidth Allocation per Tc + * (indirect 0x0417) + */ +struct i40e_aqc_configure_switching_comp_bw_config_data { + u8 tc_valid_bits; + u8 reserved[2]; + u8 absolute_credits; /* bool */ + u8 tc_bw_share_credits[8]; + u8 reserved1[20]; +}; + +/* Query Switching Component Configuration (indirect 0x0418) */ +struct i40e_aqc_query_switching_comp_ets_config_resp { + u8 tc_valid_bits; + u8 reserved[35]; + __le16 port_bw_limit; + u8 reserved1[2]; + u8 tc_bw_max; /* 0-3, limit = 2^max */ + u8 reserved2[23]; +}; + +/* Query PhysicalPort ETS Configuration (indirect 0x0419) */ +struct i40e_aqc_query_port_ets_config_resp { + u8 reserved[4]; + u8 tc_valid_bits; + u8 reserved1; + u8 tc_strict_priority_bits; + u8 reserved2; + u8 tc_bw_share_credits[8]; + __le16 tc_bw_limits[8]; + + /* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */ + __le16 tc_bw_max[2]; + u8 reserved3[32]; +}; + +/* Query Switching Component Bandwidth Allocation per Traffic Type + * (indirect 0x041A) + */ +struct i40e_aqc_query_switching_comp_bw_config_resp { + u8 tc_valid_bits; + u8 reserved[2]; + u8 absolute_credits_enable; /* bool */ + u8 tc_bw_share_credits[8]; + __le16 tc_bw_limits[8]; + + /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */ + __le16 tc_bw_max[2]; +}; + +/* Suspend/resume port TX traffic + * (direct 0x041B and 0x041C) uses the generic SEID struct + */ + +/* Configure partition BW + * (indirect 0x041D) + */ +struct i40e_aqc_configure_partition_bw_data { + __le16 pf_valid_bits; + u8 min_bw[16]; /* guaranteed bandwidth */ + u8 max_bw[16]; /* bandwidth limit */ +}; + +/* Get and set the active HMC resource profile and status. 
+ * (direct 0x0500) and (direct 0x0501) + */ +struct i40e_aq_get_set_hmc_resource_profile { + u8 pm_profile; + u8 pe_vf_enabled; + u8 reserved[14]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aq_get_set_hmc_resource_profile); + +enum i40e_aq_hmc_profile { + /* I40E_HMC_PROFILE_NO_CHANGE = 0, reserved */ + I40E_HMC_PROFILE_DEFAULT = 1, + I40E_HMC_PROFILE_FAVOR_VF = 2, + I40E_HMC_PROFILE_EQUAL = 3, +}; + +#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK 0xF +#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK 0x3F + +/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */ + +/* set in param0 for get phy abilities to report qualified modules */ +#define I40E_AQ_PHY_REPORT_QUALIFIED_MODULES 0x0001 +#define I40E_AQ_PHY_REPORT_INITIAL_VALUES 0x0002 + +enum i40e_aq_phy_type { + I40E_PHY_TYPE_SGMII = 0x0, + I40E_PHY_TYPE_1000BASE_KX = 0x1, + I40E_PHY_TYPE_10GBASE_KX4 = 0x2, + I40E_PHY_TYPE_10GBASE_KR = 0x3, + I40E_PHY_TYPE_40GBASE_KR4 = 0x4, + I40E_PHY_TYPE_XAUI = 0x5, + I40E_PHY_TYPE_XFI = 0x6, + I40E_PHY_TYPE_SFI = 0x7, + I40E_PHY_TYPE_XLAUI = 0x8, + I40E_PHY_TYPE_XLPPI = 0x9, + I40E_PHY_TYPE_40GBASE_CR4_CU = 0xA, + I40E_PHY_TYPE_10GBASE_CR1_CU = 0xB, + I40E_PHY_TYPE_10GBASE_AOC = 0xC, + I40E_PHY_TYPE_40GBASE_AOC = 0xD, + I40E_PHY_TYPE_100BASE_TX = 0x11, + I40E_PHY_TYPE_1000BASE_T = 0x12, + I40E_PHY_TYPE_10GBASE_T = 0x13, + I40E_PHY_TYPE_10GBASE_SR = 0x14, + I40E_PHY_TYPE_10GBASE_LR = 0x15, + I40E_PHY_TYPE_10GBASE_SFPP_CU = 0x16, + I40E_PHY_TYPE_10GBASE_CR1 = 0x17, + I40E_PHY_TYPE_40GBASE_CR4 = 0x18, + I40E_PHY_TYPE_40GBASE_SR4 = 0x19, + I40E_PHY_TYPE_40GBASE_LR4 = 0x1A, + I40E_PHY_TYPE_1000BASE_SX = 0x1B, + I40E_PHY_TYPE_1000BASE_LX = 0x1C, + I40E_PHY_TYPE_1000BASE_T_OPTICAL = 0x1D, + I40E_PHY_TYPE_20GBASE_KR2 = 0x1E, + I40E_PHY_TYPE_MAX +}; + +#define I40E_LINK_SPEED_100MB_SHIFT 0x1 +#define I40E_LINK_SPEED_1000MB_SHIFT 0x2 +#define I40E_LINK_SPEED_10GB_SHIFT 0x3 +#define I40E_LINK_SPEED_40GB_SHIFT 0x4 +#define I40E_LINK_SPEED_20GB_SHIFT 0x5 + +enum i40e_aq_link_speed { + I40E_LINK_SPEED_UNKNOWN = 0, + I40E_LINK_SPEED_100MB = (1 << I40E_LINK_SPEED_100MB_SHIFT), + I40E_LINK_SPEED_1GB = (1 << I40E_LINK_SPEED_1000MB_SHIFT), + I40E_LINK_SPEED_10GB = (1 << I40E_LINK_SPEED_10GB_SHIFT), + I40E_LINK_SPEED_40GB = (1 << I40E_LINK_SPEED_40GB_SHIFT), + I40E_LINK_SPEED_20GB = (1 << I40E_LINK_SPEED_20GB_SHIFT) +}; + +struct i40e_aqc_module_desc { + u8 oui[3]; + u8 reserved1; + u8 part_number[16]; + u8 revision[4]; + u8 reserved2[8]; +}; + +struct i40e_aq_get_phy_abilities_resp { + __le32 phy_type; /* bitmap using the above enum for offsets */ + u8 link_speed; /* bitmap using the above enum bit patterns */ + u8 abilities; +#define I40E_AQ_PHY_FLAG_PAUSE_TX 0x01 +#define I40E_AQ_PHY_FLAG_PAUSE_RX 0x02 +#define I40E_AQ_PHY_FLAG_LOW_POWER 0x04 +#define I40E_AQ_PHY_LINK_ENABLED 0x08 +#define I40E_AQ_PHY_AN_ENABLED 0x10 +#define I40E_AQ_PHY_FLAG_MODULE_QUAL 0x20 + __le16 eee_capability; +#define I40E_AQ_EEE_100BASE_TX 0x0002 +#define I40E_AQ_EEE_1000BASE_T 0x0004 +#define I40E_AQ_EEE_10GBASE_T 0x0008 +#define I40E_AQ_EEE_1000BASE_KX 0x0010 +#define I40E_AQ_EEE_10GBASE_KX4 0x0020 +#define I40E_AQ_EEE_10GBASE_KR 0x0040 + __le32 eeer_val; + u8 d3_lpan; +#define I40E_AQ_SET_PHY_D3_LPAN_ENA 0x01 + u8 reserved[3]; + u8 phy_id[4]; + u8 module_type[3]; + u8 qualified_module_count; +#define I40E_AQ_PHY_MAX_QMS 16 + struct i40e_aqc_module_desc qualified_module[I40E_AQ_PHY_MAX_QMS]; +}; + +/* Set PHY Config (direct 0x0601) */ +struct i40e_aq_set_phy_config { /* same bits as above in all */ + __le32 phy_type; + u8 
link_speed; + u8 abilities; +/* bits 0-2 use the values from get_phy_abilities_resp */ +#define I40E_AQ_PHY_ENABLE_LINK 0x08 +#define I40E_AQ_PHY_ENABLE_AN 0x10 +#define I40E_AQ_PHY_ENABLE_ATOMIC_LINK 0x20 + __le16 eee_capability; + __le32 eeer; + u8 low_power_ctrl; + u8 reserved[3]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aq_set_phy_config); + +/* Set MAC Config command data structure (direct 0x0603) */ +struct i40e_aq_set_mac_config { + __le16 max_frame_size; + u8 params; +#define I40E_AQ_SET_MAC_CONFIG_CRC_EN 0x04 +#define I40E_AQ_SET_MAC_CONFIG_PACING_MASK 0x78 +#define I40E_AQ_SET_MAC_CONFIG_PACING_SHIFT 3 +#define I40E_AQ_SET_MAC_CONFIG_PACING_NONE 0x0 +#define I40E_AQ_SET_MAC_CONFIG_PACING_1B_13TX 0xF +#define I40E_AQ_SET_MAC_CONFIG_PACING_1DW_9TX 0x9 +#define I40E_AQ_SET_MAC_CONFIG_PACING_1DW_4TX 0x8 +#define I40E_AQ_SET_MAC_CONFIG_PACING_3DW_7TX 0x7 +#define I40E_AQ_SET_MAC_CONFIG_PACING_2DW_3TX 0x6 +#define I40E_AQ_SET_MAC_CONFIG_PACING_1DW_1TX 0x5 +#define I40E_AQ_SET_MAC_CONFIG_PACING_3DW_2TX 0x4 +#define I40E_AQ_SET_MAC_CONFIG_PACING_7DW_3TX 0x3 +#define I40E_AQ_SET_MAC_CONFIG_PACING_4DW_1TX 0x2 +#define I40E_AQ_SET_MAC_CONFIG_PACING_9DW_1TX 0x1 + u8 tx_timer_priority; /* bitmap */ + __le16 tx_timer_value; + __le16 fc_refresh_threshold; + u8 reserved[8]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aq_set_mac_config); + +/* Restart Auto-Negotiation (direct 0x605) */ +struct i40e_aqc_set_link_restart_an { + u8 command; +#define I40E_AQ_PHY_RESTART_AN 0x02 +#define I40E_AQ_PHY_LINK_ENABLE 0x04 + u8 reserved[15]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_set_link_restart_an); + +/* Get Link Status cmd & response data structure (direct 0x0607) */ +struct i40e_aqc_get_link_status { + __le16 command_flags; /* only field set on command */ +#define I40E_AQ_LSE_MASK 0x3 +#define I40E_AQ_LSE_NOP 0x0 +#define I40E_AQ_LSE_DISABLE 0x2 +#define I40E_AQ_LSE_ENABLE 0x3 +/* only response uses this flag */ +#define I40E_AQ_LSE_IS_ENABLED 0x1 + u8 phy_type; /* i40e_aq_phy_type */ + u8 link_speed; /* i40e_aq_link_speed */ + u8 link_info; +#define I40E_AQ_LINK_UP 0x01 +#define I40E_AQ_LINK_FAULT 0x02 +#define I40E_AQ_LINK_FAULT_TX 0x04 +#define I40E_AQ_LINK_FAULT_RX 0x08 +#define I40E_AQ_LINK_FAULT_REMOTE 0x10 +#define I40E_AQ_MEDIA_AVAILABLE 0x40 +#define I40E_AQ_SIGNAL_DETECT 0x80 + u8 an_info; +#define I40E_AQ_AN_COMPLETED 0x01 +#define I40E_AQ_LP_AN_ABILITY 0x02 +#define I40E_AQ_PD_FAULT 0x04 +#define I40E_AQ_FEC_EN 0x08 +#define I40E_AQ_PHY_LOW_POWER 0x10 +#define I40E_AQ_LINK_PAUSE_TX 0x20 +#define I40E_AQ_LINK_PAUSE_RX 0x40 +#define I40E_AQ_QUALIFIED_MODULE 0x80 + u8 ext_info; +#define I40E_AQ_LINK_PHY_TEMP_ALARM 0x01 +#define I40E_AQ_LINK_XCESSIVE_ERRORS 0x02 +#define I40E_AQ_LINK_TX_SHIFT 0x02 +#define I40E_AQ_LINK_TX_MASK (0x03 << I40E_AQ_LINK_TX_SHIFT) +#define I40E_AQ_LINK_TX_ACTIVE 0x00 +#define I40E_AQ_LINK_TX_DRAINED 0x01 +#define I40E_AQ_LINK_TX_FLUSHED 0x03 +#define I40E_AQ_LINK_FORCED_40G 0x10 + u8 loopback; /* use defines from i40e_aqc_set_lb_mode */ + __le16 max_frame_size; + u8 config; +#define I40E_AQ_CONFIG_CRC_ENA 0x04 +#define I40E_AQ_CONFIG_PACING_MASK 0x78 + u8 reserved[5]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_get_link_status); + +/* Set event mask command (direct 0x613) */ +struct i40e_aqc_set_phy_int_mask { + u8 reserved[8]; + __le16 event_mask; +#define I40E_AQ_EVENT_LINK_UPDOWN 0x0002 +#define I40E_AQ_EVENT_MEDIA_NA 0x0004 +#define I40E_AQ_EVENT_LINK_FAULT 0x0008 +#define I40E_AQ_EVENT_PHY_TEMP_ALARM 0x0010 +#define I40E_AQ_EVENT_EXCESSIVE_ERRORS 0x0020 +#define 
I40E_AQ_EVENT_SIGNAL_DETECT 0x0040 +#define I40E_AQ_EVENT_AN_COMPLETED 0x0080 +#define I40E_AQ_EVENT_MODULE_QUAL_FAIL 0x0100 +#define I40E_AQ_EVENT_PORT_TX_SUSPENDED 0x0200 + u8 reserved1[6]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_set_phy_int_mask); + +/* Get Local AN advt register (direct 0x0614) + * Set Local AN advt register (direct 0x0615) + * Get Link Partner AN advt register (direct 0x0616) + */ +struct i40e_aqc_an_advt_reg { + __le32 local_an_reg0; + __le16 local_an_reg1; + u8 reserved[10]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_an_advt_reg); + +/* Set Loopback mode (0x0618) */ +struct i40e_aqc_set_lb_mode { + __le16 lb_mode; +#define I40E_AQ_LB_PHY_LOCAL 0x01 +#define I40E_AQ_LB_PHY_REMOTE 0x02 +#define I40E_AQ_LB_MAC_LOCAL 0x04 + u8 reserved[14]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_set_lb_mode); + +/* Set PHY Debug command (0x0622) */ +struct i40e_aqc_set_phy_debug { + u8 command_flags; +#define I40E_AQ_PHY_DEBUG_RESET_INTERNAL 0x02 +#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT 2 +#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK (0x03 << \ + I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT) +#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE 0x00 +#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD 0x01 +#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT 0x02 +#define I40E_AQ_PHY_DEBUG_DISABLE_LINK_FW 0x10 + u8 reserved[15]; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_set_phy_debug); + +enum i40e_aq_phy_reg_type { + I40E_AQC_PHY_REG_INTERNAL = 0x1, + I40E_AQC_PHY_REG_EXERNAL_BASET = 0x2, + I40E_AQC_PHY_REG_EXERNAL_MODULE = 0x3 +}; + +/* NVM Read command (indirect 0x0701) + * NVM Erase commands (direct 0x0702) + * NVM Update commands (indirect 0x0703) + */ +struct i40e_aqc_nvm_update { + u8 command_flags; +#define I40E_AQ_NVM_LAST_CMD 0x01 +#define I40E_AQ_NVM_FLASH_ONLY 0x80 + u8 module_pointer; + __le16 length; + __le32 offset; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_update); + +/* NVM Config Read (indirect 0x0704) */ +struct i40e_aqc_nvm_config_read { + __le16 cmd_flags; +#define ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1 +#define ANVM_READ_SINGLE_FEATURE 0 +#define ANVM_READ_MULTIPLE_FEATURES 1 + __le16 element_count; + __le16 element_id; /* Feature/field ID */ + u8 reserved[2]; + __le32 address_high; + __le32 address_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_read); + +/* NVM Config Write (indirect 0x0705) */ +struct i40e_aqc_nvm_config_write { + __le16 cmd_flags; + __le16 element_count; + u8 reserved[4]; + __le32 address_high; + __le32 address_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write); + +struct i40e_aqc_nvm_config_data_feature { + __le16 feature_id; + __le16 instance_id; + __le16 feature_options; + __le16 feature_selection; +}; + +struct i40e_aqc_nvm_config_data_immediate_field { +#define ANVM_FEATURE_OR_IMMEDIATE_MASK 0x2 + __le16 field_id; + __le16 instance_id; + __le16 field_options; + __le16 field_value; +}; + +/* Send to PF command (indirect 0x0801) id is only used by PF + * Send to VF command (indirect 0x0802) id is only used by PF + * Send to Peer PF command (indirect 0x0803) + */ +struct i40e_aqc_pf_vf_message { + __le32 id; + u8 reserved[4]; + __le32 addr_high; + __le32 addr_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_pf_vf_message); + +/* Alternate structure */ + +/* Direct write (direct 0x0900) + * Direct read (direct 0x0902) + */ +struct i40e_aqc_alternate_write { + __le32 address0; + __le32 data0; + __le32 address1; + __le32 data1; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_write); + +/* Indirect write (indirect 0x0901) + * 
Indirect read (indirect 0x0903)
+ */
+
+struct i40e_aqc_alternate_ind_write {
+ __le32 address;
+ __le32 length;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_ind_write);
+
+/* Done alternate write (direct 0x0904)
+ * uses i40e_aq_desc
+ */
+struct i40e_aqc_alternate_write_done {
+ __le16 cmd_flags;
+#define I40E_AQ_ALTERNATE_MODE_BIOS_MASK 1
+#define I40E_AQ_ALTERNATE_MODE_BIOS_LEGACY 0
+#define I40E_AQ_ALTERNATE_MODE_BIOS_UEFI 1
+#define I40E_AQ_ALTERNATE_RESET_NEEDED 2
+ u8 reserved[14];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_write_done);
+
+/* Set OEM mode (direct 0x0905) */
+struct i40e_aqc_alternate_set_mode {
+ __le32 mode;
+#define I40E_AQ_ALTERNATE_MODE_NONE 0
+#define I40E_AQ_ALTERNATE_MODE_OEM 1
+ u8 reserved[12];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_set_mode);
+
+/* Clear port Alternate RAM (direct 0x0906) uses i40e_aq_desc */
+
+/* async events 0x10xx */
+
+/* LAN Queue Overflow Event (direct, 0x1001) */
+struct i40e_aqc_lan_overflow {
+ __le32 prtdcb_rupto;
+ __le32 otx_ctl;
+ u8 reserved[8];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lan_overflow);
+
+/* Get LLDP MIB (indirect 0x0A00) */
+struct i40e_aqc_lldp_get_mib {
+ u8 type;
+ u8 reserved1;
+#define I40E_AQ_LLDP_MIB_TYPE_MASK 0x3
+#define I40E_AQ_LLDP_MIB_LOCAL 0x0
+#define I40E_AQ_LLDP_MIB_REMOTE 0x1
+#define I40E_AQ_LLDP_MIB_LOCAL_AND_REMOTE 0x2
+#define I40E_AQ_LLDP_BRIDGE_TYPE_MASK 0xC
+#define I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT 0x2
+#define I40E_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE 0x0
+#define I40E_AQ_LLDP_BRIDGE_TYPE_NON_TPMR 0x1
+#define I40E_AQ_LLDP_TX_SHIFT 0x4
+#define I40E_AQ_LLDP_TX_MASK (0x03 << I40E_AQ_LLDP_TX_SHIFT)
+/* TX pause flags use I40E_AQ_LINK_TX_* above */
+ __le16 local_len;
+ __le16 remote_len;
+ u8 reserved2[2];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_get_mib);
+
+/* Configure LLDP MIB Change Event (direct 0x0A01)
+ * also used for the event (with type in the command field)
+ */
+struct i40e_aqc_lldp_update_mib {
+ u8 command;
+#define I40E_AQ_LLDP_MIB_UPDATE_ENABLE 0x0
+#define I40E_AQ_LLDP_MIB_UPDATE_DISABLE 0x1
+ u8 reserved[7];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_update_mib);
+
+/* Add LLDP TLV (indirect 0x0A02)
+ * Delete LLDP TLV (indirect 0x0A04)
+ */
+struct i40e_aqc_lldp_add_tlv {
+ u8 type; /* only nearest bridge and non-TPMR from 0x0A00 */
+ u8 reserved1[1];
+ __le16 len;
+ u8 reserved2[4];
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_add_tlv);
+
+/* Update LLDP TLV (indirect 0x0A03) */
+struct i40e_aqc_lldp_update_tlv {
+ u8 type; /* only nearest bridge and non-TPMR from 0x0A00 */
+ u8 reserved;
+ __le16 old_len;
+ __le16 new_offset;
+ __le16 new_len;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_update_tlv);
+
+/* Stop LLDP (direct 0x0A05) */
+struct i40e_aqc_lldp_stop {
+ u8 command;
+#define I40E_AQ_LLDP_AGENT_STOP 0x0
+#define I40E_AQ_LLDP_AGENT_SHUTDOWN 0x1
+ u8 reserved[15];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_stop);
+
+/* Start LLDP (direct 0x0A06) */
+
+struct i40e_aqc_lldp_start {
+ u8 command;
+#define I40E_AQ_LLDP_AGENT_START 0x1
+ u8 reserved[15];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_start);
+
+/* Apply MIB changes (0x0A07)
+ * uses the generic struct as it contains no data
+ */
+
+/* Add UDP Tunnel command and completion (direct 0x0B00) */
+struct i40e_aqc_add_udp_tunnel {
+ __le16 udp_port;
+ u8 reserved0[3];
+ u8 protocol_type;
+#define
I40E_AQC_TUNNEL_TYPE_VXLAN 0x00
+#define I40E_AQC_TUNNEL_TYPE_NGE 0x01
+#define I40E_AQC_TUNNEL_TYPE_TEREDO 0x10
+ u8 reserved1[10];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_add_udp_tunnel);
+
+struct i40e_aqc_add_udp_tunnel_completion {
+ __le16 udp_port;
+ u8 filter_entry_index;
+ u8 multiple_pfs;
+#define I40E_AQC_SINGLE_PF 0x0
+#define I40E_AQC_MULTIPLE_PFS 0x1
+ u8 total_filters;
+ u8 reserved[11];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_add_udp_tunnel_completion);
+
+/* Remove UDP Tunnel command (0x0B01) */
+struct i40e_aqc_remove_udp_tunnel {
+ u8 reserved[2];
+ u8 index; /* 0 to 15 */
+ u8 reserved2[13];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_udp_tunnel);
+
+struct i40e_aqc_del_udp_tunnel_completion {
+ __le16 udp_port;
+ u8 index; /* 0 to 15 */
+ u8 multiple_pfs;
+ u8 total_filters_used;
+ u8 reserved1[11];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_del_udp_tunnel_completion);
+
+/* tunnel key structure 0x0B10 */
+
+struct i40e_aqc_tunnel_key_structure {
+ u8 key1_off;
+ u8 key2_off;
+ u8 key1_len; /* 0 to 15 */
+ u8 key2_len; /* 0 to 15 */
+ u8 flags;
+#define I40E_AQC_TUNNEL_KEY_STRUCT_OVERRIDE 0x01
+/* response flags */
+#define I40E_AQC_TUNNEL_KEY_STRUCT_SUCCESS 0x01
+#define I40E_AQC_TUNNEL_KEY_STRUCT_MODIFIED 0x02
+#define I40E_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN 0x03
+ u8 network_key_index;
+#define I40E_AQC_NETWORK_KEY_INDEX_VXLAN 0x0
+#define I40E_AQC_NETWORK_KEY_INDEX_NGE 0x1
+#define I40E_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP 0x2
+#define I40E_AQC_NETWORK_KEY_INDEX_GRE 0x3
+ u8 reserved[10];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_tunnel_key_structure);
+
+/* OEM mode commands (direct 0xFE0x) */
+struct i40e_aqc_oem_param_change {
+ __le32 param_type;
+#define I40E_AQ_OEM_PARAM_TYPE_PF_CTL 0
+#define I40E_AQ_OEM_PARAM_TYPE_BW_CTL 1
+#define I40E_AQ_OEM_PARAM_MAC 2
+ __le32 param_value1;
+ u8 param_value2[8];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_param_change);
+
+struct i40e_aqc_oem_state_change {
+ __le32 state;
+#define I40E_AQ_OEM_STATE_LINK_DOWN 0x0
+#define I40E_AQ_OEM_STATE_LINK_UP 0x1
+ u8 reserved[12];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_state_change);
+
+/* debug commands */
+
+/* get device id (0xFF00) uses the generic structure */
+
+/* set test mode (0xFF01, internal) */
+
+struct i40e_acq_set_test_mode {
+ u8 mode;
+#define I40E_AQ_TEST_PARTIAL 0
+#define I40E_AQ_TEST_FULL 1
+#define I40E_AQ_TEST_NVM 2
+ u8 reserved[3];
+ u8 command;
+#define I40E_AQ_TEST_OPEN 0
+#define I40E_AQ_TEST_CLOSE 1
+#define I40E_AQ_TEST_INC 2
+ u8 reserved2[3];
+ __le32 address_high;
+ __le32 address_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_acq_set_test_mode);
+
+/* Debug Read Register command (0xFF03)
+ * Debug Write Register command (0xFF04)
+ */
+struct i40e_aqc_debug_reg_read_write {
+ __le32 reserved;
+ __le32 address;
+ __le32 value_high;
+ __le32 value_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_reg_read_write);
+
+/* Scatter/gather Reg Read (indirect 0xFF05)
+ * Scatter/gather Reg Write (indirect 0xFF06)
+ */
+
+/* i40e_aq_desc is used for the command */
+struct i40e_aqc_debug_reg_sg_element_data {
+ __le32 address;
+ __le32 value;
+};
+
+/* Debug Modify register (direct 0xFF07) */
+struct i40e_aqc_debug_modify_reg {
+ __le32 address;
+ __le32 value;
+ __le32 clear_mask;
+ __le32 set_mask;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_modify_reg);
+
+/* dump internal data (0xFF08, indirect) */
+
+#define I40E_AQ_CLUSTER_ID_AUX 0
+#define I40E_AQ_CLUSTER_ID_SWITCH_FLU 1
+#define I40E_AQ_CLUSTER_ID_TXSCHED 2
+#define I40E_AQ_CLUSTER_ID_HMC 3
+#define
I40E_AQ_CLUSTER_ID_MAC0 4 +#define I40E_AQ_CLUSTER_ID_MAC1 5 +#define I40E_AQ_CLUSTER_ID_MAC2 6 +#define I40E_AQ_CLUSTER_ID_MAC3 7 +#define I40E_AQ_CLUSTER_ID_DCB 8 +#define I40E_AQ_CLUSTER_ID_EMP_MEM 9 +#define I40E_AQ_CLUSTER_ID_PKT_BUF 10 +#define I40E_AQ_CLUSTER_ID_ALTRAM 11 + +struct i40e_aqc_debug_dump_internals { + u8 cluster_id; + u8 table_id; + __le16 data_size; + __le32 idx; + __le32 address_high; + __le32 address_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_dump_internals); + +struct i40e_aqc_debug_modify_internals { + u8 cluster_id; + u8 cluster_specific_params[7]; + __le32 address_high; + __le32 address_low; +}; + +I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_modify_internals); + +#endif diff --git a/drivers/net/i40e/base/i40e_alloc.h b/drivers/net/i40e/base/i40e_alloc.h new file mode 100644 index 0000000..6e81cd5 --- /dev/null +++ b/drivers/net/i40e/base/i40e_alloc.h @@ -0,0 +1,65 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _I40E_ALLOC_H_ +#define _I40E_ALLOC_H_ + +struct i40e_hw; + +/* Memory allocation types */ +enum i40e_memory_type { + i40e_mem_arq_buf = 0, /* ARQ indirect command buffer */ + i40e_mem_asq_buf = 1, + i40e_mem_atq_buf = 2, /* ATQ indirect command buffer */ + i40e_mem_arq_ring = 3, /* ARQ descriptor ring */ + i40e_mem_atq_ring = 4, /* ATQ descriptor ring */ + i40e_mem_pd = 5, /* Page Descriptor */ + i40e_mem_bp = 6, /* Backing Page - 4KB */ + i40e_mem_bp_jumbo = 7, /* Backing Page - > 4KB */ + i40e_mem_reserved +}; + +/* prototype for functions used for dynamic memory allocation */ +enum i40e_status_code i40e_allocate_dma_mem(struct i40e_hw *hw, + struct i40e_dma_mem *mem, + enum i40e_memory_type type, + u64 size, u32 alignment); +enum i40e_status_code i40e_free_dma_mem(struct i40e_hw *hw, + struct i40e_dma_mem *mem); +enum i40e_status_code i40e_allocate_virt_mem(struct i40e_hw *hw, + struct i40e_virt_mem *mem, + u32 size); +enum i40e_status_code i40e_free_virt_mem(struct i40e_hw *hw, + struct i40e_virt_mem *mem); + +#endif /* _I40E_ALLOC_H_ */ diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c new file mode 100644 index 0000000..ffaa777 --- /dev/null +++ b/drivers/net/i40e/base/i40e_common.c @@ -0,0 +1,4793 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "i40e_type.h" +#include "i40e_adminq.h" +#include "i40e_prototype.h" +#include "i40e_virtchnl.h" + +/** + * i40e_set_mac_type - Sets MAC type + * @hw: pointer to the HW structure + * + * This function sets the mac type of the adapter based on the + * vendor ID and device ID stored in the hw structure. 
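The four prototypes above are intentionally left unimplemented by the base code; each host environment supplies its own backing (for DPDK this lives alongside i40e_osdep.h). As a rough sketch of the contract only, assuming the usual va/size fields on struct i40e_virt_mem, a heap-backed host could provide the virtual-memory pair like this; the real DPDK port allocates from hugepage memzones instead, and the DMA variant must also return a bus address, which plain malloc cannot supply:

    #include <stdlib.h>

    /* illustrative heap-backed host hooks, not the DPDK implementation */
    enum i40e_status_code i40e_allocate_virt_mem(struct i40e_hw *hw,
                                                 struct i40e_virt_mem *mem,
                                                 u32 size)
    {
            mem->va = calloc(1, size);      /* zeroed allocation */
            mem->size = size;
            return mem->va ? I40E_SUCCESS : I40E_ERR_NO_MEMORY;
    }

    enum i40e_status_code i40e_free_virt_mem(struct i40e_hw *hw,
                                             struct i40e_virt_mem *mem)
    {
            free(mem->va);
            mem->va = NULL;
            return I40E_SUCCESS;
    }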
+ **/ +#ifdef VF_DRIVER +enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw) +#else +STATIC enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw) +#endif +{ + enum i40e_status_code status = I40E_SUCCESS; + + DEBUGFUNC("i40e_set_mac_type\n"); + + if (hw->vendor_id == I40E_INTEL_VENDOR_ID) { + switch (hw->device_id) { + case I40E_DEV_ID_SFP_XL710: + case I40E_DEV_ID_QEMU: + case I40E_DEV_ID_KX_A: + case I40E_DEV_ID_KX_B: + case I40E_DEV_ID_KX_C: + case I40E_DEV_ID_QSFP_A: + case I40E_DEV_ID_QSFP_B: + case I40E_DEV_ID_QSFP_C: + case I40E_DEV_ID_10G_BASE_T: + hw->mac.type = I40E_MAC_XL710; + break; + case I40E_DEV_ID_VF: + case I40E_DEV_ID_VF_HV: + hw->mac.type = I40E_MAC_VF; + break; + default: + hw->mac.type = I40E_MAC_GENERIC; + break; + } + } else { + status = I40E_ERR_DEVICE_NOT_SUPPORTED; + } + + DEBUGOUT2("i40e_set_mac_type found mac: %d, returns: %d\n", + hw->mac.type, status); + return status; +} + +/** + * i40e_debug_aq + * @hw: pointer to the hw struct + * @mask: debug mask + * @desc: pointer to admin queue descriptor + * @buffer: pointer to command buffer + * @buf_len: max length of buffer + * + * Dumps debug log about adminq command with descriptor contents. + **/ +void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void *desc, + void *buffer, u16 buf_len) +{ + struct i40e_aq_desc *aq_desc = (struct i40e_aq_desc *)desc; + u16 len = LE16_TO_CPU(aq_desc->datalen); + u8 *aq_buffer = (u8 *)buffer; + u32 data[4]; + u32 i = 0; + + if ((!(mask & hw->debug_mask)) || (desc == NULL)) + return; + + i40e_debug(hw, mask, + "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n", + aq_desc->opcode, aq_desc->flags, aq_desc->datalen, + aq_desc->retval); + i40e_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n", + aq_desc->cookie_high, aq_desc->cookie_low); + i40e_debug(hw, mask, "\tparam (0,1) 0x%08X 0x%08X\n", + aq_desc->params.internal.param0, + aq_desc->params.internal.param1); + i40e_debug(hw, mask, "\taddr (h,l) 0x%08X 0x%08X\n", + aq_desc->params.external.addr_high, + aq_desc->params.external.addr_low); + + if ((buffer != NULL) && (aq_desc->datalen != 0)) { + i40e_memset(data, 0, sizeof(data), I40E_NONDMA_MEM); + i40e_debug(hw, mask, "AQ CMD Buffer:\n"); + if (buf_len < len) + len = buf_len; + for (i = 0; i < len; i++) { + data[((i % 16) / 4)] |= + ((u32)aq_buffer[i]) << (8 * (i % 4)); + if ((i % 16) == 15) { + i40e_debug(hw, mask, + "\t0x%04X %08X %08X %08X %08X\n", + i - 15, data[0], data[1], data[2], + data[3]); + i40e_memset(data, 0, sizeof(data), + I40E_NONDMA_MEM); + } + } + if ((i % 16) != 0) + i40e_debug(hw, mask, "\t0x%04X %08X %08X %08X %08X\n", + i - (i % 16), data[0], data[1], data[2], + data[3]); + } +} + +/** + * i40e_check_asq_alive + * @hw: pointer to the hw struct + * + * Returns true if Queue is enabled else false. + **/ +bool i40e_check_asq_alive(struct i40e_hw *hw) +{ + if (hw->aq.asq.len) + return !!(rd32(hw, hw->aq.asq.len) & I40E_PF_ATQLEN_ATQENABLE_MASK); + else + return false; +} + +/** + * i40e_aq_queue_shutdown + * @hw: pointer to the hw struct + * @unloading: is the driver unloading itself + * + * Tell the Firmware that we're shutting down the AdminQ and whether + * or not the driver is unloading as well. 
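The buffer dump in i40e_debug_aq() above packs each group of sixteen bytes into four u32 words and emits them as one line (a 0x%04X offset plus four %08X words). The index arithmetic is easier to follow in isolation; a standalone equivalent, with plain printf standing in for i40e_debug (hypothetical helper name):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* print buf 16 bytes per line, packed into four u32 words as above */
    static void hexdump_aq_style(const uint8_t *buf, uint16_t len)
    {
            uint32_t data[4];
            uint32_t i;

            memset(data, 0, sizeof(data));
            for (i = 0; i < len; i++) {
                    /* byte i lands in word (i % 16) / 4, byte lane i % 4 */
                    data[(i % 16) / 4] |= (uint32_t)buf[i] << (8 * (i % 4));
                    if ((i % 16) == 15) {
                            printf("\t0x%04X %08X %08X %08X %08X\n",
                                   i - 15, data[0], data[1], data[2], data[3]);
                            memset(data, 0, sizeof(data));
                    }
            }
            if ((i % 16) != 0)      /* partial trailing line */
                    printf("\t0x%04X %08X %08X %08X %08X\n",
                           i - (i % 16), data[0], data[1], data[2], data[3]);
    }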
+ **/ +enum i40e_status_code i40e_aq_queue_shutdown(struct i40e_hw *hw, + bool unloading) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_queue_shutdown *cmd = + (struct i40e_aqc_queue_shutdown *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_queue_shutdown); + + if (unloading) + cmd->driver_unloading = CPU_TO_LE32(I40E_AQ_DRIVER_UNLOADING); + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + + return status; +} + +/* The i40e_ptype_lookup table is used to convert from the 8-bit ptype in the + * hardware to a bit-field that can be used by SW to more easily determine the + * packet type. + * + * Macros are used to shorten the table lines and make this table human + * readable. + * + * We store the PTYPE in the top byte of the bit field - this is just so that + * we can check that the table doesn't have a row missing, as the index into + * the table should be the PTYPE. + * + * Typical work flow: + * + * IF NOT i40e_ptype_lookup[ptype].known + * THEN + * Packet is unknown + * ELSE IF i40e_ptype_lookup[ptype].outer_ip == I40E_RX_PTYPE_OUTER_IP + * Use the rest of the fields to look at the tunnels, inner protocols, etc + * ELSE + * Use the enum i40e_rx_l2_ptype to decode the packet type + * ENDIF + */ + +/* macro to make the table lines short */ +#define I40E_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ + { PTYPE, \ + 1, \ + I40E_RX_PTYPE_OUTER_##OUTER_IP, \ + I40E_RX_PTYPE_OUTER_##OUTER_IP_VER, \ + I40E_RX_PTYPE_##OUTER_FRAG, \ + I40E_RX_PTYPE_TUNNEL_##T, \ + I40E_RX_PTYPE_TUNNEL_END_##TE, \ + I40E_RX_PTYPE_##TEF, \ + I40E_RX_PTYPE_INNER_PROT_##I, \ + I40E_RX_PTYPE_PAYLOAD_LAYER_##PL } + +#define I40E_PTT_UNUSED_ENTRY(PTYPE) \ + { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 } + +/* shorter macros makes the table fit but are terse */ +#define I40E_RX_PTYPE_NOF I40E_RX_PTYPE_NOT_FRAG +#define I40E_RX_PTYPE_FRG I40E_RX_PTYPE_FRAG +#define I40E_RX_PTYPE_INNER_PROT_TS I40E_RX_PTYPE_INNER_PROT_TIMESYNC + +/* Lookup table mapping the HW PTYPE to the bit field for decoding */ +struct i40e_rx_ptype_decoded i40e_ptype_lookup[] = { + /* L2 Packet types */ + I40E_PTT_UNUSED_ENTRY(0), + I40E_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + I40E_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), + I40E_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + I40E_PTT_UNUSED_ENTRY(4), + I40E_PTT_UNUSED_ENTRY(5), + I40E_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + I40E_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + I40E_PTT_UNUSED_ENTRY(8), + I40E_PTT_UNUSED_ENTRY(9), + I40E_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), + I40E_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), + I40E_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), + + /* Non Tunneled IPv4 */ + I40E_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(25), + 
I40E_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), + I40E_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), + I40E_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), + + /* IPv4 --> IPv4 */ + I40E_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), + I40E_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), + I40E_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(32), + I40E_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), + I40E_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), + I40E_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), + + /* IPv4 --> IPv6 */ + I40E_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), + I40E_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), + I40E_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(39), + I40E_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), + I40E_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), + I40E_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), + + /* IPv4 --> GRE/NAT */ + I40E_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), + + /* IPv4 --> GRE/NAT --> IPv4 */ + I40E_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), + I40E_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), + I40E_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(47), + I40E_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), + I40E_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), + I40E_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), + + /* IPv4 --> GRE/NAT --> IPv6 */ + I40E_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), + I40E_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), + I40E_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(54), + I40E_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), + I40E_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), + I40E_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), + + /* IPv4 --> GRE/NAT --> MAC */ + I40E_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), + + /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ + I40E_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), + I40E_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), + I40E_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(62), + I40E_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), + I40E_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), + I40E_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), + + /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ + I40E_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), + I40E_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), + I40E_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(69), + I40E_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), + I40E_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), + I40E_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), + + /* IPv4 --> GRE/NAT --> MAC/VLAN */ + I40E_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), + + /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ + I40E_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), + I40E_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), + I40E_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(77), + I40E_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, 
NOF, TCP, PAY4), + I40E_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), + I40E_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), + + /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ + I40E_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), + I40E_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), + I40E_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(84), + I40E_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), + I40E_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), + I40E_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), + + /* Non Tunneled IPv6 */ + I40E_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), + I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY3), + I40E_PTT_UNUSED_ENTRY(91), + I40E_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), + I40E_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), + I40E_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), + + /* IPv6 --> IPv4 */ + I40E_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), + I40E_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), + I40E_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(98), + I40E_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), + I40E_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), + I40E_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), + + /* IPv6 --> IPv6 */ + I40E_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), + I40E_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), + I40E_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(105), + I40E_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), + I40E_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), + I40E_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), + + /* IPv6 --> GRE/NAT */ + I40E_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), + + /* IPv6 --> GRE/NAT -> IPv4 */ + I40E_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), + I40E_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), + I40E_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(113), + I40E_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), + I40E_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), + I40E_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), + + /* IPv6 --> GRE/NAT -> IPv6 */ + I40E_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), + I40E_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), + I40E_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(120), + I40E_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), + I40E_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), + I40E_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), + + /* IPv6 --> GRE/NAT -> MAC */ + I40E_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), + + /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ + I40E_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), + I40E_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), + I40E_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(128), + I40E_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), + I40E_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), + I40E_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), + + /* 
IPv6 --> GRE/NAT -> MAC -> IPv6 */ + I40E_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), + I40E_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), + I40E_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(135), + I40E_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), + I40E_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), + I40E_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), + + /* IPv6 --> GRE/NAT -> MAC/VLAN */ + I40E_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), + + /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ + I40E_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), + I40E_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), + I40E_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(143), + I40E_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), + I40E_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), + I40E_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), + + /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ + I40E_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), + I40E_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), + I40E_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), + I40E_PTT_UNUSED_ENTRY(150), + I40E_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), + I40E_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), + I40E_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), + + /* unused entries */ + I40E_PTT_UNUSED_ENTRY(154), + I40E_PTT_UNUSED_ENTRY(155), + I40E_PTT_UNUSED_ENTRY(156), + I40E_PTT_UNUSED_ENTRY(157), + I40E_PTT_UNUSED_ENTRY(158), + I40E_PTT_UNUSED_ENTRY(159), + + I40E_PTT_UNUSED_ENTRY(160), + I40E_PTT_UNUSED_ENTRY(161), + I40E_PTT_UNUSED_ENTRY(162), + I40E_PTT_UNUSED_ENTRY(163), + I40E_PTT_UNUSED_ENTRY(164), + I40E_PTT_UNUSED_ENTRY(165), + I40E_PTT_UNUSED_ENTRY(166), + I40E_PTT_UNUSED_ENTRY(167), + I40E_PTT_UNUSED_ENTRY(168), + I40E_PTT_UNUSED_ENTRY(169), + + I40E_PTT_UNUSED_ENTRY(170), + I40E_PTT_UNUSED_ENTRY(171), + I40E_PTT_UNUSED_ENTRY(172), + I40E_PTT_UNUSED_ENTRY(173), + I40E_PTT_UNUSED_ENTRY(174), + I40E_PTT_UNUSED_ENTRY(175), + I40E_PTT_UNUSED_ENTRY(176), + I40E_PTT_UNUSED_ENTRY(177), + I40E_PTT_UNUSED_ENTRY(178), + I40E_PTT_UNUSED_ENTRY(179), + + I40E_PTT_UNUSED_ENTRY(180), + I40E_PTT_UNUSED_ENTRY(181), + I40E_PTT_UNUSED_ENTRY(182), + I40E_PTT_UNUSED_ENTRY(183), + I40E_PTT_UNUSED_ENTRY(184), + I40E_PTT_UNUSED_ENTRY(185), + I40E_PTT_UNUSED_ENTRY(186), + I40E_PTT_UNUSED_ENTRY(187), + I40E_PTT_UNUSED_ENTRY(188), + I40E_PTT_UNUSED_ENTRY(189), + + I40E_PTT_UNUSED_ENTRY(190), + I40E_PTT_UNUSED_ENTRY(191), + I40E_PTT_UNUSED_ENTRY(192), + I40E_PTT_UNUSED_ENTRY(193), + I40E_PTT_UNUSED_ENTRY(194), + I40E_PTT_UNUSED_ENTRY(195), + I40E_PTT_UNUSED_ENTRY(196), + I40E_PTT_UNUSED_ENTRY(197), + I40E_PTT_UNUSED_ENTRY(198), + I40E_PTT_UNUSED_ENTRY(199), + + I40E_PTT_UNUSED_ENTRY(200), + I40E_PTT_UNUSED_ENTRY(201), + I40E_PTT_UNUSED_ENTRY(202), + I40E_PTT_UNUSED_ENTRY(203), + I40E_PTT_UNUSED_ENTRY(204), + I40E_PTT_UNUSED_ENTRY(205), + I40E_PTT_UNUSED_ENTRY(206), + I40E_PTT_UNUSED_ENTRY(207), + I40E_PTT_UNUSED_ENTRY(208), + I40E_PTT_UNUSED_ENTRY(209), + + I40E_PTT_UNUSED_ENTRY(210), + I40E_PTT_UNUSED_ENTRY(211), + I40E_PTT_UNUSED_ENTRY(212), + I40E_PTT_UNUSED_ENTRY(213), + I40E_PTT_UNUSED_ENTRY(214), + I40E_PTT_UNUSED_ENTRY(215), + 
I40E_PTT_UNUSED_ENTRY(216), + I40E_PTT_UNUSED_ENTRY(217), + I40E_PTT_UNUSED_ENTRY(218), + I40E_PTT_UNUSED_ENTRY(219), + + I40E_PTT_UNUSED_ENTRY(220), + I40E_PTT_UNUSED_ENTRY(221), + I40E_PTT_UNUSED_ENTRY(222), + I40E_PTT_UNUSED_ENTRY(223), + I40E_PTT_UNUSED_ENTRY(224), + I40E_PTT_UNUSED_ENTRY(225), + I40E_PTT_UNUSED_ENTRY(226), + I40E_PTT_UNUSED_ENTRY(227), + I40E_PTT_UNUSED_ENTRY(228), + I40E_PTT_UNUSED_ENTRY(229), + + I40E_PTT_UNUSED_ENTRY(230), + I40E_PTT_UNUSED_ENTRY(231), + I40E_PTT_UNUSED_ENTRY(232), + I40E_PTT_UNUSED_ENTRY(233), + I40E_PTT_UNUSED_ENTRY(234), + I40E_PTT_UNUSED_ENTRY(235), + I40E_PTT_UNUSED_ENTRY(236), + I40E_PTT_UNUSED_ENTRY(237), + I40E_PTT_UNUSED_ENTRY(238), + I40E_PTT_UNUSED_ENTRY(239), + + I40E_PTT_UNUSED_ENTRY(240), + I40E_PTT_UNUSED_ENTRY(241), + I40E_PTT_UNUSED_ENTRY(242), + I40E_PTT_UNUSED_ENTRY(243), + I40E_PTT_UNUSED_ENTRY(244), + I40E_PTT_UNUSED_ENTRY(245), + I40E_PTT_UNUSED_ENTRY(246), + I40E_PTT_UNUSED_ENTRY(247), + I40E_PTT_UNUSED_ENTRY(248), + I40E_PTT_UNUSED_ENTRY(249), + + I40E_PTT_UNUSED_ENTRY(250), + I40E_PTT_UNUSED_ENTRY(251), + I40E_PTT_UNUSED_ENTRY(252), + I40E_PTT_UNUSED_ENTRY(253), + I40E_PTT_UNUSED_ENTRY(254), + I40E_PTT_UNUSED_ENTRY(255) +}; + +#ifndef VF_DRIVER + +/** + * i40e_init_shared_code - Initialize the shared code + * @hw: pointer to hardware structure + * + * This assigns the MAC type and PHY code and inits the NVM. + * Does not touch the hardware. This function must be called prior to any + * other function in the shared code. The i40e_hw structure should be + * memset to 0 prior to calling this function. The following fields in + * hw structure should be filled in prior to calling this function: + * hw_addr, back, device_id, vendor_id, subsystem_device_id, + * subsystem_vendor_id, and revision_id + **/ +enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw) +{ + enum i40e_status_code status = I40E_SUCCESS; + u32 reg; + + DEBUGFUNC("i40e_init_shared_code"); + + i40e_set_mac_type(hw); + + switch (hw->mac.type) { + case I40E_MAC_XL710: + break; + default: + return I40E_ERR_DEVICE_NOT_SUPPORTED; + } + + hw->phy.get_link_info = true; + + /* Determine port number */ + reg = rd32(hw, I40E_PFGEN_PORTNUM); + reg = ((reg & I40E_PFGEN_PORTNUM_PORT_NUM_MASK) >> + I40E_PFGEN_PORTNUM_PORT_NUM_SHIFT); + hw->port = (u8)reg; + + /* Determine the PF number based on the PCI fn */ + reg = rd32(hw, I40E_GLPCI_CAPSUP); + if (reg & I40E_GLPCI_CAPSUP_ARI_EN_MASK) + hw->pf_id = (u8)((hw->bus.device << 3) | hw->bus.func); + else + hw->pf_id = (u8)hw->bus.func; + + status = i40e_init_nvm(hw); + return status; +} + +/** + * i40e_aq_mac_address_read - Retrieve the MAC addresses + * @hw: pointer to the hw struct + * @flags: a return indicator of what addresses were added to the addr store + * @addrs: the requestor's mac addr store + * @cmd_details: pointer to command details structure or NULL + **/ +STATIC enum i40e_status_code i40e_aq_mac_address_read(struct i40e_hw *hw, + u16 *flags, + struct i40e_aqc_mac_address_read_data *addrs, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_mac_address_read *cmd_data = + (struct i40e_aqc_mac_address_read *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_mac_address_read); + desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF); + + status = i40e_asq_send_command(hw, &desc, addrs, + sizeof(*addrs), cmd_details); + *flags = LE16_TO_CPU(cmd_data->command_flags); + + return status; +} + +/** + * i40e_aq_mac_address_write 
- Change the MAC addresses + * @hw: pointer to the hw struct + * @flags: indicates which MAC to be written + * @mac_addr: address to write + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_mac_address_write(struct i40e_hw *hw, + u16 flags, u8 *mac_addr, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_mac_address_write *cmd_data = + (struct i40e_aqc_mac_address_write *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_mac_address_write); + cmd_data->command_flags = CPU_TO_LE16(flags); + cmd_data->mac_sah = CPU_TO_LE16((u16)mac_addr[0] << 8 | mac_addr[1]); + cmd_data->mac_sal = CPU_TO_LE32(((u32)mac_addr[2] << 24) | + ((u32)mac_addr[3] << 16) | + ((u32)mac_addr[4] << 8) | + mac_addr[5]); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_get_mac_addr - get MAC address + * @hw: pointer to the HW structure + * @mac_addr: pointer to MAC address + * + * Reads the adapter's MAC address from register + **/ +enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr) +{ + struct i40e_aqc_mac_address_read_data addrs; + enum i40e_status_code status; + u16 flags = 0; + + status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL); + + if (flags & I40E_AQC_LAN_ADDR_VALID) + memcpy(mac_addr, &addrs.pf_lan_mac, sizeof(addrs.pf_lan_mac)); + + return status; +} + +/** + * i40e_get_port_mac_addr - get Port MAC address + * @hw: pointer to the HW structure + * @mac_addr: pointer to Port MAC address + * + * Reads the adapter's Port MAC address + **/ +enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr) +{ + struct i40e_aqc_mac_address_read_data addrs; + enum i40e_status_code status; + u16 flags = 0; + + status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL); + if (status) + return status; + + if (flags & I40E_AQC_PORT_ADDR_VALID) + memcpy(mac_addr, &addrs.port_mac, sizeof(addrs.port_mac)); + else + status = I40E_ERR_INVALID_MAC_ADDR; + + return status; +} + +/** + * i40e_pre_tx_queue_cfg - pre tx queue configure + * @hw: pointer to the HW structure + * @queue: target pf queue index + * @enable: state change request + * + * Handles hw requirement to indicate intention to enable + * or disable target queue. 
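Returning to the i40e_ptype_lookup table earlier in this file: the "typical work flow" in its header comment translates almost line for line into code. A minimal consumer sketch, using the i40e_rx_ptype_decoded bit-field names as defined in i40e_type.h:

    /* sketch of the documented ptype decode flow */
    static void decode_ptype(u8 ptype)
    {
            struct i40e_rx_ptype_decoded d = i40e_ptype_lookup[ptype];

            if (!d.known) {
                    /* unused table entry: packet type unknown */
                    return;
            }
            if (d.outer_ip == I40E_RX_PTYPE_OUTER_IP) {
                    /* IP frame: d.outer_ip_ver, d.tunnel_type,
                     * d.inner_prot etc. describe the header stack
                     */
            } else {
                    /* non-IP frame: interpret d.ptype as an L2 type */
            }
    }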
+ **/ +void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable) +{ + u32 abs_queue_idx = hw->func_caps.base_queue + queue; + u32 reg_block = 0; + u32 reg_val; + + if (abs_queue_idx >= 128) { + reg_block = abs_queue_idx / 128; + abs_queue_idx %= 128; + } + + reg_val = rd32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block)); + reg_val &= ~I40E_GLLAN_TXPRE_QDIS_QINDX_MASK; + reg_val |= (abs_queue_idx << I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT); + + if (enable) + reg_val |= I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_MASK; + else + reg_val |= I40E_GLLAN_TXPRE_QDIS_SET_QDIS_MASK; + + wr32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block), reg_val); +} + +/** + * i40e_validate_mac_addr - Validate unicast MAC address + * @mac_addr: pointer to MAC address + * + * Tests a MAC address to ensure it is a valid Individual Address + **/ +enum i40e_status_code i40e_validate_mac_addr(u8 *mac_addr) +{ + enum i40e_status_code status = I40E_SUCCESS; + + DEBUGFUNC("i40e_validate_mac_addr"); + + /* Broadcast addresses ARE multicast addresses + * Make sure it is not a multicast address + * Reject the zero address + */ + if (I40E_IS_MULTICAST(mac_addr) || + (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 && + mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0)) + status = I40E_ERR_INVALID_MAC_ADDR; + + return status; +} + +/** + * i40e_get_media_type - Gets media type + * @hw: pointer to the hardware structure + **/ +STATIC enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw) +{ + enum i40e_media_type media; + + switch (hw->phy.link_info.phy_type) { + case I40E_PHY_TYPE_10GBASE_SR: + case I40E_PHY_TYPE_10GBASE_LR: + case I40E_PHY_TYPE_1000BASE_SX: + case I40E_PHY_TYPE_1000BASE_LX: + case I40E_PHY_TYPE_40GBASE_SR4: + case I40E_PHY_TYPE_40GBASE_LR4: + media = I40E_MEDIA_TYPE_FIBER; + break; + case I40E_PHY_TYPE_100BASE_TX: + case I40E_PHY_TYPE_1000BASE_T: + case I40E_PHY_TYPE_10GBASE_T: + media = I40E_MEDIA_TYPE_BASET; + break; + case I40E_PHY_TYPE_10GBASE_CR1_CU: + case I40E_PHY_TYPE_40GBASE_CR4_CU: + case I40E_PHY_TYPE_10GBASE_CR1: + case I40E_PHY_TYPE_40GBASE_CR4: + case I40E_PHY_TYPE_10GBASE_SFPP_CU: + media = I40E_MEDIA_TYPE_DA; + break; + case I40E_PHY_TYPE_1000BASE_KX: + case I40E_PHY_TYPE_10GBASE_KX4: + case I40E_PHY_TYPE_10GBASE_KR: + case I40E_PHY_TYPE_40GBASE_KR4: + media = I40E_MEDIA_TYPE_BACKPLANE; + break; + case I40E_PHY_TYPE_SGMII: + case I40E_PHY_TYPE_XAUI: + case I40E_PHY_TYPE_XFI: + case I40E_PHY_TYPE_XLAUI: + case I40E_PHY_TYPE_XLPPI: + default: + media = I40E_MEDIA_TYPE_UNKNOWN; + break; + } + + return media; +} + +#define I40E_PF_RESET_WAIT_COUNT 100 +/** + * i40e_pf_reset - Reset the PF + * @hw: pointer to the hardware structure + * + * Assuming someone else has triggered a global reset, + * assure the global reset is complete and then reset the PF + **/ +enum i40e_status_code i40e_pf_reset(struct i40e_hw *hw) +{ + u32 cnt = 0; + u32 cnt1 = 0; + u32 reg = 0; + u32 grst_del; + + /* Poll for Global Reset steady state in case of recent GRST. + * The grst delay value is in 100ms units, and we'll wait a + * couple counts longer to be sure we don't just miss the end. 
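The block/index split in i40e_pre_tx_queue_cfg() above mirrors the register layout: each GLLAN_TXPRE_QDIS register covers 128 queues. The same arithmetic in isolation (hypothetical helper):

    /* e.g. absolute queue 300 -> register block 2, QINDX 44 */
    static void split_queue_idx(u32 abs_queue_idx, u32 *reg_block, u32 *qindx)
    {
            *reg_block = abs_queue_idx / 128;  /* which TXPRE_QDIS register */
            *qindx = abs_queue_idx % 128;      /* index within that register */
    }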
+ */ + grst_del = (rd32(hw, I40E_GLGEN_RSTCTL) & I40E_GLGEN_RSTCTL_GRSTDEL_MASK) + >> I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT; + for (cnt = 0; cnt < grst_del + 2; cnt++) { + reg = rd32(hw, I40E_GLGEN_RSTAT); + if (!(reg & I40E_GLGEN_RSTAT_DEVSTATE_MASK)) + break; + i40e_msec_delay(100); + } + if (reg & I40E_GLGEN_RSTAT_DEVSTATE_MASK) { + DEBUGOUT("Global reset polling failed to complete.\n"); + return I40E_ERR_RESET_FAILED; + } + + /* Now wait for the FW to be ready */ + for (cnt1 = 0; cnt1 < I40E_PF_RESET_WAIT_COUNT; cnt1++) { + reg = rd32(hw, I40E_GLNVM_ULD); + reg &= (I40E_GLNVM_ULD_CONF_CORE_DONE_MASK | + I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK); + if (reg == (I40E_GLNVM_ULD_CONF_CORE_DONE_MASK | + I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK)) { + DEBUGOUT1("Core and Global modules ready %d\n", cnt1); + break; + } + i40e_msec_delay(10); + } + if (!(reg & (I40E_GLNVM_ULD_CONF_CORE_DONE_MASK | + I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK))) { + DEBUGOUT("wait for FW Reset complete timed out\n"); + DEBUGOUT1("I40E_GLNVM_ULD = 0x%x\n", reg); + return I40E_ERR_RESET_FAILED; + } + + /* If there was a Global Reset in progress when we got here, + * we don't need to do the PF Reset + */ + if (!cnt) { + reg = rd32(hw, I40E_PFGEN_CTRL); + wr32(hw, I40E_PFGEN_CTRL, + (reg | I40E_PFGEN_CTRL_PFSWR_MASK)); + for (cnt = 0; cnt < I40E_PF_RESET_WAIT_COUNT; cnt++) { + reg = rd32(hw, I40E_PFGEN_CTRL); + if (!(reg & I40E_PFGEN_CTRL_PFSWR_MASK)) + break; + i40e_msec_delay(1); + } + if (reg & I40E_PFGEN_CTRL_PFSWR_MASK) { + DEBUGOUT("PF reset polling failed to complete.\n"); + return I40E_ERR_RESET_FAILED; + } + } + + i40e_clear_pxe_mode(hw); + + + return I40E_SUCCESS; +} + +/** + * i40e_clear_hw - clear out any left over hw state + * @hw: pointer to the hw struct + * + * Clear queues and interrupts, typically called at init time, + * but after the capabilities have been found so we know how many + * queues and msix vectors have been allocated. 
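All three waits in i40e_pf_reset() above are instances of the same poll-until-state pattern: read a register, test a mask, delay, give up after N tries. A generic sketch using the rd32/i40e_msec_delay helpers this file already relies on (hypothetical helper, not part of the driver):

    /* poll until (reg & mask) == want, or tries expire */
    static bool poll_reg(struct i40e_hw *hw, u32 reg, u32 mask, u32 want,
                         u32 tries, u32 delay_ms)
    {
            u32 i;

            for (i = 0; i < tries; i++) {
                    if ((rd32(hw, reg) & mask) == want)
                            return true;
                    i40e_msec_delay(delay_ms);
            }
            return false;
    }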
+ **/ +void i40e_clear_hw(struct i40e_hw *hw) +{ + u32 num_queues, base_queue; + u32 num_pf_int; + u32 num_vf_int; + u32 num_vfs; + u32 i, j; + u32 val; + u32 eol = 0x7ff; + + /* get number of interrupts, queues, and vfs */ + val = rd32(hw, I40E_GLPCI_CNF2); + num_pf_int = (val & I40E_GLPCI_CNF2_MSI_X_PF_N_MASK) >> + I40E_GLPCI_CNF2_MSI_X_PF_N_SHIFT; + num_vf_int = (val & I40E_GLPCI_CNF2_MSI_X_VF_N_MASK) >> + I40E_GLPCI_CNF2_MSI_X_VF_N_SHIFT; + + val = rd32(hw, I40E_PFLAN_QALLOC); + base_queue = (val & I40E_PFLAN_QALLOC_FIRSTQ_MASK) >> + I40E_PFLAN_QALLOC_FIRSTQ_SHIFT; + j = (val & I40E_PFLAN_QALLOC_LASTQ_MASK) >> + I40E_PFLAN_QALLOC_LASTQ_SHIFT; + if (val & I40E_PFLAN_QALLOC_VALID_MASK) + num_queues = (j - base_queue) + 1; + else + num_queues = 0; + + val = rd32(hw, I40E_PF_VT_PFALLOC); + i = (val & I40E_PF_VT_PFALLOC_FIRSTVF_MASK) >> + I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT; + j = (val & I40E_PF_VT_PFALLOC_LASTVF_MASK) >> + I40E_PF_VT_PFALLOC_LASTVF_SHIFT; + if (val & I40E_PF_VT_PFALLOC_VALID_MASK) + num_vfs = (j - i) + 1; + else + num_vfs = 0; + + /* stop all the interrupts */ + wr32(hw, I40E_PFINT_ICR0_ENA, 0); + val = 0x3 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT; + for (i = 0; i < num_pf_int - 2; i++) + wr32(hw, I40E_PFINT_DYN_CTLN(i), val); + + /* Set the FIRSTQ_INDX field to 0x7FF in PFINT_LNKLSTx */ + val = eol << I40E_PFINT_LNKLST0_FIRSTQ_INDX_SHIFT; + wr32(hw, I40E_PFINT_LNKLST0, val); + for (i = 0; i < num_pf_int - 2; i++) + wr32(hw, I40E_PFINT_LNKLSTN(i), val); + val = eol << I40E_VPINT_LNKLST0_FIRSTQ_INDX_SHIFT; + for (i = 0; i < num_vfs; i++) + wr32(hw, I40E_VPINT_LNKLST0(i), val); + for (i = 0; i < num_vf_int - 2; i++) + wr32(hw, I40E_VPINT_LNKLSTN(i), val); + + /* warn the HW of the coming Tx disables */ + for (i = 0; i < num_queues; i++) { + u32 abs_queue_idx = base_queue + i; + u32 reg_block = 0; + + if (abs_queue_idx >= 128) { + reg_block = abs_queue_idx / 128; + abs_queue_idx %= 128; + } + + val = rd32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block)); + val &= ~I40E_GLLAN_TXPRE_QDIS_QINDX_MASK; + val |= (abs_queue_idx << I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT); + val |= I40E_GLLAN_TXPRE_QDIS_SET_QDIS_MASK; + + wr32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block), val); + } + i40e_usec_delay(400); + + /* stop all the queues */ + for (i = 0; i < num_queues; i++) { + wr32(hw, I40E_QINT_TQCTL(i), 0); + wr32(hw, I40E_QTX_ENA(i), 0); + wr32(hw, I40E_QINT_RQCTL(i), 0); + wr32(hw, I40E_QRX_ENA(i), 0); + } + + /* short wait for all queue disables to settle */ + i40e_usec_delay(50); +} + +/** + * i40e_clear_pxe_mode - clear pxe operations mode + * @hw: pointer to the hw struct + * + * Make sure all PXE mode settings are cleared, including things + * like descriptor fetch/write-back mode. 
+ **/ +void i40e_clear_pxe_mode(struct i40e_hw *hw) +{ + if (i40e_check_asq_alive(hw)) + i40e_aq_clear_pxe_mode(hw, NULL); +} + +/** + * i40e_led_is_mine - helper to find matching led + * @hw: pointer to the hw struct + * @idx: index into GPIO registers + * + * returns: 0 if no match, otherwise the value of the GPIO_CTL register + */ +static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx) +{ + u32 gpio_val = 0; + u32 port; + + if (!hw->func_caps.led[idx]) + return 0; + + gpio_val = rd32(hw, I40E_GLGEN_GPIO_CTL(idx)); + port = (gpio_val & I40E_GLGEN_GPIO_CTL_PRT_NUM_MASK) >> + I40E_GLGEN_GPIO_CTL_PRT_NUM_SHIFT; + + /* if PRT_NUM_NA is 1 then this LED is not port specific, OR + * if it is not our port then ignore + */ + if ((gpio_val & I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_MASK) || + (port != hw->port)) + return 0; + + return gpio_val; +} + +#define I40E_LED0 22 +#define I40E_LINK_ACTIVITY 0xC + +/** + * i40e_led_get - return current on/off mode + * @hw: pointer to the hw struct + * + * The value returned is the 'mode' field as defined in the + * GPIO register definitions: 0x0 = off, 0xf = on, and other + * values are variations of possible behaviors relating to + * blink, link, and wire. + **/ +u32 i40e_led_get(struct i40e_hw *hw) +{ + u32 mode = 0; + int i; + + /* as per the documentation GPIO 22-29 are the LED + * GPIO pins named LED0..LED7 + */ + for (i = I40E_LED0; i <= I40E_GLGEN_GPIO_CTL_MAX_INDEX; i++) { + u32 gpio_val = i40e_led_is_mine(hw, i); + + if (!gpio_val) + continue; + + mode = (gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK) >> + I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT; + break; + } + + return mode; +} + +/** + * i40e_led_set - set new on/off mode + * @hw: pointer to the hw struct + * @mode: 0=off, 0xf=on (else see manual for mode details) + * @blink: true if the LED should blink when on, false if steady + * + * if this function is used to turn on the blink it should + * be used to disable the blink when restoring the original state. + **/ +void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink) +{ + int i; + + if (mode & 0xfffffff0) + DEBUGOUT1("invalid mode passed in %X\n", mode); + + /* as per the documentation GPIO 22-29 are the LED + * GPIO pins named LED0..LED7 + */ + for (i = I40E_LED0; i <= I40E_GLGEN_GPIO_CTL_MAX_INDEX; i++) { + u32 gpio_val = i40e_led_is_mine(hw, i); + + if (!gpio_val) + continue; + + gpio_val &= ~I40E_GLGEN_GPIO_CTL_LED_MODE_MASK; + /* this & is a bit of paranoia, but serves as a range check */ + gpio_val |= ((mode << I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT) & + I40E_GLGEN_GPIO_CTL_LED_MODE_MASK); + + if (mode == I40E_LINK_ACTIVITY) + blink = false; + + gpio_val |= (blink ? 1 : 0) << + I40E_GLGEN_GPIO_CTL_LED_BLINK_SHIFT; + + wr32(hw, I40E_GLGEN_GPIO_CTL(i), gpio_val); + break; + } +} + +/* Admin command wrappers */ + +/** + * i40e_aq_get_phy_capabilities + * @hw: pointer to the hw struct + * @abilities: structure for PHY capabilities to be filled + * @qualified_modules: report Qualified Modules + * @report_init: report init capabilities (active are default) + * @cmd_details: pointer to command details structure or NULL + * + * Returns the various PHY abilities supported on the Port. 
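As the i40e_led_set() comment above notes, a caller that turns blinking on is responsible for restoring the previous state afterwards. A typical identify-port sequence might therefore look like this (hypothetical usage; the 5-second delay is arbitrary):

    /* blink the port LED briefly to identify it, then restore */
    static void identify_port(struct i40e_hw *hw)
    {
            u32 saved_mode = i40e_led_get(hw);    /* current mode field */

            i40e_led_set(hw, 0xf, true);          /* full on, blinking */
            i40e_msec_delay(5000);
            i40e_led_set(hw, saved_mode, false);  /* original mode, no blink */
    }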
+ **/ +enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw, + bool qualified_modules, bool report_init, + struct i40e_aq_get_phy_abilities_resp *abilities, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + enum i40e_status_code status; + u16 abilities_size = sizeof(struct i40e_aq_get_phy_abilities_resp); + + if (!abilities) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_phy_abilities); + + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + if (abilities_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + if (qualified_modules) + desc.params.external.param0 |= + CPU_TO_LE32(I40E_AQ_PHY_REPORT_QUALIFIED_MODULES); + + if (report_init) + desc.params.external.param0 |= + CPU_TO_LE32(I40E_AQ_PHY_REPORT_INITIAL_VALUES); + + status = i40e_asq_send_command(hw, &desc, abilities, abilities_size, + cmd_details); + + if (hw->aq.asq_last_status == I40E_AQ_RC_EIO) + status = I40E_ERR_UNKNOWN_PHY; + + return status; +} + +/** + * i40e_aq_set_phy_config + * @hw: pointer to the hw struct + * @config: structure with PHY configuration to be set + * @cmd_details: pointer to command details structure or NULL + * + * Set the various PHY configuration parameters + * supported on the Port. One or more of the Set PHY config parameters may be + * ignored in an MFP mode as the PF may not have the privilege to set some + * of the PHY Config parameters. This status will be indicated by the + * command response. + **/ +enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw, + struct i40e_aq_set_phy_config *config, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aq_set_phy_config *cmd = + (struct i40e_aq_set_phy_config *)&desc.params.raw; + enum i40e_status_code status; + + if (!config) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_phy_config); + + *cmd = *config; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_set_fc + * @hw: pointer to the hw struct + * @aq_failures: buffer to return AdminQ failure information + * @atomic_restart: request an atomic link restart so the change takes effect + * + * Set the requested flow control mode using set_phy_config. 
+ **/ +enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, + bool atomic_restart) +{ + enum i40e_fc_mode fc_mode = hw->fc.requested_mode; + struct i40e_aq_get_phy_abilities_resp abilities; + struct i40e_aq_set_phy_config config; + enum i40e_status_code status; + u8 pause_mask = 0x0; + + *aq_failures = 0x0; + + switch (fc_mode) { + case I40E_FC_FULL: + pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_TX; + pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_RX; + break; + case I40E_FC_RX_PAUSE: + pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_RX; + break; + case I40E_FC_TX_PAUSE: + pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_TX; + break; + default: + break; + } + + /* Get the current phy config */ + status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities, + NULL); + if (status) { + *aq_failures |= I40E_SET_FC_AQ_FAIL_GET; + return status; + } + + memset(&config, 0, sizeof(struct i40e_aq_set_phy_config)); + /* clear the old pause settings */ + config.abilities = abilities.abilities & ~(I40E_AQ_PHY_FLAG_PAUSE_TX) & + ~(I40E_AQ_PHY_FLAG_PAUSE_RX); + /* set the new abilities */ + config.abilities |= pause_mask; + /* If the abilities have changed, then set the new config */ + if (config.abilities != abilities.abilities) { + /* Auto restart link so settings take effect */ + if (atomic_restart) + config.abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK; + /* Copy over all the old settings */ + config.phy_type = abilities.phy_type; + config.link_speed = abilities.link_speed; + config.eee_capability = abilities.eee_capability; + config.eeer = abilities.eeer_val; + config.low_power_ctrl = abilities.d3_lpan; + status = i40e_aq_set_phy_config(hw, &config, NULL); + + if (status) + *aq_failures |= I40E_SET_FC_AQ_FAIL_SET; + } + /* Update the link info */ + status = i40e_update_link_info(hw, true); + if (status) { + /* Wait a little bit (on 40G cards it sometimes takes a really + * long time for link to come back from the atomic reset) + * and try once more + */ + i40e_msec_delay(1000); + status = i40e_update_link_info(hw, true); + } + if (status) + *aq_failures |= I40E_SET_FC_AQ_FAIL_UPDATE; + + return status; +} + +/** + * i40e_aq_set_mac_config + * @hw: pointer to the hw struct + * @max_frame_size: Maximum Frame Size to be supported by the port + * @crc_en: Tell HW to append a CRC to outgoing frames + * @pacing: Pacing configurations + * @cmd_details: pointer to command details structure or NULL + * + * Configure MAC settings for frame size, jumbo frame support and the + * addition of a CRC by the hardware. 
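i40e_set_fc() above reports which stage failed through the aq_failures bitmap, separately from the returned status, so a caller can tell a capabilities-read failure from a config-write or link-update failure. A hypothetical caller requesting full flow control could report each stage like this:

    /* request full flow control and log which AQ step failed, if any */
    static enum i40e_status_code request_full_fc(struct i40e_hw *hw)
    {
            enum i40e_status_code status;
            u8 aq_failures = 0;

            hw->fc.requested_mode = I40E_FC_FULL;
            status = i40e_set_fc(hw, &aq_failures, true /* atomic restart */);

            if (aq_failures & I40E_SET_FC_AQ_FAIL_GET)
                    DEBUGOUT("get_phy_capabilities failed\n");
            if (aq_failures & I40E_SET_FC_AQ_FAIL_SET)
                    DEBUGOUT("set_phy_config failed\n");
            if (aq_failures & I40E_SET_FC_AQ_FAIL_UPDATE)
                    DEBUGOUT("link info update failed\n");
            return status;
    }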
+ **/ +enum i40e_status_code i40e_aq_set_mac_config(struct i40e_hw *hw, + u16 max_frame_size, + bool crc_en, u16 pacing, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aq_set_mac_config *cmd = + (struct i40e_aq_set_mac_config *)&desc.params.raw; + enum i40e_status_code status; + + if (max_frame_size == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_mac_config); + + cmd->max_frame_size = CPU_TO_LE16(max_frame_size); + cmd->params = ((u8)pacing & 0x0F) << 3; + if (crc_en) + cmd->params |= I40E_AQ_SET_MAC_CONFIG_CRC_EN; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_clear_pxe_mode + * @hw: pointer to the hw struct + * @cmd_details: pointer to command details structure or NULL + * + * Tell the firmware that the driver is taking over from PXE + **/ +enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details) +{ + enum i40e_status_code status; + struct i40e_aq_desc desc; + struct i40e_aqc_clear_pxe *cmd = + (struct i40e_aqc_clear_pxe *)&desc.params.raw; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_clear_pxe_mode); + + cmd->rx_cnt = 0x2; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + wr32(hw, I40E_GLLAN_RCTL_0, 0x1); + + return status; +} + +/** + * i40e_aq_set_link_restart_an + * @hw: pointer to the hw struct + * @enable_link: if true: enable link, if false: disable link + * @cmd_details: pointer to command details structure or NULL + * + * Sets up the link and restarts the Auto-Negotiation over the link. + **/ +enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw, + bool enable_link, struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_link_restart_an *cmd = + (struct i40e_aqc_set_link_restart_an *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_link_restart_an); + + cmd->command = I40E_AQ_PHY_RESTART_AN; + if (enable_link) + cmd->command |= I40E_AQ_PHY_LINK_ENABLE; + else + cmd->command &= ~I40E_AQ_PHY_LINK_ENABLE; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_get_link_info + * @hw: pointer to the hw struct + * @enable_lse: enable/disable LinkStatusEvent reporting + * @link: pointer to link status structure - optional + * @cmd_details: pointer to command details structure or NULL + * + * Returns the link status of the adapter. 
+ **/ +enum i40e_status_code i40e_aq_get_link_info(struct i40e_hw *hw, + bool enable_lse, struct i40e_link_status *link, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_get_link_status *resp = + (struct i40e_aqc_get_link_status *)&desc.params.raw; + struct i40e_link_status *hw_link_info = &hw->phy.link_info; + enum i40e_status_code status; + bool tx_pause, rx_pause; + u16 command_flags; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_link_status); + + if (enable_lse) + command_flags = I40E_AQ_LSE_ENABLE; + else + command_flags = I40E_AQ_LSE_DISABLE; + resp->command_flags = CPU_TO_LE16(command_flags); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (status != I40E_SUCCESS) + goto aq_get_link_info_exit; + + /* save off old link status information */ + i40e_memcpy(&hw->phy.link_info_old, hw_link_info, + sizeof(struct i40e_link_status), I40E_NONDMA_TO_NONDMA); + + /* update link status */ + hw_link_info->phy_type = (enum i40e_aq_phy_type)resp->phy_type; + hw->phy.media_type = i40e_get_media_type(hw); + hw_link_info->link_speed = (enum i40e_aq_link_speed)resp->link_speed; + hw_link_info->link_info = resp->link_info; + hw_link_info->an_info = resp->an_info; + hw_link_info->ext_info = resp->ext_info; + hw_link_info->loopback = resp->loopback; + hw_link_info->max_frame_size = LE16_TO_CPU(resp->max_frame_size); + hw_link_info->pacing = resp->config & I40E_AQ_CONFIG_PACING_MASK; + + /* update fc info */ + tx_pause = !!(resp->an_info & I40E_AQ_LINK_PAUSE_TX); + rx_pause = !!(resp->an_info & I40E_AQ_LINK_PAUSE_RX); + if (tx_pause & rx_pause) + hw->fc.current_mode = I40E_FC_FULL; + else if (tx_pause) + hw->fc.current_mode = I40E_FC_TX_PAUSE; + else if (rx_pause) + hw->fc.current_mode = I40E_FC_RX_PAUSE; + else + hw->fc.current_mode = I40E_FC_NONE; + + if (resp->config & I40E_AQ_CONFIG_CRC_ENA) + hw_link_info->crc_enable = true; + else + hw_link_info->crc_enable = false; + + if (resp->command_flags & CPU_TO_LE16(I40E_AQ_LSE_ENABLE)) + hw_link_info->lse_enable = true; + else + hw_link_info->lse_enable = false; + + /* save link status information */ + if (link) + i40e_memcpy(link, hw_link_info, sizeof(struct i40e_link_status), + I40E_NONDMA_TO_NONDMA); + + /* flag cleared so helper functions don't call AQ again */ + hw->phy.get_link_info = false; + +aq_get_link_info_exit: + return status; +} + +/** + * i40e_update_link_info + * @hw: pointer to the hw struct + * @enable_lse: enable/disable LinkStatusEvent reporting + * + * Returns the link status of the adapter + **/ +enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw, + bool enable_lse) +{ + struct i40e_aq_get_phy_abilities_resp abilities; + enum i40e_status_code status; + + status = i40e_aq_get_link_info(hw, enable_lse, NULL, NULL); + if (status) + return status; + + status = i40e_aq_get_phy_capabilities(hw, false, false, + &abilities, NULL); + if (status) + return status; + + if (abilities.abilities & I40E_AQ_PHY_AN_ENABLED) + hw->phy.link_info.an_enabled = true; + else + hw->phy.link_info.an_enabled = false; + + return status; +} + +/** + * i40e_aq_set_phy_int_mask + * @hw: pointer to the hw struct + * @mask: interrupt mask to be set + * @cmd_details: pointer to command details structure or NULL + * + * Set link interrupt mask. 
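The pause decode in i40e_aq_get_link_info() above maps the two negotiated AN pause bits onto the four-value flow-control enum. The same mapping, stated on its own (hypothetical helper):

    /* map negotiated TX/RX pause bits to the flow-control mode enum */
    static enum i40e_fc_mode decode_fc(bool tx_pause, bool rx_pause)
    {
            if (tx_pause && rx_pause)
                    return I40E_FC_FULL;
            if (tx_pause)
                    return I40E_FC_TX_PAUSE;
            if (rx_pause)
                    return I40E_FC_RX_PAUSE;
            return I40E_FC_NONE;
    }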
+ **/ +enum i40e_status_code i40e_aq_set_phy_int_mask(struct i40e_hw *hw, + u16 mask, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_phy_int_mask *cmd = + (struct i40e_aqc_set_phy_int_mask *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_phy_int_mask); + + cmd->event_mask = CPU_TO_LE16(mask); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_get_local_advt_reg + * @hw: pointer to the hw struct + * @advt_reg: local AN advertisement register value + * @cmd_details: pointer to command details structure or NULL + * + * Get the Local AN advertisement register value. + **/ +enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw, + u64 *advt_reg, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_an_advt_reg *resp = + (struct i40e_aqc_an_advt_reg *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_local_advt_reg); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (status != I40E_SUCCESS) + goto aq_get_local_advt_reg_exit; + + *advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32; + *advt_reg |= LE32_TO_CPU(resp->local_an_reg0); + +aq_get_local_advt_reg_exit: + return status; +} + +/** + * i40e_aq_set_local_advt_reg + * @hw: pointer to the hw struct + * @advt_reg: local AN advertisement register value + * @cmd_details: pointer to command details structure or NULL + * + * Set the Local AN advertisement register value. + **/ +enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw, + u64 advt_reg, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_an_advt_reg *cmd = + (struct i40e_aqc_an_advt_reg *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_local_advt_reg); + + cmd->local_an_reg0 = CPU_TO_LE32(I40E_LO_DWORD(advt_reg)); + cmd->local_an_reg1 = CPU_TO_LE16(I40E_HI_DWORD(advt_reg)); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_get_partner_advt + * @hw: pointer to the hw struct + * @advt_reg: AN partner advertisement register value + * @cmd_details: pointer to command details structure or NULL + * + * Get the link partner AN advertisement register value. + **/ +enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw, + u64 *advt_reg, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_an_advt_reg *resp = + (struct i40e_aqc_an_advt_reg *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_partner_advt); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (status != I40E_SUCCESS) + goto aq_get_partner_advt_exit; + + *advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32; + *advt_reg |= LE32_TO_CPU(resp->local_an_reg0); + +aq_get_partner_advt_exit: + return status; +} + +/** + * i40e_aq_set_lb_modes + * @hw: pointer to the hw struct + * @lb_modes: loopback mode to be set + * @cmd_details: pointer to command details structure or NULL + * + * Sets loopback modes. 
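The advertisement helpers above carry a value wider than 32 bits across two descriptor fields: local_an_reg0 holds the low dword and local_an_reg1 the next 16 bits, hence the shift-by-32 reassembly. The split and join in isolation (hypothetical helpers, byte-order conversion omitted):

    #include <stdint.h>

    static uint64_t advt_join(uint32_t reg0, uint16_t reg1)
    {
            return (uint64_t)reg1 << 32 | reg0;   /* reg1 = bits 47:32 */
    }

    static void advt_split(uint64_t advt, uint32_t *reg0, uint16_t *reg1)
    {
            *reg0 = (uint32_t)advt;               /* low dword */
            *reg1 = (uint16_t)(advt >> 32);       /* high word */
    }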
+ **/ +enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw, + u16 lb_modes, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_lb_mode *cmd = + (struct i40e_aqc_set_lb_mode *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_lb_modes); + + cmd->lb_mode = CPU_TO_LE16(lb_modes); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_set_phy_debug + * @hw: pointer to the hw struct + * @cmd_flags: debug command flags + * @cmd_details: pointer to command details structure or NULL + * + * Reset the external PHY. + **/ +enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_phy_debug *cmd = + (struct i40e_aqc_set_phy_debug *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_phy_debug); + + cmd->command_flags = cmd_flags; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_add_vsi + * @hw: pointer to the hw struct + * @vsi_ctx: pointer to a vsi context struct + * @cmd_details: pointer to command details structure or NULL + * + * Add a VSI context to the hardware. +**/ +enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_get_update_vsi *cmd = + (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw; + struct i40e_aqc_add_get_update_vsi_completion *resp = + (struct i40e_aqc_add_get_update_vsi_completion *) + &desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_add_vsi); + + cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->uplink_seid); + cmd->connection_type = vsi_ctx->connection_type; + cmd->vf_id = vsi_ctx->vf_num; + cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags); + + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + + status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info, + sizeof(vsi_ctx->info), cmd_details); + + if (status != I40E_SUCCESS) + goto aq_add_vsi_exit; + + vsi_ctx->seid = LE16_TO_CPU(resp->seid); + vsi_ctx->vsi_number = LE16_TO_CPU(resp->vsi_number); + vsi_ctx->vsis_allocated = LE16_TO_CPU(resp->vsi_used); + vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free); + +aq_add_vsi_exit: + return status; +} + +/** + * i40e_aq_set_default_vsi + * @hw: pointer to the hw struct + * @seid: vsi number + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw, + u16 seid, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_vsi_promiscuous_modes *cmd = + (struct i40e_aqc_set_vsi_promiscuous_modes *) + &desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_vsi_promiscuous_modes); + + cmd->promiscuous_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT); + cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT); + cmd->seid = CPU_TO_LE16(seid); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_set_vsi_unicast_promiscuous + * @hw: pointer to the hw struct + * @seid: vsi number + * @set: set unicast promiscuous enable/disable + * @cmd_details: pointer 
to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, + u16 seid, bool set, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_vsi_promiscuous_modes *cmd = + (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; + enum i40e_status_code status; + u16 flags = 0; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_vsi_promiscuous_modes); + + if (set) + flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST; + + cmd->promiscuous_flags = CPU_TO_LE16(flags); + + cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_UNICAST); + + cmd->seid = CPU_TO_LE16(seid); + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_set_vsi_multicast_promiscuous + * @hw: pointer to the hw struct + * @seid: vsi number + * @set: set multicast promiscuous enable/disable + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, + u16 seid, bool set, struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_vsi_promiscuous_modes *cmd = + (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; + enum i40e_status_code status; + u16 flags = 0; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_vsi_promiscuous_modes); + + if (set) + flags |= I40E_AQC_SET_VSI_PROMISC_MULTICAST; + + cmd->promiscuous_flags = CPU_TO_LE16(flags); + + cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_MULTICAST); + + cmd->seid = CPU_TO_LE16(seid); + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_set_vsi_broadcast + * @hw: pointer to the hw struct + * @seid: vsi number + * @set_filter: true to set filter, false to clear filter + * @cmd_details: pointer to command details structure or NULL + * + * Set or clear the broadcast promiscuous flag (filter) for a given VSI. 
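The unicast and multicast setters above, and the broadcast variant that follows, share one descriptor convention: valid_flags selects which promiscuous bit the firmware should examine, and promiscuous_flags carries its new value, so each mode can be toggled without disturbing the others. A hypothetical generic helper in the same style, built only from calls shown in this file:

    /* set or clear a single promiscuous-mode bit for a VSI */
    static enum i40e_status_code set_vsi_promisc_bit(struct i40e_hw *hw,
                                                     u16 seid, u16 bit, bool on)
    {
            struct i40e_aq_desc desc;
            struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
                    (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;

            i40e_fill_default_direct_cmd_desc(&desc,
                                    i40e_aqc_opc_set_vsi_promiscuous_modes);
            cmd->promiscuous_flags = CPU_TO_LE16(on ? bit : 0);
            cmd->valid_flags = CPU_TO_LE16(bit);  /* only this bit is examined */
            cmd->seid = CPU_TO_LE16(seid);
            return i40e_asq_send_command(hw, &desc, NULL, 0, NULL);
    }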
+ **/ +enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, + u16 seid, bool set_filter, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_set_vsi_promiscuous_modes *cmd = + (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_vsi_promiscuous_modes); + + if (set_filter) + cmd->promiscuous_flags + |= CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST); + else + cmd->promiscuous_flags + &= CPU_TO_LE16(~I40E_AQC_SET_VSI_PROMISC_BROADCAST); + + cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST); + cmd->seid = CPU_TO_LE16(seid); + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_get_vsi_params - get VSI configuration info + * @hw: pointer to the hw struct + * @vsi_ctx: pointer to a vsi context struct + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_get_update_vsi *cmd = + (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw; + struct i40e_aqc_add_get_update_vsi_completion *resp = + (struct i40e_aqc_add_get_update_vsi_completion *) + &desc.params.raw; + enum i40e_status_code status; + + UNREFERENCED_1PARAMETER(cmd_details); + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_vsi_parameters); + + cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->seid); + + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + + status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info, + sizeof(vsi_ctx->info), NULL); + + if (status != I40E_SUCCESS) + goto aq_get_vsi_params_exit; + + vsi_ctx->seid = LE16_TO_CPU(resp->seid); + vsi_ctx->vsi_number = LE16_TO_CPU(resp->vsi_number); + vsi_ctx->vsis_allocated = LE16_TO_CPU(resp->vsi_used); + vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free); + +aq_get_vsi_params_exit: + return status; +} + +/** + * i40e_aq_update_vsi_params + * @hw: pointer to the hw struct + * @vsi_ctx: pointer to a vsi context struct + * @cmd_details: pointer to command details structure or NULL + * + * Update a VSI context.
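As a sketch of the intended read-modify-write flow with i40e_aq_get_vsi_params() and i40e_aq_update_vsi_params() (helper name hypothetical; the get command keys off the seid the caller places in the context):

static enum i40e_status_code
example_vsi_read_modify_write(struct i40e_hw *hw, u16 vsi_seid)
{
	struct i40e_vsi_context ctx = {0};
	enum i40e_status_code status;

	ctx.seid = vsi_seid;	/* get keys off ctx.seid */
	status = i40e_aq_get_vsi_params(hw, &ctx, NULL);
	if (status != I40E_SUCCESS)
		return status;

	/* ... adjust the desired fields of ctx.info here ... */

	return i40e_aq_update_vsi_params(hw, &ctx, NULL);
}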
+ **/ +enum i40e_status_code i40e_aq_update_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_get_update_vsi *cmd = + (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_update_vsi_parameters); + cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->seid); + + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + + status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info, + sizeof(vsi_ctx->info), cmd_details); + + return status; +} + +/** + * i40e_aq_get_switch_config + * @hw: pointer to the hardware structure + * @buf: pointer to the result buffer + * @buf_size: length of input buffer + * @start_seid: seid to start for the report, 0 == beginning + * @cmd_details: pointer to command details structure or NULL + * + * Fill the buf with switch configuration returned from AdminQ command + **/ +enum i40e_status_code i40e_aq_get_switch_config(struct i40e_hw *hw, + struct i40e_aqc_get_switch_config_resp *buf, + u16 buf_size, u16 *start_seid, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_switch_seid *scfg = + (struct i40e_aqc_switch_seid *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_switch_config); + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + if (buf_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + scfg->seid = CPU_TO_LE16(*start_seid); + + status = i40e_asq_send_command(hw, &desc, buf, buf_size, cmd_details); + *start_seid = LE16_TO_CPU(scfg->seid); + + return status; +} + +/** + * i40e_aq_get_firmware_version + * @hw: pointer to the hw struct + * @fw_major_version: firmware major version + * @fw_minor_version: firmware minor version + * @api_major_version: API major version + * @api_minor_version: API minor version + * @cmd_details: pointer to command details structure or NULL + * + * Get the firmware version from the admin queue commands + **/ +enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw, + u16 *fw_major_version, u16 *fw_minor_version, + u16 *api_major_version, u16 *api_minor_version, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_get_version *resp = + (struct i40e_aqc_get_version *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_version); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (status == I40E_SUCCESS) { + if (fw_major_version != NULL) + *fw_major_version = LE16_TO_CPU(resp->fw_major); + if (fw_minor_version != NULL) + *fw_minor_version = LE16_TO_CPU(resp->fw_minor); + if (api_major_version != NULL) + *api_major_version = LE16_TO_CPU(resp->api_major); + if (api_minor_version != NULL) + *api_minor_version = LE16_TO_CPU(resp->api_minor); + + /* A workaround to fix the API version in SW */ + if (api_major_version && api_minor_version && + fw_major_version && fw_minor_version && + ((*api_major_version == 1) && (*api_minor_version == 1)) && + (((*fw_major_version == 4) && (*fw_minor_version >= 2)) || + (*fw_major_version > 4))) + *api_minor_version = 2; + } + + return status; +} + +/** + * i40e_aq_send_driver_version + * @hw: pointer to the hw struct + * @dv: driver's major, minor version + * @cmd_details: pointer to command details structure or NULL + * + * Send
the driver version to the firmware + **/ +enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw, + struct i40e_driver_version *dv, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_driver_version *cmd = + (struct i40e_aqc_driver_version *)&desc.params.raw; + enum i40e_status_code status; + u16 len; + + if (dv == NULL) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_driver_version); + + desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_SI); + cmd->driver_major_ver = dv->major_version; + cmd->driver_minor_ver = dv->minor_version; + cmd->driver_build_ver = dv->build_version; + cmd->driver_subbuild_ver = dv->subbuild_version; + + len = 0; + while (len < sizeof(dv->driver_string) && + (dv->driver_string[len] < 0x80) && + dv->driver_string[len]) + len++; + status = i40e_asq_send_command(hw, &desc, dv->driver_string, + len, cmd_details); + + return status; +} + +/** + * i40e_get_link_status - get status of the HW network link + * @hw: pointer to the hw struct + * + * Returns true if link is up, false if link is down. + * + * Side effect: LinkStatusEvent reporting becomes enabled + **/ +bool i40e_get_link_status(struct i40e_hw *hw) +{ + enum i40e_status_code status = I40E_SUCCESS; + bool link_status = false; + + if (hw->phy.get_link_info) { + status = i40e_aq_get_link_info(hw, true, NULL, NULL); + + if (status != I40E_SUCCESS) + goto i40e_get_link_status_exit; + } + + link_status = hw->phy.link_info.link_info & I40E_AQ_LINK_UP; + +i40e_get_link_status_exit: + return link_status; +} + +/** + * i40e_get_link_speed + * @hw: pointer to the hw struct + * + * Returns the link speed of the adapter. + **/ +enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw) +{ + enum i40e_aq_link_speed speed = I40E_LINK_SPEED_UNKNOWN; + enum i40e_status_code status = I40E_SUCCESS; + + if (hw->phy.get_link_info) { + status = i40e_aq_get_link_info(hw, true, NULL, NULL); + + if (status != I40E_SUCCESS) + goto i40e_link_speed_exit; + } + + speed = hw->phy.link_info.link_speed; + +i40e_link_speed_exit: + return speed; +} + +/** + * i40e_aq_add_veb - Insert a VEB between the VSI and the MAC + * @hw: pointer to the hw struct + * @uplink_seid: the MAC or other gizmo SEID + * @downlink_seid: the VSI SEID + * @enabled_tc: bitmap of TCs to be enabled + * @default_port: true for default port VSI, false for control port + * @enable_l2_filtering: true to add L2 filter table rules to regular forwarding rules for cloud support + * @veb_seid: pointer to where to put the resulting VEB SEID + * @cmd_details: pointer to command details structure or NULL + * + * This asks the FW to add a VEB between the uplink and downlink + * elements. If the uplink SEID is 0, this will be a floating VEB. 
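A small sketch of how a caller might use the link helpers above (assuming link info reporting has been set up as the side effect of i40e_get_link_status() describes; helper name hypothetical):

static void example_report_link(struct i40e_hw *hw)
{
	if (i40e_get_link_status(hw)) {
		/* speed is an I40E_LINK_SPEED_* enum value */
		enum i40e_aq_link_speed speed = i40e_get_link_speed(hw);
		(void)speed;	/* e.g. log or export it */
	}
}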
+ **/ +enum i40e_status_code i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, + u16 downlink_seid, u8 enabled_tc, + bool default_port, bool enable_l2_filtering, + u16 *veb_seid, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_veb *cmd = + (struct i40e_aqc_add_veb *)&desc.params.raw; + struct i40e_aqc_add_veb_completion *resp = + (struct i40e_aqc_add_veb_completion *)&desc.params.raw; + enum i40e_status_code status; + u16 veb_flags = 0; + + /* SEIDs need to either both be set or both be 0 for floating VEB */ + if (!!uplink_seid != !!downlink_seid) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_veb); + + cmd->uplink_seid = CPU_TO_LE16(uplink_seid); + cmd->downlink_seid = CPU_TO_LE16(downlink_seid); + cmd->enable_tcs = enabled_tc; + if (!uplink_seid) + veb_flags |= I40E_AQC_ADD_VEB_FLOATING; + if (default_port) + veb_flags |= I40E_AQC_ADD_VEB_PORT_TYPE_DEFAULT; + else + veb_flags |= I40E_AQC_ADD_VEB_PORT_TYPE_DATA; + + if (enable_l2_filtering) + veb_flags |= I40E_AQC_ADD_VEB_ENABLE_L2_FILTER; + + cmd->veb_flags = CPU_TO_LE16(veb_flags); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status && veb_seid) + *veb_seid = LE16_TO_CPU(resp->veb_seid); + + return status; +} + +/** + * i40e_aq_get_veb_parameters - Retrieve VEB parameters + * @hw: pointer to the hw struct + * @veb_seid: the SEID of the VEB to query + * @switch_id: the uplink switch id + * @floating: set to true if the VEB is floating + * @statistic_index: index of the stats counter block for this VEB + * @vebs_used: number of VEBs used by the function + * @vebs_free: total VEBs not reserved by any function + * @cmd_details: pointer to command details structure or NULL + * + * This retrieves the parameters for a particular VEB, specified by + * veb_seid, and returns them to the caller.
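For illustration, a wrapper around i40e_aq_add_veb() for the common case might look like this (hypothetical helper; TC0 only, default port, no cloud L2 filtering):

static enum i40e_status_code
example_add_default_veb(struct i40e_hw *hw, u16 mac_seid, u16 vsi_seid,
			u16 *veb_seid)
{
	/* both SEIDs set: a regular VEB; both zero would make it floating */
	return i40e_aq_add_veb(hw, mac_seid, vsi_seid, 0x1 /* TC0 */,
			       true, false, veb_seid, NULL);
}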
+ **/ +enum i40e_status_code i40e_aq_get_veb_parameters(struct i40e_hw *hw, + u16 veb_seid, u16 *switch_id, + bool *floating, u16 *statistic_index, + u16 *vebs_used, u16 *vebs_free, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_get_veb_parameters_completion *cmd_resp = + (struct i40e_aqc_get_veb_parameters_completion *) + &desc.params.raw; + enum i40e_status_code status; + + if (veb_seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_veb_parameters); + cmd_resp->seid = CPU_TO_LE16(veb_seid); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + if (status) + goto get_veb_exit; + + if (switch_id) + *switch_id = LE16_TO_CPU(cmd_resp->switch_id); + if (statistic_index) + *statistic_index = LE16_TO_CPU(cmd_resp->statistic_index); + if (vebs_used) + *vebs_used = LE16_TO_CPU(cmd_resp->vebs_used); + if (vebs_free) + *vebs_free = LE16_TO_CPU(cmd_resp->vebs_free); + if (floating) { + u16 flags = LE16_TO_CPU(cmd_resp->veb_flags); + if (flags & I40E_AQC_ADD_VEB_FLOATING) + *floating = true; + else + *floating = false; + } + +get_veb_exit: + return status; +} + +/** + * i40e_aq_add_macvlan + * @hw: pointer to the hw struct + * @seid: VSI for the mac address + * @mv_list: list of macvlans to be added + * @count: length of the list + * @cmd_details: pointer to command details structure or NULL + * + * Add MAC/VLAN addresses to the HW filtering + **/ +enum i40e_status_code i40e_aq_add_macvlan(struct i40e_hw *hw, u16 seid, + struct i40e_aqc_add_macvlan_element_data *mv_list, + u16 count, struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_macvlan *cmd = + (struct i40e_aqc_macvlan *)&desc.params.raw; + enum i40e_status_code status; + u16 buf_size; + + if (count == 0 || !mv_list || !hw) + return I40E_ERR_PARAM; + + buf_size = count * sizeof(struct i40e_aqc_add_macvlan_element_data); + + /* prep the rest of the request */ + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_macvlan); + cmd->num_addresses = CPU_TO_LE16(count); + cmd->seid[0] = CPU_TO_LE16(I40E_AQC_MACVLAN_CMD_SEID_VALID | seid); + cmd->seid[1] = 0; + cmd->seid[2] = 0; + + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buf_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, mv_list, buf_size, + cmd_details); + + return status; +} + +/** + * i40e_aq_remove_macvlan + * @hw: pointer to the hw struct + * @seid: VSI for the mac address + * @mv_list: list of macvlans to be removed + * @count: length of the list + * @cmd_details: pointer to command details structure or NULL + * + * Remove MAC/VLAN addresses from the HW filtering + **/ +enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid, + struct i40e_aqc_remove_macvlan_element_data *mv_list, + u16 count, struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_macvlan *cmd = + (struct i40e_aqc_macvlan *)&desc.params.raw; + enum i40e_status_code status; + u16 buf_size; + + if (count == 0 || !mv_list || !hw) + return I40E_ERR_PARAM; + + buf_size = count * sizeof(struct i40e_aqc_remove_macvlan_element_data); + + /* prep the rest of the request */ + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_macvlan); + cmd->num_addresses = CPU_TO_LE16(count); + cmd->seid[0] = CPU_TO_LE16(I40E_AQC_MACVLAN_CMD_SEID_VALID | seid); + cmd->seid[1] = 0; + cmd->seid[2] = 0; + + desc.flags |= 
CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buf_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, mv_list, buf_size, + cmd_details); + + return status; +} + +/** + * i40e_aq_add_vlan - Add VLAN ids to the HW filtering + * @hw: pointer to the hw struct + * @seid: VSI for the vlan filters + * @v_list: list of vlan filters to be added + * @count: length of the list + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_add_vlan(struct i40e_hw *hw, u16 seid, + struct i40e_aqc_add_remove_vlan_element_data *v_list, + u8 count, struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_macvlan *cmd = + (struct i40e_aqc_macvlan *)&desc.params.raw; + enum i40e_status_code status; + u16 buf_size; + + if (count == 0 || !v_list || !hw) + return I40E_ERR_PARAM; + + buf_size = count * sizeof(struct i40e_aqc_add_remove_vlan_element_data); + + /* prep the rest of the request */ + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_vlan); + cmd->num_addresses = CPU_TO_LE16(count); + cmd->seid[0] = CPU_TO_LE16(seid | I40E_AQC_MACVLAN_CMD_SEID_VALID); + cmd->seid[1] = 0; + cmd->seid[2] = 0; + + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buf_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, v_list, buf_size, + cmd_details); + + return status; +} + +/** + * i40e_aq_remove_vlan - Remove VLANs from the HW filtering + * @hw: pointer to the hw struct + * @seid: VSI for the vlan filters + * @v_list: list of vlan filters to be removed + * @count: length of the list + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_remove_vlan(struct i40e_hw *hw, u16 seid, + struct i40e_aqc_add_remove_vlan_element_data *v_list, + u8 count, struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_macvlan *cmd = + (struct i40e_aqc_macvlan *)&desc.params.raw; + enum i40e_status_code status; + u16 buf_size; + + if (count == 0 || !v_list || !hw) + return I40E_ERR_PARAM; + + buf_size = count * sizeof(struct i40e_aqc_add_remove_vlan_element_data); + + /* prep the rest of the request */ + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_vlan); + cmd->num_addresses = CPU_TO_LE16(count); + cmd->seid[0] = CPU_TO_LE16(seid | I40E_AQC_MACVLAN_CMD_SEID_VALID); + cmd->seid[1] = 0; + cmd->seid[2] = 0; + + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buf_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, v_list, buf_size, + cmd_details); + + return status; +} + +/** + * i40e_aq_send_msg_to_vf + * @hw: pointer to the hardware structure + * @vfid: vf id to send msg + * @v_opcode: opcodes for VF-PF communication + * @v_retval: return error code + * @msg: pointer to the msg buffer + * @msglen: msg length + * @cmd_details: pointer to command details + * + * send msg to vf + **/ +enum i40e_status_code i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, + u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_pf_vf_message *cmd = + (struct i40e_aqc_pf_vf_message *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_vf); + cmd->id =
CPU_TO_LE32(vfid); + desc.cookie_high = CPU_TO_LE32(v_opcode); + desc.cookie_low = CPU_TO_LE32(v_retval); + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_SI); + if (msglen) { + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | + I40E_AQ_FLAG_RD)); + if (msglen > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + desc.datalen = CPU_TO_LE16(msglen); + } + status = i40e_asq_send_command(hw, &desc, msg, msglen, cmd_details); + + return status; +} + +/** + * i40e_aq_debug_write_register + * @hw: pointer to the hw struct + * @reg_addr: register address + * @reg_val: register value + * @cmd_details: pointer to command details structure or NULL + * + * Write to a register using the admin queue commands + **/ +enum i40e_status_code i40e_aq_debug_write_register(struct i40e_hw *hw, + u32 reg_addr, u64 reg_val, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_debug_reg_read_write *cmd = + (struct i40e_aqc_debug_reg_read_write *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_debug_write_reg); + + cmd->address = CPU_TO_LE32(reg_addr); + cmd->value_high = CPU_TO_LE32((u32)(reg_val >> 32)); + cmd->value_low = CPU_TO_LE32((u32)(reg_val & 0xFFFFFFFF)); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_get_hmc_resource_profile + * @hw: pointer to the hw struct + * @profile: type of profile the HMC is to be set as + * @pe_vf_enabled_count: the number of PE enabled VFs the system has + * @cmd_details: pointer to command details structure or NULL + * + * query the HMC profile of the device. + **/ +enum i40e_status_code i40e_aq_get_hmc_resource_profile(struct i40e_hw *hw, + enum i40e_aq_hmc_profile *profile, + u8 *pe_vf_enabled_count, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aq_get_set_hmc_resource_profile *resp = + (struct i40e_aq_get_set_hmc_resource_profile *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_query_hmc_resource_profile); + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + *profile = (enum i40e_aq_hmc_profile)(resp->pm_profile & + I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK); + *pe_vf_enabled_count = resp->pe_vf_enabled & + I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK; + + return status; +} + +/** + * i40e_aq_set_hmc_resource_profile + * @hw: pointer to the hw struct + * @profile: type of profile the HMC is to be set as + * @pe_vf_enabled_count: the number of PE enabled VFs the system has + * @cmd_details: pointer to command details structure or NULL + * + * set the HMC profile of the device. 
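As an aside, a debug register write through i40e_aq_debug_write_register() above might be wrapped like this (hypothetical helper; the register address is a placeholder, and the helper itself splits the 64-bit value into value_high/value_low):

static enum i40e_status_code
example_debug_reg_write(struct i40e_hw *hw, u32 reg_addr, u64 val)
{
	/* goes through the admin queue rather than a direct BAR access */
	return i40e_aq_debug_write_register(hw, reg_addr, val, NULL);
}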
+ **/ +enum i40e_status_code i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw, + enum i40e_aq_hmc_profile profile, + u8 pe_vf_enabled_count, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aq_get_set_hmc_resource_profile *cmd = + (struct i40e_aq_get_set_hmc_resource_profile *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_hmc_resource_profile); + + cmd->pm_profile = (u8)profile; + cmd->pe_vf_enabled = pe_vf_enabled_count; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_request_resource + * @hw: pointer to the hw struct + * @resource: resource id + * @access: access type + * @sdp_number: resource number + * @timeout: the maximum time in ms that the driver may hold the resource + * @cmd_details: pointer to command details structure or NULL + * + * requests common resource using the admin queue commands + **/ +enum i40e_status_code i40e_aq_request_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + enum i40e_aq_resource_access_type access, + u8 sdp_number, u64 *timeout, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_request_resource *cmd_resp = + (struct i40e_aqc_request_resource *)&desc.params.raw; + enum i40e_status_code status; + + DEBUGFUNC("i40e_aq_request_resource"); + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_request_resource); + + cmd_resp->resource_id = CPU_TO_LE16(resource); + cmd_resp->access_type = CPU_TO_LE16(access); + cmd_resp->resource_number = CPU_TO_LE32(sdp_number); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + /* The completion specifies the maximum time in ms that the driver + * may hold the resource in the Timeout field. + * If the resource is held by someone else, the command completes with + * busy return value and the timeout field indicates the maximum time + * the current owner of the resource has to free it. 
+ */ + if (status == I40E_SUCCESS || hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) + *timeout = LE32_TO_CPU(cmd_resp->timeout); + + return status; +} + +/** + * i40e_aq_release_resource + * @hw: pointer to the hw struct + * @resource: resource id + * @sdp_number: resource number + * @cmd_details: pointer to command details structure or NULL + * + * release common resource using the admin queue commands + **/ +enum i40e_status_code i40e_aq_release_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + u8 sdp_number, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_request_resource *cmd = + (struct i40e_aqc_request_resource *)&desc.params.raw; + enum i40e_status_code status; + + DEBUGFUNC("i40e_aq_release_resource"); + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_release_resource); + + cmd->resource_id = CPU_TO_LE16(resource); + cmd->resource_number = CPU_TO_LE32(sdp_number); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_read_nvm + * @hw: pointer to the hw struct + * @module_pointer: module pointer location in words from the NVM beginning + * @offset: byte offset from the module beginning + * @length: length of the section to be read (in bytes from the offset) + * @data: command buffer (size [bytes] = length) + * @last_command: tells if this is the last command in a series + * @cmd_details: pointer to command details structure or NULL + * + * Read the NVM using the admin queue commands + **/ +enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_nvm_update *cmd = + (struct i40e_aqc_nvm_update *)&desc.params.raw; + enum i40e_status_code status; + + DEBUGFUNC("i40e_aq_read_nvm"); + + /* In offset the highest byte must be zeroed. */ + if (offset & 0xFF000000) { + status = I40E_ERR_PARAM; + goto i40e_aq_read_nvm_exit; + } + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_read); + + /* If this is the last command in a series, set the proper flag. */ + if (last_command) + cmd->command_flags |= I40E_AQ_NVM_LAST_CMD; + cmd->module_pointer = module_pointer; + cmd->offset = CPU_TO_LE32(offset); + cmd->length = CPU_TO_LE16(length); + + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + if (length > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, data, length, cmd_details); + +i40e_aq_read_nvm_exit: + return status; +} + +/** + * i40e_aq_erase_nvm + * @hw: pointer to the hw struct + * @module_pointer: module pointer location in words from the NVM beginning + * @offset: offset in the module (expressed in 4 KB from module's beginning) + * @length: length of the section to be erased (expressed in 4 KB) + * @last_command: tells if this is the last command in a series + * @cmd_details: pointer to command details structure or NULL + * + * Erase the NVM sector using the admin queue commands + **/ +enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, bool last_command, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_nvm_update *cmd = + (struct i40e_aqc_nvm_update *)&desc.params.raw; + enum i40e_status_code status; + + DEBUGFUNC("i40e_aq_erase_nvm"); + + /* In offset the highest byte must be zeroed. 
*/ + if (offset & 0xFF000000) { + status = I40E_ERR_PARAM; + goto i40e_aq_erase_nvm_exit; + } + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_erase); + + /* If this is the last command in a series, set the proper flag. */ + if (last_command) + cmd->command_flags |= I40E_AQ_NVM_LAST_CMD; + cmd->module_pointer = module_pointer; + cmd->offset = CPU_TO_LE32(offset); + cmd->length = CPU_TO_LE16(length); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + +i40e_aq_erase_nvm_exit: + return status; +} + +#define I40E_DEV_FUNC_CAP_SWITCH_MODE 0x01 +#define I40E_DEV_FUNC_CAP_MGMT_MODE 0x02 +#define I40E_DEV_FUNC_CAP_NPAR 0x03 +#define I40E_DEV_FUNC_CAP_OS2BMC 0x04 +#define I40E_DEV_FUNC_CAP_VALID_FUNC 0x05 +#define I40E_DEV_FUNC_CAP_SRIOV_1_1 0x12 +#define I40E_DEV_FUNC_CAP_VF 0x13 +#define I40E_DEV_FUNC_CAP_VMDQ 0x14 +#define I40E_DEV_FUNC_CAP_802_1_QBG 0x15 +#define I40E_DEV_FUNC_CAP_802_1_QBH 0x16 +#define I40E_DEV_FUNC_CAP_VSI 0x17 +#define I40E_DEV_FUNC_CAP_DCB 0x18 +#define I40E_DEV_FUNC_CAP_FCOE 0x21 +#define I40E_DEV_FUNC_CAP_RSS 0x40 +#define I40E_DEV_FUNC_CAP_RX_QUEUES 0x41 +#define I40E_DEV_FUNC_CAP_TX_QUEUES 0x42 +#define I40E_DEV_FUNC_CAP_MSIX 0x43 +#define I40E_DEV_FUNC_CAP_MSIX_VF 0x44 +#define I40E_DEV_FUNC_CAP_FLOW_DIRECTOR 0x45 +#define I40E_DEV_FUNC_CAP_IEEE_1588 0x46 +#define I40E_DEV_FUNC_CAP_MFP_MODE_1 0xF1 +#define I40E_DEV_FUNC_CAP_CEM 0xF2 +#define I40E_DEV_FUNC_CAP_IWARP 0x51 +#define I40E_DEV_FUNC_CAP_LED 0x61 +#define I40E_DEV_FUNC_CAP_SDP 0x62 +#define I40E_DEV_FUNC_CAP_MDIO 0x63 + +/** + * i40e_parse_discover_capabilities + * @hw: pointer to the hw struct + * @buff: pointer to a buffer containing device/function capability records + * @cap_count: number of capability records in the list + * @list_type_opc: type of capabilities list to parse + * + * Parse the device/function capabilities list. 
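To tie the resource and NVM read helpers above together, a minimal read sequence could look as follows (a sketch assuming the I40E_NVM_RESOURCE_ID and I40E_RESOURCE_READ identifiers from i40e_type.h; helper name hypothetical):

static enum i40e_status_code
example_nvm_read(struct i40e_hw *hw, u32 offset, u16 len, void *buf)
{
	enum i40e_status_code status;
	u64 timeout = 0;

	status = i40e_aq_request_resource(hw, I40E_NVM_RESOURCE_ID,
					  I40E_RESOURCE_READ, 0, &timeout,
					  NULL);
	if (status != I40E_SUCCESS)
		return status;	/* on EBUSY, timeout holds the owner's hold time */

	/* single transfer, so flag it as the last command in the series */
	status = i40e_aq_read_nvm(hw, 0, offset, len, buf, true, NULL);

	i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0, NULL);
	return status;
}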
+ **/ +STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff, + u32 cap_count, + enum i40e_admin_queue_opc list_type_opc) +{ + struct i40e_aqc_list_capabilities_element_resp *cap; + u32 number, logical_id, phys_id; + struct i40e_hw_capabilities *p; + u32 i = 0; + u16 id; + + cap = (struct i40e_aqc_list_capabilities_element_resp *) buff; + + if (list_type_opc == i40e_aqc_opc_list_dev_capabilities) + p = (struct i40e_hw_capabilities *)&hw->dev_caps; + else if (list_type_opc == i40e_aqc_opc_list_func_capabilities) + p = (struct i40e_hw_capabilities *)&hw->func_caps; + else + return; + + for (i = 0; i < cap_count; i++, cap++) { + id = LE16_TO_CPU(cap->id); + number = LE32_TO_CPU(cap->number); + logical_id = LE32_TO_CPU(cap->logical_id); + phys_id = LE32_TO_CPU(cap->phys_id); + + switch (id) { + case I40E_DEV_FUNC_CAP_SWITCH_MODE: + p->switch_mode = number; + break; + case I40E_DEV_FUNC_CAP_MGMT_MODE: + p->management_mode = number; + break; + case I40E_DEV_FUNC_CAP_NPAR: + p->npar_enable = number; + break; + case I40E_DEV_FUNC_CAP_OS2BMC: + p->os2bmc = number; + break; + case I40E_DEV_FUNC_CAP_VALID_FUNC: + p->valid_functions = number; + break; + case I40E_DEV_FUNC_CAP_SRIOV_1_1: + if (number == 1) + p->sr_iov_1_1 = true; + break; + case I40E_DEV_FUNC_CAP_VF: + p->num_vfs = number; + p->vf_base_id = logical_id; + break; + case I40E_DEV_FUNC_CAP_VMDQ: + if (number == 1) + p->vmdq = true; + break; + case I40E_DEV_FUNC_CAP_802_1_QBG: + if (number == 1) + p->evb_802_1_qbg = true; + break; + case I40E_DEV_FUNC_CAP_802_1_QBH: + if (number == 1) + p->evb_802_1_qbh = true; + break; + case I40E_DEV_FUNC_CAP_VSI: + p->num_vsis = number; + break; + case I40E_DEV_FUNC_CAP_DCB: + if (number == 1) { + p->dcb = true; + p->enabled_tcmap = logical_id; + p->maxtc = phys_id; + } + break; + case I40E_DEV_FUNC_CAP_FCOE: + if (number == 1) + p->fcoe = true; + break; + case I40E_DEV_FUNC_CAP_RSS: + p->rss = true; + p->rss_table_size = number; + p->rss_table_entry_width = logical_id; + break; + case I40E_DEV_FUNC_CAP_RX_QUEUES: + p->num_rx_qp = number; + p->base_queue = phys_id; + break; + case I40E_DEV_FUNC_CAP_TX_QUEUES: + p->num_tx_qp = number; + p->base_queue = phys_id; + break; + case I40E_DEV_FUNC_CAP_MSIX: + p->num_msix_vectors = number; + break; + case I40E_DEV_FUNC_CAP_MSIX_VF: + p->num_msix_vectors_vf = number; + break; + case I40E_DEV_FUNC_CAP_MFP_MODE_1: + if (number == 1) + p->mfp_mode_1 = true; + break; + case I40E_DEV_FUNC_CAP_CEM: + if (number == 1) + p->mgmt_cem = true; + break; + case I40E_DEV_FUNC_CAP_IWARP: + if (number == 1) + p->iwarp = true; + break; + case I40E_DEV_FUNC_CAP_LED: + if (phys_id < I40E_HW_CAP_MAX_GPIO) + p->led[phys_id] = true; + break; + case I40E_DEV_FUNC_CAP_SDP: + if (phys_id < I40E_HW_CAP_MAX_GPIO) + p->sdp[phys_id] = true; + break; + case I40E_DEV_FUNC_CAP_MDIO: + if (number == 1) { + p->mdio_port_num = phys_id; + p->mdio_port_mode = logical_id; + } + break; + case I40E_DEV_FUNC_CAP_IEEE_1588: + if (number == 1) + p->ieee_1588 = true; + break; + case I40E_DEV_FUNC_CAP_FLOW_DIRECTOR: + p->fd = true; + p->fd_filters_guaranteed = number; + p->fd_filters_best_effort = logical_id; + break; + default: + break; + } + } + + /* Software override ensuring FCoE is disabled if npar or mfp + * mode because it is not supported in these modes. 
+ */ + if (p->npar_enable || p->mfp_mode_1) + p->fcoe = false; + + /* additional HW specific goodies that might + * someday be HW version specific + */ + p->rx_buf_chain_len = I40E_MAX_CHAINED_RX_BUFFERS; +} + +/** + * i40e_aq_discover_capabilities + * @hw: pointer to the hw struct + * @buff: a virtual buffer to hold the capabilities + * @buff_size: Size of the virtual buffer + * @data_size: Size of the returned data, or buff size needed if AQ err==ENOMEM + * @list_type_opc: capabilities type to discover - pass in the command opcode + * @cmd_details: pointer to command details structure or NULL + * + * Get the device capabilities descriptions from the firmware + **/ +enum i40e_status_code i40e_aq_discover_capabilities(struct i40e_hw *hw, + void *buff, u16 buff_size, u16 *data_size, + enum i40e_admin_queue_opc list_type_opc, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aqc_list_capabilites *cmd; + struct i40e_aq_desc desc; + enum i40e_status_code status = I40E_SUCCESS; + + cmd = (struct i40e_aqc_list_capabilites *)&desc.params.raw; + + if (list_type_opc != i40e_aqc_opc_list_func_capabilities && + list_type_opc != i40e_aqc_opc_list_dev_capabilities) { + status = I40E_ERR_PARAM; + goto exit; + } + + i40e_fill_default_direct_cmd_desc(&desc, list_type_opc); + + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + if (buff_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); + *data_size = LE16_TO_CPU(desc.datalen); + + if (status) + goto exit; + + i40e_parse_discover_capabilities(hw, buff, LE32_TO_CPU(cmd->count), + list_type_opc); + +exit: + return status; +} + +/** + * i40e_aq_update_nvm + * @hw: pointer to the hw struct + * @module_pointer: module pointer location in words from the NVM beginning + * @offset: byte offset from the module beginning + * @length: length of the section to be written (in bytes from the offset) + * @data: command buffer (size [bytes] = length) + * @last_command: tells if this is the last command in a series + * @cmd_details: pointer to command details structure or NULL + * + * Update the NVM using the admin queue commands + **/ +enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_nvm_update *cmd = + (struct i40e_aqc_nvm_update *)&desc.params.raw; + enum i40e_status_code status; + + DEBUGFUNC("i40e_aq_update_nvm"); + + /* In offset the highest byte must be zeroed. */ + if (offset & 0xFF000000) { + status = I40E_ERR_PARAM; + goto i40e_aq_update_nvm_exit; + } + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_update); + + /* If this is the last command in a series, set the proper flag. 
*/ + if (last_command) + cmd->command_flags |= I40E_AQ_NVM_LAST_CMD; + cmd->module_pointer = module_pointer; + cmd->offset = CPU_TO_LE32(offset); + cmd->length = CPU_TO_LE16(length); + + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (length > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, data, length, cmd_details); + +i40e_aq_update_nvm_exit: + return status; +} + +/** + * i40e_aq_get_lldp_mib + * @hw: pointer to the hw struct + * @bridge_type: type of bridge requested + * @mib_type: Local, Remote or both Local and Remote MIBs + * @buff: pointer to a user supplied buffer to store the MIB block + * @buff_size: size of the buffer (in bytes) + * @local_len: length of the returned Local LLDP MIB + * @remote_len: length of the returned Remote LLDP MIB + * @cmd_details: pointer to command details structure or NULL + * + * Requests the complete LLDP MIB (entire packet). + **/ +enum i40e_status_code i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, + u8 mib_type, void *buff, u16 buff_size, + u16 *local_len, u16 *remote_len, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_lldp_get_mib *cmd = + (struct i40e_aqc_lldp_get_mib *)&desc.params.raw; + struct i40e_aqc_lldp_get_mib *resp = + (struct i40e_aqc_lldp_get_mib *)&desc.params.raw; + enum i40e_status_code status; + + if (buff_size == 0 || !buff) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_get_mib); + /* Indirect Command */ + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + + cmd->type = mib_type & I40E_AQ_LLDP_MIB_TYPE_MASK; + cmd->type |= ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & + I40E_AQ_LLDP_BRIDGE_TYPE_MASK); + + desc.datalen = CPU_TO_LE16(buff_size); + + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + if (buff_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); + if (!status) { + if (local_len != NULL) + *local_len = LE16_TO_CPU(resp->local_len); + if (remote_len != NULL) + *remote_len = LE16_TO_CPU(resp->remote_len); + } + + return status; +} + +/** + * i40e_aq_cfg_lldp_mib_change_event + * @hw: pointer to the hw struct + * @enable_update: Enable or Disable event posting + * @cmd_details: pointer to command details structure or NULL + * + * Enable or Disable posting of an event on ARQ when LLDP MIB + * associated with the interface changes + **/ +enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, + bool enable_update, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_lldp_update_mib *cmd = + (struct i40e_aqc_lldp_update_mib *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_update_mib); + + if (!enable_update) + cmd->command |= I40E_AQ_LLDP_MIB_UPDATE_DISABLE; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_add_lldp_tlv + * @hw: pointer to the hw struct + * @bridge_type: type of bridge + * @buff: buffer with TLV to add + * @buff_size: length of the buffer + * @tlv_len: length of the TLV to be added + * @mib_len: length of the LLDP MIB returned in response + * @cmd_details: pointer to command details structure or NULL + * + * Add the specified TLV to LLDP Local MIB for the given bridge type; + * it is the responsibility of the caller to make sure
that the TLV is not + * already present in the LLDPDU. + * In return firmware will write the complete LLDP MIB with the newly + * added TLV in the response buffer. + **/ +enum i40e_status_code i40e_aq_add_lldp_tlv(struct i40e_hw *hw, u8 bridge_type, + void *buff, u16 buff_size, u16 tlv_len, + u16 *mib_len, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_lldp_add_tlv *cmd = + (struct i40e_aqc_lldp_add_tlv *)&desc.params.raw; + enum i40e_status_code status; + + if (buff_size == 0 || !buff || tlv_len == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_add_tlv); + + /* Indirect Command */ + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buff_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + desc.datalen = CPU_TO_LE16(buff_size); + + cmd->type = ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & + I40E_AQ_LLDP_BRIDGE_TYPE_MASK); + cmd->len = CPU_TO_LE16(tlv_len); + + status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); + if (!status) { + if (mib_len != NULL) + *mib_len = LE16_TO_CPU(desc.datalen); + } + + return status; +} + +/** + * i40e_aq_update_lldp_tlv + * @hw: pointer to the hw struct + * @bridge_type: type of bridge + * @buff: buffer with TLV to update + * @buff_size: size of the buffer holding original and updated TLVs + * @old_len: Length of the Original TLV + * @new_len: Length of the Updated TLV + * @offset: offset of the updated TLV in the buff + * @mib_len: length of the returned LLDP MIB + * @cmd_details: pointer to command details structure or NULL + * + * Update the specified TLV to the LLDP Local MIB for the given bridge type. + * Firmware will place the complete LLDP MIB in response buffer with the + * updated TLV. + **/ +enum i40e_status_code i40e_aq_update_lldp_tlv(struct i40e_hw *hw, + u8 bridge_type, void *buff, u16 buff_size, + u16 old_len, u16 new_len, u16 offset, + u16 *mib_len, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_lldp_update_tlv *cmd = + (struct i40e_aqc_lldp_update_tlv *)&desc.params.raw; + enum i40e_status_code status; + + if (buff_size == 0 || !buff || offset == 0 || + old_len == 0 || new_len == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_update_tlv); + + /* Indirect Command */ + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buff_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + desc.datalen = CPU_TO_LE16(buff_size); + + cmd->type = ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & + I40E_AQ_LLDP_BRIDGE_TYPE_MASK); + cmd->old_len = CPU_TO_LE16(old_len); + cmd->new_offset = CPU_TO_LE16(offset); + cmd->new_len = CPU_TO_LE16(new_len); + + status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); + if (!status) { + if (mib_len != NULL) + *mib_len = LE16_TO_CPU(desc.datalen); + } + + return status; +} + +/** + * i40e_aq_delete_lldp_tlv + * @hw: pointer to the hw struct + * @bridge_type: type of bridge + * @buff: pointer to a user supplied buffer that has the TLV + * @buff_size: length of the buffer + * @tlv_len: length of the TLV to be deleted + * @mib_len: length of the returned LLDP MIB + * @cmd_details: pointer to command details structure or NULL + * + * Delete the specified TLV from LLDP Local MIB for the given bridge type. + * The firmware places the entire LLDP MIB in the response buffer. 
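A usage sketch for i40e_aq_get_lldp_mib() above, assuming the I40E_AQ_LLDP_MIB_LOCAL and I40E_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE values from i40e_adminq_cmd.h (helper name hypothetical):

static enum i40e_status_code
example_get_local_mib(struct i40e_hw *hw, u8 *buf, u16 buf_size)
{
	u16 local_len = 0, remote_len = 0;

	return i40e_aq_get_lldp_mib(hw,
				    I40E_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE,
				    I40E_AQ_LLDP_MIB_LOCAL, buf, buf_size,
				    &local_len, &remote_len, NULL);
}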
+ **/ +enum i40e_status_code i40e_aq_delete_lldp_tlv(struct i40e_hw *hw, + u8 bridge_type, void *buff, u16 buff_size, + u16 tlv_len, u16 *mib_len, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_lldp_add_tlv *cmd = + (struct i40e_aqc_lldp_add_tlv *)&desc.params.raw; + enum i40e_status_code status; + + if (buff_size == 0 || !buff) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_delete_tlv); + + /* Indirect Command */ + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (buff_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + desc.datalen = CPU_TO_LE16(buff_size); + cmd->len = CPU_TO_LE16(tlv_len); + cmd->type = ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & + I40E_AQ_LLDP_BRIDGE_TYPE_MASK); + + status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); + if (!status) { + if (mib_len != NULL) + *mib_len = LE16_TO_CPU(desc.datalen); + } + + return status; +} + +/** + * i40e_aq_stop_lldp + * @hw: pointer to the hw struct + * @shutdown_agent: True if LLDP Agent needs to be Shutdown + * @cmd_details: pointer to command details structure or NULL + * + * Stop or Shutdown the embedded LLDP Agent + **/ +enum i40e_status_code i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_lldp_stop *cmd = + (struct i40e_aqc_lldp_stop *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_stop); + + if (shutdown_agent) + cmd->command |= I40E_AQ_LLDP_AGENT_SHUTDOWN; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_start_lldp + * @hw: pointer to the hw struct + * @cmd_details: pointer to command details structure or NULL + * + * Start the embedded LLDP Agent on all ports. 
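The stop/start pair here can be used to bounce the agent, sketched as (hypothetical helper):

static enum i40e_status_code example_restart_lldp(struct i40e_hw *hw)
{
	enum i40e_status_code status;

	/* stop without shutting the agent down, then start it again */
	status = i40e_aq_stop_lldp(hw, false, NULL);
	if (status != I40E_SUCCESS)
		return status;
	return i40e_aq_start_lldp(hw, NULL);
}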
+ **/ +enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_lldp_start *cmd = + (struct i40e_aqc_lldp_start *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_start); + + cmd->command = I40E_AQ_LLDP_AGENT_START; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_add_udp_tunnel + * @hw: pointer to the hw struct + * @udp_port: the UDP port to add + * @protocol_index: protocol index type + * @filter_index: pointer to filter index + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_add_udp_tunnel(struct i40e_hw *hw, + u16 udp_port, u8 protocol_index, + u8 *filter_index, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_udp_tunnel *cmd = + (struct i40e_aqc_add_udp_tunnel *)&desc.params.raw; + struct i40e_aqc_del_udp_tunnel_completion *resp = + (struct i40e_aqc_del_udp_tunnel_completion *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_udp_tunnel); + + cmd->udp_port = CPU_TO_LE16(udp_port); + cmd->protocol_type = protocol_index; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status) + *filter_index = resp->index; + + return status; +} + +/** + * i40e_aq_del_udp_tunnel + * @hw: pointer to the hw struct + * @index: filter index + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_remove_udp_tunnel *cmd = + (struct i40e_aqc_remove_udp_tunnel *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_del_udp_tunnel); + + cmd->index = index; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_get_switch_resource_alloc (0x0204) + * @hw: pointer to the hw struct + * @num_entries: pointer to u8 to store the number of resource entries returned + * @buf: pointer to a user supplied buffer. This buffer must be large enough + * to store the resource information for all resource types. Each + * resource type is a i40e_aqc_switch_resource_alloc_data structure. + * @count: number of resource entries the buffer can hold + * @cmd_details: pointer to command details structure or NULL + * + * Query the resources allocated to a function.
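Returning to the UDP tunnel helpers above: registering the standard VXLAN port could look like this (a sketch assuming the I40E_AQC_TUNNEL_TYPE_VXLAN define from i40e_adminq_cmd.h; helper name hypothetical):

static enum i40e_status_code
example_add_vxlan_port(struct i40e_hw *hw, u8 *filter_idx)
{
	/* 4789 is the IANA-assigned VXLAN port; the helper byte-swaps it */
	return i40e_aq_add_udp_tunnel(hw, 4789, I40E_AQC_TUNNEL_TYPE_VXLAN,
				      filter_idx, NULL);
}

The returned filter_idx is what a caller later passes to i40e_aq_del_udp_tunnel() to remove the port again.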
+ **/ +enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw, + u8 *num_entries, + struct i40e_aqc_switch_resource_alloc_element_resp *buf, + u16 count, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_get_switch_resource_alloc *cmd_resp = + (struct i40e_aqc_get_switch_resource_alloc *)&desc.params.raw; + enum i40e_status_code status; + u16 length = count + * sizeof(struct i40e_aqc_switch_resource_alloc_element_resp); + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_get_switch_resource_alloc); + + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + if (length > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, buf, length, cmd_details); + + if (!status) + *num_entries = cmd_resp->num_entries; + + return status; +} + +/** + * i40e_aq_delete_element - Delete switch element + * @hw: pointer to the hw struct + * @seid: the SEID to delete from the switch + * @cmd_details: pointer to command details structure or NULL + * + * This deletes a switch element from the switch. + **/ +enum i40e_status_code i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_switch_seid *cmd = + (struct i40e_aqc_switch_seid *)&desc.params.raw; + enum i40e_status_code status; + + if (seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_delete_element); + + cmd->seid = CPU_TO_LE16(seid); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_add_pvirt - Instantiate a Port Virtualizer on a port + * @hw: pointer to the hw struct + * @flags: component flags + * @mac_seid: uplink seid (MAC SEID) + * @vsi_seid: connected vsi seid + * @ret_seid: seid of the created pv component + * + * This instantiates an i40e port virtualizer with specified flags. + * Depending on specified flags the port virtualizer can act as a + * 802.1Qbr port virtualizer or a 802.1Qbg S-component. + */ +enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags, + u16 mac_seid, u16 vsi_seid, + u16 *ret_seid) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_update_pv *cmd = + (struct i40e_aqc_add_update_pv *)&desc.params.raw; + struct i40e_aqc_add_update_pv_completion *resp = + (struct i40e_aqc_add_update_pv_completion *)&desc.params.raw; + enum i40e_status_code status; + + if (vsi_seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_pv); + cmd->command_flags = CPU_TO_LE16(flags); + cmd->uplink_seid = CPU_TO_LE16(mac_seid); + cmd->connected_seid = CPU_TO_LE16(vsi_seid); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + if (!status && ret_seid) + *ret_seid = LE16_TO_CPU(resp->pv_seid); + + return status; +} + +/** + * i40e_aq_add_tag - Add an S/E-tag + * @hw: pointer to the hw struct + * @direct_to_queue: should s-tag direct flow to a specific queue + * @vsi_seid: VSI SEID to use this tag + * @tag: value of the tag + * @queue_num: queue number, only valid if direct_to_queue is true + * @tags_used: return value, number of tags in use by this PF + * @tags_free: return value, number of unallocated tags + * @cmd_details: pointer to command details structure or NULL + * + * This associates an S- or E-tag to a VSI in the switch complex. It returns + * the number of tags allocated by the PF, and the number of unallocated + * tags available.
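A minimal sketch for i40e_aq_add_tag() without queue steering (helper name hypothetical):

static enum i40e_status_code
example_add_stag(struct i40e_hw *hw, u16 vsi_seid, u16 tag)
{
	u16 tags_used = 0, tags_free = 0;

	/* direct_to_queue is false, so the queue number is ignored */
	return i40e_aq_add_tag(hw, false, vsi_seid, tag, 0,
			       &tags_used, &tags_free, NULL);
}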
+ **/ +enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue, + u16 vsi_seid, u16 tag, u16 queue_num, + u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_tag *cmd = + (struct i40e_aqc_add_tag *)&desc.params.raw; + struct i40e_aqc_add_remove_tag_completion *resp = + (struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw; + enum i40e_status_code status; + + if (vsi_seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_tag); + + cmd->seid = CPU_TO_LE16(vsi_seid); + cmd->tag = CPU_TO_LE16(tag); + if (direct_to_queue) { + cmd->flags = CPU_TO_LE16(I40E_AQC_ADD_TAG_FLAG_TO_QUEUE); + cmd->queue_number = CPU_TO_LE16(queue_num); + } + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status) { + if (tags_used != NULL) + *tags_used = LE16_TO_CPU(resp->tags_used); + if (tags_free != NULL) + *tags_free = LE16_TO_CPU(resp->tags_free); + } + + return status; +} + +/** + * i40e_aq_remove_tag - Remove an S- or E-tag + * @hw: pointer to the hw struct + * @vsi_seid: VSI SEID this tag is associated with + * @tag: value of the S-tag to delete + * @tags_used: return value, number of tags in use by this PF + * @tags_free: return value, number of unallocated tags + * @cmd_details: pointer to command details structure or NULL + * + * This deletes an S- or E-tag from a VSI in the switch complex. It returns + * the number of tags allocated by the PF, and the number of unallocated + * tags available. + **/ +enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid, + u16 tag, u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_remove_tag *cmd = + (struct i40e_aqc_remove_tag *)&desc.params.raw; + struct i40e_aqc_add_remove_tag_completion *resp = + (struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw; + enum i40e_status_code status; + + if (vsi_seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_tag); + + cmd->seid = CPU_TO_LE16(vsi_seid); + cmd->tag = CPU_TO_LE16(tag); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status) { + if (tags_used != NULL) + *tags_used = LE16_TO_CPU(resp->tags_used); + if (tags_free != NULL) + *tags_free = LE16_TO_CPU(resp->tags_free); + } + + return status; +} + +/** + * i40e_aq_add_mcast_etag - Add a multicast E-tag + * @hw: pointer to the hw struct + * @pv_seid: Port Virtualizer of this SEID to associate E-tag with + * @etag: value of E-tag to add + * @num_tags_in_buf: number of unicast E-tags in indirect buffer + * @buf: address of indirect buffer + * @tags_used: return value, number of E-tags in use by this port + * @tags_free: return value, number of unallocated M-tags + * @cmd_details: pointer to command details structure or NULL + * + * This associates a multicast E-tag to a port virtualizer. It will return + * the number of tags allocated by the PF, and the number of unallocated + * tags available. + * + * The indirect buffer pointed to by buf is a list of 2-byte E-tags, + * num_tags_in_buf long. 
+ **/ +enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid, + u16 etag, u8 num_tags_in_buf, void *buf, + u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_remove_mcast_etag *cmd = + (struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw; + struct i40e_aqc_add_remove_mcast_etag_completion *resp = + (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw; + enum i40e_status_code status; + u16 length = sizeof(u16) * num_tags_in_buf; + + if ((pv_seid == 0) || (buf == NULL) || (num_tags_in_buf == 0)) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_add_multicast_etag); + + cmd->pv_seid = CPU_TO_LE16(pv_seid); + cmd->etag = CPU_TO_LE16(etag); + cmd->num_unicast_etags = num_tags_in_buf; + + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + if (length > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + status = i40e_asq_send_command(hw, &desc, buf, length, cmd_details); + + if (!status) { + if (tags_used != NULL) + *tags_used = LE16_TO_CPU(resp->mcast_etags_used); + if (tags_free != NULL) + *tags_free = LE16_TO_CPU(resp->mcast_etags_free); + } + + return status; +} + +/** + * i40e_aq_remove_mcast_etag - Remove a multicast E-tag + * @hw: pointer to the hw struct + * @pv_seid: Port Virtualizer SEID this M-tag is associated with + * @etag: value of the E-tag to remove + * @tags_used: return value, number of tags in use by this port + * @tags_free: return value, number of unallocated tags + * @cmd_details: pointer to command details structure or NULL + * + * This deletes an E-tag from the port virtualizer. It will return + * the number of tags allocated by the port, and the number of unallocated + * tags available. + **/ +enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pv_seid, + u16 etag, u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_remove_mcast_etag *cmd = + (struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw; + struct i40e_aqc_add_remove_mcast_etag_completion *resp = + (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw; + enum i40e_status_code status; + + + if (pv_seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_remove_multicast_etag); + + cmd->pv_seid = CPU_TO_LE16(pv_seid); + cmd->etag = CPU_TO_LE16(etag); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status) { + if (tags_used != NULL) + *tags_used = LE16_TO_CPU(resp->mcast_etags_used); + if (tags_free != NULL) + *tags_free = LE16_TO_CPU(resp->mcast_etags_free); + } + + return status; +} + +/** + * i40e_aq_update_tag - Update an S/E-tag + * @hw: pointer to the hw struct + * @vsi_seid: VSI SEID using this S-tag + * @old_tag: old tag value + * @new_tag: new tag value + * @tags_used: return value, number of tags in use by this PF + * @tags_free: return value, number of unallocated tags + * @cmd_details: pointer to command details structure or NULL + * + * This updates the value of the tag currently attached to this VSI + * in the switch complex. It will return the number of tags allocated + * by the PF, and the number of unallocated tags available. 
+ **/ +enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid, + u16 old_tag, u16 new_tag, u16 *tags_used, + u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_update_tag *cmd = + (struct i40e_aqc_update_tag *)&desc.params.raw; + struct i40e_aqc_update_tag_completion *resp = + (struct i40e_aqc_update_tag_completion *)&desc.params.raw; + enum i40e_status_code status; + + if (vsi_seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_update_tag); + + cmd->seid = CPU_TO_LE16(vsi_seid); + cmd->old_tag = CPU_TO_LE16(old_tag); + cmd->new_tag = CPU_TO_LE16(new_tag); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status) { + if (tags_used != NULL) + *tags_used = LE16_TO_CPU(resp->tags_used); + if (tags_free != NULL) + *tags_free = LE16_TO_CPU(resp->tags_free); + } + + return status; +} + +/** + * i40e_aq_dcb_ignore_pfc - Ignore PFC for given TCs + * @hw: pointer to the hw struct + * @tcmap: TC map for request/release any ignore PFC condition + * @request: request or release ignore PFC condition + * @tcmap_ret: return TCs for which PFC is currently ignored + * @cmd_details: pointer to command details structure or NULL + * + * This sends out request/release to ignore PFC condition for a TC. + * It will return the TCs for which PFC is currently ignored. + **/ +enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw, u8 tcmap, + bool request, u8 *tcmap_ret, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_pfc_ignore *cmd_resp = + (struct i40e_aqc_pfc_ignore *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_ignore_pfc); + + if (request) + cmd_resp->command_flags = I40E_AQC_PFC_IGNORE_SET; + + cmd_resp->tc_bitmap = tcmap; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status) { + if (tcmap_ret != NULL) + *tcmap_ret = cmd_resp->tc_bitmap; + } + + return status; +} + +/** + * i40e_aq_dcb_updated - DCB Updated Command + * @hw: pointer to the hw struct + * @cmd_details: pointer to command details structure or NULL + * + * When LLDP is handled in PF this command is used by the PF + * to notify EMP that a DCB setting is modified. + * When LLDP is handled in EMP this command is used by the PF + * to notify EMP whenever one of the following parameters get + * modified: + * - PFCLinkDelayAllowance in PRTDCB_GENC.PFCLDA + * - PCIRTT in PRTDCB_GENC.PCIRTT + * - Maximum Frame Size for non-FCoE TCs set by PRTDCB_TDPUC.MAX_TXFRAME. + * EMP will return when the shared RPB settings have been + * recomputed and modified. The retval field in the descriptor + * will be set to 0 when RPB is modified. + **/ +enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_updated); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_add_statistics - Add a statistics block to a VLAN in a switch. 
+ * @hw: pointer to the hw struct + * @seid: defines the SEID of the switch for which the stats are requested + * @vlan_id: the VLAN ID for which the statistics are requested + * @stat_index: index of the statistics counters block assigned to this VLAN + * @cmd_details: pointer to command details structure or NULL + * + * XL710 supports 128 smonVlanStats counters. This command is used to + * allocate a set of smonVlanStats counters to a specific VLAN in a specific + * switch. + **/ +enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid, + u16 vlan_id, u16 *stat_index, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_remove_statistics *cmd_resp = + (struct i40e_aqc_add_remove_statistics *)&desc.params.raw; + enum i40e_status_code status; + + if ((seid == 0) || (stat_index == NULL)) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_statistics); + + cmd_resp->seid = CPU_TO_LE16(seid); + cmd_resp->vlan = CPU_TO_LE16(vlan_id); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status) + *stat_index = LE16_TO_CPU(cmd_resp->stat_index); + + return status; +} + +/** + * i40e_aq_remove_statistics - Remove a statistics block from a VLAN in a switch. + * @hw: pointer to the hw struct + * @seid: defines the SEID of the switch for which the stats are requested + * @vlan_id: the VLAN ID for which the statistics are requested + * @stat_index: index of the statistics counters block assigned to this VLAN + * @cmd_details: pointer to command details structure or NULL + * + * XL710 supports 128 smonVlanStats counters. This command is used to + * deallocate a set of smonVlanStats counters from a specific VLAN in a specific + * switch. + **/ +enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid, + u16 vlan_id, u16 stat_index, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_remove_statistics *cmd = + (struct i40e_aqc_add_remove_statistics *)&desc.params.raw; + enum i40e_status_code status; + + if (seid == 0) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_remove_statistics); + + cmd->seid = CPU_TO_LE16(seid); + cmd->vlan = CPU_TO_LE16(vlan_id); + cmd->stat_index = CPU_TO_LE16(stat_index); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_set_port_parameters - set physical port parameters.
+ * @hw: pointer to the hw struct + * @bad_frame_vsi: defines the VSI to which bad frames are forwarded + * @save_bad_pac: if set packets with errors are forwarded to the bad frames VSI + * @pad_short_pac: if set transmit packets smaller than 60 bytes are padded + * @double_vlan: if set double VLAN is enabled + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_set_port_parameters(struct i40e_hw *hw, + u16 bad_frame_vsi, bool save_bad_pac, + bool pad_short_pac, bool double_vlan, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aqc_set_port_parameters *cmd; + enum i40e_status_code status; + struct i40e_aq_desc desc; + u16 command_flags = 0; + + cmd = (struct i40e_aqc_set_port_parameters *)&desc.params.raw; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_set_port_parameters); + + cmd->bad_frame_vsi = CPU_TO_LE16(bad_frame_vsi); + if (save_bad_pac) + command_flags |= I40E_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS; + if (pad_short_pac) + command_flags |= I40E_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS; + if (double_vlan) + command_flags |= I40E_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA; + cmd->command_flags = CPU_TO_LE16(command_flags); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_tx_sched_cmd - generic Tx scheduler AQ command handler + * @hw: pointer to the hw struct + * @seid: seid for the physical port/switching component/vsi + * @buff: Indirect buffer to hold data parameters and response + * @buff_size: Indirect buffer size + * @opcode: Tx scheduler AQ command opcode + * @cmd_details: pointer to command details structure or NULL + * + * Generic command handler for Tx scheduler AQ commands + **/ +static enum i40e_status_code i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid, + void *buff, u16 buff_size, + enum i40e_admin_queue_opc opcode, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_tx_sched_ind *cmd = + (struct i40e_aqc_tx_sched_ind *)&desc.params.raw; + enum i40e_status_code status; + bool cmd_param_flag = false; + + switch (opcode) { + case i40e_aqc_opc_configure_vsi_ets_sla_bw_limit: + case i40e_aqc_opc_configure_vsi_tc_bw: + case i40e_aqc_opc_enable_switching_comp_ets: + case i40e_aqc_opc_modify_switching_comp_ets: + case i40e_aqc_opc_disable_switching_comp_ets: + case i40e_aqc_opc_configure_switching_comp_ets_bw_limit: + case i40e_aqc_opc_configure_switching_comp_bw_config: + cmd_param_flag = true; + break; + case i40e_aqc_opc_query_vsi_bw_config: + case i40e_aqc_opc_query_vsi_ets_sla_config: + case i40e_aqc_opc_query_switching_comp_ets_config: + case i40e_aqc_opc_query_port_ets_config: + case i40e_aqc_opc_query_switching_comp_bw_config: + cmd_param_flag = false; + break; + default: + return I40E_ERR_PARAM; + } + + i40e_fill_default_direct_cmd_desc(&desc, opcode); + + /* Indirect command */ + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + if (cmd_param_flag) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD); + if (buff_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + desc.datalen = CPU_TO_LE16(buff_size); + + cmd->vsi_seid = CPU_TO_LE16(seid); + + status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); + + return status; +} + +/** + * i40e_aq_config_vsi_bw_limit - Configure VSI BW Limit + * @hw: pointer to the hw struct + * @seid: VSI seid + * @credit: BW limit credits (0 = disabled) + * @max_credit: Max BW limit credits + * @cmd_details: pointer to command details structure or 
NULL + **/ +enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, + u16 seid, u16 credit, u8 max_credit, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_configure_vsi_bw_limit *cmd = + (struct i40e_aqc_configure_vsi_bw_limit *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_configure_vsi_bw_limit); + + cmd->vsi_seid = CPU_TO_LE16(seid); + cmd->credit = CPU_TO_LE16(credit); + cmd->max_credit = max_credit; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_config_switch_comp_bw_limit - Configure Switching component BW Limit + * @hw: pointer to the hw struct + * @seid: switching component seid + * @credit: BW limit credits (0 = disabled) + * @max_bw: Max BW limit credits + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw, + u16 seid, u16 credit, u8 max_bw, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_configure_switching_comp_bw_limit *cmd = + (struct i40e_aqc_configure_switching_comp_bw_limit *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_configure_switching_comp_bw_limit); + + cmd->seid = CPU_TO_LE16(seid); + cmd->credit = CPU_TO_LE16(credit); + cmd->max_bw = max_bw; + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_aq_config_vsi_ets_sla_bw_limit - Config VSI BW Limit per TC + * @hw: pointer to the hw struct + * @seid: VSI seid + * @bw_data: Buffer holding enabled TCs, per TC BW limit/credits + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_config_vsi_ets_sla_bw_limit(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_configure_vsi_ets_sla_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_configure_vsi_ets_sla_bw_limit, + cmd_details); +} + +/** + * i40e_aq_config_vsi_tc_bw - Config VSI BW Allocation per TC + * @hw: pointer to the hw struct + * @seid: VSI seid + * @bw_data: Buffer holding enabled TCs, relative TC BW limit/credits + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_configure_vsi_tc_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_configure_vsi_tc_bw, + cmd_details); +} + +/** + * i40e_aq_config_switch_comp_ets_bw_limit - Config Switch comp BW Limit per TC + * @hw: pointer to the hw struct + * @seid: seid of the switching component + * @bw_data: Buffer holding enabled TCs, per TC BW limit/credits + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit( + struct i40e_hw *hw, u16 seid, + struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_configure_switching_comp_ets_bw_limit, + cmd_details); +} + +/** + * i40e_aq_query_vsi_bw_config - Query VSI BW configuration + * @hw: pointer to the hw struct + * @seid: seid of the VSI + * @bw_data: Buffer to 
hold VSI BW configuration + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_query_vsi_bw_config, + cmd_details); +} + +/** + * i40e_aq_query_vsi_ets_sla_config - Query VSI BW configuration per TC + * @hw: pointer to the hw struct + * @seid: seid of the VSI + * @bw_data: Buffer to hold VSI BW configuration per TC + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_query_vsi_ets_sla_config, + cmd_details); +} + +/** + * i40e_aq_query_switch_comp_ets_config - Query Switch comp BW config per TC + * @hw: pointer to the hw struct + * @seid: seid of the switching component + * @bw_data: Buffer to hold switching component's per TC BW config + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_query_switching_comp_ets_config, + cmd_details); +} + +/** + * i40e_aq_query_port_ets_config - Query Physical Port ETS configuration + * @hw: pointer to the hw struct + * @seid: seid of the VSI or switching component connected to Physical Port + * @bw_data: Buffer to hold current ETS configuration for the Physical Port + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_port_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_query_port_ets_config, + cmd_details); +} + +/** + * i40e_aq_query_switch_comp_bw_config - Query Switch comp BW configuration + * @hw: pointer to the hw struct + * @seid: seid of the switching component + * @bw_data: Buffer to hold switching component's BW configuration + * @cmd_details: pointer to command details structure or NULL + **/ +enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), + i40e_aqc_opc_query_switching_comp_bw_config, + cmd_details); +} + +/** + * i40e_validate_filter_settings + * @hw: pointer to the hardware structure + * @settings: Filter control settings + * + * Check and validate the filter control settings passed. + * The function checks for the valid filter/context sizes being + * passed for FCoE and PE. + * + * Returns I40E_SUCCESS if the values passed are valid and within + * range else returns an error. 
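For reference, a minimal sketch (not part of the patch) of how the indirect query wrappers above are typically driven; it assumes an initialised struct i40e_hw and a valid VSI SEID.

static enum i40e_status_code example_query_vsi_bw(struct i40e_hw *hw,
						  u16 vsi_seid)
{
	struct i40e_aqc_query_vsi_bw_config_resp bw_data;
	enum i40e_status_code ret;

	/* the AQ layer copies bw_data into its own DMA-safe ring buffer */
	ret = i40e_aq_query_vsi_bw_config(hw, vsi_seid, &bw_data, NULL);
	if (ret == I40E_SUCCESS) {
		/* bw_data now holds the VSI bandwidth configuration */
	}
	return ret;
}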
+ **/ +STATIC enum i40e_status_code i40e_validate_filter_settings(struct i40e_hw *hw, + struct i40e_filter_control_settings *settings) +{ + u32 fcoe_cntx_size, fcoe_filt_size; + u32 pe_cntx_size, pe_filt_size; + u32 fcoe_fmax; + + u32 val; + + /* Validate FCoE settings passed */ + switch (settings->fcoe_filt_num) { + case I40E_HASH_FILTER_SIZE_1K: + case I40E_HASH_FILTER_SIZE_2K: + case I40E_HASH_FILTER_SIZE_4K: + case I40E_HASH_FILTER_SIZE_8K: + case I40E_HASH_FILTER_SIZE_16K: + case I40E_HASH_FILTER_SIZE_32K: + fcoe_filt_size = I40E_HASH_FILTER_BASE_SIZE; + fcoe_filt_size <<= (u32)settings->fcoe_filt_num; + break; + default: + return I40E_ERR_PARAM; + } + + switch (settings->fcoe_cntx_num) { + case I40E_DMA_CNTX_SIZE_512: + case I40E_DMA_CNTX_SIZE_1K: + case I40E_DMA_CNTX_SIZE_2K: + case I40E_DMA_CNTX_SIZE_4K: + fcoe_cntx_size = I40E_DMA_CNTX_BASE_SIZE; + fcoe_cntx_size <<= (u32)settings->fcoe_cntx_num; + break; + default: + return I40E_ERR_PARAM; + } + + /* Validate PE settings passed */ + switch (settings->pe_filt_num) { + case I40E_HASH_FILTER_SIZE_1K: + case I40E_HASH_FILTER_SIZE_2K: + case I40E_HASH_FILTER_SIZE_4K: + case I40E_HASH_FILTER_SIZE_8K: + case I40E_HASH_FILTER_SIZE_16K: + case I40E_HASH_FILTER_SIZE_32K: + case I40E_HASH_FILTER_SIZE_64K: + case I40E_HASH_FILTER_SIZE_128K: + case I40E_HASH_FILTER_SIZE_256K: + case I40E_HASH_FILTER_SIZE_512K: + case I40E_HASH_FILTER_SIZE_1M: + pe_filt_size = I40E_HASH_FILTER_BASE_SIZE; + pe_filt_size <<= (u32)settings->pe_filt_num; + break; + default: + return I40E_ERR_PARAM; + } + + switch (settings->pe_cntx_num) { + case I40E_DMA_CNTX_SIZE_512: + case I40E_DMA_CNTX_SIZE_1K: + case I40E_DMA_CNTX_SIZE_2K: + case I40E_DMA_CNTX_SIZE_4K: + case I40E_DMA_CNTX_SIZE_8K: + case I40E_DMA_CNTX_SIZE_16K: + case I40E_DMA_CNTX_SIZE_32K: + case I40E_DMA_CNTX_SIZE_64K: + case I40E_DMA_CNTX_SIZE_128K: + case I40E_DMA_CNTX_SIZE_256K: + pe_cntx_size = I40E_DMA_CNTX_BASE_SIZE; + pe_cntx_size <<= (u32)settings->pe_cntx_num; + break; + default: + return I40E_ERR_PARAM; + } + + /* FCHSIZE + FCDSIZE should not be greater than PMFCOEFMAX */ + val = rd32(hw, I40E_GLHMC_FCOEFMAX); + fcoe_fmax = (val & I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_MASK) + >> I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_SHIFT; + if (fcoe_filt_size + fcoe_cntx_size > fcoe_fmax) + return I40E_ERR_INVALID_SIZE; + + return I40E_SUCCESS; +} + +/** + * i40e_set_filter_control + * @hw: pointer to the hardware structure + * @settings: Filter control settings + * + * Set the Queue Filters for PE/FCoE and enable filters required + * for a single PF. It is expected that these settings are programmed + * at the driver initialization time. 
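To illustrate how the validation above is exercised, a sketch (not part of the patch) of an init-time caller filling struct i40e_filter_control_settings with the smallest valid sizes before handing it to i40e_set_filter_control():

static enum i40e_status_code example_filter_control(struct i40e_hw *hw)
{
	struct i40e_filter_control_settings settings;

	i40e_memset(&settings, 0, sizeof(settings), I40E_NONDMA_MEM);
	settings.fcoe_filt_num = I40E_HASH_FILTER_SIZE_1K;
	settings.fcoe_cntx_num = I40E_DMA_CNTX_SIZE_512;
	settings.pe_filt_num = I40E_HASH_FILTER_SIZE_1K;
	settings.pe_cntx_num = I40E_DMA_CNTX_SIZE_512;
	settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
	settings.enable_fdir = true;
	settings.enable_ethtype = true;
	settings.enable_macvlan = true;

	/* validates the sizes, then programs I40E_PFQF_CTL_0 */
	return i40e_set_filter_control(hw, &settings);
}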
+ **/ +enum i40e_status_code i40e_set_filter_control(struct i40e_hw *hw, + struct i40e_filter_control_settings *settings) +{ + enum i40e_status_code ret = I40E_SUCCESS; + u32 hash_lut_size = 0; + u32 val; + + if (!settings) + return I40E_ERR_PARAM; + + /* Validate the input settings */ + ret = i40e_validate_filter_settings(hw, settings); + if (ret) + return ret; + + /* Read the PF Queue Filter control register */ + val = rd32(hw, I40E_PFQF_CTL_0); + + /* Program required PE hash buckets for the PF */ + val &= ~I40E_PFQF_CTL_0_PEHSIZE_MASK; + val |= ((u32)settings->pe_filt_num << I40E_PFQF_CTL_0_PEHSIZE_SHIFT) & + I40E_PFQF_CTL_0_PEHSIZE_MASK; + /* Program required PE contexts for the PF */ + val &= ~I40E_PFQF_CTL_0_PEDSIZE_MASK; + val |= ((u32)settings->pe_cntx_num << I40E_PFQF_CTL_0_PEDSIZE_SHIFT) & + I40E_PFQF_CTL_0_PEDSIZE_MASK; + + /* Program required FCoE hash buckets for the PF */ + val &= ~I40E_PFQF_CTL_0_PFFCHSIZE_MASK; + val |= ((u32)settings->fcoe_filt_num << + I40E_PFQF_CTL_0_PFFCHSIZE_SHIFT) & + I40E_PFQF_CTL_0_PFFCHSIZE_MASK; + /* Program required FCoE DDP contexts for the PF */ + val &= ~I40E_PFQF_CTL_0_PFFCDSIZE_MASK; + val |= ((u32)settings->fcoe_cntx_num << + I40E_PFQF_CTL_0_PFFCDSIZE_SHIFT) & + I40E_PFQF_CTL_0_PFFCDSIZE_MASK; + + /* Program Hash LUT size for the PF */ + val &= ~I40E_PFQF_CTL_0_HASHLUTSIZE_MASK; + if (settings->hash_lut_size == I40E_HASH_LUT_SIZE_512) + hash_lut_size = 1; + val |= (hash_lut_size << I40E_PFQF_CTL_0_HASHLUTSIZE_SHIFT) & + I40E_PFQF_CTL_0_HASHLUTSIZE_MASK; + + /* Enable FDIR, Ethertype and MACVLAN filters for PF and VFs */ + if (settings->enable_fdir) + val |= I40E_PFQF_CTL_0_FD_ENA_MASK; + if (settings->enable_ethtype) + val |= I40E_PFQF_CTL_0_ETYPE_ENA_MASK; + if (settings->enable_macvlan) + val |= I40E_PFQF_CTL_0_MACVLAN_ENA_MASK; + + wr32(hw, I40E_PFQF_CTL_0, val); + + return I40E_SUCCESS; +} + +/** + * i40e_aq_add_rem_control_packet_filter - Add or Remove Control Packet Filter + * @hw: pointer to the hw struct + * @mac_addr: MAC address to use in the filter + * @ethtype: Ethertype to use in the filter + * @flags: Flags that need to be applied to the filter + * @vsi_seid: seid of the control VSI + * @queue: VSI queue number to send the packet to + * @is_add: add the control packet filter if true, else remove it + * @stats: Structure to hold information on control filter counts + * @cmd_details: pointer to command details structure or NULL + * + * This command will add or remove a control packet filter for a control VSI. + * In return it will update the perfect filter counts in the stats member.
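A caller-side sketch (not part of the patch) for the control packet filter command described above; the flag value would come from the I40E_AQC_ADD_CONTROL_PACKET_FLAGS_* definitions in i40e_adminq_cmd.h, which are outside this hunk, so it is left as a parameter here.

static enum i40e_status_code example_lldp_filter(struct i40e_hw *hw,
						 u16 vsi_seid, u16 flags)
{
	/* 01:80:c2:00:00:0e is the LLDP nearest-bridge multicast address */
	u8 lldp_mac[I40E_ETH_LENGTH_OF_ADDRESS] = {
		0x01, 0x80, 0xc2, 0x00, 0x00, 0x0e };
	struct i40e_control_filter_stats stats;

	return i40e_aq_add_rem_control_packet_filter(hw, lldp_mac,
			0x88cc /* LLDP ethertype */, flags, vsi_seid,
			0 /* queue */, true /* is_add */, &stats, NULL);
}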
+ **/ +enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, + u8 *mac_addr, u16 ethtype, u16 flags, + u16 vsi_seid, u16 queue, bool is_add, + struct i40e_control_filter_stats *stats, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_remove_control_packet_filter *cmd = + (struct i40e_aqc_add_remove_control_packet_filter *) + &desc.params.raw; + struct i40e_aqc_add_remove_control_packet_filter_completion *resp = + (struct i40e_aqc_add_remove_control_packet_filter_completion *) + &desc.params.raw; + enum i40e_status_code status; + + if (vsi_seid == 0) + return I40E_ERR_PARAM; + + if (is_add) { + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_add_control_packet_filter); + cmd->queue = CPU_TO_LE16(queue); + } else { + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_remove_control_packet_filter); + } + + if (mac_addr) + i40e_memcpy(cmd->mac, mac_addr, I40E_ETH_LENGTH_OF_ADDRESS, + I40E_NONDMA_TO_NONDMA); + + cmd->etype = CPU_TO_LE16(ethtype); + cmd->flags = CPU_TO_LE16(flags); + cmd->seid = CPU_TO_LE16(vsi_seid); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + if (!status && stats) { + stats->mac_etype_used = LE16_TO_CPU(resp->mac_etype_used); + stats->etype_used = LE16_TO_CPU(resp->etype_used); + stats->mac_etype_free = LE16_TO_CPU(resp->mac_etype_free); + stats->etype_free = LE16_TO_CPU(resp->etype_free); + } + + return status; +} + +/** + * i40e_aq_add_cloud_filters + * @hw: pointer to the hardware structure + * @seid: VSI seid to add cloud filters to + * @filters: Buffer which contains the filters to be added + * @filter_count: number of filters contained in the buffer + * + * Set the cloud filters for a given VSI. The contents of the + * i40e_aqc_add_remove_cloud_filters_element_data are filled + * in by the caller of the function. + * + **/ +enum i40e_status_code i40e_aq_add_cloud_filters(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_add_remove_cloud_filters_element_data *filters, + u8 filter_count) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_remove_cloud_filters *cmd = + (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; + u16 buff_len; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_add_cloud_filters); + + buff_len = sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data) * + filter_count; + desc.datalen = CPU_TO_LE16(buff_len); + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + cmd->num_filters = filter_count; + cmd->seid = CPU_TO_LE16(seid); + + status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL); + + return status; +} + +/** + * i40e_aq_remove_cloud_filters + * @hw: pointer to the hardware structure + * @seid: VSI seid to remove cloud filters from + * @filters: Buffer which contains the filters to be removed + * @filter_count: number of filters contained in the buffer + * + * Remove the cloud filters for a given VSI. The contents of the + * i40e_aqc_add_remove_cloud_filters_element_data are filled + * in by the caller of the function.
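A sketch (not part of the patch) of adding a single cloud filter element through the wrapper above; the element layout lives in i40e_adminq_cmd.h and must be populated by the caller.

static enum i40e_status_code example_add_cloud_filter(struct i40e_hw *hw,
						      u16 vsi_seid)
{
	struct i40e_aqc_add_remove_cloud_filters_element_data filt;

	i40e_memset(&filt, 0, sizeof(filt), I40E_NONDMA_MEM);
	/* ... fill MAC/VLAN/tenant-id fields per i40e_adminq_cmd.h ... */
	return i40e_aq_add_cloud_filters(hw, vsi_seid, &filt, 1);
}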
+ * + **/ +enum i40e_status_code i40e_aq_remove_cloud_filters(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_add_remove_cloud_filters_element_data *filters, + u8 filter_count) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_add_remove_cloud_filters *cmd = + (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; + enum i40e_status_code status; + u16 buff_len; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_remove_cloud_filters); + + buff_len = sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data) * + filter_count; + desc.datalen = CPU_TO_LE16(buff_len); + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); + cmd->num_filters = filter_count; + cmd->seid = CPU_TO_LE16(seid); + + status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL); + + return status; +} + +/** + * i40e_aq_alternate_write + * @hw: pointer to the hardware structure + * @reg_addr0: address of first dword to be written + * @reg_val0: value to be written under 'reg_addr0' + * @reg_addr1: address of second dword to be written + * @reg_val1: value to be written under 'reg_addr1' + * + * Write one or two dwords to alternate structure. Fields are indicated + * by 'reg_addr0' and 'reg_addr1' register numbers. + * + **/ +enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw, + u32 reg_addr0, u32 reg_val0, + u32 reg_addr1, u32 reg_val1) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_alternate_write *cmd_resp = + (struct i40e_aqc_alternate_write *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_write); + cmd_resp->address0 = CPU_TO_LE32(reg_addr0); + cmd_resp->address1 = CPU_TO_LE32(reg_addr1); + cmd_resp->data0 = CPU_TO_LE32(reg_val0); + cmd_resp->data1 = CPU_TO_LE32(reg_val1); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + + return status; +} + +/** + * i40e_aq_alternate_write_indirect + * @hw: pointer to the hardware structure + * @addr: address of the first register to be modified + * @dw_count: number of alternate structure fields to write + * @buffer: pointer to the command buffer + * + * Write 'dw_count' dwords from 'buffer' to alternate structure + * starting at 'addr'. + * + **/ +enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw, + u32 addr, u32 dw_count, void *buffer) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_alternate_ind_write *cmd_resp = + (struct i40e_aqc_alternate_ind_write *)&desc.params.raw; + enum i40e_status_code status; + + if (buffer == NULL) + return I40E_ERR_PARAM; + + /* Indirect command */ + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_alternate_write_indirect); + + desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD); + desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF); + if (dw_count > (I40E_AQ_LARGE_BUF/4)) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + cmd_resp->address = CPU_TO_LE32(addr); + cmd_resp->length = CPU_TO_LE32(dw_count); + cmd_resp->addr_high = CPU_TO_LE32(I40E_HI_WORD((u64)buffer)); + cmd_resp->addr_low = CPU_TO_LE32(I40E_LO_DWORD((u64)buffer)); + + status = i40e_asq_send_command(hw, &desc, buffer, + I40E_LO_DWORD(4*dw_count), NULL); + + return status; +} + +/** + * i40e_aq_alternate_read + * @hw: pointer to the hardware structure + * @reg_addr0: address of first dword to be read + * @reg_val0: pointer for data read from 'reg_addr0' + * @reg_addr1: address of second dword to be read + * @reg_val1: pointer for data read from 'reg_addr1' + * + * Read one or two dwords from alternate structure.
Fields are indicated + * by 'reg_addr0' and 'reg_addr1' register numbers. If 'reg_val1' pointer + * is not passed then only register at 'reg_addr0' is read. + * + **/ +enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw, + u32 reg_addr0, u32 *reg_val0, + u32 reg_addr1, u32 *reg_val1) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_alternate_write *cmd_resp = + (struct i40e_aqc_alternate_write *)&desc.params.raw; + enum i40e_status_code status; + + if (reg_val0 == NULL) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_read); + cmd_resp->address0 = CPU_TO_LE32(reg_addr0); + cmd_resp->address1 = CPU_TO_LE32(reg_addr1); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + + if (status == I40E_SUCCESS) { + *reg_val0 = LE32_TO_CPU(cmd_resp->data0); + + if (reg_val1 != NULL) + *reg_val1 = LE32_TO_CPU(cmd_resp->data1); + } + + return status; +} + +/** + * i40e_aq_alternate_read_indirect + * @hw: pointer to the hardware structure + * @addr: address of the alternate structure field + * @dw_count: number of alternate structure fields to read + * @buffer: pointer to the command buffer + * + * Read 'dw_count' dwords from alternate structure starting at 'addr' and + * place them in 'buffer'. The buffer should be allocated by caller. + * + **/ +enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw, + u32 addr, u32 dw_count, void *buffer) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_alternate_ind_write *cmd_resp = + (struct i40e_aqc_alternate_ind_write *)&desc.params.raw; + enum i40e_status_code status; + + if (buffer == NULL) + return I40E_ERR_PARAM; + + /* Indirect command */ + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_alternate_read_indirect); + + desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD); + desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF); + if (dw_count > (I40E_AQ_LARGE_BUF/4)) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + cmd_resp->address = CPU_TO_LE32(addr); + cmd_resp->length = CPU_TO_LE32(dw_count); + cmd_resp->addr_high = CPU_TO_LE32(I40E_HI_DWORD((u64)buffer)); + cmd_resp->addr_low = CPU_TO_LE32(I40E_LO_DWORD((u64)buffer)); + + status = i40e_asq_send_command(hw, &desc, buffer, + I40E_LO_DWORD(4*dw_count), NULL); + + return status; +} + +/** + * i40e_aq_alternate_clear + * @hw: pointer to the HW structure. + * + * Clear the alternate structures of the port from which the function + * is called. + * + **/ +enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw) +{ + struct i40e_aq_desc desc; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_alternate_clear_port); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + + return status; +} + +/** + * i40e_aq_alternate_write_done + * @hw: pointer to the HW structure. + * @bios_mode: indicates whether the command is executed by UEFI or legacy BIOS + * @reset_needed: indicates the SW should trigger GLOBAL reset + * + * Indicates to the FW that alternate structures have been changed. 
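A sketch (not part of the patch) pairing the direct alternate write and read defined above; the two word addresses and the data values are placeholders.

static enum i40e_status_code example_alt_access(struct i40e_hw *hw,
						u32 addr0, u32 addr1)
{
	u32 val0 = 0, val1 = 0;
	enum i40e_status_code ret;

	ret = i40e_aq_alternate_write(hw, addr0, 0x12345678,
				      addr1, 0x9abcdef0);
	if (ret != I40E_SUCCESS)
		return ret;
	/* val1 is only filled in because a non-NULL pointer is passed */
	return i40e_aq_alternate_read(hw, addr0, &val0, addr1, &val1);
}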
+ * + **/ +enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw, + u8 bios_mode, bool *reset_needed) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_alternate_write_done *cmd = + (struct i40e_aqc_alternate_write_done *)&desc.params.raw; + enum i40e_status_code status; + + if (reset_needed == NULL) + return I40E_ERR_PARAM; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_alternate_write_done); + + cmd->cmd_flags = CPU_TO_LE16(bios_mode); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + if (!status) + *reset_needed = ((LE16_TO_CPU(cmd->cmd_flags) & + I40E_AQ_ALTERNATE_RESET_NEEDED) != 0); + + return status; +} + +/** + * i40e_aq_set_oem_mode + * @hw: pointer to the HW structure. + * @oem_mode: the OEM mode to be used + * + * Sets the device to a specific operating mode. Currently the only supported + * mode is no_clp, which causes FW to refrain from using Alternate RAM. + * + **/ +enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw, + u8 oem_mode) +{ + struct i40e_aq_desc desc; + struct i40e_aqc_alternate_write_done *cmd = + (struct i40e_aqc_alternate_write_done *)&desc.params.raw; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_alternate_set_mode); + + cmd->cmd_flags = CPU_TO_LE16(oem_mode); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + + return status; +} + +/** + * i40e_aq_resume_port_tx + * @hw: pointer to the hardware structure + * @cmd_details: pointer to command details structure or NULL + * + * Resume port's Tx traffic + **/ +enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_resume_port_tx); + + status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); + + return status; +} + +/** + * i40e_set_pci_config_data - store PCI bus info + * @hw: pointer to hardware structure + * @link_status: the link status word from PCI config space + * + * Stores the PCI bus info (speed, width, type) within the i40e_hw structure + **/ +void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status) +{ + hw->bus.type = i40e_bus_type_pci_express; + + switch (link_status & I40E_PCI_LINK_WIDTH) { + case I40E_PCI_LINK_WIDTH_1: + hw->bus.width = i40e_bus_width_pcie_x1; + break; + case I40E_PCI_LINK_WIDTH_2: + hw->bus.width = i40e_bus_width_pcie_x2; + break; + case I40E_PCI_LINK_WIDTH_4: + hw->bus.width = i40e_bus_width_pcie_x4; + break; + case I40E_PCI_LINK_WIDTH_8: + hw->bus.width = i40e_bus_width_pcie_x8; + break; + default: + hw->bus.width = i40e_bus_width_unknown; + break; + } + + switch (link_status & I40E_PCI_LINK_SPEED) { + case I40E_PCI_LINK_SPEED_2500: + hw->bus.speed = i40e_bus_speed_2500; + break; + case I40E_PCI_LINK_SPEED_5000: + hw->bus.speed = i40e_bus_speed_5000; + break; + case I40E_PCI_LINK_SPEED_8000: + hw->bus.speed = i40e_bus_speed_8000; + break; + default: + hw->bus.speed = i40e_bus_speed_unknown; + break; + } +} + +/** + * i40e_read_bw_from_alt_ram + * @hw: pointer to the hardware structure + * @max_bw: pointer for max_bw read + * @min_bw: pointer for min_bw read + * @min_valid: pointer for bool that is true if min_bw is a valid value + * @max_valid: pointer for bool that is true if max_bw is a valid value + * + * Read bw from the alternate ram for the given pf + **/ +enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw, + u32 *max_bw, u32 *min_bw, + bool *min_valid, bool 
*max_valid) +{ + enum i40e_status_code status; + u32 max_bw_addr, min_bw_addr; + + /* Calculate the address of the min/max bw registers */ + max_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET + + I40E_ALT_STRUCT_MAX_BW_OFFSET + + (I40E_ALT_STRUCT_DWORDS_PER_PF*hw->pf_id); + min_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET + + I40E_ALT_STRUCT_MIN_BW_OFFSET + + (I40E_ALT_STRUCT_DWORDS_PER_PF*hw->pf_id); + + /* Read the bandwidths from alt ram */ + status = i40e_aq_alternate_read(hw, max_bw_addr, max_bw, + min_bw_addr, min_bw); + + if (*min_bw & I40E_ALT_BW_VALID_MASK) + *min_valid = true; + else + *min_valid = false; + + if (*max_bw & I40E_ALT_BW_VALID_MASK) + *max_valid = true; + else + *max_valid = false; + + return status; +} + +/** + * i40e_aq_configure_partition_bw + * @hw: pointer to the hardware structure + * @bw_data: Buffer holding valid pfs and bw limits + * @cmd_details: pointer to command details + * + * Configure partitions guaranteed/max bw + **/ +enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw, + struct i40e_aqc_configure_partition_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details) +{ + enum i40e_status_code status; + struct i40e_aq_desc desc; + u16 bwd_size = sizeof(struct i40e_aqc_configure_partition_bw_data); + + i40e_fill_default_direct_cmd_desc(&desc, + i40e_aqc_opc_configure_partition_bw); + + /* Indirect command */ + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD); + + if (bwd_size > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + + desc.datalen = CPU_TO_LE16(bwd_size); + + status = i40e_asq_send_command(hw, &desc, bw_data, bwd_size, cmd_details); + + return status; +} + +/** + * i40e_aq_send_msg_to_pf + * @hw: pointer to the hardware structure + * @v_opcode: opcodes for VF-PF communication + * @v_retval: return error code + * @msg: pointer to the msg buffer + * @msglen: msg length + * @cmd_details: pointer to command details + * + * Send message to PF driver using admin queue. By default, this message + * is sent asynchronously, i.e. i40e_asq_send_command() does not wait for + * completion before returning. + **/ +enum i40e_status_code i40e_aq_send_msg_to_pf(struct i40e_hw *hw, + enum i40e_virtchnl_ops v_opcode, + enum i40e_status_code v_retval, + u8 *msg, u16 msglen, + struct i40e_asq_cmd_details *cmd_details) +{ + struct i40e_aq_desc desc; + struct i40e_asq_cmd_details details; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_pf); + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_SI); + desc.cookie_high = CPU_TO_LE32(v_opcode); + desc.cookie_low = CPU_TO_LE32(v_retval); + if (msglen) { + desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF + | I40E_AQ_FLAG_RD)); + if (msglen > I40E_AQ_LARGE_BUF) + desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); + desc.datalen = CPU_TO_LE16(msglen); + } + if (!cmd_details) { + i40e_memset(&details, 0, sizeof(details), I40E_NONDMA_MEM); + details.async = true; + cmd_details = &details; + } + status = i40e_asq_send_command(hw, (struct i40e_aq_desc *)&desc, msg, + msglen, cmd_details); + return status; +} + +/** + * i40e_vf_parse_hw_config + * @hw: pointer to the hardware structure + * @msg: pointer to the virtual channel VF resource structure + * + * Given a VF resource message from the PF, populate the hw struct + * with appropriate information. 
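A sketch (not part of the patch) of the valid-bit handling around i40e_read_bw_from_alt_ram() defined above:

static void example_read_pf_bw(struct i40e_hw *hw)
{
	u32 max_bw = 0, min_bw = 0;
	bool min_valid = false, max_valid = false;

	if (i40e_read_bw_from_alt_ram(hw, &max_bw, &min_bw,
				      &min_valid, &max_valid))
		return;
	if (min_valid && max_valid) {
		/* min_bw/max_bw hold this PF's limits; note that the
		 * I40E_ALT_BW_VALID_MASK bit is still set in the raw words */
	}
}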
+ **/ +void i40e_vf_parse_hw_config(struct i40e_hw *hw, + struct i40e_virtchnl_vf_resource *msg) +{ + struct i40e_virtchnl_vsi_resource *vsi_res; + int i; + + vsi_res = &msg->vsi_res[0]; + + hw->dev_caps.num_vsis = msg->num_vsis; + hw->dev_caps.num_rx_qp = msg->num_queue_pairs; + hw->dev_caps.num_tx_qp = msg->num_queue_pairs; + hw->dev_caps.num_msix_vectors_vf = msg->max_vectors; + hw->dev_caps.dcb = msg->vf_offload_flags & + I40E_VIRTCHNL_VF_OFFLOAD_L2; + hw->dev_caps.fcoe = (msg->vf_offload_flags & + I40E_VIRTCHNL_VF_OFFLOAD_FCOE) ? 1 : 0; + hw->dev_caps.iwarp = (msg->vf_offload_flags & + I40E_VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0; + for (i = 0; i < msg->num_vsis; i++) { + if (vsi_res->vsi_type == I40E_VSI_SRIOV) { + i40e_memcpy(hw->mac.perm_addr, + vsi_res->default_mac_addr, + I40E_ETH_LENGTH_OF_ADDRESS, + I40E_NONDMA_TO_NONDMA); + i40e_memcpy(hw->mac.addr, vsi_res->default_mac_addr, + I40E_ETH_LENGTH_OF_ADDRESS, + I40E_NONDMA_TO_NONDMA); + } + vsi_res++; + } +} + +/** + * i40e_vf_reset + * @hw: pointer to the hardware structure + * + * Send a VF_RESET message to the PF. Does not wait for response from PF + * as none will be forthcoming. Immediately after calling this function, + * the admin queue should be shut down and (optionally) reinitialized. + **/ +enum i40e_status_code i40e_vf_reset(struct i40e_hw *hw) +{ + return i40e_aq_send_msg_to_pf(hw, I40E_VIRTCHNL_OP_RESET_VF, + I40E_SUCCESS, NULL, 0, NULL); +} +#endif /* VF_DRIVER */ diff --git a/drivers/net/i40e/base/i40e_dcb.c b/drivers/net/i40e/base/i40e_dcb.c new file mode 100644 index 0000000..d067028 --- /dev/null +++ b/drivers/net/i40e/base/i40e_dcb.c @@ -0,0 +1,479 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "i40e_adminq.h" +#include "i40e_prototype.h" +#include "i40e_dcb.h" + +/** + * i40e_get_dcbx_status + * @hw: pointer to the hw struct + * @status: Embedded DCBX Engine Status + * + * Get the DCBX status from the Firmware + **/ +enum i40e_status_code i40e_get_dcbx_status(struct i40e_hw *hw, u16 *status) +{ + u32 reg; + + if (!status) + return I40E_ERR_PARAM; + + reg = rd32(hw, I40E_PRTDCB_GENS); + *status = (u16)((reg & I40E_PRTDCB_GENS_DCBX_STATUS_MASK) >> + I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT); + + return I40E_SUCCESS; +} + +/** + * i40e_parse_ieee_etscfg_tlv + * @tlv: IEEE 802.1Qaz ETS CFG TLV + * @dcbcfg: Local store to update ETS CFG data + * + * Parses IEEE 802.1Qaz ETS CFG TLV + **/ +static void i40e_parse_ieee_etscfg_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + struct i40e_ieee_ets_config *etscfg; + u8 *buf = tlv->tlvinfo; + u16 offset = 0; + u8 priority; + int i; + + /* First Octet post subtype + * -------------------------- + * |will-|CBS | Re- | Max | + * |ing | |served| TCs | + * -------------------------- + * |1bit | 1bit|3 bits|3bits| + */ + etscfg = &dcbcfg->etscfg; + etscfg->willing = (u8)((buf[offset] & I40E_IEEE_ETS_WILLING_MASK) >> + I40E_IEEE_ETS_WILLING_SHIFT); + etscfg->cbs = (u8)((buf[offset] & I40E_IEEE_ETS_CBS_MASK) >> + I40E_IEEE_ETS_CBS_SHIFT); + etscfg->maxtcs = (u8)((buf[offset] & I40E_IEEE_ETS_MAXTC_MASK) >> + I40E_IEEE_ETS_MAXTC_SHIFT); + + /* Move offset to Priority Assignment Table */ + offset++; + + /* Priority Assignment Table (4 octets) + * Octets:| 1 | 2 | 3 | 4 | + * ----------------------------------------- + * |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7| + * ----------------------------------------- + * Bits:|7 4|3 0|7 4|3 0|7 4|3 0|7 4|3 0| + * ----------------------------------------- + */ + for (i = 0; i < 4; i++) { + priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_1_MASK) >> + I40E_IEEE_ETS_PRIO_1_SHIFT); + etscfg->prioritytable[i * 2] = priority; + priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_0_MASK) >> + I40E_IEEE_ETS_PRIO_0_SHIFT); + etscfg->prioritytable[i * 2 + 1] = priority; + offset++; + } + + /* TC Bandwidth Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + etscfg->tcbwtable[i] = buf[offset++]; + + /* TSA Assignment Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + etscfg->tsatable[i] = buf[offset++]; +} + +/** + * i40e_parse_ieee_etsrec_tlv + * @tlv: IEEE 802.1Qaz ETS REC TLV + * @dcbcfg: Local store to update ETS REC data + * + * Parses IEEE 802.1Qaz ETS REC TLV + **/ +static void i40e_parse_ieee_etsrec_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u8 *buf = tlv->tlvinfo; + u16 offset = 0; + u8 priority; + int i; + + /* Move offset to priority table */ + offset++; + + /* Priority Assignment Table (4 octets) + * Octets:| 1 | 2 | 3 | 4 | + * ----------------------------------------- + * |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7| + * ----------------------------------------- + * Bits:|7 4|3 0|7 4|3 0|7 4|3 0|7 4|3 0| + * ----------------------------------------- + */ + for (i = 0; i < 4; i++) { + priority = (u8)((buf[offset] & 
I40E_IEEE_ETS_PRIO_1_MASK) >> + I40E_IEEE_ETS_PRIO_1_SHIFT); + dcbcfg->etsrec.prioritytable[i*2] = priority; + priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_0_MASK) >> + I40E_IEEE_ETS_PRIO_0_SHIFT); + dcbcfg->etsrec.prioritytable[i*2 + 1] = priority; + offset++; + } + + /* TC Bandwidth Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + dcbcfg->etsrec.tcbwtable[i] = buf[offset++]; + + /* TSA Assignment Table (8 octets) + * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | + * --------------------------------- + * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| + * --------------------------------- + */ + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + dcbcfg->etsrec.tsatable[i] = buf[offset++]; +} + +/** + * i40e_parse_ieee_pfccfg_tlv + * @tlv: IEEE 802.1Qaz PFC CFG TLV + * @dcbcfg: Local store to update PFC CFG data + * + * Parses IEEE 802.1Qaz PFC CFG TLV + **/ +static void i40e_parse_ieee_pfccfg_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u8 *buf = tlv->tlvinfo; + + /* ---------------------------------------- + * |will-|MBC | Re- | PFC | PFC Enable | + * |ing | |served| cap | | + * ----------------------------------------- + * |1bit | 1bit|2 bits|4bits| 1 octet | + */ + dcbcfg->pfc.willing = (u8)((buf[0] & I40E_IEEE_PFC_WILLING_MASK) >> + I40E_IEEE_PFC_WILLING_SHIFT); + dcbcfg->pfc.mbc = (u8)((buf[0] & I40E_IEEE_PFC_MBC_MASK) >> + I40E_IEEE_PFC_MBC_SHIFT); + dcbcfg->pfc.pfccap = (u8)((buf[0] & I40E_IEEE_PFC_CAP_MASK) >> + I40E_IEEE_PFC_CAP_SHIFT); + dcbcfg->pfc.pfcenable = buf[1]; +} + +/** + * i40e_parse_ieee_app_tlv + * @tlv: IEEE 802.1Qaz APP TLV + * @dcbcfg: Local store to update APP PRIO data + * + * Parses IEEE 802.1Qaz APP PRIO TLV + **/ +static void i40e_parse_ieee_app_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u16 typelength; + u16 offset = 0; + u16 length; + int i = 0; + u8 *buf; + + typelength = I40E_NTOHS(tlv->typelength); + length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> + I40E_LLDP_TLV_LEN_SHIFT); + buf = tlv->tlvinfo; + + /* The App priority table starts 5 octets after TLV header */ + length -= (sizeof(tlv->ouisubtype) + 1); + + /* Move offset to App Priority Table */ + offset++; + + /* Application Priority Table (3 octets) + * Octets:| 1 | 2 | 3 | + * ----------------------------------------- + * |Priority|Rsrvd| Sel | Protocol ID | + * ----------------------------------------- + * Bits:|23 21|20 19|18 16|15 0| + * ----------------------------------------- + */ + while (offset < length) { + dcbcfg->app[i].priority = (u8)((buf[offset] & + I40E_IEEE_APP_PRIO_MASK) >> + I40E_IEEE_APP_PRIO_SHIFT); + dcbcfg->app[i].selector = (u8)((buf[offset] & + I40E_IEEE_APP_SEL_MASK) >> + I40E_IEEE_APP_SEL_SHIFT); + dcbcfg->app[i].protocolid = (buf[offset + 1] << 0x8) | + buf[offset + 2]; + /* Move to next app */ + offset += 3; + i++; + if (i >= I40E_DCBX_MAX_APPS) + break; + } + + dcbcfg->numapps = i; +} + +/** + * i40e_parse_ieee_tlv + * @tlv: IEEE 802.1Qaz TLV + * @dcbcfg: Local store to update DCBX config data + * + * Get the TLV subtype and send it to parsing function + * based on the subtype value + **/ +static void i40e_parse_ieee_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u32 ouisubtype; + u8 subtype; + + ouisubtype = I40E_NTOHL(tlv->ouisubtype); + subtype = (u8)((ouisubtype & I40E_LLDP_TLV_SUBTYPE_MASK) >> +
I40E_LLDP_TLV_SUBTYPE_SHIFT); + switch (subtype) { + case I40E_IEEE_SUBTYPE_ETS_CFG: + i40e_parse_ieee_etscfg_tlv(tlv, dcbcfg); + break; + case I40E_IEEE_SUBTYPE_ETS_REC: + i40e_parse_ieee_etsrec_tlv(tlv, dcbcfg); + break; + case I40E_IEEE_SUBTYPE_PFC_CFG: + i40e_parse_ieee_pfccfg_tlv(tlv, dcbcfg); + break; + case I40E_IEEE_SUBTYPE_APP_PRI: + i40e_parse_ieee_app_tlv(tlv, dcbcfg); + break; + default: + break; + } +} + +/** + * i40e_parse_org_tlv + * @tlv: Organization specific TLV + * @dcbcfg: Local store to update ETS REC data + * + * Currently only IEEE 802.1Qaz TLV is supported, all others + * will be returned + **/ +static void i40e_parse_org_tlv(struct i40e_lldp_org_tlv *tlv, + struct i40e_dcbx_config *dcbcfg) +{ + u32 ouisubtype; + u32 oui; + + ouisubtype = I40E_NTOHL(tlv->ouisubtype); + oui = (u32)((ouisubtype & I40E_LLDP_TLV_OUI_MASK) >> + I40E_LLDP_TLV_OUI_SHIFT); + switch (oui) { + case I40E_IEEE_8021QAZ_OUI: + i40e_parse_ieee_tlv(tlv, dcbcfg); + break; + default: + break; + } +} + +/** + * i40e_lldp_to_dcb_config + * @lldpmib: LLDPDU to be parsed + * @dcbcfg: store for LLDPDU data + * + * Parse DCB configuration from the LLDPDU + **/ +enum i40e_status_code i40e_lldp_to_dcb_config(u8 *lldpmib, + struct i40e_dcbx_config *dcbcfg) +{ + enum i40e_status_code ret = I40E_SUCCESS; + struct i40e_lldp_org_tlv *tlv; + u16 type; + u16 length; + u16 typelength; + u16 offset = 0; + + if (!lldpmib || !dcbcfg) + return I40E_ERR_PARAM; + + /* set to the start of LLDPDU */ + lldpmib += I40E_LLDP_MIB_HLEN; + tlv = (struct i40e_lldp_org_tlv *)lldpmib; + while (1) { + typelength = I40E_NTOHS(tlv->typelength); + type = (u16)((typelength & I40E_LLDP_TLV_TYPE_MASK) >> + I40E_LLDP_TLV_TYPE_SHIFT); + length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> + I40E_LLDP_TLV_LEN_SHIFT); + offset += sizeof(typelength) + length; + + /* END TLV or beyond LLDPDU size */ + if ((type == I40E_TLV_TYPE_END) || (offset > I40E_LLDPDU_SIZE)) + break; + + switch (type) { + case I40E_TLV_TYPE_ORG: + i40e_parse_org_tlv(tlv, dcbcfg); + break; + default: + break; + } + + /* Move to next TLV */ + tlv = (struct i40e_lldp_org_tlv *)((char *)tlv + + sizeof(tlv->typelength) + + length); + } + + return ret; +} + +/** + * i40e_aq_get_dcb_config + * @hw: pointer to the hw struct + * @mib_type: mib type for the query + * @bridgetype: bridge type for the query (remote) + * @dcbcfg: store for LLDPDU data + * + * Query DCB configuration from the Firmware + **/ +enum i40e_status_code i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, + u8 bridgetype, + struct i40e_dcbx_config *dcbcfg) +{ + enum i40e_status_code ret = I40E_SUCCESS; + struct i40e_virt_mem mem; + u8 *lldpmib; + + /* Allocate the LLDPDU */ + ret = i40e_allocate_virt_mem(hw, &mem, I40E_LLDPDU_SIZE); + if (ret) + return ret; + + lldpmib = (u8 *)mem.va; + ret = i40e_aq_get_lldp_mib(hw, bridgetype, mib_type, + (void *)lldpmib, I40E_LLDPDU_SIZE, + NULL, NULL, NULL); + if (ret) + goto free_mem; + + /* Parse LLDP MIB to get dcb configuration */ + ret = i40e_lldp_to_dcb_config(lldpmib, dcbcfg); + +free_mem: + i40e_free_virt_mem(hw, &mem); + return ret; +} + +/** + * i40e_get_dcb_config + * @hw: pointer to the hw struct + * + * Get DCB configuration from the Firmware + **/ +enum i40e_status_code i40e_get_dcb_config(struct i40e_hw *hw) +{ + enum i40e_status_code ret = I40E_SUCCESS; + + /* Get Local DCB Config */ + ret = i40e_aq_get_dcb_config(hw, I40E_AQ_LLDP_MIB_LOCAL, 0, + &hw->local_dcbx_config); + if (ret) + goto out; + + /* Get Remote DCB Config */ + ret = 
i40e_aq_get_dcb_config(hw, I40E_AQ_LLDP_MIB_REMOTE, + I40E_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE, + &hw->remote_dcbx_config); +out: + return ret; +} + +/** + * i40e_init_dcb + * @hw: pointer to the hw struct + * + * Update DCB configuration from the Firmware + **/ +enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw) +{ + enum i40e_status_code ret = I40E_SUCCESS; + + if (!hw->func_caps.dcb) + return ret; + + /* Get DCBX status */ + ret = i40e_get_dcbx_status(hw, &hw->dcbx_status); + if (ret) + return ret; + + /* Check the DCBX Status */ + switch (hw->dcbx_status) { + case I40E_DCBX_STATUS_DONE: + case I40E_DCBX_STATUS_IN_PROGRESS: + /* Get current DCBX configuration */ + ret = i40e_get_dcb_config(hw); + break; + case I40E_DCBX_STATUS_DISABLED: + return ret; + case I40E_DCBX_STATUS_NOT_STARTED: + case I40E_DCBX_STATUS_MULTIPLE_PEERS: + default: + break; + } + + /* Configure the LLDP MIB change event */ + ret = i40e_aq_cfg_lldp_mib_change_event(hw, true, NULL); + if (ret) + return ret; + + return ret; +} diff --git a/drivers/net/i40e/base/i40e_dcb.h b/drivers/net/i40e/base/i40e_dcb.h new file mode 100644 index 0000000..2261e08 --- /dev/null +++ b/drivers/net/i40e/base/i40e_dcb.h @@ -0,0 +1,161 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
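For context, a sketch (not part of the patch) of the expected init-time call order for the DCB entry point defined above:

static void example_dcb_bringup(struct i40e_hw *hw)
{
	/* i40e_init_dcb() checks func_caps.dcb, reads the DCBX status,
	 * pulls local/remote config and enables MIB change events */
	if (i40e_init_dcb(hw) != I40E_SUCCESS)
		return;
	/* hw->local_dcbx_config and hw->remote_dcbx_config are now
	 * populated when DCBX negotiation has completed */
}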
+ +***************************************************************************/ + +#ifndef _I40E_DCB_H_ +#define _I40E_DCB_H_ + +#include "i40e_type.h" + +#define I40E_DCBX_OFFLOAD_DISABLED 0 +#define I40E_DCBX_OFFLOAD_ENABLED 1 + +#define I40E_DCBX_STATUS_NOT_STARTED 0 +#define I40E_DCBX_STATUS_IN_PROGRESS 1 +#define I40E_DCBX_STATUS_DONE 2 +#define I40E_DCBX_STATUS_MULTIPLE_PEERS 3 +#define I40E_DCBX_STATUS_DISABLED 7 + +#define I40E_TLV_TYPE_END 0 +#define I40E_TLV_TYPE_ORG 127 + +#define I40E_IEEE_8021QAZ_OUI 0x0080C2 +#define I40E_IEEE_SUBTYPE_ETS_CFG 9 +#define I40E_IEEE_SUBTYPE_ETS_REC 10 +#define I40E_IEEE_SUBTYPE_PFC_CFG 11 +#define I40E_IEEE_SUBTYPE_APP_PRI 12 + +#define I40E_LLDP_ADMINSTATUS_DISABLED 0 +#define I40E_LLDP_ADMINSTATUS_ENABLED_RX 1 +#define I40E_LLDP_ADMINSTATUS_ENABLED_TX 2 +#define I40E_LLDP_ADMINSTATUS_ENABLED_RXTX 3 + +/* Defines for LLDP TLV header */ +#define I40E_LLDP_MIB_HLEN 14 +#define I40E_LLDP_TLV_LEN_SHIFT 0 +#define I40E_LLDP_TLV_LEN_MASK (0x01FF << I40E_LLDP_TLV_LEN_SHIFT) +#define I40E_LLDP_TLV_TYPE_SHIFT 9 +#define I40E_LLDP_TLV_TYPE_MASK (0x7F << I40E_LLDP_TLV_TYPE_SHIFT) +#define I40E_LLDP_TLV_SUBTYPE_SHIFT 0 +#define I40E_LLDP_TLV_SUBTYPE_MASK (0xFF << I40E_LLDP_TLV_SUBTYPE_SHIFT) +#define I40E_LLDP_TLV_OUI_SHIFT 8 +#define I40E_LLDP_TLV_OUI_MASK (0xFFFFFF << I40E_LLDP_TLV_OUI_SHIFT) + +/* Defines for IEEE ETS TLV */ +#define I40E_IEEE_ETS_MAXTC_SHIFT 0 +#define I40E_IEEE_ETS_MAXTC_MASK (0x7 << I40E_IEEE_ETS_MAXTC_SHIFT) +#define I40E_IEEE_ETS_CBS_SHIFT 6 +#define I40E_IEEE_ETS_CBS_MASK (0x1 << I40E_IEEE_ETS_CBS_SHIFT) +#define I40E_IEEE_ETS_WILLING_SHIFT 7 +#define I40E_IEEE_ETS_WILLING_MASK (0x1 << I40E_IEEE_ETS_WILLING_SHIFT) +#define I40E_IEEE_ETS_PRIO_0_SHIFT 0 +#define I40E_IEEE_ETS_PRIO_0_MASK (0x7 << I40E_IEEE_ETS_PRIO_0_SHIFT) +#define I40E_IEEE_ETS_PRIO_1_SHIFT 4 +#define I40E_IEEE_ETS_PRIO_1_MASK (0x7 << I40E_IEEE_ETS_PRIO_1_SHIFT) + +/* Defines for IEEE TSA types */ +#define I40E_IEEE_TSA_STRICT 0 +#define I40E_IEEE_TSA_CBS 1 +#define I40E_IEEE_TSA_ETS 2 +#define I40E_IEEE_TSA_VENDOR 255 + +/* Defines for IEEE PFC TLV */ +#define I40E_IEEE_PFC_CAP_SHIFT 0 +#define I40E_IEEE_PFC_CAP_MASK (0xF << I40E_IEEE_PFC_CAP_SHIFT) +#define I40E_IEEE_PFC_MBC_SHIFT 6 +#define I40E_IEEE_PFC_MBC_MASK (0x1 << I40E_IEEE_PFC_MBC_SHIFT) +#define I40E_IEEE_PFC_WILLING_SHIFT 7 +#define I40E_IEEE_PFC_WILLING_MASK (0x1 << I40E_IEEE_PFC_WILLING_SHIFT) + +/* Defines for IEEE APP TLV */ +#define I40E_IEEE_APP_SEL_SHIFT 0 +#define I40E_IEEE_APP_SEL_MASK (0x7 << I40E_IEEE_APP_SEL_SHIFT) +#define I40E_IEEE_APP_PRIO_SHIFT 5 +#define I40E_IEEE_APP_PRIO_MASK (0x7 << I40E_IEEE_APP_PRIO_SHIFT) + + +#pragma pack(1) + +/* IEEE 802.1AB LLDP TLV structure */ +struct i40e_lldp_generic_tlv { + __be16 typelength; + u8 tlvinfo[1]; +}; + +/* IEEE 802.1AB LLDP Organization specific TLV */ +struct i40e_lldp_org_tlv { + __be16 typelength; + __be32 ouisubtype; + u8 tlvinfo[1]; +}; +#pragma pack() + +/* + * TODO: The below structures related LLDP/DCBX variables + * and statistics are defined but need to find how to get + * the required information from the Firmware to use them + */ + +/* IEEE 802.1AB LLDP Agent Statistics */ +struct i40e_lldp_stats { + u64 remtablelastchangetime; + u64 remtableinserts; + u64 remtabledeletes; + u64 remtabledrops; + u64 remtableageouts; + u64 txframestotal; + u64 rxframesdiscarded; + u64 rxportframeerrors; + u64 rxportframestotal; + u64 rxporttlvsdiscardedtotal; + u64 rxporttlvsunrecognizedtotal; + u64 remtoomanyneighbors; +}; + +/* IEEE 802.1Qaz 
DCBX variables */ +struct i40e_dcbx_variables { + u32 defmaxtrafficclasses; + u32 defprioritytcmapping; + u32 deftcbandwidth; + u32 deftsaassignment; +}; + +enum i40e_status_code i40e_get_dcbx_status(struct i40e_hw *hw, + u16 *status); +enum i40e_status_code i40e_lldp_to_dcb_config(u8 *lldpmib, + struct i40e_dcbx_config *dcbcfg); +enum i40e_status_code i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, + u8 bridgetype, + struct i40e_dcbx_config *dcbcfg); +enum i40e_status_code i40e_get_dcb_config(struct i40e_hw *hw); +enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw); +#endif /* _I40E_DCB_H_ */ diff --git a/drivers/net/i40e/base/i40e_diag.c b/drivers/net/i40e/base/i40e_diag.c new file mode 100644 index 0000000..167fcf8 --- /dev/null +++ b/drivers/net/i40e/base/i40e_diag.c @@ -0,0 +1,178 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "i40e_diag.h" +#include "i40e_prototype.h" + +/** + * i40e_diag_set_loopback + * @hw: pointer to the hw struct + * @mode: loopback mode + * + * Set chosen loopback mode + **/ +enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw, + enum i40e_lb_mode mode) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + + if (i40e_aq_set_lb_modes(hw, mode, NULL)) + ret_code = I40E_ERR_DIAG_TEST_FAILED; + + return ret_code; +} + +/** + * i40e_diag_reg_pattern_test + * @hw: pointer to the hw struct + * @reg: reg to be tested + * @mask: bits to be touched + **/ +static enum i40e_status_code i40e_diag_reg_pattern_test(struct i40e_hw *hw, + u32 reg, u32 mask) +{ + const u32 patterns[] = {0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF}; + u32 pat, val, orig_val; + int i; + + orig_val = rd32(hw, reg); + for (i = 0; i < ARRAY_SIZE(patterns); i++) { + pat = patterns[i]; + wr32(hw, reg, (pat & mask)); + val = rd32(hw, reg); + if ((val & mask) != (pat & mask)) { + return I40E_ERR_DIAG_TEST_FAILED; + } + } + + wr32(hw, reg, orig_val); + val = rd32(hw, reg); + if (val != orig_val) { + return I40E_ERR_DIAG_TEST_FAILED; + } + + return I40E_SUCCESS; +} + +struct i40e_diag_reg_test_info i40e_reg_list[] = { + /* offset mask elements stride */ + {I40E_QTX_CTL(0), 0x0000FFBF, 1, I40E_QTX_CTL(1) - I40E_QTX_CTL(0)}, + {I40E_PFINT_ITR0(0), 0x00000FFF, 3, I40E_PFINT_ITR0(1) - I40E_PFINT_ITR0(0)}, + {I40E_PFINT_ITRN(0, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(0, 1) - I40E_PFINT_ITRN(0, 0)}, + {I40E_PFINT_ITRN(1, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(1, 1) - I40E_PFINT_ITRN(1, 0)}, + {I40E_PFINT_ITRN(2, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(2, 1) - I40E_PFINT_ITRN(2, 0)}, + {I40E_PFINT_STAT_CTL0, 0x0000000C, 1, 0}, + {I40E_PFINT_LNKLST0, 0x00001FFF, 1, 0}, + {I40E_PFINT_LNKLSTN(0), 0x000007FF, 1, I40E_PFINT_LNKLSTN(1) - I40E_PFINT_LNKLSTN(0)}, + {I40E_QINT_TQCTL(0), 0x000000FF, 1, I40E_QINT_TQCTL(1) - I40E_QINT_TQCTL(0)}, + {I40E_QINT_RQCTL(0), 0x000000FF, 1, I40E_QINT_RQCTL(1) - I40E_QINT_RQCTL(0)}, + {I40E_PFINT_ICR0_ENA, 0xF7F20000, 1, 0}, + { 0 } +}; + +/** + * i40e_diag_reg_test + * @hw: pointer to the hw struct + * + * Perform registers diagnostic test + **/ +enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + u32 reg, mask; + u32 i, j; + + for (i = 0; i40e_reg_list[i].offset != 0 && + ret_code == I40E_SUCCESS; i++) { + + /* set actual reg range for dynamically allocated resources */ + if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) && + hw->func_caps.num_tx_qp != 0) + i40e_reg_list[i].elements = hw->func_caps.num_tx_qp; + if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) || + i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) || + i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) || + i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) || + i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) && + hw->func_caps.num_msix_vectors != 0) + i40e_reg_list[i].elements = + hw->func_caps.num_msix_vectors - 1; + + /* test register access */ + mask = i40e_reg_list[i].mask; + for (j = 0; j < i40e_reg_list[i].elements && + ret_code == I40E_SUCCESS; j++) { + reg = i40e_reg_list[i].offset + + (j * i40e_reg_list[i].stride); + ret_code = i40e_diag_reg_pattern_test(hw, reg, mask); + } + } + + return ret_code; +} + +/** + * i40e_diag_eeprom_test + * @hw: pointer to the hw struct + * + * Perform EEPROM diagnostic test + **/ +enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw) +{ + enum 
i40e_status_code ret_code; + u16 reg_val; + + /* read NVM control word and if NVM valid, validate EEPROM checksum */ + ret_code = i40e_read_nvm_word(hw, I40E_SR_NVM_CONTROL_WORD, &reg_val); + if ((ret_code == I40E_SUCCESS) && + ((reg_val & I40E_SR_CONTROL_WORD_1_MASK) == + (0x01 << I40E_SR_CONTROL_WORD_1_SHIFT))) { + ret_code = i40e_validate_nvm_checksum(hw, NULL); + } else { + ret_code = I40E_ERR_DIAG_TEST_FAILED; + } + + return ret_code; +} + +/** + * i40e_diag_fw_alive_test + * @hw: pointer to the hw struct + * + * Perform FW alive diagnostic test + **/ +enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw) +{ + UNREFERENCED_1PARAMETER(hw); + return I40E_SUCCESS; +} diff --git a/drivers/net/i40e/base/i40e_diag.h b/drivers/net/i40e/base/i40e_diag.h new file mode 100644 index 0000000..feb4d4b --- /dev/null +++ b/drivers/net/i40e/base/i40e_diag.h @@ -0,0 +1,61 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _I40E_DIAG_H_ +#define _I40E_DIAG_H_ + +#include "i40e_type.h" + +enum i40e_lb_mode { + I40E_LB_MODE_NONE = 0x0, + I40E_LB_MODE_PHY_LOCAL = I40E_AQ_LB_PHY_LOCAL, + I40E_LB_MODE_PHY_REMOTE = I40E_AQ_LB_PHY_REMOTE, + I40E_LB_MODE_MAC_LOCAL = I40E_AQ_LB_MAC_LOCAL, +}; + +struct i40e_diag_reg_test_info { + u32 offset; /* the base register */ + u32 mask; /* bits that can be tested */ + u32 elements; /* number of elements if array */ + u32 stride; /* bytes between each element */ +}; + +extern struct i40e_diag_reg_test_info i40e_reg_list[]; + +enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw, + enum i40e_lb_mode mode); +enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw); +enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw); +enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw); + +#endif /* _I40E_DIAG_H_ */ diff --git a/drivers/net/i40e/base/i40e_hmc.c b/drivers/net/i40e/base/i40e_hmc.c new file mode 100644 index 0000000..ae6896a --- /dev/null +++ b/drivers/net/i40e/base/i40e_hmc.c @@ -0,0 +1,373 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "i40e_osdep.h" +#include "i40e_register.h" +#include "i40e_status.h" +#include "i40e_alloc.h" +#include "i40e_hmc.h" +#include "i40e_type.h" + +/** + * i40e_add_sd_table_entry - Adds a segment descriptor to the table + * @hw: pointer to our hw struct + * @hmc_info: pointer to the HMC configuration information struct + * @sd_index: segment descriptor index to manipulate + * @type: what type of segment descriptor we're manipulating + * @direct_mode_sz: size to alloc in direct mode + **/ +enum i40e_status_code i40e_add_sd_table_entry(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 sd_index, + enum i40e_sd_entry_type type, + u64 direct_mode_sz) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_sd_entry *sd_entry; + enum i40e_memory_type mem_type; + bool dma_mem_alloc_done = false; + struct i40e_dma_mem mem; + u64 alloc_len; + + if (NULL == hmc_info->sd_table.sd_entry) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_add_sd_table_entry: bad sd_entry\n"); + goto exit; + } + + if (sd_index >= hmc_info->sd_table.sd_cnt) { + ret_code = I40E_ERR_INVALID_SD_INDEX; + DEBUGOUT("i40e_add_sd_table_entry: bad sd_index\n"); + goto exit; + } + + sd_entry = &hmc_info->sd_table.sd_entry[sd_index]; + if (!sd_entry->valid) { + if (I40E_SD_TYPE_PAGED == type) { + mem_type = i40e_mem_pd; + alloc_len = I40E_HMC_PAGED_BP_SIZE; + } else { + mem_type = i40e_mem_bp_jumbo; + alloc_len = direct_mode_sz; + } + + /* allocate a 4K pd page or 2M backing page */ + ret_code = i40e_allocate_dma_mem(hw, &mem, mem_type, alloc_len, + I40E_HMC_PD_BP_BUF_ALIGNMENT); + if (ret_code) + goto exit; + dma_mem_alloc_done = true; + if (I40E_SD_TYPE_PAGED == type) { + ret_code = i40e_allocate_virt_mem(hw, + &sd_entry->u.pd_table.pd_entry_virt_mem, + sizeof(struct i40e_hmc_pd_entry) * 512); + if (ret_code) + goto exit; + sd_entry->u.pd_table.pd_entry = + (struct i40e_hmc_pd_entry *) + sd_entry->u.pd_table.pd_entry_virt_mem.va; + i40e_memcpy(&sd_entry->u.pd_table.pd_page_addr, + &mem, sizeof(struct i40e_dma_mem), + I40E_NONDMA_TO_NONDMA); + } else { + i40e_memcpy(&sd_entry->u.bp.addr, + &mem, sizeof(struct i40e_dma_mem), + I40E_NONDMA_TO_NONDMA); + sd_entry->u.bp.sd_pd_index = sd_index; + } + /* initialize the sd entry */ + hmc_info->sd_table.sd_entry[sd_index].entry_type = type; + + /* increment the ref count */ + I40E_INC_SD_REFCNT(&hmc_info->sd_table); + } + /* Increment backing page reference count */ + if (I40E_SD_TYPE_DIRECT == sd_entry->entry_type) + I40E_INC_BP_REFCNT(&sd_entry->u.bp); +exit: + if (I40E_SUCCESS != ret_code) + if (dma_mem_alloc_done) + i40e_free_dma_mem(hw, &mem); + + return ret_code; +} + +/** + * i40e_add_pd_table_entry - Adds a page descriptor to the specified table + * @hw: pointer to our HW structure + * @hmc_info: pointer to the HMC configuration information structure + * @pd_index: which page descriptor index to manipulate + * + * This function: + * 1. Initializes the pd entry + * 2. Adds pd_entry in the pd_table + * 3. Marks the entry valid in i40e_hmc_pd_entry structure + * 4. Initializes the pd_entry's ref count to 1 + * assumptions: + * 1. The memory for pd should be pinned down, physically contiguous, + * aligned on a 4K boundary, and zeroed. + * 2. It should be 4K in size. 
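+ *
+ * For illustration only: the page descriptor this function writes into
+ * the PD page is just the backing page's physical address (mem.pa) with
+ * bit 0 used as the valid flag, e.g. with a hypothetical address:
+ *
+ *     page_desc = 0x1f4000000ULL | 0x1;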
+ **/ +enum i40e_status_code i40e_add_pd_table_entry(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 pd_index) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_pd_table *pd_table; + struct i40e_hmc_pd_entry *pd_entry; + struct i40e_dma_mem mem; + u32 sd_idx, rel_pd_idx; + u64 *pd_addr; + u64 page_desc; + + if (pd_index / I40E_HMC_PD_CNT_IN_SD >= hmc_info->sd_table.sd_cnt) { + ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX; + DEBUGOUT("i40e_add_pd_table_entry: bad pd_index\n"); + goto exit; + } + + /* find corresponding sd */ + sd_idx = (pd_index / I40E_HMC_PD_CNT_IN_SD); + if (I40E_SD_TYPE_PAGED != + hmc_info->sd_table.sd_entry[sd_idx].entry_type) + goto exit; + + rel_pd_idx = (pd_index % I40E_HMC_PD_CNT_IN_SD); + pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table; + pd_entry = &pd_table->pd_entry[rel_pd_idx]; + if (!pd_entry->valid) { + /* allocate a 4K backing page */ + ret_code = i40e_allocate_dma_mem(hw, &mem, i40e_mem_bp, + I40E_HMC_PAGED_BP_SIZE, + I40E_HMC_PD_BP_BUF_ALIGNMENT); + if (ret_code) + goto exit; + + i40e_memcpy(&pd_entry->bp.addr, &mem, + sizeof(struct i40e_dma_mem), I40E_NONDMA_TO_NONDMA); + pd_entry->bp.sd_pd_index = pd_index; + pd_entry->bp.entry_type = I40E_SD_TYPE_PAGED; + /* Set page address and valid bit */ + page_desc = mem.pa | 0x1; + + pd_addr = (u64 *)pd_table->pd_page_addr.va; + pd_addr += rel_pd_idx; + + /* Add the backing page physical address in the pd entry */ + i40e_memcpy(pd_addr, &page_desc, sizeof(u64), + I40E_NONDMA_TO_DMA); + + pd_entry->sd_index = sd_idx; + pd_entry->valid = true; + I40E_INC_PD_REFCNT(pd_table); + } + I40E_INC_BP_REFCNT(&pd_entry->bp); +exit: + return ret_code; +} + +/** + * i40e_remove_pd_bp - remove a backing page from a page descriptor + * @hw: pointer to our HW structure + * @hmc_info: pointer to the HMC configuration information structure + * @idx: the page index + * + * This function: + * 1. Marks the entry in pd table (for paged address mode) or in sd table + * (for direct address mode) invalid. + * 2. Writes to register PMPDINV to invalidate the backing page in FV cache + * 3. Decrements the ref count for the pd_entry + * assumptions: + * 1. Caller can deallocate the memory used by backing storage after this + * function returns.
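+ *
+ * Usage sketch (a simplified form of the per-page teardown loop in
+ * i40e_delete_lan_hmc_object() in i40e_lan_hmc.c, with pd_idx/pd_lmt
+ * as computed by I40E_FIND_PD_INDEX_LIMIT):
+ *
+ *     for (j = pd_idx; j < pd_lmt; j++)
+ *             i40e_remove_pd_bp(hw, hmc_info, j);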
+ **/ +enum i40e_status_code i40e_remove_pd_bp(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_pd_entry *pd_entry; + struct i40e_hmc_pd_table *pd_table; + struct i40e_hmc_sd_entry *sd_entry; + u32 sd_idx, rel_pd_idx; + u64 *pd_addr; + + /* calculate index */ + sd_idx = idx / I40E_HMC_PD_CNT_IN_SD; + rel_pd_idx = idx % I40E_HMC_PD_CNT_IN_SD; + if (sd_idx >= hmc_info->sd_table.sd_cnt) { + ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX; + DEBUGOUT("i40e_remove_pd_bp: bad idx\n"); + goto exit; + } + sd_entry = &hmc_info->sd_table.sd_entry[sd_idx]; + if (I40E_SD_TYPE_PAGED != sd_entry->entry_type) { + ret_code = I40E_ERR_INVALID_SD_TYPE; + DEBUGOUT("i40e_remove_pd_bp: wrong sd_entry type\n"); + goto exit; + } + /* get the entry and decrease its ref counter */ + pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table; + pd_entry = &pd_table->pd_entry[rel_pd_idx]; + I40E_DEC_BP_REFCNT(&pd_entry->bp); + if (pd_entry->bp.ref_cnt) + goto exit; + + /* mark the entry invalid */ + pd_entry->valid = false; + I40E_DEC_PD_REFCNT(pd_table); + pd_addr = (u64 *)pd_table->pd_page_addr.va; + pd_addr += rel_pd_idx; + i40e_memset(pd_addr, 0, sizeof(u64), I40E_DMA_MEM); + I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, idx); + + /* free memory here */ + ret_code = i40e_free_dma_mem(hw, &(pd_entry->bp.addr)); + if (I40E_SUCCESS != ret_code) + goto exit; + if (!pd_table->ref_cnt) + i40e_free_virt_mem(hw, &pd_table->pd_entry_virt_mem); +exit: + return ret_code; +} + +/** + * i40e_prep_remove_sd_bp - Prepares to remove a backing page from a sd entry + * @hmc_info: pointer to the HMC configuration information structure + * @idx: the page index + **/ +enum i40e_status_code i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info, + u32 idx) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_sd_entry *sd_entry; + + /* get the entry and decrease its ref counter */ + sd_entry = &hmc_info->sd_table.sd_entry[idx]; + I40E_DEC_BP_REFCNT(&sd_entry->u.bp); + if (sd_entry->u.bp.ref_cnt) { + ret_code = I40E_ERR_NOT_READY; + goto exit; + } + I40E_DEC_SD_REFCNT(&hmc_info->sd_table); + + /* mark the entry invalid */ + sd_entry->valid = false; +exit: + return ret_code; +} + +/** + * i40e_remove_sd_bp_new - Removes a backing page from a segment descriptor + * @hw: pointer to our hw struct + * @hmc_info: pointer to the HMC configuration information structure + * @idx: the page index + * @is_pf: used to distinguish between VF and PF + **/ +enum i40e_status_code i40e_remove_sd_bp_new(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx, bool is_pf) +{ + struct i40e_hmc_sd_entry *sd_entry; + enum i40e_status_code ret_code = I40E_SUCCESS; + + /* get the entry and decrease its ref counter */ + sd_entry = &hmc_info->sd_table.sd_entry[idx]; + if (is_pf) { + I40E_CLEAR_PF_SD_ENTRY(hw, idx, I40E_SD_TYPE_DIRECT); + } else { + ret_code = I40E_NOT_SUPPORTED; + goto exit; + } + ret_code = i40e_free_dma_mem(hw, &(sd_entry->u.bp.addr)); + if (I40E_SUCCESS != ret_code) + goto exit; +exit: + return ret_code; +} + +/** + * i40e_prep_remove_pd_page - Prepares to remove a PD page from sd entry. 
+ * @hmc_info: pointer to the HMC configuration information structure + * @idx: segment descriptor index to find the relevant page descriptor + **/ +enum i40e_status_code i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info, + u32 idx) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_sd_entry *sd_entry; + + sd_entry = &hmc_info->sd_table.sd_entry[idx]; + + if (sd_entry->u.pd_table.ref_cnt) { + ret_code = I40E_ERR_NOT_READY; + goto exit; + } + + /* mark the entry invalid */ + sd_entry->valid = false; + + I40E_DEC_SD_REFCNT(&hmc_info->sd_table); +exit: + return ret_code; +} + +/** + * i40e_remove_pd_page_new - Removes a PD page from sd entry. + * @hw: pointer to our hw struct + * @hmc_info: pointer to the HMC configuration information structure + * @idx: segment descriptor index to find the relevant page descriptor + * @is_pf: used to distinguish between VF and PF + **/ +enum i40e_status_code i40e_remove_pd_page_new(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx, bool is_pf) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_sd_entry *sd_entry; + + sd_entry = &hmc_info->sd_table.sd_entry[idx]; + if (is_pf) { + I40E_CLEAR_PF_SD_ENTRY(hw, idx, I40E_SD_TYPE_PAGED); + } else { + ret_code = I40E_NOT_SUPPORTED; + goto exit; + } + /* free memory here */ + ret_code = i40e_free_dma_mem(hw, &(sd_entry->u.pd_table.pd_page_addr)); + if (I40E_SUCCESS != ret_code) + goto exit; +exit: + return ret_code; +} diff --git a/drivers/net/i40e/base/i40e_hmc.h b/drivers/net/i40e/base/i40e_hmc.h new file mode 100644 index 0000000..eb629fc --- /dev/null +++ b/drivers/net/i40e/base/i40e_hmc.h @@ -0,0 +1,243 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _I40E_HMC_H_ +#define _I40E_HMC_H_ + +#define I40E_HMC_MAX_BP_COUNT 512 + +/* forward-declare the HW struct for the compiler */ +struct i40e_hw; + +#define I40E_HMC_INFO_SIGNATURE 0x484D5347 /* HMSG */ +#define I40E_HMC_PD_CNT_IN_SD 512 +#define I40E_HMC_DIRECT_BP_SIZE 0x200000 /* 2M */ +#define I40E_HMC_PAGED_BP_SIZE 4096 +#define I40E_HMC_PD_BP_BUF_ALIGNMENT 4096 +#define I40E_FIRST_VF_FPM_ID 16 + +struct i40e_hmc_obj_info { + u64 base; /* base addr in FPM */ + u32 max_cnt; /* max count available for this hmc func */ + u32 cnt; /* count of objects driver actually wants to create */ + u64 size; /* size in bytes of one object */ +}; + +enum i40e_sd_entry_type { + I40E_SD_TYPE_INVALID = 0, + I40E_SD_TYPE_PAGED = 1, + I40E_SD_TYPE_DIRECT = 2 +}; + +struct i40e_hmc_bp { + enum i40e_sd_entry_type entry_type; + struct i40e_dma_mem addr; /* populate to be used by hw */ + u32 sd_pd_index; + u32 ref_cnt; +}; + +struct i40e_hmc_pd_entry { + struct i40e_hmc_bp bp; + u32 sd_index; + bool valid; +}; + +struct i40e_hmc_pd_table { + struct i40e_dma_mem pd_page_addr; /* populate to be used by hw */ + struct i40e_hmc_pd_entry *pd_entry; /* [512] for sw book keeping */ + struct i40e_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */ + + u32 ref_cnt; + u32 sd_index; +}; + +struct i40e_hmc_sd_entry { + enum i40e_sd_entry_type entry_type; + bool valid; + + union { + struct i40e_hmc_pd_table pd_table; + struct i40e_hmc_bp bp; + } u; +}; + +struct i40e_hmc_sd_table { + struct i40e_virt_mem addr; /* used to track sd_entry allocations */ + u32 sd_cnt; + u32 ref_cnt; + struct i40e_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */ +}; + +struct i40e_hmc_info { + u32 signature; + /* equals to pci func num for PF and dynamically allocated for VFs */ + u8 hmc_fn_id; + u16 first_sd_index; /* index of the first available SD */ + + /* hmc objects */ + struct i40e_hmc_obj_info *hmc_obj; + struct i40e_virt_mem hmc_obj_virt_mem; + struct i40e_hmc_sd_table sd_table; +}; + +#define I40E_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++) +#define I40E_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++) +#define I40E_INC_BP_REFCNT(bp) ((bp)->ref_cnt++) + +#define I40E_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--) +#define I40E_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--) +#define I40E_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--) + +/** + * I40E_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware + * @hw: pointer to our hw struct + * @pa: pointer to physical address + * @sd_index: segment descriptor index + * @type: if sd entry is direct or paged + **/ +#define I40E_SET_PF_SD_ENTRY(hw, pa, sd_index, type) \ +{ \ + u32 val1, val2, val3; \ + val1 = (u32)(I40E_HI_DWORD(pa)); \ + val2 = (u32)(pa) | (I40E_HMC_MAX_BP_COUNT << \ + I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ + ((((type) == I40E_SD_TYPE_PAGED) ? 
0 : 1) << \ + I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) | \ + (1 << I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT); \ + val3 = (sd_index) | (1u << I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \ + wr32((hw), I40E_PFHMC_SDDATAHIGH, val1); \ + wr32((hw), I40E_PFHMC_SDDATALOW, val2); \ + wr32((hw), I40E_PFHMC_SDCMD, val3); \ +} + +/** + * I40E_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware + * @hw: pointer to our hw struct + * @sd_index: segment descriptor index + * @type: if sd entry is direct or paged + **/ +#define I40E_CLEAR_PF_SD_ENTRY(hw, sd_index, type) \ +{ \ + u32 val2, val3; \ + val2 = (I40E_HMC_MAX_BP_COUNT << \ + I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ + ((((type) == I40E_SD_TYPE_PAGED) ? 0 : 1) << \ + I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT); \ + val3 = (sd_index) | (1u << I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \ + wr32((hw), I40E_PFHMC_SDDATAHIGH, 0); \ + wr32((hw), I40E_PFHMC_SDDATALOW, val2); \ + wr32((hw), I40E_PFHMC_SDCMD, val3); \ +} + +/** + * I40E_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware + * @hw: pointer to our hw struct + * @sd_idx: segment descriptor index + * @pd_idx: page descriptor index + **/ +#define I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \ + wr32((hw), I40E_PFHMC_PDINV, \ + (((sd_idx) << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \ + ((pd_idx) << I40E_PFHMC_PDINV_PMPDIDX_SHIFT))) + +/** + * I40E_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit + * @hmc_info: pointer to the HMC configuration information structure + * @type: type of HMC resources we're searching + * @index: starting index for the object + * @cnt: number of objects we're trying to create + * @sd_idx: pointer to return index of the segment descriptor in question + * @sd_limit: pointer to return the maximum number of segment descriptors + * + * This function calculates the segment descriptor index and index limit + * for the resource defined by i40e_hmc_rsrc_type. + **/ +#define I40E_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\ +{ \ + u64 fpm_addr, fpm_limit; \ + fpm_addr = (hmc_info)->hmc_obj[(type)].base + \ + (hmc_info)->hmc_obj[(type)].size * (index); \ + fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\ + *(sd_idx) = (u32)(fpm_addr / I40E_HMC_DIRECT_BP_SIZE); \ + *(sd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_DIRECT_BP_SIZE); \ + /* add one more to the limit to correct our range */ \ + *(sd_limit) += 1; \ +} + +/** + * I40E_FIND_PD_INDEX_LIMIT - finds page descriptor index limit + * @hmc_info: pointer to the HMC configuration information struct + * @type: HMC resource type we're examining + * @idx: starting index for the object + * @cnt: number of objects we're trying to create + * @pd_index: pointer to return page descriptor index + * @pd_limit: pointer to return page descriptor index limit + * + * Calculates the page descriptor index and index limit for the resource + * defined by i40e_hmc_rsrc_type. 
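+ *
+ * Worked example with made-up numbers: base = 0, size = 128, idx = 0
+ * and cnt = 64 give an FPM span of [0, 8192); with 4K pages this yields
+ * *(pd_index) = 0 and *(pd_limit) = (8191 / 4096) + 1 = 2.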
+ **/ +#define I40E_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\ +{ \ + u64 fpm_adr, fpm_limit; \ + fpm_adr = (hmc_info)->hmc_obj[(type)].base + \ + (hmc_info)->hmc_obj[(type)].size * (idx); \ + fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt); \ + *(pd_index) = (u32)(fpm_adr / I40E_HMC_PAGED_BP_SIZE); \ + *(pd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_PAGED_BP_SIZE); \ + /* add one more to the limit to correct our range */ \ + *(pd_limit) += 1; \ +} +enum i40e_status_code i40e_add_sd_table_entry(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 sd_index, + enum i40e_sd_entry_type type, + u64 direct_mode_sz); + +enum i40e_status_code i40e_add_pd_table_entry(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 pd_index); +enum i40e_status_code i40e_remove_pd_bp(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx); +enum i40e_status_code i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info, + u32 idx); +enum i40e_status_code i40e_remove_sd_bp_new(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx, bool is_pf); +enum i40e_status_code i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info, + u32 idx); +enum i40e_status_code i40e_remove_pd_page_new(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx, bool is_pf); + +#endif /* _I40E_HMC_H_ */ diff --git a/drivers/net/i40e/base/i40e_lan_hmc.c b/drivers/net/i40e/base/i40e_lan_hmc.c new file mode 100644 index 0000000..b08534b --- /dev/null +++ b/drivers/net/i40e/base/i40e_lan_hmc.c @@ -0,0 +1,1417 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#include "i40e_osdep.h" +#include "i40e_register.h" +#include "i40e_type.h" +#include "i40e_hmc.h" +#include "i40e_lan_hmc.h" +#include "i40e_prototype.h" + +/* lan specific interface functions */ + +/** + * i40e_align_l2obj_base - aligns base object pointer to 512 bytes + * @offset: base address offset needing alignment + * + * Aligns the layer 2 function private memory so it's 512-byte aligned. + **/ +STATIC u64 i40e_align_l2obj_base(u64 offset) +{ + u64 aligned_offset = offset; + + if ((offset % I40E_HMC_L2OBJ_BASE_ALIGNMENT) > 0) + aligned_offset += (I40E_HMC_L2OBJ_BASE_ALIGNMENT - + (offset % I40E_HMC_L2OBJ_BASE_ALIGNMENT)); + + return aligned_offset; +} + +/** + * i40e_calculate_l2fpm_size - calculates layer 2 FPM memory size + * @txq_num: number of Tx queues needing backing context + * @rxq_num: number of Rx queues needing backing context + * @fcoe_cntx_num: number of FCoE stateful contexts needing backing context + * @fcoe_filt_num: number of FCoE filters needing backing context + * + * Calculates the maximum amount of memory required for the function, based + * on the number of resources it must provide context for. + **/ +u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num, + u32 fcoe_cntx_num, u32 fcoe_filt_num) +{ + u64 fpm_size = 0; + + fpm_size = txq_num * I40E_HMC_OBJ_SIZE_TXQ; + fpm_size = i40e_align_l2obj_base(fpm_size); + + fpm_size += (rxq_num * I40E_HMC_OBJ_SIZE_RXQ); + fpm_size = i40e_align_l2obj_base(fpm_size); + + fpm_size += (fcoe_cntx_num * I40E_HMC_OBJ_SIZE_FCOE_CNTX); + fpm_size = i40e_align_l2obj_base(fpm_size); + + fpm_size += (fcoe_filt_num * I40E_HMC_OBJ_SIZE_FCOE_FILT); + fpm_size = i40e_align_l2obj_base(fpm_size); + + return fpm_size; +} + +/** + * i40e_init_lan_hmc - initialize i40e_hmc_info struct + * @hw: pointer to the HW structure + * @txq_num: number of Tx queues needing backing context + * @rxq_num: number of Rx queues needing backing context + * @fcoe_cntx_num: number of FCoE stateful contexts needing backing context + * @fcoe_filt_num: number of FCoE filters needing backing context + * + * This function will be called once per physical function initialization. + * It will fill out the i40e_hmc_obj_info structure for LAN objects based on + * the driver's provided input, as well as information from the HMC itself + * loaded from NVRAM. + * + * Assumptions: + * - HMC Resource Profile has been selected before calling this function. 
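+ *
+ * Typical call order (illustrative; zero FCoE counts are just an
+ * example of a LAN-only configuration):
+ *
+ *     i40e_init_lan_hmc(hw, txq_num, rxq_num, 0, 0);
+ *     i40e_configure_lan_hmc(hw, I40E_HMC_MODEL_DIRECT_PREFERRED);
+ *     ...
+ *     i40e_shutdown_lan_hmc(hw);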
+ **/ +enum i40e_status_code i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num, + u32 rxq_num, u32 fcoe_cntx_num, + u32 fcoe_filt_num) +{ + struct i40e_hmc_obj_info *obj, *full_obj; + enum i40e_status_code ret_code = I40E_SUCCESS; + u64 l2fpm_size; + u32 size_exp; + + hw->hmc.signature = I40E_HMC_INFO_SIGNATURE; + hw->hmc.hmc_fn_id = hw->pf_id; + + /* allocate memory for hmc_obj */ + ret_code = i40e_allocate_virt_mem(hw, &hw->hmc.hmc_obj_virt_mem, + sizeof(struct i40e_hmc_obj_info) * I40E_HMC_LAN_MAX); + if (ret_code) + goto init_lan_hmc_out; + hw->hmc.hmc_obj = (struct i40e_hmc_obj_info *) + hw->hmc.hmc_obj_virt_mem.va; + + /* The full object will be used to create the LAN HMC SD */ + full_obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_FULL]; + full_obj->max_cnt = 0; + full_obj->cnt = 0; + full_obj->base = 0; + full_obj->size = 0; + + /* Tx queue context information */ + obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_TX]; + obj->max_cnt = rd32(hw, I40E_GLHMC_LANQMAX); + obj->cnt = txq_num; + obj->base = 0; + size_exp = rd32(hw, I40E_GLHMC_LANTXOBJSZ); + obj->size = (u64)1 << size_exp; + + /* validate values requested by driver don't exceed HMC capacity */ + if (txq_num > obj->max_cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; + DEBUGOUT3("i40e_init_lan_hmc: Tx context: asks for 0x%x but max allowed is 0x%x, returns error %d\n", + txq_num, obj->max_cnt, ret_code); + goto init_lan_hmc_out; + } + + /* aggregate values into the full LAN object for later */ + full_obj->max_cnt += obj->max_cnt; + full_obj->cnt += obj->cnt; + + /* Rx queue context information */ + obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_RX]; + obj->max_cnt = rd32(hw, I40E_GLHMC_LANQMAX); + obj->cnt = rxq_num; + obj->base = hw->hmc.hmc_obj[I40E_HMC_LAN_TX].base + + (hw->hmc.hmc_obj[I40E_HMC_LAN_TX].cnt * + hw->hmc.hmc_obj[I40E_HMC_LAN_TX].size); + obj->base = i40e_align_l2obj_base(obj->base); + size_exp = rd32(hw, I40E_GLHMC_LANRXOBJSZ); + obj->size = (u64)1 << size_exp; + + /* validate values requested by driver don't exceed HMC capacity */ + if (rxq_num > obj->max_cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; + DEBUGOUT3("i40e_init_lan_hmc: Rx context: asks for 0x%x but max allowed is 0x%x, returns error %d\n", + rxq_num, obj->max_cnt, ret_code); + goto init_lan_hmc_out; + } + + /* aggregate values into the full LAN object for later */ + full_obj->max_cnt += obj->max_cnt; + full_obj->cnt += obj->cnt; + + /* FCoE context information */ + obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX]; + obj->max_cnt = rd32(hw, I40E_GLHMC_FCOEMAX); + obj->cnt = fcoe_cntx_num; + obj->base = hw->hmc.hmc_obj[I40E_HMC_LAN_RX].base + + (hw->hmc.hmc_obj[I40E_HMC_LAN_RX].cnt * + hw->hmc.hmc_obj[I40E_HMC_LAN_RX].size); + obj->base = i40e_align_l2obj_base(obj->base); + size_exp = rd32(hw, I40E_GLHMC_FCOEDDPOBJSZ); + obj->size = (u64)1 << size_exp; + + /* validate values requested by driver don't exceed HMC capacity */ + if (fcoe_cntx_num > obj->max_cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; + DEBUGOUT3("i40e_init_lan_hmc: FCoE context: asks for 0x%x but max allowed is 0x%x, returns error %d\n", + fcoe_cntx_num, obj->max_cnt, ret_code); + goto init_lan_hmc_out; + } + + /* aggregate values into the full LAN object for later */ + full_obj->max_cnt += obj->max_cnt; + full_obj->cnt += obj->cnt; + + /* FCoE filter information */ + obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_FILT]; + obj->max_cnt = rd32(hw, I40E_GLHMC_FCOEFMAX); + obj->cnt = fcoe_filt_num; + obj->base = hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX].base + + (hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX].cnt * + 
hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX].size); + obj->base = i40e_align_l2obj_base(obj->base); + size_exp = rd32(hw, I40E_GLHMC_FCOEFOBJSZ); + obj->size = (u64)1 << size_exp; + + /* validate values requested by driver don't exceed HMC capacity */ + if (fcoe_filt_num > obj->max_cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; + DEBUGOUT3("i40e_init_lan_hmc: FCoE filter: asks for 0x%x but max allowed is 0x%x, returns error %d\n", + fcoe_filt_num, obj->max_cnt, ret_code); + goto init_lan_hmc_out; + } + + /* aggregate values into the full LAN object for later */ + full_obj->max_cnt += obj->max_cnt; + full_obj->cnt += obj->cnt; + + hw->hmc.first_sd_index = 0; + hw->hmc.sd_table.ref_cnt = 0; + l2fpm_size = i40e_calculate_l2fpm_size(txq_num, rxq_num, fcoe_cntx_num, + fcoe_filt_num); + if (NULL == hw->hmc.sd_table.sd_entry) { + hw->hmc.sd_table.sd_cnt = (u32) + (l2fpm_size + I40E_HMC_DIRECT_BP_SIZE - 1) / + I40E_HMC_DIRECT_BP_SIZE; + + /* allocate the sd_entry members in the sd_table */ + ret_code = i40e_allocate_virt_mem(hw, &hw->hmc.sd_table.addr, + (sizeof(struct i40e_hmc_sd_entry) * + hw->hmc.sd_table.sd_cnt)); + if (ret_code) + goto init_lan_hmc_out; + hw->hmc.sd_table.sd_entry = + (struct i40e_hmc_sd_entry *)hw->hmc.sd_table.addr.va; + } + /* store in the LAN full object for later */ + full_obj->size = l2fpm_size; + +init_lan_hmc_out: + return ret_code; +} + +/** + * i40e_remove_pd_page - Remove a page from the page descriptor table + * @hw: pointer to the HW structure + * @hmc_info: pointer to the HMC configuration information structure + * @idx: segment descriptor index to find the relevant page descriptor + * + * This function: + * 1. Marks the entry in pd table (for paged address mode) invalid + * 2. Writes to register PMPDINV to invalidate the backing page in FV cache + * 3. Decrements the ref count for the pd_entry + * assumptions: + * 1. Caller can deallocate the memory used by pd after this function + * returns. + **/ +STATIC enum i40e_status_code i40e_remove_pd_page(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + + if (i40e_prep_remove_pd_page(hmc_info, idx) == I40E_SUCCESS) + ret_code = i40e_remove_pd_page_new(hw, hmc_info, idx, true); + + return ret_code; +} + +/** + * i40e_remove_sd_bp - remove a backing page from a segment descriptor + * @hw: pointer to our HW structure + * @hmc_info: pointer to the HMC configuration information structure + * @idx: the page index + * + * This function: + * 1. Marks the entry in sd table (for direct address mode) invalid + * 2. Writes to registers PMSDCMD, PMSDDATALOW (PMSDDATALOW.PMSDVALID set + * to 0) and PMSDDATAHIGH to invalidate the sd page + * 3. Decrements the ref count for the sd_entry + * assumptions: + * 1. Caller can deallocate the memory used by backing storage after this + * function returns. + **/ +STATIC enum i40e_status_code i40e_remove_sd_bp(struct i40e_hw *hw, + struct i40e_hmc_info *hmc_info, + u32 idx) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + + if (i40e_prep_remove_sd_bp(hmc_info, idx) == I40E_SUCCESS) + ret_code = i40e_remove_sd_bp_new(hw, hmc_info, idx, true); + + return ret_code; +} + +/** + * i40e_create_lan_hmc_object - allocate backing store for hmc objects + * @hw: pointer to the HW structure + * @info: pointer to i40e_hmc_lan_create_obj_info struct + * + * This will allocate memory for PDs and backing pages and populate + * the sd and pd entries. 
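+ *
+ * Caller sketch (this mirrors how i40e_configure_lan_hmc() below fills
+ * the info struct to build the single full LAN object):
+ *
+ *     struct i40e_hmc_lan_create_obj_info info;
+ *
+ *     info.hmc_info = &hw->hmc;
+ *     info.rsrc_type = I40E_HMC_LAN_FULL;
+ *     info.start_idx = 0;
+ *     info.count = 1;
+ *     info.entry_type = I40E_SD_TYPE_DIRECT;
+ *     info.direct_mode_sz = hw->hmc.hmc_obj[I40E_HMC_LAN_FULL].size;
+ *     ret_code = i40e_create_lan_hmc_object(hw, &info);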
+ **/ +enum i40e_status_code i40e_create_lan_hmc_object(struct i40e_hw *hw, + struct i40e_hmc_lan_create_obj_info *info) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_sd_entry *sd_entry; + u32 pd_idx1 = 0, pd_lmt1 = 0; + u32 pd_idx = 0, pd_lmt = 0; + bool pd_error = false; + u32 sd_idx, sd_lmt; + u64 sd_size; + u32 i, j; + + if (NULL == info) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_create_lan_hmc_object: bad info ptr\n"); + goto exit; + } + if (NULL == info->hmc_info) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_create_lan_hmc_object: bad hmc_info ptr\n"); + goto exit; + } + if (I40E_HMC_INFO_SIGNATURE != info->hmc_info->signature) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_create_lan_hmc_object: bad signature\n"); + goto exit; + } + + if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_INDEX; + DEBUGOUT1("i40e_create_lan_hmc_object: returns error %d\n", + ret_code); + goto exit; + } + if ((info->start_idx + info->count) > + info->hmc_info->hmc_obj[info->rsrc_type].cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; + DEBUGOUT1("i40e_create_lan_hmc_object: returns error %d\n", + ret_code); + goto exit; + } + + /* find sd index and limit */ + I40E_FIND_SD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, + info->start_idx, info->count, + &sd_idx, &sd_lmt); + if (sd_idx >= info->hmc_info->sd_table.sd_cnt || + sd_lmt > info->hmc_info->sd_table.sd_cnt) { + ret_code = I40E_ERR_INVALID_SD_INDEX; + goto exit; + } + /* find pd index */ + I40E_FIND_PD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, + info->start_idx, info->count, &pd_idx, + &pd_lmt); + + /* This is to cover for cases where you may not want to have an SD with + * the full 2M memory but something smaller. By not filling out any + * size, the function will default the SD size to be 2M. + */ + if (info->direct_mode_sz == 0) + sd_size = I40E_HMC_DIRECT_BP_SIZE; + else + sd_size = info->direct_mode_sz; + + /* check if all the sds are valid. If not, allocate a page and + * initialize it. + */ + for (j = sd_idx; j < sd_lmt; j++) { + /* update the sd table entry */ + ret_code = i40e_add_sd_table_entry(hw, info->hmc_info, j, + info->entry_type, + sd_size); + if (I40E_SUCCESS != ret_code) + goto exit_sd_error; + sd_entry = &info->hmc_info->sd_table.sd_entry[j]; + if (I40E_SD_TYPE_PAGED == sd_entry->entry_type) { + /* check if all the pds in this sd are valid. If not, + * allocate a page and initialize it. 
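+ * For example (illustrative numbers): with pd_idx = 500 and
+ * pd_lmt = 1030, sd j = 1 owns pds 512..1023, so the clamps below
+ * give pd_idx1 = 512 and pd_lmt1 = 1024.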
+ */ + + /* find pd_idx and pd_lmt in this sd */ + pd_idx1 = max(pd_idx, (j * I40E_HMC_MAX_BP_COUNT)); + pd_lmt1 = min(pd_lmt, + ((j + 1) * I40E_HMC_MAX_BP_COUNT)); + for (i = pd_idx1; i < pd_lmt1; i++) { + /* update the pd table entry */ + ret_code = i40e_add_pd_table_entry(hw, + info->hmc_info, + i); + if (I40E_SUCCESS != ret_code) { + pd_error = true; + break; + } + } + if (pd_error) { + /* remove the backing pages from pd_idx1 to i */ + while (i && (i > pd_idx1)) { + i40e_remove_pd_bp(hw, info->hmc_info, + (i - 1)); + i--; + } + } + } + if (!sd_entry->valid) { + sd_entry->valid = true; + switch (sd_entry->entry_type) { + case I40E_SD_TYPE_PAGED: + I40E_SET_PF_SD_ENTRY(hw, + sd_entry->u.pd_table.pd_page_addr.pa, + j, sd_entry->entry_type); + break; + case I40E_SD_TYPE_DIRECT: + I40E_SET_PF_SD_ENTRY(hw, sd_entry->u.bp.addr.pa, + j, sd_entry->entry_type); + break; + default: + ret_code = I40E_ERR_INVALID_SD_TYPE; + goto exit; + } + } + } + goto exit; + +exit_sd_error: + /* cleanup for sd entries from j to sd_idx */ + while (j && (j > sd_idx)) { + sd_entry = &info->hmc_info->sd_table.sd_entry[j - 1]; + switch (sd_entry->entry_type) { + case I40E_SD_TYPE_PAGED: + pd_idx1 = max(pd_idx, + ((j - 1) * I40E_HMC_MAX_BP_COUNT)); + pd_lmt1 = min(pd_lmt, (j * I40E_HMC_MAX_BP_COUNT)); + for (i = pd_idx1; i < pd_lmt1; i++) { + i40e_remove_pd_bp(hw, info->hmc_info, i); + } + i40e_remove_pd_page(hw, info->hmc_info, (j - 1)); + break; + case I40E_SD_TYPE_DIRECT: + i40e_remove_sd_bp(hw, info->hmc_info, (j - 1)); + break; + default: + ret_code = I40E_ERR_INVALID_SD_TYPE; + break; + } + j--; + } +exit: + return ret_code; +} + +/** + * i40e_configure_lan_hmc - prepare the HMC backing store + * @hw: pointer to the hw structure + * @model: the model for the layout of the SD/PD tables + * + * - This function will be called once per physical function initialization. + * - This function will be called after i40e_init_lan_hmc() and before + * any LAN/FCoE HMC objects can be created. 
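+ * - With I40E_HMC_MODEL_DIRECT_PREFERRED the switch below falls back to
+ *   a paged SD if the single direct SD cannot be created, so a caller
+ *   would typically just do:
+ *
+ *     ret_code = i40e_configure_lan_hmc(hw, I40E_HMC_MODEL_DIRECT_PREFERRED);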
+ **/ +enum i40e_status_code i40e_configure_lan_hmc(struct i40e_hw *hw, + enum i40e_hmc_model model) +{ + struct i40e_hmc_lan_create_obj_info info; + u8 hmc_fn_id = hw->hmc.hmc_fn_id; + struct i40e_hmc_obj_info *obj; + enum i40e_status_code ret_code = I40E_SUCCESS; + + /* Initialize part of the create object info struct */ + info.hmc_info = &hw->hmc; + info.rsrc_type = I40E_HMC_LAN_FULL; + info.start_idx = 0; + info.direct_mode_sz = hw->hmc.hmc_obj[I40E_HMC_LAN_FULL].size; + + /* Build the SD entry for the LAN objects */ + switch (model) { + case I40E_HMC_MODEL_DIRECT_PREFERRED: + case I40E_HMC_MODEL_DIRECT_ONLY: + info.entry_type = I40E_SD_TYPE_DIRECT; + /* Make one big object, a single SD */ + info.count = 1; + ret_code = i40e_create_lan_hmc_object(hw, &info); + if ((ret_code != I40E_SUCCESS) && (model == I40E_HMC_MODEL_DIRECT_PREFERRED)) + goto try_type_paged; + else if (ret_code != I40E_SUCCESS) + goto configure_lan_hmc_out; + /* else clause falls through the break */ + break; + case I40E_HMC_MODEL_PAGED_ONLY: +try_type_paged: + info.entry_type = I40E_SD_TYPE_PAGED; + /* Make one big object in the PD table */ + info.count = 1; + ret_code = i40e_create_lan_hmc_object(hw, &info); + if (ret_code != I40E_SUCCESS) + goto configure_lan_hmc_out; + break; + default: + /* unsupported type */ + ret_code = I40E_ERR_INVALID_SD_TYPE; + DEBUGOUT1("i40e_configure_lan_hmc: Unknown SD type: %d\n", + ret_code); + goto configure_lan_hmc_out; + } + + /* Configure and program the FPM registers so objects can be created */ + + /* Tx contexts */ + obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_TX]; + wr32(hw, I40E_GLHMC_LANTXBASE(hmc_fn_id), + (u32)((obj->base & I40E_GLHMC_LANTXBASE_FPMLANTXBASE_MASK) / 512)); + wr32(hw, I40E_GLHMC_LANTXCNT(hmc_fn_id), obj->cnt); + + /* Rx contexts */ + obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_RX]; + wr32(hw, I40E_GLHMC_LANRXBASE(hmc_fn_id), + (u32)((obj->base & I40E_GLHMC_LANRXBASE_FPMLANRXBASE_MASK) / 512)); + wr32(hw, I40E_GLHMC_LANRXCNT(hmc_fn_id), obj->cnt); + + /* FCoE contexts */ + obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX]; + wr32(hw, I40E_GLHMC_FCOEDDPBASE(hmc_fn_id), + (u32)((obj->base & I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_MASK) / 512)); + wr32(hw, I40E_GLHMC_FCOEDDPCNT(hmc_fn_id), obj->cnt); + + /* FCoE filters */ + obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_FILT]; + wr32(hw, I40E_GLHMC_FCOEFBASE(hmc_fn_id), + (u32)((obj->base & I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_MASK) / 512)); + wr32(hw, I40E_GLHMC_FCOEFCNT(hmc_fn_id), obj->cnt); + +configure_lan_hmc_out: + return ret_code; +} + +/** + * i40e_delete_lan_hmc_object - remove hmc objects + * @hw: pointer to the HW structure + * @info: pointer to i40e_hmc_lan_delete_obj_info struct + * + * This will de-populate the SDs and PDs. It frees + * the memory for PDs and backing storage. After this function returns, + * the caller should deallocate the memory allocated previously for + * book-keeping information about PDs and backing storage. 
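+ *
+ * Caller sketch (this is what i40e_shutdown_lan_hmc() below passes to
+ * tear down the whole LAN object):
+ *
+ *     struct i40e_hmc_lan_delete_obj_info info;
+ *
+ *     info.hmc_info = &hw->hmc;
+ *     info.rsrc_type = I40E_HMC_LAN_FULL;
+ *     info.start_idx = 0;
+ *     info.count = 1;
+ *     ret_code = i40e_delete_lan_hmc_object(hw, &info);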
+ **/ +enum i40e_status_code i40e_delete_lan_hmc_object(struct i40e_hw *hw, + struct i40e_hmc_lan_delete_obj_info *info) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + struct i40e_hmc_pd_table *pd_table; + u32 pd_idx, pd_lmt, rel_pd_idx; + u32 sd_idx, sd_lmt; + u32 i, j; + + if (NULL == info) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_delete_hmc_object: bad info ptr\n"); + goto exit; + } + if (NULL == info->hmc_info) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_delete_hmc_object: bad info->hmc_info ptr\n"); + goto exit; + } + if (I40E_HMC_INFO_SIGNATURE != info->hmc_info->signature) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_delete_hmc_object: bad hmc_info->signature\n"); + goto exit; + } + + if (NULL == info->hmc_info->sd_table.sd_entry) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_delete_hmc_object: bad sd_entry\n"); + goto exit; + } + + if (NULL == info->hmc_info->hmc_obj) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_delete_hmc_object: bad hmc_info->hmc_obj\n"); + goto exit; + } + if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_INDEX; + DEBUGOUT1("i40e_delete_hmc_object: returns error %d\n", + ret_code); + goto exit; + } + + if ((info->start_idx + info->count) > + info->hmc_info->hmc_obj[info->rsrc_type].cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; + DEBUGOUT1("i40e_delete_hmc_object: returns error %d\n", + ret_code); + goto exit; + } + + I40E_FIND_PD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, + info->start_idx, info->count, &pd_idx, + &pd_lmt); + + for (j = pd_idx; j < pd_lmt; j++) { + sd_idx = j / I40E_HMC_PD_CNT_IN_SD; + + if (I40E_SD_TYPE_PAGED != + info->hmc_info->sd_table.sd_entry[sd_idx].entry_type) + continue; + + rel_pd_idx = j % I40E_HMC_PD_CNT_IN_SD; + + pd_table = + &info->hmc_info->sd_table.sd_entry[sd_idx].u.pd_table; + if (pd_table->pd_entry[rel_pd_idx].valid) { + ret_code = i40e_remove_pd_bp(hw, info->hmc_info, j); + if (I40E_SUCCESS != ret_code) + goto exit; + } + } + + /* find sd index and limit */ + I40E_FIND_SD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, + info->start_idx, info->count, + &sd_idx, &sd_lmt); + if (sd_idx >= info->hmc_info->sd_table.sd_cnt || + sd_lmt > info->hmc_info->sd_table.sd_cnt) { + ret_code = I40E_ERR_INVALID_SD_INDEX; + goto exit; + } + + for (i = sd_idx; i < sd_lmt; i++) { + if (!info->hmc_info->sd_table.sd_entry[i].valid) + continue; + switch (info->hmc_info->sd_table.sd_entry[i].entry_type) { + case I40E_SD_TYPE_DIRECT: + ret_code = i40e_remove_sd_bp(hw, info->hmc_info, i); + if (I40E_SUCCESS != ret_code) + goto exit; + break; + case I40E_SD_TYPE_PAGED: + ret_code = i40e_remove_pd_page(hw, info->hmc_info, i); + if (I40E_SUCCESS != ret_code) + goto exit; + break; + default: + break; + } + } +exit: + return ret_code; +} + +/** + * i40e_shutdown_lan_hmc - Remove HMC backing store, free allocated memory + * @hw: pointer to the hw structure + * + * This must be called by drivers as they are shutting down and being + * removed from the OS. 
+ **/ +enum i40e_status_code i40e_shutdown_lan_hmc(struct i40e_hw *hw) +{ + struct i40e_hmc_lan_delete_obj_info info; + enum i40e_status_code ret_code; + + info.hmc_info = &hw->hmc; + info.rsrc_type = I40E_HMC_LAN_FULL; + info.start_idx = 0; + info.count = 1; + + /* delete the object */ + ret_code = i40e_delete_lan_hmc_object(hw, &info); + + /* free the SD table entry for LAN */ + i40e_free_virt_mem(hw, &hw->hmc.sd_table.addr); + hw->hmc.sd_table.sd_cnt = 0; + hw->hmc.sd_table.sd_entry = NULL; + + /* free memory used for hmc_obj */ + i40e_free_virt_mem(hw, &hw->hmc.hmc_obj_virt_mem); + hw->hmc.hmc_obj = NULL; + + return ret_code; +} + +#define I40E_HMC_STORE(_struct, _ele) \ + offsetof(struct _struct, _ele), \ + FIELD_SIZEOF(struct _struct, _ele) + +struct i40e_context_ele { + u16 offset; + u16 size_of; + u16 width; + u16 lsb; +}; + +/* LAN Tx Queue Context */ +static struct i40e_context_ele i40e_hmc_txq_ce_info[] = { + /* Field Width LSB */ + {I40E_HMC_STORE(i40e_hmc_obj_txq, head), 13, 0 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, new_context), 1, 30 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, base), 57, 32 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, fc_ena), 1, 89 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, timesync_ena), 1, 90 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, fd_ena), 1, 91 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, alt_vlan_ena), 1, 92 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, cpuid), 8, 96 }, +/* line 1 */ + {I40E_HMC_STORE(i40e_hmc_obj_txq, thead_wb), 13, 0 + 128 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, head_wb_ena), 1, 32 + 128 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, qlen), 13, 33 + 128 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, tphrdesc_ena), 1, 46 + 128 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, tphrpacket_ena), 1, 47 + 128 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, tphwdesc_ena), 1, 48 + 128 }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, head_wb_addr), 64, 64 + 128 }, +/* line 7 */ + {I40E_HMC_STORE(i40e_hmc_obj_txq, crc), 32, 0 + (7 * 128) }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, rdylist), 10, 84 + (7 * 128) }, + {I40E_HMC_STORE(i40e_hmc_obj_txq, rdylist_act), 1, 94 + (7 * 128) }, + { 0 } +}; + +/* LAN Rx Queue Context */ +static struct i40e_context_ele i40e_hmc_rxq_ce_info[] = { + /* Field Width LSB */ + { I40E_HMC_STORE(i40e_hmc_obj_rxq, head), 13, 0 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, cpuid), 8, 13 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, base), 57, 32 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, qlen), 13, 89 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, dbuff), 7, 102 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, hbuff), 5, 109 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, dtype), 2, 114 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, dsize), 1, 116 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, crcstrip), 1, 117 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, fc_ena), 1, 118 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, l2tsel), 1, 119 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, hsplit_0), 4, 120 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, hsplit_1), 2, 124 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, showiv), 1, 127 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, rxmax), 14, 174 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphrdesc_ena), 1, 193 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphwdesc_ena), 1, 194 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphdata_ena), 1, 195 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphhead_ena), 1, 196 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, lrxqthresh), 3, 198 }, + { I40E_HMC_STORE(i40e_hmc_obj_rxq, prefena), 1, 201 }, + { 0 } +}; + +/** + * i40e_write_byte - replace HMC context byte + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct 
to be read from + * @src: the struct to be read from + **/ +static void i40e_write_byte(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *src) +{ + u8 src_byte, dest_byte, mask; + u8 *from, *dest; + u16 shift_width; + + /* copy from the next struct field */ + from = src + ce_info->offset; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + mask = ((u8)1 << ce_info->width) - 1; + + src_byte = *from; + src_byte &= mask; + + /* shift to correct alignment */ + mask <<= shift_width; + src_byte <<= shift_width; + + /* get the current bits from the target bit string */ + dest = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&dest_byte, dest, sizeof(dest_byte), I40E_DMA_TO_NONDMA); + + dest_byte &= ~mask; /* get the bits not changing */ + dest_byte |= src_byte; /* add in the new bits */ + + /* put it all back */ + i40e_memcpy(dest, &dest_byte, sizeof(dest_byte), I40E_NONDMA_TO_DMA); +} + +/** + * i40e_write_word - replace HMC context word + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct to be read from + * @src: the struct to be read from + **/ +static void i40e_write_word(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *src) +{ + u16 src_word, mask; + u8 *from, *dest; + u16 shift_width; + __le16 dest_word; + + /* copy from the next struct field */ + from = src + ce_info->offset; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + mask = ((u16)1 << ce_info->width) - 1; + + /* don't swizzle the bits until after the mask because the mask bits + * will be in a different bit position on big endian machines + */ + src_word = *(u16 *)from; + src_word &= mask; + + /* shift to correct alignment */ + mask <<= shift_width; + src_word <<= shift_width; + + /* get the current bits from the target bit string */ + dest = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&dest_word, dest, sizeof(dest_word), I40E_DMA_TO_NONDMA); + + dest_word &= ~(CPU_TO_LE16(mask)); /* get the bits not changing */ + dest_word |= CPU_TO_LE16(src_word); /* add in the new bits */ + + /* put it all back */ + i40e_memcpy(dest, &dest_word, sizeof(dest_word), I40E_NONDMA_TO_DMA); +} + +/** + * i40e_write_dword - replace HMC context dword + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct to be read from + * @src: the struct to be read from + **/ +static void i40e_write_dword(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *src) +{ + u32 src_dword, mask; + u8 *from, *dest; + u16 shift_width; + __le32 dest_dword; + + /* copy from the next struct field */ + from = src + ce_info->offset; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + + /* if the field width is exactly 32 on an x86 machine, then the shift + * operation will not work because the SHL instructions count is masked + * to 5 bits so the shift will do nothing + */ + if (ce_info->width < 32) + mask = ((u32)1 << ce_info->width) - 1; + else + mask = 0xFFFFFFFF; + + /* don't swizzle the bits until after the mask because the mask bits + * will be in a different bit position on big endian machines + */ + src_dword = *(u32 *)from; + src_dword &= mask; + + /* shift to correct alignment */ + mask <<= shift_width; + src_dword <<= shift_width; + + /* get the current bits from the target bit string */ + dest = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&dest_dword, dest, sizeof(dest_dword), I40E_DMA_TO_NONDMA); + + dest_dword &= ~(CPU_TO_LE32(mask)); /* get the bits not changing */ + dest_dword |= CPU_TO_LE32(src_dword); /* add in the new bits 
+ +/** + * i40e_write_qword - replace HMC context qword + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct to be read from + * @src: the struct to be read from + **/ +static void i40e_write_qword(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *src) +{ + u64 src_qword, mask; + u8 *from, *dest; + u16 shift_width; + __le64 dest_qword; + + /* copy from the next struct field */ + from = src + ce_info->offset; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + + /* if the field width is exactly 64 on an x86 machine, then the shift + * operation will not work because the SHL instruction's count is masked + * to 6 bits so the shift will do nothing + */ + if (ce_info->width < 64) + mask = ((u64)1 << ce_info->width) - 1; + else + mask = 0xFFFFFFFFFFFFFFFFUL; + + /* don't swizzle the bits until after the mask because the mask bits + * will be in a different bit position on big endian machines + */ + src_qword = *(u64 *)from; + src_qword &= mask; + + /* shift to correct alignment */ + mask <<= shift_width; + src_qword <<= shift_width; + + /* get the current bits from the target bit string */ + dest = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&dest_qword, dest, sizeof(dest_qword), I40E_DMA_TO_NONDMA); + + dest_qword &= ~(CPU_TO_LE64(mask)); /* get the bits not changing */ + dest_qword |= CPU_TO_LE64(src_qword); /* add in the new bits */ + + /* put it all back */ + i40e_memcpy(dest, &dest_qword, sizeof(dest_qword), I40E_NONDMA_TO_DMA); +} + +/** + * i40e_read_byte - read HMC context byte into struct + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct to be filled + * @dest: the struct to be filled + **/ +static void i40e_read_byte(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *dest) +{ + u8 dest_byte, mask; + u8 *src, *target; + u16 shift_width; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + mask = ((u8)1 << ce_info->width) - 1; + + /* shift to correct alignment */ + mask <<= shift_width; + + /* get the current bits from the src bit string */ + src = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&dest_byte, src, sizeof(dest_byte), I40E_DMA_TO_NONDMA); + + dest_byte &= mask; /* keep only the bits of this field */ + + dest_byte >>= shift_width; + + /* get the address from the struct field */ + target = dest + ce_info->offset; + + /* put it back in the struct */ + i40e_memcpy(target, &dest_byte, sizeof(dest_byte), I40E_NONDMA_TO_DMA); +} + +/** + * i40e_read_word - read HMC context word into struct + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct to be filled + * @dest: the struct to be filled + **/ +static void i40e_read_word(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *dest) +{ + u16 dest_word, mask; + u8 *src, *target; + u16 shift_width; + __le16 src_word; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + mask = ((u16)1 << ce_info->width) - 1; + + /* shift to correct alignment */ + mask <<= shift_width; + + /* get the current bits from the src bit string */ + src = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&src_word, src, sizeof(src_word), I40E_DMA_TO_NONDMA); + + /* the data in the memory is stored as little endian so mask it + * correctly + */ + src_word &= CPU_TO_LE16(mask); + + /* get the data back into host order before shifting */ + dest_word = LE16_TO_CPU(src_word); + + dest_word >>= shift_width; + + /* get the address from the struct field */ + target = dest + ce_info->offset; + + /* put it back in the struct */ + i40e_memcpy(target, &dest_word, sizeof(dest_word), I40E_NONDMA_TO_DMA); +}
+ +/** + * i40e_read_dword - read HMC context dword into struct + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct to be filled + * @dest: the struct to be filled + **/ +static void i40e_read_dword(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *dest) +{ + u32 dest_dword, mask; + u8 *src, *target; + u16 shift_width; + __le32 src_dword; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + + /* if the field width is exactly 32 on an x86 machine, then the shift + * operation will not work because the SHL instruction's count is masked + * to 5 bits so the shift will do nothing + */ + if (ce_info->width < 32) + mask = ((u32)1 << ce_info->width) - 1; + else + mask = 0xFFFFFFFF; + + /* shift to correct alignment */ + mask <<= shift_width; + + /* get the current bits from the src bit string */ + src = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&src_dword, src, sizeof(src_dword), I40E_DMA_TO_NONDMA); + + /* the data in the memory is stored as little endian so mask it + * correctly + */ + src_dword &= CPU_TO_LE32(mask); + + /* get the data back into host order before shifting */ + dest_dword = LE32_TO_CPU(src_dword); + + dest_dword >>= shift_width; + + /* get the address from the struct field */ + target = dest + ce_info->offset; + + /* put it back in the struct */ + i40e_memcpy(target, &dest_dword, sizeof(dest_dword), + I40E_NONDMA_TO_DMA); +} + +/** + * i40e_read_qword - read HMC context qword into struct + * @hmc_bits: pointer to the HMC memory + * @ce_info: a description of the struct to be filled + * @dest: the struct to be filled + **/ +static void i40e_read_qword(u8 *hmc_bits, + struct i40e_context_ele *ce_info, + u8 *dest) +{ + u64 dest_qword, mask; + u8 *src, *target; + u16 shift_width; + __le64 src_qword; + + /* prepare the bits and mask */ + shift_width = ce_info->lsb % 8; + + /* if the field width is exactly 64 on an x86 machine, then the shift + * operation will not work because the SHL instruction's count is masked + * to 6 bits so the shift will do nothing + */ + if (ce_info->width < 64) + mask = ((u64)1 << ce_info->width) - 1; + else + mask = 0xFFFFFFFFFFFFFFFFUL; + + /* shift to correct alignment */ + mask <<= shift_width; + + /* get the current bits from the src bit string */ + src = hmc_bits + (ce_info->lsb / 8); + + i40e_memcpy(&src_qword, src, sizeof(src_qword), I40E_DMA_TO_NONDMA); + + /* the data in the memory is stored as little endian so mask it + * correctly + */ + src_qword &= CPU_TO_LE64(mask); + + /* get the data back into host order before shifting */ + dest_qword = LE64_TO_CPU(src_qword); + + dest_qword >>= shift_width; + + /* get the address from the struct field */ + target = dest + ce_info->offset; + + /* put it back in the struct */ + i40e_memcpy(target, &dest_qword, sizeof(dest_qword), + I40E_NONDMA_TO_DMA); +}
+ +/** + * i40e_get_hmc_context - extract HMC context bits + * @context_bytes: pointer to the context bit array + * @ce_info: a description of the struct to be filled + * @dest: the struct to be filled + **/ +static enum i40e_status_code i40e_get_hmc_context(u8 *context_bytes, + struct i40e_context_ele *ce_info, + u8 *dest) +{ + int f; + + for (f = 0; ce_info[f].width != 0; f++) { + switch (ce_info[f].size_of) { + case 1: + i40e_read_byte(context_bytes, &ce_info[f], dest); + break; + case 2: + i40e_read_word(context_bytes, &ce_info[f], dest); + break; + case 4: + i40e_read_dword(context_bytes, &ce_info[f], dest); + break; + case 8: + i40e_read_qword(context_bytes, &ce_info[f], dest); + break; + default: + /* nothing to do, just keep going */ + break; + } + } + + return I40E_SUCCESS; +} + +/** + * i40e_clear_hmc_context - zero out the HMC context bits + * @hw: the hardware struct + * @context_bytes: pointer to the context bit array (DMA memory) + * @hmc_type: the type of HMC resource + **/ +static enum i40e_status_code i40e_clear_hmc_context(struct i40e_hw *hw, + u8 *context_bytes, + enum i40e_hmc_lan_rsrc_type hmc_type) +{ + /* clean the bit array */ + i40e_memset(context_bytes, 0, (u32)hw->hmc.hmc_obj[hmc_type].size, + I40E_DMA_MEM); + + return I40E_SUCCESS; +} + +/** + * i40e_set_hmc_context - replace HMC context bits + * @context_bytes: pointer to the context bit array + * @ce_info: a description of the struct to be read from + * @dest: the struct to be read from + **/ +static enum i40e_status_code i40e_set_hmc_context(u8 *context_bytes, + struct i40e_context_ele *ce_info, + u8 *dest) +{ + int f; + + for (f = 0; ce_info[f].width != 0; f++) { + + /* we have to deal with each element of the HMC using the + * correct size so that we are correct regardless of the + * endianness of the machine + */ + switch (ce_info[f].size_of) { + case 1: + i40e_write_byte(context_bytes, &ce_info[f], dest); + break; + case 2: + i40e_write_word(context_bytes, &ce_info[f], dest); + break; + case 4: + i40e_write_dword(context_bytes, &ce_info[f], dest); + break; + case 8: + i40e_write_qword(context_bytes, &ce_info[f], dest); + break; + } + } + + return I40E_SUCCESS; +}
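Both i40e_get_hmc_context() and i40e_set_hmc_context() are driven entirely by a ce_info[] descriptor table that maps each host-struct field to a size, bit position, and width in the context image, with a zero-width entry terminating the walk. A toy sketch of the same table-driven walk, with an invented descriptor table and field names:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct context_ele {            /* mirrors the shape of i40e_context_ele */
            uint16_t offset;        /* offset of the field in the host struct */
            uint16_t size_of;       /* sizeof the host struct field (1/2/4/8) */
            uint16_t width;         /* width of the field in bits */
            uint16_t lsb;           /* bit position in the context image */
    };

    struct my_ctx {                 /* invented context struct */
            uint16_t qlen;
            uint8_t enable;
    };

    static const struct context_ele my_ctx_info[] = {
            { offsetof(struct my_ctx, qlen),   2, 13,  0 },
            { offsetof(struct my_ctx, enable), 1,  1, 16 },
            { 0, 0, 0, 0 }          /* zero width terminates the walk */
    };

    int main(void)
    {
            int f;

            /* same walk as i40e_set_hmc_context(): dispatch per field size */
            for (f = 0; my_ctx_info[f].width != 0; f++)
                    printf("field %d: %u byte(s) at bit %u, %u bits wide\n", f,
                           (unsigned)my_ctx_info[f].size_of,
                           (unsigned)my_ctx_info[f].lsb,
                           (unsigned)my_ctx_info[f].width);
            return 0;
    }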
+ +/** + * i40e_hmc_get_object_va - retrieves an object's virtual address + * @hmc_info: pointer to i40e_hmc_info struct + * @object_base: pointer used to return the object's virtual address + * @rsrc_type: the hmc resource type + * @obj_idx: hmc object index + * + * This function retrieves the object's virtual address from the object + * base pointer. This function is used for LAN Queue contexts. + **/ +STATIC +enum i40e_status_code i40e_hmc_get_object_va(struct i40e_hmc_info *hmc_info, + u8 **object_base, + enum i40e_hmc_lan_rsrc_type rsrc_type, + u32 obj_idx) +{ + u32 obj_offset_in_sd, obj_offset_in_pd; + struct i40e_hmc_sd_entry *sd_entry; + struct i40e_hmc_pd_entry *pd_entry; + u32 pd_idx, pd_lmt, rel_pd_idx; + enum i40e_status_code ret_code = I40E_SUCCESS; + u64 obj_offset_in_fpm; + u32 sd_idx, sd_lmt; + + if (NULL == hmc_info) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_hmc_get_object_va: bad hmc_info ptr\n"); + goto exit; + } + if (NULL == hmc_info->hmc_obj) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_hmc_get_object_va: bad hmc_info->hmc_obj ptr\n"); + goto exit; + } + if (NULL == object_base) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_hmc_get_object_va: bad object_base ptr\n"); + goto exit; + } + if (I40E_HMC_INFO_SIGNATURE != hmc_info->signature) { + ret_code = I40E_ERR_BAD_PTR; + DEBUGOUT("i40e_hmc_get_object_va: bad hmc_info->signature\n"); + goto exit; + } + if (obj_idx >= hmc_info->hmc_obj[rsrc_type].cnt) { + ret_code = I40E_ERR_INVALID_HMC_OBJ_INDEX; + DEBUGOUT1("i40e_hmc_get_object_va: returns error %d\n", + ret_code); + goto exit; + } + /* find sd index and limit */ + I40E_FIND_SD_INDEX_LIMIT(hmc_info, rsrc_type, obj_idx, 1, + &sd_idx, &sd_lmt); + + sd_entry = &hmc_info->sd_table.sd_entry[sd_idx]; + obj_offset_in_fpm = hmc_info->hmc_obj[rsrc_type].base + + hmc_info->hmc_obj[rsrc_type].size * obj_idx; + + if (I40E_SD_TYPE_PAGED == sd_entry->entry_type) { + I40E_FIND_PD_INDEX_LIMIT(hmc_info, rsrc_type, obj_idx, 1, + &pd_idx, &pd_lmt); + rel_pd_idx = pd_idx % I40E_HMC_PD_CNT_IN_SD; + pd_entry = &sd_entry->u.pd_table.pd_entry[rel_pd_idx]; + obj_offset_in_pd = (u32)(obj_offset_in_fpm % + I40E_HMC_PAGED_BP_SIZE); + *object_base = (u8 *)pd_entry->bp.addr.va + obj_offset_in_pd; + } else { + obj_offset_in_sd = (u32)(obj_offset_in_fpm % + I40E_HMC_DIRECT_BP_SIZE); + *object_base = (u8 *)sd_entry->u.bp.addr.va + obj_offset_in_sd; + } +exit: + return ret_code; +} + +/** + * i40e_get_lan_tx_queue_context - return the HMC context for the queue + * @hw: the hardware struct + * @queue: the queue we care about + * @s: the struct to be filled + **/ +enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_txq *s) +{ + enum i40e_status_code err; + u8 *context_bytes; + + err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, + I40E_HMC_LAN_TX, queue); + if (err < 0) + return err; + + return i40e_get_hmc_context(context_bytes, + i40e_hmc_txq_ce_info, (u8 *)s); +} + +/** + * i40e_clear_lan_tx_queue_context - clear the HMC context for the queue + * @hw: the hardware struct + * @queue: the queue we care about + **/ +enum i40e_status_code i40e_clear_lan_tx_queue_context(struct i40e_hw *hw, + u16 queue) +{ + enum i40e_status_code err; + u8 *context_bytes; + + err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, + I40E_HMC_LAN_TX, queue); + if (err < 0) + return err; + + return i40e_clear_hmc_context(hw, context_bytes, I40E_HMC_LAN_TX); +} + +/** + * i40e_set_lan_tx_queue_context - set the HMC context for the queue + * @hw: the hardware struct + * @queue: the queue we care about + * @s: the struct to be written + **/ +enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_txq *s) +{ + enum i40e_status_code err; + u8 *context_bytes; + + err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, + I40E_HMC_LAN_TX, queue); + if (err < 0) + return err; + + return i40e_set_hmc_context(context_bytes, + i40e_hmc_txq_ce_info, (u8 *)s); +}
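Together, the clear/set/get trio gives callers a simple program-and-readback interface to a queue's HMC context. A hedged sketch of how a caller might program a TX queue context, assuming the driver headers above are in scope and hw has an initialised, backed HMC; the placeholder field values and the 128-byte base unit follow the usage elsewhere in this driver:

    /* Sketch only: programs a TX queue context from placeholder values. */
    static enum i40e_status_code
    program_txq_context(struct i40e_hw *hw, u16 queue,
                        u64 ring_base, u16 ring_len)
    {
            struct i40e_hmc_obj_txq txq_ctx;
            enum i40e_status_code err;

            /* start from a clean context before writing the new one */
            err = i40e_clear_lan_tx_queue_context(hw, queue);
            if (err != I40E_SUCCESS)
                    return err;

            memset(&txq_ctx, 0, sizeof(txq_ctx));
            txq_ctx.new_context = 1;
            txq_ctx.base = ring_base / 128; /* base is in 128-byte units */
            txq_ctx.qlen = ring_len;

            return i40e_set_lan_tx_queue_context(hw, queue, &txq_ctx);
    }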
+ +/** + * i40e_get_lan_rx_queue_context - return the HMC context for the queue + * @hw: the hardware struct + * @queue: the queue we care about + * @s: the struct to be filled + **/ +enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_rxq *s) +{ + enum i40e_status_code err; + u8 *context_bytes; + + err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, + I40E_HMC_LAN_RX, queue); + if (err < 0) + return err; + + return i40e_get_hmc_context(context_bytes, + i40e_hmc_rxq_ce_info, (u8 *)s); +} + +/** + * i40e_clear_lan_rx_queue_context - clear the HMC context for the queue + * @hw: the hardware struct + * @queue: the queue we care about + **/ +enum i40e_status_code i40e_clear_lan_rx_queue_context(struct i40e_hw *hw, + u16 queue) +{ + enum i40e_status_code err; + u8 *context_bytes; + + err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, + I40E_HMC_LAN_RX, queue); + if (err < 0) + return err; + + return i40e_clear_hmc_context(hw, context_bytes, I40E_HMC_LAN_RX); +} + +/** + * i40e_set_lan_rx_queue_context - set the HMC context for the queue + * @hw: the hardware struct + * @queue: the queue we care about + * @s: the struct to be written + **/ +enum i40e_status_code i40e_set_lan_rx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_rxq *s) +{ + enum i40e_status_code err; + u8 *context_bytes; + + err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, + I40E_HMC_LAN_RX, queue); + if (err < 0) + return err; + + return i40e_set_hmc_context(context_bytes, + i40e_hmc_rxq_ce_info, (u8 *)s); +} diff --git a/drivers/net/i40e/base/i40e_lan_hmc.h b/drivers/net/i40e/base/i40e_lan_hmc.h new file mode 100644 index 0000000..70ef65c --- /dev/null +++ b/drivers/net/i40e/base/i40e_lan_hmc.h @@ -0,0 +1,200 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE.
+ +***************************************************************************/ + +#ifndef _I40E_LAN_HMC_H_ +#define _I40E_LAN_HMC_H_ + +/* forward-declare the HW struct for the compiler */ +struct i40e_hw; + +/* HMC element context information */ + +/* Rx queue context data + * + * The sizes of the variables may be larger than needed due to crossing byte + * boundaries. If we do not have the width of the variable set to the correct + * size then we could end up shifting bits off the top of the variable when the + * variable is at the top of a byte and crosses over into the next byte. + */ +struct i40e_hmc_obj_rxq { + u16 head; + u16 cpuid; /* bigger than needed, see above for reason */ + u64 base; + u16 qlen; +#define I40E_RXQ_CTX_DBUFF_SHIFT 7 + u16 dbuff; /* bigger than needed, see above for reason */ +#define I40E_RXQ_CTX_HBUFF_SHIFT 6 + u16 hbuff; /* bigger than needed, see above for reason */ + u8 dtype; + u8 dsize; + u8 crcstrip; + u8 fc_ena; + u8 l2tsel; + u8 hsplit_0; + u8 hsplit_1; + u8 showiv; + u32 rxmax; /* bigger than needed, see above for reason */ + u8 tphrdesc_ena; + u8 tphwdesc_ena; + u8 tphdata_ena; + u8 tphhead_ena; + u16 lrxqthresh; /* bigger than needed, see above for reason */ + u8 prefena; /* NOTE: normally must be set to 1 at init */ +}; + +/* Tx queue context data +* +* The sizes of the variables may be larger than needed due to crossing byte +* boundaries. If we do not have the width of the variable set to the correct +* size then we could end up shifting bits off the top of the variable when the +* variable is at the top of a byte and crosses over into the next byte. +*/ +struct i40e_hmc_obj_txq { + u16 head; + u8 new_context; + u64 base; + u8 fc_ena; + u8 timesync_ena; + u8 fd_ena; + u8 alt_vlan_ena; + u16 thead_wb; + u8 cpuid; + u8 head_wb_ena; + u16 qlen; + u8 tphrdesc_ena; + u8 tphrpacket_ena; + u8 tphwdesc_ena; + u64 head_wb_addr; + u32 crc; + u16 rdylist; + u8 rdylist_act; +}; + +/* for hsplit_0 field of Rx HMC context */ +enum i40e_hmc_obj_rx_hsplit_0 { + I40E_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT = 0, + I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2 = 1, + I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP = 2, + I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4, + I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP = 8, +}; + +/* fcoe_cntx and fcoe_filt are for debugging purpose only */ +struct i40e_hmc_obj_fcoe_cntx { + u32 rsv[32]; +}; + +struct i40e_hmc_obj_fcoe_filt { + u32 rsv[8]; +}; + +/* Context sizes for LAN objects */ +enum i40e_hmc_lan_object_size { + I40E_HMC_LAN_OBJ_SZ_8 = 0x3, + I40E_HMC_LAN_OBJ_SZ_16 = 0x4, + I40E_HMC_LAN_OBJ_SZ_32 = 0x5, + I40E_HMC_LAN_OBJ_SZ_64 = 0x6, + I40E_HMC_LAN_OBJ_SZ_128 = 0x7, + I40E_HMC_LAN_OBJ_SZ_256 = 0x8, + I40E_HMC_LAN_OBJ_SZ_512 = 0x9, +}; + +#define I40E_HMC_L2OBJ_BASE_ALIGNMENT 512 +#define I40E_HMC_OBJ_SIZE_TXQ 128 +#define I40E_HMC_OBJ_SIZE_RXQ 32 +#define I40E_HMC_OBJ_SIZE_FCOE_CNTX 64 +#define I40E_HMC_OBJ_SIZE_FCOE_FILT 64 + +enum i40e_hmc_lan_rsrc_type { + I40E_HMC_LAN_FULL = 0, + I40E_HMC_LAN_TX = 1, + I40E_HMC_LAN_RX = 2, + I40E_HMC_FCOE_CTX = 3, + I40E_HMC_FCOE_FILT = 4, + I40E_HMC_LAN_MAX = 5 +}; + +enum i40e_hmc_model { + I40E_HMC_MODEL_DIRECT_PREFERRED = 0, + I40E_HMC_MODEL_DIRECT_ONLY = 1, + I40E_HMC_MODEL_PAGED_ONLY = 2, + I40E_HMC_MODEL_UNKNOWN, +}; + +struct i40e_hmc_lan_create_obj_info { + struct i40e_hmc_info *hmc_info; + u32 rsrc_type; + u32 start_idx; + u32 count; + enum i40e_sd_entry_type entry_type; + u64 direct_mode_sz; +}; + +struct i40e_hmc_lan_delete_obj_info { + struct i40e_hmc_info *hmc_info; + u32 rsrc_type; + u32 start_idx; + u32 count; 
+}; + +enum i40e_status_code i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num, + u32 rxq_num, u32 fcoe_cntx_num, + u32 fcoe_filt_num); +enum i40e_status_code i40e_configure_lan_hmc(struct i40e_hw *hw, + enum i40e_hmc_model model); +enum i40e_status_code i40e_shutdown_lan_hmc(struct i40e_hw *hw); + +u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num, + u32 fcoe_cntx_num, u32 fcoe_filt_num); +enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_txq *s); +enum i40e_status_code i40e_clear_lan_tx_queue_context(struct i40e_hw *hw, + u16 queue); +enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_txq *s); +enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_rxq *s); +enum i40e_status_code i40e_clear_lan_rx_queue_context(struct i40e_hw *hw, + u16 queue); +enum i40e_status_code i40e_set_lan_rx_queue_context(struct i40e_hw *hw, + u16 queue, + struct i40e_hmc_obj_rxq *s); +enum i40e_status_code i40e_create_lan_hmc_object(struct i40e_hw *hw, + struct i40e_hmc_lan_create_obj_info *info); +enum i40e_status_code i40e_delete_lan_hmc_object(struct i40e_hw *hw, + struct i40e_hmc_lan_delete_obj_info *info); + +#endif /* _I40E_LAN_HMC_H_ */ diff --git a/drivers/net/i40e/base/i40e_nvm.c b/drivers/net/i40e/base/i40e_nvm.c new file mode 100644 index 0000000..c62f5eb --- /dev/null +++ b/drivers/net/i40e/base/i40e_nvm.c @@ -0,0 +1,940 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#include "i40e_prototype.h" + +/** + * i40e_init_nvm_ops - Initialize NVM function pointers + * @hw: pointer to the HW structure + * + * Setup the function pointers and the NVM info structure. Should be called + * once per NVM initialization, e.g. inside the i40e_init_shared_code(). 
+ * Note that the term NVM is used here (and in all functions covered + * in this file) as an equivalent of the flash part mapped into the SR; + * the flash is always accessed through the Shadow RAM. + **/ +enum i40e_status_code i40e_init_nvm(struct i40e_hw *hw) +{ + struct i40e_nvm_info *nvm = &hw->nvm; + enum i40e_status_code ret_code = I40E_SUCCESS; + u32 fla, gens; + u8 sr_size; + + DEBUGFUNC("i40e_init_nvm"); + + /* The SR size is stored regardless of the nvm programming mode + * as the blank mode may be used in the factory line. + */ + gens = rd32(hw, I40E_GLNVM_GENS); + sr_size = ((gens & I40E_GLNVM_GENS_SR_SIZE_MASK) >> + I40E_GLNVM_GENS_SR_SIZE_SHIFT); + /* Switching to words (sr_size encodes a power-of-two number of KB) */ + nvm->sr_size = (1 << sr_size) * I40E_SR_WORDS_IN_1KB; + + /* Check if we are in the normal or blank NVM programming mode */ + fla = rd32(hw, I40E_GLNVM_FLA); + if (fla & I40E_GLNVM_FLA_LOCKED_MASK) { /* Normal programming mode */ + /* Max NVM timeout */ + nvm->timeout = I40E_MAX_NVM_TIMEOUT; + nvm->blank_nvm_mode = false; + } else { /* Blank programming mode */ + nvm->blank_nvm_mode = true; + ret_code = I40E_ERR_NVM_BLANK_MODE; + DEBUGOUT("NVM init error: unsupported blank mode.\n"); + } + + return ret_code; +} + +/** + * i40e_acquire_nvm - Generic request for acquiring the NVM ownership + * @hw: pointer to the HW structure + * @access: NVM access type (read or write) + * + * This function will request NVM ownership for the given access type + * via the proper Admin Command. + **/ +enum i40e_status_code i40e_acquire_nvm(struct i40e_hw *hw, + enum i40e_aq_resource_access_type access) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + u64 gtime, timeout; + u64 time = 0; + + DEBUGFUNC("i40e_acquire_nvm"); + + if (hw->nvm.blank_nvm_mode) + goto i40e_acquire_nvm_exit; + + ret_code = i40e_aq_request_resource(hw, I40E_NVM_RESOURCE_ID, access, + 0, &time, NULL); + /* Reading the Global Device Timer */ + gtime = rd32(hw, I40E_GLVFGEN_TIMER); + + /* Store the timeout */ + hw->nvm.hw_semaphore_timeout = I40E_MS_TO_GTIME(time) + gtime; + + if (ret_code != I40E_SUCCESS) { + /* Set the polling timeout */ + if (time > I40E_MAX_NVM_TIMEOUT) + timeout = I40E_MS_TO_GTIME(I40E_MAX_NVM_TIMEOUT) + + gtime; + else + timeout = hw->nvm.hw_semaphore_timeout; + /* Poll until the current NVM owner times out */ + while (gtime < timeout) { + i40e_msec_delay(10); + ret_code = i40e_aq_request_resource(hw, + I40E_NVM_RESOURCE_ID, + access, 0, &time, + NULL); + if (ret_code == I40E_SUCCESS) { + hw->nvm.hw_semaphore_timeout = + I40E_MS_TO_GTIME(time) + gtime; + break; + } + gtime = rd32(hw, I40E_GLVFGEN_TIMER); + } + if (ret_code != I40E_SUCCESS) { + hw->nvm.hw_semaphore_timeout = 0; + hw->nvm.hw_semaphore_wait = + I40E_MS_TO_GTIME(time) + gtime; + DEBUGOUT1("NVM acquire timed out, wait %llu ms before trying again.\n", + time); + } + } + +i40e_acquire_nvm_exit: + return ret_code; +} + +/** + * i40e_release_nvm - Generic request for releasing the NVM ownership + * @hw: pointer to the HW structure + * + * This function will release the NVM resource via the proper Admin Command. + **/ +void i40e_release_nvm(struct i40e_hw *hw) +{ + DEBUGFUNC("i40e_release_nvm"); + + if (!hw->nvm.blank_nvm_mode) + i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0, NULL); +}
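In normal (non-blank) mode, every Shadow RAM access is expected to be bracketed by this acquire/release pair so firmware can arbitrate ownership between functions. A hedged sketch of the discipline, assuming the driver headers are in scope (i40e_read_nvm_word is defined later in this file):

    /* Sketch only: take the NVM resource, do the access, hand it back. */
    static enum i40e_status_code
    read_with_ownership(struct i40e_hw *hw, u16 offset, u16 *word)
    {
            enum i40e_status_code err;

            err = i40e_acquire_nvm(hw, I40E_RESOURCE_READ);
            if (err != I40E_SUCCESS)
                    return err;     /* another function holds the resource */

            err = i40e_read_nvm_word(hw, offset, word);

            i40e_release_nvm(hw);   /* always release, even on read failure */
            return err;
    }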
+ +/** + * i40e_poll_sr_srctl_done_bit - Polls the GLNVM_SRCTL done bit + * @hw: pointer to the HW structure + * + * Polls the SRCTL Shadow RAM register done bit. + **/ +static enum i40e_status_code i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw) +{ + enum i40e_status_code ret_code = I40E_ERR_TIMEOUT; + u32 srctl, wait_cnt; + + DEBUGFUNC("i40e_poll_sr_srctl_done_bit"); + + /* Poll the I40E_GLNVM_SRCTL until the done bit is set */ + for (wait_cnt = 0; wait_cnt < I40E_SRRD_SRCTL_ATTEMPTS; wait_cnt++) { + srctl = rd32(hw, I40E_GLNVM_SRCTL); + if (srctl & I40E_GLNVM_SRCTL_DONE_MASK) { + ret_code = I40E_SUCCESS; + break; + } + i40e_usec_delay(5); + } + if (ret_code == I40E_ERR_TIMEOUT) + DEBUGOUT("Done bit in GLNVM_SRCTL not set\n"); + return ret_code; +} + +/** + * i40e_read_nvm_word - Reads Shadow RAM + * @hw: pointer to the HW structure + * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF) + * @data: word read from the Shadow RAM + * + * Reads one 16 bit word from the Shadow RAM using the GLNVM_SRCTL register. + **/ +enum i40e_status_code i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, + u16 *data) +{ + enum i40e_status_code ret_code = I40E_ERR_TIMEOUT; + u32 sr_reg; + + DEBUGFUNC("i40e_read_nvm_word"); + + if (offset >= hw->nvm.sr_size) { + DEBUGOUT("NVM read error: Offset beyond Shadow RAM limit.\n"); + ret_code = I40E_ERR_PARAM; + goto read_nvm_exit; + } + + /* Poll the done bit first */ + ret_code = i40e_poll_sr_srctl_done_bit(hw); + if (ret_code == I40E_SUCCESS) { + /* Write the address and start reading */ + sr_reg = (u32)(offset << I40E_GLNVM_SRCTL_ADDR_SHIFT) | + (1 << I40E_GLNVM_SRCTL_START_SHIFT); + wr32(hw, I40E_GLNVM_SRCTL, sr_reg); + + /* Poll I40E_GLNVM_SRCTL until the done bit is set */ + ret_code = i40e_poll_sr_srctl_done_bit(hw); + if (ret_code == I40E_SUCCESS) { + sr_reg = rd32(hw, I40E_GLNVM_SRDATA); + *data = (u16)((sr_reg & + I40E_GLNVM_SRDATA_RDDATA_MASK) + >> I40E_GLNVM_SRDATA_RDDATA_SHIFT); + } + } + if (ret_code != I40E_SUCCESS) + DEBUGOUT1("NVM read error: Couldn't access Shadow RAM address: 0x%x\n", + offset); + +read_nvm_exit: + return ret_code; +} + +/** + * i40e_read_nvm_buffer - Reads Shadow RAM buffer + * @hw: pointer to the HW structure + * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF). + * @words: (in) number of words to read; (out) number of words actually read + * @data: words read from the Shadow RAM + * + * Reads 16 bit words (data buffer) from the SR one word at a time, using the + * i40e_read_nvm_word() method. + **/ +enum i40e_status_code i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, + u16 *words, u16 *data) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + u16 index, word; + + DEBUGFUNC("i40e_read_nvm_buffer"); + + /* Loop through the selected region */ + for (word = 0; word < *words; word++) { + index = offset + word; + ret_code = i40e_read_nvm_word(hw, index, &data[word]); + if (ret_code != I40E_SUCCESS) + break; + } + + /* Update the number of words read from the Shadow RAM */ + *words = word; + + return ret_code; +}
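Note the in/out semantics of the words argument to i40e_read_nvm_buffer(): on return it holds the number of words actually read, which is how a caller detects a short read. A hedged sketch (the offset and length are made up for illustration):

    /* Sketch only: read 8 words from the Shadow RAM, check for short read. */
    static void dump_sr_region(struct i40e_hw *hw)
    {
            u16 data[8];
            u16 words = 8;          /* in: requested; out: actually read */
            u16 i;

            if (i40e_read_nvm_buffer(hw, 0x100, &words, data) != I40E_SUCCESS)
                    DEBUGOUT("Shadow RAM read failed\n");

            for (i = 0; i < words; i++)
                    DEBUGOUT2("SR[0x%x] = 0x%04x\n", 0x100 + i, data[i]);
    }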
+/** + * i40e_write_nvm_aq - Writes Shadow RAM + * @hw: pointer to the HW structure + * @module_pointer: module pointer location in words from the NVM beginning + * @offset: offset in words from module start + * @words: number of words to write + * @data: buffer with words to write to the Shadow RAM + * @last_command: tells the AdminQ that this is the last command + * + * Writes a buffer of 16 bit words to the Shadow RAM using the admin command. + **/ +enum i40e_status_code i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 words, void *data, + bool last_command) +{ + enum i40e_status_code ret_code = I40E_ERR_NVM; + + DEBUGFUNC("i40e_write_nvm_aq"); + + /* Here we are checking the SR limit only for the flat memory model. + * We cannot do it for the module-based model, as we did not acquire + * the NVM resource yet (we cannot get the module pointer value). + * Firmware will check the module-based model. + */ + if ((offset + words) > hw->nvm.sr_size) + DEBUGOUT("NVM write error: offset beyond Shadow RAM limit.\n"); + else if (words > I40E_SR_SECTOR_SIZE_IN_WORDS) + /* We can write only up to 4KB (one sector) in one AQ write */ + DEBUGOUT("NVM write fail error: cannot write more than 4KB in a single write.\n"); + else if (((offset + (words - 1)) / I40E_SR_SECTOR_SIZE_IN_WORDS) + != (offset / I40E_SR_SECTOR_SIZE_IN_WORDS)) + /* A single write cannot spread over two sectors */ + DEBUGOUT("NVM write error: cannot spread over two sectors in a single write.\n"); + else + ret_code = i40e_aq_update_nvm(hw, module_pointer, + 2 * offset, /*bytes*/ + 2 * words, /*bytes*/ + data, last_command, NULL); + + return ret_code; +} + +/** + * i40e_write_nvm_word - Writes Shadow RAM word + * @hw: pointer to the HW structure + * @offset: offset of the Shadow RAM word to write + * @data: word to write to the Shadow RAM + * + * Writes a 16 bit word to the SR using the i40e_write_nvm_aq() method. + * NVM ownership has to be acquired and released (on ARQ completion event + * reception) by the caller. To commit the SR to the NVM, the update checksum + * function should be called. + **/ +enum i40e_status_code i40e_write_nvm_word(struct i40e_hw *hw, u32 offset, + void *data) +{ + DEBUGFUNC("i40e_write_nvm_word"); + + /* Value 0x00 below means that we treat SR as a flat mem */ + return i40e_write_nvm_aq(hw, 0x00, offset, 1, data, false); +} + +/** + * i40e_write_nvm_buffer - Writes Shadow RAM buffer + * @hw: pointer to the HW structure + * @module_pointer: module pointer location in words from the NVM beginning + * @offset: offset of the Shadow RAM buffer to write + * @words: number of words to write + * @data: words to write to the Shadow RAM + * + * Writes a buffer of 16 bit words to the Shadow RAM using the admin command. + * NVM ownership must be acquired before calling this function and released + * on ARQ completion event reception by the caller. To commit the SR to the + * NVM, the update checksum function should be called. + **/ +enum i40e_status_code i40e_write_nvm_buffer(struct i40e_hw *hw, + u8 module_pointer, u32 offset, + u16 words, void *data) +{ + DEBUGFUNC("i40e_write_nvm_buffer"); + + /* Here we will only write one buffer as the size of the modules + * mirrored in the Shadow RAM is always less than 4K. + */ + return i40e_write_nvm_aq(hw, module_pointer, offset, words, + data, false); +} + +/** + * i40e_calc_nvm_checksum - Calculates and returns the checksum + * @hw: pointer to hardware structure + * @checksum: pointer to the checksum + * + * This function calculates a SW checksum that covers the whole 64kB shadow RAM + * except the VPD and PCIe ALT Auto-load modules. The structure and size of the + * VPD module are customer specific and unknown, so this function skips the + * maximum possible size of the VPD module (1kB).
+ **/ +enum i40e_status_code i40e_calc_nvm_checksum(struct i40e_hw *hw, u16 *checksum) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + u16 pcie_alt_module = 0; + u16 checksum_local = 0; + u16 vpd_module = 0; + u16 word = 0; + u32 i = 0; + + DEBUGFUNC("i40e_calc_nvm_checksum"); + + /* read pointer to VPD area */ + ret_code = i40e_read_nvm_word(hw, I40E_SR_VPD_PTR, &vpd_module); + if (ret_code != I40E_SUCCESS) { + ret_code = I40E_ERR_NVM_CHECKSUM; + goto i40e_calc_nvm_checksum_exit; + } + + /* read pointer to PCIe Alt Auto-load module */ + ret_code = i40e_read_nvm_word(hw, I40E_SR_PCIE_ALT_AUTO_LOAD_PTR, + &pcie_alt_module); + if (ret_code != I40E_SUCCESS) { + ret_code = I40E_ERR_NVM_CHECKSUM; + goto i40e_calc_nvm_checksum_exit; + } + + /* Calculate SW checksum that covers the whole 64kB shadow RAM + * except the VPD and PCIe ALT Auto-load modules + */ + for (i = 0; i < hw->nvm.sr_size; i++) { + /* Skip Checksum word */ + if (i == I40E_SR_SW_CHECKSUM_WORD) + i++; + /* Skip VPD module (convert byte size to word count) */ + if (i == (u32)vpd_module) { + i += (I40E_SR_VPD_MODULE_MAX_SIZE / 2); + if (i >= hw->nvm.sr_size) + break; + } + /* Skip PCIe ALT module (convert byte size to word count) */ + if (i == (u32)pcie_alt_module) { + i += (I40E_SR_PCIE_ALT_MODULE_MAX_SIZE / 2); + if (i >= hw->nvm.sr_size) + break; + } + + ret_code = i40e_read_nvm_word(hw, (u16)i, &word); + if (ret_code != I40E_SUCCESS) { + ret_code = I40E_ERR_NVM_CHECKSUM; + goto i40e_calc_nvm_checksum_exit; + } + checksum_local += word; + } + + *checksum = (u16)I40E_SR_SW_CHECKSUM_BASE - checksum_local; + +i40e_calc_nvm_checksum_exit: + return ret_code; +} + +/** + * i40e_update_nvm_checksum - Updates the NVM checksum + * @hw: pointer to hardware structure + * + * NVM ownership must be acquired before calling this function and released + * on ARQ completion event reception by caller. + * This function will commit SR to NVM. + **/ +enum i40e_status_code i40e_update_nvm_checksum(struct i40e_hw *hw) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + u16 checksum; + + DEBUGFUNC("i40e_update_nvm_checksum"); + + ret_code = i40e_calc_nvm_checksum(hw, &checksum); + if (ret_code == I40E_SUCCESS) + ret_code = i40e_write_nvm_aq(hw, 0x00, I40E_SR_SW_CHECKSUM_WORD, + 1, &checksum, true); + + return ret_code; +} + +/** + * i40e_validate_nvm_checksum - Validate EEPROM checksum + * @hw: pointer to hardware structure + * @checksum: calculated checksum + * + * Performs checksum calculation and validates the NVM SW checksum. If the + * caller does not need checksum, the value can be NULL. + **/ +enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw, + u16 *checksum) +{ + enum i40e_status_code ret_code = I40E_SUCCESS; + u16 checksum_sr = 0; + u16 checksum_local = 0; + + DEBUGFUNC("i40e_validate_nvm_checksum"); + + ret_code = i40e_calc_nvm_checksum(hw, &checksum_local); + if (ret_code != I40E_SUCCESS) + goto i40e_validate_nvm_checksum_exit; + + /* Do not use i40e_read_nvm_word() because we do not want to take + * the synchronization semaphores twice here. 
+ */ + i40e_read_nvm_word(hw, I40E_SR_SW_CHECKSUM_WORD, &checksum_sr); + + /* Verify read checksum from EEPROM is the same as + * calculated checksum + */ + if (checksum_local != checksum_sr) + ret_code = I40E_ERR_NVM_CHECKSUM; + + /* If the user cares, return the calculated checksum */ + if (checksum) + *checksum = checksum_local; + +i40e_validate_nvm_checksum_exit: + return ret_code; +} + +STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err); +STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err); +STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err); +STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + int *err); +STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + int *err); +STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err); +STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err); +STATIC inline u8 i40e_nvmupd_get_module(u32 val) +{ + return (u8)(val & I40E_NVM_MOD_PNT_MASK); +} +STATIC inline u8 i40e_nvmupd_get_transaction(u32 val) +{ + return (u8)((val & I40E_NVM_TRANS_MASK) >> I40E_NVM_TRANS_SHIFT); +} + +/** + * i40e_nvmupd_command - Process an NVM update command + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command + * @bytes: pointer to the data buffer + * @err: pointer to return error code + * + * Dispatches command depending on what update state is current + **/ +enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err) +{ + enum i40e_status_code status; + + DEBUGFUNC("i40e_nvmupd_command"); + + /* assume success */ + *err = 0; + + switch (hw->nvmupd_state) { + case I40E_NVMUPD_STATE_INIT: + status = i40e_nvmupd_state_init(hw, cmd, bytes, err); + break; + + case I40E_NVMUPD_STATE_READING: + status = i40e_nvmupd_state_reading(hw, cmd, bytes, err); + break; + + case I40E_NVMUPD_STATE_WRITING: + status = i40e_nvmupd_state_writing(hw, cmd, bytes, err); + break; + + default: + /* invalid state, should never happen */ + status = I40E_NOT_SUPPORTED; + *err = -ESRCH; + break; + } + return status; +} + +/** + * i40e_nvmupd_state_init - Handle NVM update state Init + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command buffer + * @bytes: pointer to the data buffer + * @err: pointer to return error code + * + * Process legitimate commands of the Init state and conditionally set next + * state. Reject all other commands. 
+ **/ +STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err) +{ + enum i40e_status_code status = I40E_SUCCESS; + enum i40e_nvmupd_cmd upd_cmd; + + DEBUGFUNC("i40e_nvmupd_state_init"); + + upd_cmd = i40e_nvmupd_validate_command(hw, cmd, err); + + switch (upd_cmd) { + case I40E_NVMUPD_READ_SA: + status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ); + if (status) { + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + } else { + status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); + i40e_release_nvm(hw); + } + break; + + case I40E_NVMUPD_READ_SNT: + status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ); + if (status) { + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + } else { + status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); + hw->nvmupd_state = I40E_NVMUPD_STATE_READING; + } + break; + + case I40E_NVMUPD_WRITE_ERA: + status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); + if (status) { + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + } else { + status = i40e_nvmupd_nvm_erase(hw, cmd, err); + if (status) + i40e_release_nvm(hw); + else + hw->aq.nvm_release_on_done = true; + } + break; + + case I40E_NVMUPD_WRITE_SA: + status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); + if (status) { + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + } else { + status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); + if (status) + i40e_release_nvm(hw); + else + hw->aq.nvm_release_on_done = true; + } + break; + + case I40E_NVMUPD_WRITE_SNT: + status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); + if (status) { + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + } else { + status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); + hw->nvmupd_state = I40E_NVMUPD_STATE_WRITING; + } + break; + + case I40E_NVMUPD_CSUM_SA: + status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); + if (status) { + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + } else { + status = i40e_update_nvm_checksum(hw); + if (status) { + *err = hw->aq.asq_last_status ? + i40e_aq_rc_to_posix(hw->aq.asq_last_status) : + -EIO; + i40e_release_nvm(hw); + } else { + hw->aq.nvm_release_on_done = true; + } + } + break; + + default: + status = I40E_ERR_NVM; + *err = -ESRCH; + break; + } + return status; +} + +/** + * i40e_nvmupd_state_reading - Handle NVM update state Reading + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command buffer + * @bytes: pointer to the data buffer + * @err: pointer to return error code + * + * NVM ownership is already held. Process legitimate commands and set any + * change in state; reject all other commands. 
+ **/ +STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err) +{ + enum i40e_status_code status; + enum i40e_nvmupd_cmd upd_cmd; + + DEBUGFUNC("i40e_nvmupd_state_reading"); + + upd_cmd = i40e_nvmupd_validate_command(hw, cmd, err); + + switch (upd_cmd) { + case I40E_NVMUPD_READ_SA: + case I40E_NVMUPD_READ_CON: + status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); + break; + + case I40E_NVMUPD_READ_LCB: + status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); + i40e_release_nvm(hw); + hw->nvmupd_state = I40E_NVMUPD_STATE_INIT; + break; + + default: + status = I40E_NOT_SUPPORTED; + *err = -ESRCH; + break; + } + return status; +} + +/** + * i40e_nvmupd_state_writing - Handle NVM update state Writing + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command buffer + * @bytes: pointer to the data buffer + * @err: pointer to return error code + * + * NVM ownership is already held. Process legitimate commands and set any + * change in state; reject all other commands + **/ +STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err) +{ + enum i40e_status_code status; + enum i40e_nvmupd_cmd upd_cmd; + + DEBUGFUNC("i40e_nvmupd_state_writing"); + + upd_cmd = i40e_nvmupd_validate_command(hw, cmd, err); + + switch (upd_cmd) { + case I40E_NVMUPD_WRITE_CON: + status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); + break; + + case I40E_NVMUPD_WRITE_LCB: + status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); + if (!status) { + hw->aq.nvm_release_on_done = true; + hw->nvmupd_state = I40E_NVMUPD_STATE_INIT; + } + break; + + case I40E_NVMUPD_CSUM_CON: + status = i40e_update_nvm_checksum(hw); + if (status) + *err = hw->aq.asq_last_status ? + i40e_aq_rc_to_posix(hw->aq.asq_last_status) : + -EIO; + break; + + case I40E_NVMUPD_CSUM_LCB: + status = i40e_update_nvm_checksum(hw); + if (status) { + *err = hw->aq.asq_last_status ? 
+ i40e_aq_rc_to_posix(hw->aq.asq_last_status) : + -EIO; + } else { + hw->aq.nvm_release_on_done = true; + hw->nvmupd_state = I40E_NVMUPD_STATE_INIT; + } + break; + + default: + status = I40E_NOT_SUPPORTED; + *err = -ESRCH; + break; + } + return status; +} + +/** + * i40e_nvmupd_validate_command - Validate given command + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command buffer + * @err: pointer to return error code + * + * Return one of the valid command types or I40E_NVMUPD_INVALID + **/ +STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + int *err) +{ + enum i40e_nvmupd_cmd upd_cmd; + u8 transaction, module; + + DEBUGFUNC("i40e_nvmupd_validate_command\n"); + + /* anything that doesn't match a recognized case is an error */ + upd_cmd = I40E_NVMUPD_INVALID; + + transaction = i40e_nvmupd_get_transaction(cmd->config); + module = i40e_nvmupd_get_module(cmd->config); + + /* limits on data size */ + if ((cmd->data_size < 1) || + (cmd->data_size > I40E_NVMUPD_MAX_DATA)) { + DEBUGOUT1("i40e_nvmupd_validate_command data_size %d\n", + cmd->data_size); + *err = -EFAULT; + return I40E_NVMUPD_INVALID; + } + + switch (cmd->command) { + case I40E_NVM_READ: + switch (transaction) { + case I40E_NVM_CON: + upd_cmd = I40E_NVMUPD_READ_CON; + break; + case I40E_NVM_SNT: + upd_cmd = I40E_NVMUPD_READ_SNT; + break; + case I40E_NVM_LCB: + upd_cmd = I40E_NVMUPD_READ_LCB; + break; + case I40E_NVM_SA: + upd_cmd = I40E_NVMUPD_READ_SA; + break; + } + break; + + case I40E_NVM_WRITE: + switch (transaction) { + case I40E_NVM_CON: + upd_cmd = I40E_NVMUPD_WRITE_CON; + break; + case I40E_NVM_SNT: + upd_cmd = I40E_NVMUPD_WRITE_SNT; + break; + case I40E_NVM_LCB: + upd_cmd = I40E_NVMUPD_WRITE_LCB; + break; + case I40E_NVM_SA: + upd_cmd = I40E_NVMUPD_WRITE_SA; + break; + case I40E_NVM_ERA: + upd_cmd = I40E_NVMUPD_WRITE_ERA; + break; + case I40E_NVM_CSUM: + upd_cmd = I40E_NVMUPD_CSUM_CON; + break; + case (I40E_NVM_CSUM|I40E_NVM_SA): + upd_cmd = I40E_NVMUPD_CSUM_SA; + break; + case (I40E_NVM_CSUM|I40E_NVM_LCB): + upd_cmd = I40E_NVMUPD_CSUM_LCB; + break; + } + break; + } + + if (upd_cmd == I40E_NVMUPD_INVALID) { + *err = -EFAULT; + DEBUGOUT2( + "i40e_nvmupd_validate_command returns %d err: %d\n", + upd_cmd, *err); + } + return upd_cmd; +} + +/** + * i40e_nvmupd_nvm_read - Read NVM + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command buffer + * @bytes: pointer to the data buffer + * @err: pointer to return error code + * + * cmd structure contains identifiers and data buffer + **/ +STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err) +{ + enum i40e_status_code status; + u8 module, transaction; + bool last; + + transaction = i40e_nvmupd_get_transaction(cmd->config); + module = i40e_nvmupd_get_module(cmd->config); + last = (transaction == I40E_NVM_LCB) || (transaction == I40E_NVM_SA); + DEBUGOUT3("i40e_nvmupd_nvm_read mod 0x%x off 0x%x len 0x%x\n", + module, cmd->offset, cmd->data_size); + + status = i40e_aq_read_nvm(hw, module, cmd->offset, (u16)cmd->data_size, + bytes, last, NULL); + DEBUGOUT1("i40e_nvmupd_nvm_read status %d\n", status); + if (status != I40E_SUCCESS) + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + + return status; +} + +/** + * i40e_nvmupd_nvm_erase - Erase an NVM module + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command buffer + * @err: pointer to return error code + * + * module, offset, data_size 
and data are in cmd structure + **/ +STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + int *err) +{ + enum i40e_status_code status = I40E_SUCCESS; + u8 module, transaction; + bool last; + + transaction = i40e_nvmupd_get_transaction(cmd->config); + module = i40e_nvmupd_get_module(cmd->config); + last = (transaction & I40E_NVM_LCB); + DEBUGOUT3("i40e_nvmupd_nvm_erase mod 0x%x off 0x%x len 0x%x\n", + module, cmd->offset, cmd->data_size); + status = i40e_aq_erase_nvm(hw, module, cmd->offset, (u16)cmd->data_size, + last, NULL); + DEBUGOUT1("i40e_nvmupd_nvm_erase status %d\n", status); + if (status != I40E_SUCCESS) + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + + return status; +} + +/** + * i40e_nvmupd_nvm_write - Write NVM + * @hw: pointer to hardware structure + * @cmd: pointer to nvm update command buffer + * @bytes: pointer to the data buffer + * @err: pointer to return error code + * + * module, offset, data_size and data are in cmd structure + **/ +STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *err) +{ + enum i40e_status_code status = I40E_SUCCESS; + u8 module, transaction; + bool last; + + transaction = i40e_nvmupd_get_transaction(cmd->config); + module = i40e_nvmupd_get_module(cmd->config); + last = (transaction & I40E_NVM_LCB); + DEBUGOUT3("i40e_nvmupd_nvm_write mod 0x%x off 0x%x len 0x%x\n", + module, cmd->offset, cmd->data_size); + status = i40e_aq_update_nvm(hw, module, cmd->offset, + (u16)cmd->data_size, bytes, last, NULL); + DEBUGOUT1("i40e_nvmupd_nvm_write status %d\n", status); + if (status != I40E_SUCCESS) + *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); + + return status; +} diff --git a/drivers/net/i40e/base/i40e_osdep.h b/drivers/net/i40e/base/i40e_osdep.h new file mode 100644 index 0000000..de71b0d --- /dev/null +++ b/drivers/net/i40e/base/i40e_osdep.h @@ -0,0 +1,197 @@ +/****************************************************************************** + + Copyright (c) 2001-2014, Intel Corporation + All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" + AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE + LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF + SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS + INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN + CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE + POSSIBILITY OF SUCH DAMAGE. +******************************************************************************/ + +#ifndef _I40E_OSDEP_H_ +#define _I40E_OSDEP_H_ + +#include <string.h> +#include <stdint.h> +#include <stdio.h> +#include <stdarg.h> + +#include <rte_common.h> +#include <rte_memcpy.h> +#include <rte_byteorder.h> +#include <rte_cycles.h> +#include <rte_spinlock.h> +#include <rte_log.h> + +#include "../i40e_logs.h" + +#define INLINE inline +#define STATIC static + +typedef uint8_t u8; +typedef int8_t s8; +typedef uint16_t u16; +typedef uint32_t u32; +typedef int32_t s32; +typedef uint64_t u64; +typedef int bool; + +typedef enum i40e_status_code i40e_status; +#define __iomem +#define hw_dbg(hw, S, A...) do {} while (0) +#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) +#define lower_32_bits(n) ((u32)(n)) +#define low_16_bits(x) ((x) & 0xFFFF) +#define high_16_bits(x) (((x) & 0xFFFF0000) >> 16) + +#ifndef ETH_ADDR_LEN +#define ETH_ADDR_LEN 6 +#endif + +#ifndef __le16 +#define __le16 uint16_t +#endif +#ifndef __le32 +#define __le32 uint32_t +#endif +#ifndef __le64 +#define __le64 uint64_t +#endif +#ifndef __be16 +#define __be16 uint16_t +#endif +#ifndef __be32 +#define __be32 uint32_t +#endif +#ifndef __be64 +#define __be64 uint64_t +#endif + +#define FALSE 0 +#define TRUE 1 +#define false 0 +#define true 1 + +#define min(a,b) RTE_MIN(a,b) +#define max(a,b) RTE_MAX(a,b) + +#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f)) +#define ASSERT(x) if(!(x)) rte_panic("I40E: x") + +#define DEBUGOUT(S) PMD_DRV_LOG_RAW(DEBUG, S) +#define DEBUGOUT1(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A) + +#define DEBUGFUNC(F) DEBUGOUT(F "\n") +#define DEBUGOUT2 DEBUGOUT1 +#define DEBUGOUT3 DEBUGOUT2 +#define DEBUGOUT6 DEBUGOUT3 +#define DEBUGOUT7 DEBUGOUT6 + +#define i40e_debug(h, m, s, ...)
\ +do { \ + if (((m) & (h)->debug_mask)) \ + PMD_DRV_LOG_RAW(DEBUG, "i40e %02x.%x " s, \ + (h)->bus.device, (h)->bus.func, \ + ##__VA_ARGS__); \ +} while (0) + +#define I40E_PCI_REG(reg) (*((volatile uint32_t *)(reg))) +#define I40E_PCI_REG_ADDR(a, reg) \ + ((volatile uint32_t *)((char *)(a)->hw_addr + (reg))) +static inline uint32_t i40e_read_addr(volatile void *addr) +{ + return I40E_PCI_REG(addr); +} +#define I40E_PCI_REG_WRITE(reg, value) \ + do {I40E_PCI_REG((reg)) = (value);} while(0) + +#define I40E_WRITE_FLUSH(a) I40E_READ_REG(a, I40E_GLGEN_STAT) +#define I40EVF_WRITE_FLUSH(a) I40E_READ_REG(a, I40E_VFGEN_RSTAT) + +#define I40E_READ_REG(hw, reg) i40e_read_addr(I40E_PCI_REG_ADDR((hw), (reg))) +#define I40E_WRITE_REG(hw, reg, value) \ + I40E_PCI_REG_WRITE(I40E_PCI_REG_ADDR((hw), (reg)), (value)) + +#define rd32(a, reg) i40e_read_addr(I40E_PCI_REG_ADDR((a), (reg))) +#define wr32(a, reg, value) \ + I40E_PCI_REG_WRITE(I40E_PCI_REG_ADDR((a), (reg)), (value)) +#define flush(a) i40e_read_addr(I40E_PCI_REG_ADDR((a), (I40E_GLGEN_STAT))) + +#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0])) + +/* memory allocation tracking */ +struct i40e_dma_mem { + void *va; + u64 pa; + u32 size; + u64 id; +} __attribute__((packed)); + +#define i40e_allocate_dma_mem(h, m, unused, s, a) \ + i40e_allocate_dma_mem_d(h, m, s, a) +#define i40e_free_dma_mem(h, m) i40e_free_dma_mem_d(h, m) + +struct i40e_virt_mem { + void *va; + u32 size; +} __attribute__((packed)); + +#define i40e_allocate_virt_mem(h, m, s) i40e_allocate_virt_mem_d(h, m, s) +#define i40e_free_virt_mem(h, m) i40e_free_virt_mem_d(h, m) + +#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) +#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) +#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) +#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) +#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) +#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) + +/* SW spinlock */ +struct i40e_spinlock { + rte_spinlock_t spinlock; +}; + +#define i40e_init_spinlock(_sp) i40e_init_spinlock_d(_sp) +#define i40e_acquire_spinlock(_sp) i40e_acquire_spinlock_d(_sp) +#define i40e_release_spinlock(_sp) i40e_release_spinlock_d(_sp) +#define i40e_destroy_spinlock(_sp) i40e_destroy_spinlock_d(_sp) + +#define I40E_NTOHS(a) rte_be_to_cpu_16(a) +#define I40E_NTOHL(a) rte_be_to_cpu_32(a) +#define I40E_HTONS(a) rte_cpu_to_be_16(a) +#define I40E_HTONL(a) rte_cpu_to_be_32(a) + +#define i40e_memset(a, b, c, d) memset((a), (b), (c)) +#define i40e_memcpy(a, b, c, d) rte_memcpy((a), (b), (c)) + +#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d)) +#define DELAY(x) rte_delay_us(x) +#define i40e_usec_delay(x) rte_delay_us(x) +#define i40e_msec_delay(x) rte_delay_us(1000*(x)) +#define udelay(x) DELAY(x) +#define msleep(x) DELAY(1000*(x)) +#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) + +#endif /* _I40E_OSDEP_H_ */ diff --git a/drivers/net/i40e/base/i40e_prototype.h b/drivers/net/i40e/base/i40e_prototype.h new file mode 100644 index 0000000..f3215cf --- /dev/null +++ b/drivers/net/i40e/base/i40e_prototype.h @@ -0,0 +1,430 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. 
Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _I40E_PROTOTYPE_H_ +#define _I40E_PROTOTYPE_H_ + +#include "i40e_type.h" +#include "i40e_alloc.h" +#include "i40e_virtchnl.h" + +/* Prototypes for shared code functions that are not in + * the standard function pointer structures. These are + * mostly because they are needed even before the init + * has happened and will assist in the early SW and FW + * setup. + */ + +/* adminq functions */ +enum i40e_status_code i40e_init_adminq(struct i40e_hw *hw); +enum i40e_status_code i40e_shutdown_adminq(struct i40e_hw *hw); +enum i40e_status_code i40e_init_asq(struct i40e_hw *hw); +enum i40e_status_code i40e_init_arq(struct i40e_hw *hw); +enum i40e_status_code i40e_alloc_adminq_asq_ring(struct i40e_hw *hw); +enum i40e_status_code i40e_alloc_adminq_arq_ring(struct i40e_hw *hw); +enum i40e_status_code i40e_shutdown_asq(struct i40e_hw *hw); +enum i40e_status_code i40e_shutdown_arq(struct i40e_hw *hw); +u16 i40e_clean_asq(struct i40e_hw *hw); +void i40e_free_adminq_asq(struct i40e_hw *hw); +void i40e_free_adminq_arq(struct i40e_hw *hw); +void i40e_adminq_init_ring_data(struct i40e_hw *hw); +enum i40e_status_code i40e_clean_arq_element(struct i40e_hw *hw, + struct i40e_arq_event_info *e, + u16 *events_pending); +enum i40e_status_code i40e_asq_send_command(struct i40e_hw *hw, + struct i40e_aq_desc *desc, + void *buff, /* can be NULL */ + u16 buff_size, + struct i40e_asq_cmd_details *cmd_details); +bool i40e_asq_done(struct i40e_hw *hw); + +/* debug function for adminq */ +void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, + void *desc, void *buffer, u16 buf_len); + +void i40e_idle_aq(struct i40e_hw *hw); +void i40e_resume_aq(struct i40e_hw *hw); +bool i40e_check_asq_alive(struct i40e_hw *hw); +enum i40e_status_code i40e_aq_queue_shutdown(struct i40e_hw *hw, bool unloading); + +#ifndef VF_DRIVER + +u32 i40e_led_get(struct i40e_hw *hw); +void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink); + +/* admin send queue commands */ + +enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw, + u16 *fw_major_version, u16 *fw_minor_version, + u16 *api_major_version, u16 *api_minor_version, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_debug_write_register(struct i40e_hw *hw, + u32 reg_addr, u64 reg_val, + 
struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw, + bool qualified_modules, bool report_init, + struct i40e_aq_get_phy_abilities_resp *abilities, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw, + struct i40e_aq_set_phy_config *config, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, + bool atomic_reset); +enum i40e_status_code i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_mac_config(struct i40e_hw *hw, + u16 max_frame_size, bool crc_en, u16 pacing, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw, + u64 *advt_reg, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw, + u64 *advt_reg, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw, u16 lb_modes, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw, + bool enable_link, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_link_info(struct i40e_hw *hw, + bool enable_lse, struct i40e_link_status *link, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw, + bool enable_lse); +enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw, + u64 advt_reg, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw, + struct i40e_driver_version *dv, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, + u16 vsi_id, bool set_filter, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, + u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, + u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_update_vsi_params(struct i40e_hw *hw, + struct i40e_vsi_context *vsi_ctx, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, + u16 downlink_seid, u8 enabled_tc, + bool default_port, bool enable_l2_filtering, + u16 *pveb_seid, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_veb_parameters(struct i40e_hw *hw, + u16 veb_seid, u16 *switch_id, bool *floating, + u16 *statistic_index, u16 *vebs_used, + u16 *vebs_free, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code 
i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id, + struct i40e_aqc_add_macvlan_element_data *mv_list, + u16 count, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id, + struct i40e_aqc_remove_macvlan_element_data *mv_list, + u16 count, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_vlan(struct i40e_hw *hw, u16 vsi_id, + struct i40e_aqc_add_remove_vlan_element_data *v_list, + u8 count, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_remove_vlan(struct i40e_hw *hw, u16 vsi_id, + struct i40e_aqc_add_remove_vlan_element_data *v_list, + u8 count, struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, + u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_switch_config(struct i40e_hw *hw, + struct i40e_aqc_get_switch_config_resp *buf, + u16 buf_size, u16 *start_seid, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_request_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + enum i40e_aq_resource_access_type access, + u8 sdp_number, u64 *timeout, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_release_resource(struct i40e_hw *hw, + enum i40e_aq_resources_ids resource, + u8 sdp_number, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, bool last_command, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_discover_capabilities(struct i40e_hw *hw, + void *buff, u16 buff_size, u16 *data_size, + enum i40e_admin_queue_opc list_type_opc, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, + u32 offset, u16 length, void *data, + bool last_command, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, + u8 mib_type, void *buff, u16 buff_size, + u16 *local_len, u16 *remote_len, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, + bool enable_update, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_lldp_tlv(struct i40e_hw *hw, u8 bridge_type, + void *buff, u16 buff_size, u16 tlv_len, + u16 *mib_len, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_update_lldp_tlv(struct i40e_hw *hw, + u8 bridge_type, void *buff, u16 buff_size, + u16 old_len, u16 new_len, u16 offset, + u16 *mib_len, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_delete_lldp_tlv(struct i40e_hw *hw, + u8 bridge_type, void *buff, u16 buff_size, + u16 tlv_len, u16 *mib_len, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_udp_tunnel(struct i40e_hw *hw, + u16 udp_port, u8 protocol_index, + u8 *filter_index, + struct i40e_asq_cmd_details *cmd_details); 
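[Note: every AQ wrapper declared in this header follows the same calling convention: fill in an i40e_aq_desc, post it through i40e_asq_send_command(), and return an i40e_status_code, with a NULL cmd_details selecting the default completion handling. A minimal, hypothetical sketch using the UDP tunnel pair declared at this point; the function name is invented, the port and protocol index are caller-supplied placeholders, and I40E_SUCCESS comes from i40e_status.h elsewhere in this patch. Not part of the diff:

static enum i40e_status_code
example_toggle_udp_tunnel(struct i40e_hw *hw, u16 udp_port, u8 proto_idx)
{
	enum i40e_status_code status;
	u8 filter_index;

	/* Ask firmware to recognize udp_port as a tunnel port; on
	 * success it reports back the filter slot it allocated. */
	status = i40e_aq_add_udp_tunnel(hw, udp_port, proto_idx,
					&filter_index, NULL);
	if (status != I40E_SUCCESS)
		return status;

	/* ... tunnel in use ...; tear it down again by slot index. */
	return i40e_aq_del_udp_tunnel(hw, filter_index, NULL);
}
]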
+enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw, + u8 *num_entries, + struct i40e_aqc_switch_resource_alloc_element_resp *buf, + u16 count, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags, + u16 mac_seid, u16 vsi_seid, + u16 *ret_seid); +enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue, + u16 vsi_seid, u16 tag, u16 queue_num, + u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid, + u16 tag, u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pe_seid, + u16 etag, u8 num_tags_in_buf, void *buf, + u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pe_seid, + u16 etag, u16 *tags_used, u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid, + u16 old_tag, u16 new_tag, u16 *tags_used, + u16 *tags_free, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid, + u16 vlan_id, u16 *stat_index, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid, + u16 vlan_id, u16 stat_index, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_port_parameters(struct i40e_hw *hw, + u16 bad_frame_vsi, bool save_bad_pac, + bool pad_short_pac, bool double_vlan, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_mac_address_write(struct i40e_hw *hw, + u16 flags, u8 *mac_addr, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, + u16 seid, u16 credit, u8 max_credit, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw, + u8 tcmap, bool request, u8 *tcmap_ret, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_get_hmc_resource_profile(struct i40e_hw *hw, + enum i40e_aq_hmc_profile *profile, + u8 *pe_vf_enabled_count, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit( + struct i40e_hw *hw, u16 seid, + struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_config_vsi_ets_sla_bw_limit(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_configure_vsi_ets_sla_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw, + enum i40e_aq_hmc_profile profile, + u8 pe_vf_enabled_count, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw, + u16 seid, u16 credit, u8 max_bw, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, 
u16 seid, + struct i40e_aqc_configure_vsi_tc_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_port_ets_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, + u16 seid, + struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_aq_add_cloud_filters(struct i40e_hw *hw, + u16 vsi, + struct i40e_aqc_add_remove_cloud_filters_element_data *filters, + u8 filter_count); + +enum i40e_status_code i40e_aq_remove_cloud_filters(struct i40e_hw *hw, + u16 vsi, + struct i40e_aqc_add_remove_cloud_filters_element_data *filters, + u8 filter_count); + +enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw, + u32 reg_addr0, u32 *reg_val0, + u32 reg_addr1, u32 *reg_val1); +enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw, + u32 addr, u32 dw_count, void *buffer); +enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw, + u32 reg_addr0, u32 reg_val0, + u32 reg_addr1, u32 reg_val1); +enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw, + u32 addr, u32 dw_count, void *buffer); +enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw); +enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw, + u8 bios_mode, bool *reset_needed); +enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw, + u8 oem_mode); + +/* i40e_common */ +enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw); +enum i40e_status_code i40e_pf_reset(struct i40e_hw *hw); +void i40e_clear_hw(struct i40e_hw *hw); +void i40e_clear_pxe_mode(struct i40e_hw *hw); +bool i40e_get_link_status(struct i40e_hw *hw); +enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr); +enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw, + u32 *max_bw, u32 *min_bw, bool *min_valid, bool *max_valid); +enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw, + struct i40e_aqc_configure_partition_bw_data *bw_data, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr); +void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable); +enum i40e_status_code i40e_validate_mac_addr(u8 *mac_addr); +enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw); +/* prototype for functions used for NVM access */ +enum i40e_status_code i40e_init_nvm(struct i40e_hw *hw); +enum i40e_status_code i40e_acquire_nvm(struct i40e_hw *hw, + enum i40e_aq_resource_access_type access); +void i40e_release_nvm(struct i40e_hw *hw); +enum i40e_status_code i40e_read_nvm_srrd(struct i40e_hw *hw, u16 offset, + u16 
*data); +enum i40e_status_code i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, + u16 *data); +enum i40e_status_code i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, + u16 *words, u16 *data); +enum i40e_status_code i40e_write_nvm_aq(struct i40e_hw *hw, u8 module, + u32 offset, u16 words, void *data, + bool last_command); +enum i40e_status_code i40e_write_nvm_word(struct i40e_hw *hw, u32 offset, + void *data); +enum i40e_status_code i40e_write_nvm_buffer(struct i40e_hw *hw, u8 module, + u32 offset, u16 words, void *data); +enum i40e_status_code i40e_calc_nvm_checksum(struct i40e_hw *hw, u16 *checksum); +enum i40e_status_code i40e_update_nvm_checksum(struct i40e_hw *hw); +enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw, + u16 *checksum); +enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw, + struct i40e_nvm_access *cmd, + u8 *bytes, int *); +void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status); +#endif /* VF_DRIVER */ + +#if defined(I40E_QV) || defined(VF_DRIVER) +enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw); + +#endif +extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[]; + +STATIC INLINE struct i40e_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype) +{ + return i40e_ptype_lookup[ptype]; +} + +/* prototype for functions used for SW spinlocks */ +void i40e_init_spinlock(struct i40e_spinlock *sp); +void i40e_acquire_spinlock(struct i40e_spinlock *sp); +void i40e_release_spinlock(struct i40e_spinlock *sp); +void i40e_destroy_spinlock(struct i40e_spinlock *sp); + +/* i40e_common for VF drivers*/ +void i40e_vf_parse_hw_config(struct i40e_hw *hw, + struct i40e_virtchnl_vf_resource *msg); +enum i40e_status_code i40e_vf_reset(struct i40e_hw *hw); +enum i40e_status_code i40e_aq_send_msg_to_pf(struct i40e_hw *hw, + enum i40e_virtchnl_ops v_opcode, + enum i40e_status_code v_retval, + u8 *msg, u16 msglen, + struct i40e_asq_cmd_details *cmd_details); +enum i40e_status_code i40e_set_filter_control(struct i40e_hw *hw, + struct i40e_filter_control_settings *settings); +enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, + u8 *mac_addr, u16 ethtype, u16 flags, + u16 vsi_seid, u16 queue, bool is_add, + struct i40e_control_filter_stats *stats, + struct i40e_asq_cmd_details *cmd_details); +#endif /* _I40E_PROTOTYPE_H_ */ diff --git a/drivers/net/i40e/base/i40e_register.h b/drivers/net/i40e/base/i40e_register.h new file mode 100644 index 0000000..f236c39 --- /dev/null +++ b/drivers/net/i40e/base/i40e_register.h @@ -0,0 +1,3377 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _I40E_REGISTER_H_ +#define _I40E_REGISTER_H_ + + +#define I40E_GL_ARQBAH 0x000801C0 /* Reset: EMPR */ +#define I40E_GL_ARQBAH_ARQBAH_SHIFT 0 +#define I40E_GL_ARQBAH_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ARQBAH_ARQBAH_SHIFT) +#define I40E_GL_ARQBAL 0x000800C0 /* Reset: EMPR */ +#define I40E_GL_ARQBAL_ARQBAL_SHIFT 0 +#define I40E_GL_ARQBAL_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ARQBAL_ARQBAL_SHIFT) +#define I40E_GL_ARQH 0x000803C0 /* Reset: EMPR */ +#define I40E_GL_ARQH_ARQH_SHIFT 0 +#define I40E_GL_ARQH_ARQH_MASK I40E_MASK(0x3FF, I40E_GL_ARQH_ARQH_SHIFT) +#define I40E_GL_ARQT 0x000804C0 /* Reset: EMPR */ +#define I40E_GL_ARQT_ARQT_SHIFT 0 +#define I40E_GL_ARQT_ARQT_MASK I40E_MASK(0x3FF, I40E_GL_ARQT_ARQT_SHIFT) +#define I40E_GL_ATQBAH 0x00080140 /* Reset: EMPR */ +#define I40E_GL_ATQBAH_ATQBAH_SHIFT 0 +#define I40E_GL_ATQBAH_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ATQBAH_ATQBAH_SHIFT) +#define I40E_GL_ATQBAL 0x00080040 /* Reset: EMPR */ +#define I40E_GL_ATQBAL_ATQBAL_SHIFT 0 +#define I40E_GL_ATQBAL_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ATQBAL_ATQBAL_SHIFT) +#define I40E_GL_ATQH 0x00080340 /* Reset: EMPR */ +#define I40E_GL_ATQH_ATQH_SHIFT 0 +#define I40E_GL_ATQH_ATQH_MASK I40E_MASK(0x3FF, I40E_GL_ATQH_ATQH_SHIFT) +#define I40E_GL_ATQLEN 0x00080240 /* Reset: EMPR */ +#define I40E_GL_ATQLEN_ATQLEN_SHIFT 0 +#define I40E_GL_ATQLEN_ATQLEN_MASK I40E_MASK(0x3FF, I40E_GL_ATQLEN_ATQLEN_SHIFT) +#define I40E_GL_ATQLEN_ATQVFE_SHIFT 28 +#define I40E_GL_ATQLEN_ATQVFE_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQVFE_SHIFT) +#define I40E_GL_ATQLEN_ATQOVFL_SHIFT 29 +#define I40E_GL_ATQLEN_ATQOVFL_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQOVFL_SHIFT) +#define I40E_GL_ATQLEN_ATQCRIT_SHIFT 30 +#define I40E_GL_ATQLEN_ATQCRIT_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQCRIT_SHIFT) +#define I40E_GL_ATQLEN_ATQENABLE_SHIFT 31 +#define I40E_GL_ATQLEN_ATQENABLE_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQENABLE_SHIFT) +#define I40E_GL_ATQT 0x00080440 /* Reset: EMPR */ +#define I40E_GL_ATQT_ATQT_SHIFT 0 +#define I40E_GL_ATQT_ATQT_MASK I40E_MASK(0x3FF, I40E_GL_ATQT_ATQT_SHIFT) +#define I40E_PF_ARQBAH 0x00080180 /* Reset: EMPR */ +#define I40E_PF_ARQBAH_ARQBAH_SHIFT 0 +#define I40E_PF_ARQBAH_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ARQBAH_ARQBAH_SHIFT) +#define I40E_PF_ARQBAL 0x00080080 /* Reset: EMPR */ +#define I40E_PF_ARQBAL_ARQBAL_SHIFT 0 +#define I40E_PF_ARQBAL_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ARQBAL_ARQBAL_SHIFT) +#define I40E_PF_ARQH 0x00080380 /* Reset: EMPR */ +#define I40E_PF_ARQH_ARQH_SHIFT 0 +#define I40E_PF_ARQH_ARQH_MASK I40E_MASK(0x3FF, I40E_PF_ARQH_ARQH_SHIFT) +#define I40E_PF_ARQLEN 0x00080280 /* Reset: EMPR */ +#define I40E_PF_ARQLEN_ARQLEN_SHIFT 0 +#define 
I40E_PF_ARQLEN_ARQLEN_MASK I40E_MASK(0x3FF, I40E_PF_ARQLEN_ARQLEN_SHIFT) +#define I40E_PF_ARQLEN_ARQVFE_SHIFT 28 +#define I40E_PF_ARQLEN_ARQVFE_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQVFE_SHIFT) +#define I40E_PF_ARQLEN_ARQOVFL_SHIFT 29 +#define I40E_PF_ARQLEN_ARQOVFL_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQOVFL_SHIFT) +#define I40E_PF_ARQLEN_ARQCRIT_SHIFT 30 +#define I40E_PF_ARQLEN_ARQCRIT_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQCRIT_SHIFT) +#define I40E_PF_ARQLEN_ARQENABLE_SHIFT 31 +#define I40E_PF_ARQLEN_ARQENABLE_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQENABLE_SHIFT) +#define I40E_PF_ARQT 0x00080480 /* Reset: EMPR */ +#define I40E_PF_ARQT_ARQT_SHIFT 0 +#define I40E_PF_ARQT_ARQT_MASK I40E_MASK(0x3FF, I40E_PF_ARQT_ARQT_SHIFT) +#define I40E_PF_ATQBAH 0x00080100 /* Reset: EMPR */ +#define I40E_PF_ATQBAH_ATQBAH_SHIFT 0 +#define I40E_PF_ATQBAH_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ATQBAH_ATQBAH_SHIFT) +#define I40E_PF_ATQBAL 0x00080000 /* Reset: EMPR */ +#define I40E_PF_ATQBAL_ATQBAL_SHIFT 0 +#define I40E_PF_ATQBAL_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ATQBAL_ATQBAL_SHIFT) +#define I40E_PF_ATQH 0x00080300 /* Reset: EMPR */ +#define I40E_PF_ATQH_ATQH_SHIFT 0 +#define I40E_PF_ATQH_ATQH_MASK I40E_MASK(0x3FF, I40E_PF_ATQH_ATQH_SHIFT) +#define I40E_PF_ATQLEN 0x00080200 /* Reset: EMPR */ +#define I40E_PF_ATQLEN_ATQLEN_SHIFT 0 +#define I40E_PF_ATQLEN_ATQLEN_MASK I40E_MASK(0x3FF, I40E_PF_ATQLEN_ATQLEN_SHIFT) +#define I40E_PF_ATQLEN_ATQVFE_SHIFT 28 +#define I40E_PF_ATQLEN_ATQVFE_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQVFE_SHIFT) +#define I40E_PF_ATQLEN_ATQOVFL_SHIFT 29 +#define I40E_PF_ATQLEN_ATQOVFL_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQOVFL_SHIFT) +#define I40E_PF_ATQLEN_ATQCRIT_SHIFT 30 +#define I40E_PF_ATQLEN_ATQCRIT_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQCRIT_SHIFT) +#define I40E_PF_ATQLEN_ATQENABLE_SHIFT 31 +#define I40E_PF_ATQLEN_ATQENABLE_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQENABLE_SHIFT) +#define I40E_PF_ATQT 0x00080400 /* Reset: EMPR */ +#define I40E_PF_ATQT_ATQT_SHIFT 0 +#define I40E_PF_ATQT_ATQT_MASK I40E_MASK(0x3FF, I40E_PF_ATQT_ATQT_SHIFT) +#define I40E_VF_ARQBAH(_VF) (0x00081400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ARQBAH_MAX_INDEX 127 +#define I40E_VF_ARQBAH_ARQBAH_SHIFT 0 +#define I40E_VF_ARQBAH_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAH_ARQBAH_SHIFT) +#define I40E_VF_ARQBAL(_VF) (0x00080C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ARQBAL_MAX_INDEX 127 +#define I40E_VF_ARQBAL_ARQBAL_SHIFT 0 +#define I40E_VF_ARQBAL_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAL_ARQBAL_SHIFT) +#define I40E_VF_ARQH(_VF) (0x00082400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ARQH_MAX_INDEX 127 +#define I40E_VF_ARQH_ARQH_SHIFT 0 +#define I40E_VF_ARQH_ARQH_MASK I40E_MASK(0x3FF, I40E_VF_ARQH_ARQH_SHIFT) +#define I40E_VF_ARQLEN(_VF) (0x00081C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ARQLEN_MAX_INDEX 127 +#define I40E_VF_ARQLEN_ARQLEN_SHIFT 0 +#define I40E_VF_ARQLEN_ARQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ARQLEN_ARQLEN_SHIFT) +#define I40E_VF_ARQLEN_ARQVFE_SHIFT 28 +#define I40E_VF_ARQLEN_ARQVFE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN_ARQVFE_SHIFT) +#define I40E_VF_ARQLEN_ARQOVFL_SHIFT 29 +#define I40E_VF_ARQLEN_ARQOVFL_MASK I40E_MASK(0x1, I40E_VF_ARQLEN_ARQOVFL_SHIFT) +#define I40E_VF_ARQLEN_ARQCRIT_SHIFT 30 +#define I40E_VF_ARQLEN_ARQCRIT_MASK I40E_MASK(0x1, I40E_VF_ARQLEN_ARQCRIT_SHIFT) +#define I40E_VF_ARQLEN_ARQENABLE_SHIFT 31 +#define I40E_VF_ARQLEN_ARQENABLE_MASK I40E_MASK(0x1, 
I40E_VF_ARQLEN_ARQENABLE_SHIFT) +#define I40E_VF_ARQT(_VF) (0x00082C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ARQT_MAX_INDEX 127 +#define I40E_VF_ARQT_ARQT_SHIFT 0 +#define I40E_VF_ARQT_ARQT_MASK I40E_MASK(0x3FF, I40E_VF_ARQT_ARQT_SHIFT) +#define I40E_VF_ATQBAH(_VF) (0x00081000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ATQBAH_MAX_INDEX 127 +#define I40E_VF_ATQBAH_ATQBAH_SHIFT 0 +#define I40E_VF_ATQBAH_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAH_ATQBAH_SHIFT) +#define I40E_VF_ATQBAL(_VF) (0x00080800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ATQBAL_MAX_INDEX 127 +#define I40E_VF_ATQBAL_ATQBAL_SHIFT 0 +#define I40E_VF_ATQBAL_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAL_ATQBAL_SHIFT) +#define I40E_VF_ATQH(_VF) (0x00082000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ATQH_MAX_INDEX 127 +#define I40E_VF_ATQH_ATQH_SHIFT 0 +#define I40E_VF_ATQH_ATQH_MASK I40E_MASK(0x3FF, I40E_VF_ATQH_ATQH_SHIFT) +#define I40E_VF_ATQLEN(_VF) (0x00081800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ATQLEN_MAX_INDEX 127 +#define I40E_VF_ATQLEN_ATQLEN_SHIFT 0 +#define I40E_VF_ATQLEN_ATQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ATQLEN_ATQLEN_SHIFT) +#define I40E_VF_ATQLEN_ATQVFE_SHIFT 28 +#define I40E_VF_ATQLEN_ATQVFE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQVFE_SHIFT) +#define I40E_VF_ATQLEN_ATQOVFL_SHIFT 29 +#define I40E_VF_ATQLEN_ATQOVFL_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQOVFL_SHIFT) +#define I40E_VF_ATQLEN_ATQCRIT_SHIFT 30 +#define I40E_VF_ATQLEN_ATQCRIT_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQCRIT_SHIFT) +#define I40E_VF_ATQLEN_ATQENABLE_SHIFT 31 +#define I40E_VF_ATQLEN_ATQENABLE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQENABLE_SHIFT) +#define I40E_VF_ATQT(_VF) (0x00082800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ +#define I40E_VF_ATQT_MAX_INDEX 127 +#define I40E_VF_ATQT_ATQT_SHIFT 0 +#define I40E_VF_ATQT_ATQT_MASK I40E_MASK(0x3FF, I40E_VF_ATQT_ATQT_SHIFT) +#define I40E_PRT_L2TAGSEN 0x001C0B20 /* Reset: CORER */ +#define I40E_PRT_L2TAGSEN_ENABLE_SHIFT 0 +#define I40E_PRT_L2TAGSEN_ENABLE_MASK I40E_MASK(0xFF, I40E_PRT_L2TAGSEN_ENABLE_SHIFT) +#define I40E_PFCM_LAN_ERRDATA 0x0010C080 /* Reset: PFR */ +#define I40E_PFCM_LAN_ERRDATA_ERROR_CODE_SHIFT 0 +#define I40E_PFCM_LAN_ERRDATA_ERROR_CODE_MASK I40E_MASK(0xF, I40E_PFCM_LAN_ERRDATA_ERROR_CODE_SHIFT) +#define I40E_PFCM_LAN_ERRDATA_Q_TYPE_SHIFT 4 +#define I40E_PFCM_LAN_ERRDATA_Q_TYPE_MASK I40E_MASK(0x7, I40E_PFCM_LAN_ERRDATA_Q_TYPE_SHIFT) +#define I40E_PFCM_LAN_ERRDATA_Q_NUM_SHIFT 8 +#define I40E_PFCM_LAN_ERRDATA_Q_NUM_MASK I40E_MASK(0xFFF, I40E_PFCM_LAN_ERRDATA_Q_NUM_SHIFT) +#define I40E_PFCM_LAN_ERRINFO 0x0010C000 /* Reset: PFR */ +#define I40E_PFCM_LAN_ERRINFO_ERROR_VALID_SHIFT 0 +#define I40E_PFCM_LAN_ERRINFO_ERROR_VALID_MASK I40E_MASK(0x1, I40E_PFCM_LAN_ERRINFO_ERROR_VALID_SHIFT) +#define I40E_PFCM_LAN_ERRINFO_ERROR_INST_SHIFT 4 +#define I40E_PFCM_LAN_ERRINFO_ERROR_INST_MASK I40E_MASK(0x7, I40E_PFCM_LAN_ERRINFO_ERROR_INST_SHIFT) +#define I40E_PFCM_LAN_ERRINFO_DBL_ERROR_CNT_SHIFT 8 +#define I40E_PFCM_LAN_ERRINFO_DBL_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_PFCM_LAN_ERRINFO_DBL_ERROR_CNT_SHIFT) +#define I40E_PFCM_LAN_ERRINFO_RLU_ERROR_CNT_SHIFT 16 +#define I40E_PFCM_LAN_ERRINFO_RLU_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_PFCM_LAN_ERRINFO_RLU_ERROR_CNT_SHIFT) +#define I40E_PFCM_LAN_ERRINFO_RLS_ERROR_CNT_SHIFT 24 +#define I40E_PFCM_LAN_ERRINFO_RLS_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_PFCM_LAN_ERRINFO_RLS_ERROR_CNT_SHIFT) +#define I40E_PFCM_LANCTXCTL 
0x0010C300 /* Reset: CORER */ +#define I40E_PFCM_LANCTXCTL_QUEUE_NUM_SHIFT 0 +#define I40E_PFCM_LANCTXCTL_QUEUE_NUM_MASK I40E_MASK(0xFFF, I40E_PFCM_LANCTXCTL_QUEUE_NUM_SHIFT) +#define I40E_PFCM_LANCTXCTL_SUB_LINE_SHIFT 12 +#define I40E_PFCM_LANCTXCTL_SUB_LINE_MASK I40E_MASK(0x7, I40E_PFCM_LANCTXCTL_SUB_LINE_SHIFT) +#define I40E_PFCM_LANCTXCTL_QUEUE_TYPE_SHIFT 15 +#define I40E_PFCM_LANCTXCTL_QUEUE_TYPE_MASK I40E_MASK(0x3, I40E_PFCM_LANCTXCTL_QUEUE_TYPE_SHIFT) +#define I40E_PFCM_LANCTXCTL_OP_CODE_SHIFT 17 +#define I40E_PFCM_LANCTXCTL_OP_CODE_MASK I40E_MASK(0x3, I40E_PFCM_LANCTXCTL_OP_CODE_SHIFT) +#define I40E_PFCM_LANCTXDATA(_i) (0x0010C100 + ((_i) * 128)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_PFCM_LANCTXDATA_MAX_INDEX 3 +#define I40E_PFCM_LANCTXDATA_DATA_SHIFT 0 +#define I40E_PFCM_LANCTXDATA_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_PFCM_LANCTXDATA_DATA_SHIFT) +#define I40E_PFCM_LANCTXSTAT 0x0010C380 /* Reset: CORER */ +#define I40E_PFCM_LANCTXSTAT_CTX_DONE_SHIFT 0 +#define I40E_PFCM_LANCTXSTAT_CTX_DONE_MASK I40E_MASK(0x1, I40E_PFCM_LANCTXSTAT_CTX_DONE_SHIFT) +#define I40E_PFCM_LANCTXSTAT_CTX_MISS_SHIFT 1 +#define I40E_PFCM_LANCTXSTAT_CTX_MISS_MASK I40E_MASK(0x1, I40E_PFCM_LANCTXSTAT_CTX_MISS_SHIFT) +#define I40E_VFCM_PE_ERRDATA1(_VF) (0x00138800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ +#define I40E_VFCM_PE_ERRDATA1_MAX_INDEX 127 +#define I40E_VFCM_PE_ERRDATA1_ERROR_CODE_SHIFT 0 +#define I40E_VFCM_PE_ERRDATA1_ERROR_CODE_MASK I40E_MASK(0xF, I40E_VFCM_PE_ERRDATA1_ERROR_CODE_SHIFT) +#define I40E_VFCM_PE_ERRDATA1_Q_TYPE_SHIFT 4 +#define I40E_VFCM_PE_ERRDATA1_Q_TYPE_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRDATA1_Q_TYPE_SHIFT) +#define I40E_VFCM_PE_ERRDATA1_Q_NUM_SHIFT 8 +#define I40E_VFCM_PE_ERRDATA1_Q_NUM_MASK I40E_MASK(0x3FFFF, I40E_VFCM_PE_ERRDATA1_Q_NUM_SHIFT) +#define I40E_VFCM_PE_ERRINFO1(_VF) (0x00138400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ +#define I40E_VFCM_PE_ERRINFO1_MAX_INDEX 127 +#define I40E_VFCM_PE_ERRINFO1_ERROR_VALID_SHIFT 0 +#define I40E_VFCM_PE_ERRINFO1_ERROR_VALID_MASK I40E_MASK(0x1, I40E_VFCM_PE_ERRINFO1_ERROR_VALID_SHIFT) +#define I40E_VFCM_PE_ERRINFO1_ERROR_INST_SHIFT 4 +#define I40E_VFCM_PE_ERRINFO1_ERROR_INST_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRINFO1_ERROR_INST_SHIFT) +#define I40E_VFCM_PE_ERRINFO1_DBL_ERROR_CNT_SHIFT 8 +#define I40E_VFCM_PE_ERRINFO1_DBL_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO1_DBL_ERROR_CNT_SHIFT) +#define I40E_VFCM_PE_ERRINFO1_RLU_ERROR_CNT_SHIFT 16 +#define I40E_VFCM_PE_ERRINFO1_RLU_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO1_RLU_ERROR_CNT_SHIFT) +#define I40E_VFCM_PE_ERRINFO1_RLS_ERROR_CNT_SHIFT 24 +#define I40E_VFCM_PE_ERRINFO1_RLS_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO1_RLS_ERROR_CNT_SHIFT) +#define I40E_GLDCB_GENC 0x00083044 /* Reset: CORER */ +#define I40E_GLDCB_GENC_PCIRTT_SHIFT 0 +#define I40E_GLDCB_GENC_PCIRTT_MASK I40E_MASK(0xFFFF, I40E_GLDCB_GENC_PCIRTT_SHIFT) +#define I40E_GLDCB_RUPTI 0x00122618 /* Reset: CORER */ +#define I40E_GLDCB_RUPTI_PFCTIMEOUT_UP_SHIFT 0 +#define I40E_GLDCB_RUPTI_PFCTIMEOUT_UP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLDCB_RUPTI_PFCTIMEOUT_UP_SHIFT) +#define I40E_PRTDCB_FCCFG 0x001E4640 /* Reset: GLOBR */ +#define I40E_PRTDCB_FCCFG_TFCE_SHIFT 3 +#define I40E_PRTDCB_FCCFG_TFCE_MASK I40E_MASK(0x3, I40E_PRTDCB_FCCFG_TFCE_SHIFT) +#define I40E_PRTDCB_FCRTV 0x001E4600 /* Reset: GLOBR */ +#define I40E_PRTDCB_FCRTV_FC_REFRESH_TH_SHIFT 0 +#define I40E_PRTDCB_FCRTV_FC_REFRESH_TH_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_FCRTV_FC_REFRESH_TH_SHIFT) +#define I40E_PRTDCB_FCTTVN(_i) 
(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset: GLOBR */ +#define I40E_PRTDCB_FCTTVN_MAX_INDEX 3 +#define I40E_PRTDCB_FCTTVN_TTV_2N_SHIFT 0 +#define I40E_PRTDCB_FCTTVN_TTV_2N_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_FCTTVN_TTV_2N_SHIFT) +#define I40E_PRTDCB_FCTTVN_TTV_2N_P1_SHIFT 16 +#define I40E_PRTDCB_FCTTVN_TTV_2N_P1_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_FCTTVN_TTV_2N_P1_SHIFT) +#define I40E_PRTDCB_GENC 0x00083000 /* Reset: CORER */ +#define I40E_PRTDCB_GENC_RESERVED_1_SHIFT 0 +#define I40E_PRTDCB_GENC_RESERVED_1_MASK I40E_MASK(0x3, I40E_PRTDCB_GENC_RESERVED_1_SHIFT) +#define I40E_PRTDCB_GENC_NUMTC_SHIFT 2 +#define I40E_PRTDCB_GENC_NUMTC_MASK I40E_MASK(0xF, I40E_PRTDCB_GENC_NUMTC_SHIFT) +#define I40E_PRTDCB_GENC_FCOEUP_SHIFT 6 +#define I40E_PRTDCB_GENC_FCOEUP_MASK I40E_MASK(0x7, I40E_PRTDCB_GENC_FCOEUP_SHIFT) +#define I40E_PRTDCB_GENC_FCOEUP_VALID_SHIFT 9 +#define I40E_PRTDCB_GENC_FCOEUP_VALID_MASK I40E_MASK(0x1, I40E_PRTDCB_GENC_FCOEUP_VALID_SHIFT) +#define I40E_PRTDCB_GENC_PFCLDA_SHIFT 16 +#define I40E_PRTDCB_GENC_PFCLDA_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_GENC_PFCLDA_SHIFT) +#define I40E_PRTDCB_GENS 0x00083020 /* Reset: CORER */ +#define I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT 0 +#define I40E_PRTDCB_GENS_DCBX_STATUS_MASK I40E_MASK(0x7, I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT) +#define I40E_PRTDCB_MFLCN 0x001E2400 /* Reset: GLOBR */ +#define I40E_PRTDCB_MFLCN_PMCF_SHIFT 0 +#define I40E_PRTDCB_MFLCN_PMCF_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_PMCF_SHIFT) +#define I40E_PRTDCB_MFLCN_DPF_SHIFT 1 +#define I40E_PRTDCB_MFLCN_DPF_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_DPF_SHIFT) +#define I40E_PRTDCB_MFLCN_RPFCM_SHIFT 2 +#define I40E_PRTDCB_MFLCN_RPFCM_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_RPFCM_SHIFT) +#define I40E_PRTDCB_MFLCN_RFCE_SHIFT 3 +#define I40E_PRTDCB_MFLCN_RFCE_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_RFCE_SHIFT) +#define I40E_PRTDCB_MFLCN_RPFCE_SHIFT 4 +#define I40E_PRTDCB_MFLCN_RPFCE_MASK I40E_MASK(0xFF, I40E_PRTDCB_MFLCN_RPFCE_SHIFT) +#define I40E_PRTDCB_RETSC 0x001223E0 /* Reset: CORER */ +#define I40E_PRTDCB_RETSC_ETS_MODE_SHIFT 0 +#define I40E_PRTDCB_RETSC_ETS_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSC_ETS_MODE_SHIFT) +#define I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT 1 +#define I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT) +#define I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT 2 +#define I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK I40E_MASK(0xF, I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT) +#define I40E_PRTDCB_RETSC_LLTC_SHIFT 8 +#define I40E_PRTDCB_RETSC_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_RETSC_LLTC_SHIFT) +#define I40E_PRTDCB_RETSTCC(_i) (0x00122180 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTDCB_RETSTCC_MAX_INDEX 7 +#define I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT 0 +#define I40E_PRTDCB_RETSTCC_BWSHARE_MASK I40E_MASK(0x7F, I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) +#define I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT 30 +#define I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) +#define I40E_PRTDCB_RETSTCC_ETSTC_SHIFT 31 +#define I40E_PRTDCB_RETSTCC_ETSTC_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) +#define I40E_PRTDCB_RPPMC 0x001223A0 /* Reset: CORER */ +#define I40E_PRTDCB_RPPMC_LANRPPM_SHIFT 0 +#define I40E_PRTDCB_RPPMC_LANRPPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_RPPMC_LANRPPM_SHIFT) +#define I40E_PRTDCB_RPPMC_RDMARPPM_SHIFT 8 +#define I40E_PRTDCB_RPPMC_RDMARPPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_RPPMC_RDMARPPM_SHIFT) +#define I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT 16 +#define I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK I40E_MASK(0xFF, 
I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT) +#define I40E_PRTDCB_RUP 0x001C0B00 /* Reset: CORER */ +#define I40E_PRTDCB_RUP_NOVLANUP_SHIFT 0 +#define I40E_PRTDCB_RUP_NOVLANUP_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP_NOVLANUP_SHIFT) +#define I40E_PRTDCB_RUP2TC 0x001C09A0 /* Reset: CORER */ +#define I40E_PRTDCB_RUP2TC_UP0TC_SHIFT 0 +#define I40E_PRTDCB_RUP2TC_UP0TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP0TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP1TC_SHIFT 3 +#define I40E_PRTDCB_RUP2TC_UP1TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP1TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP2TC_SHIFT 6 +#define I40E_PRTDCB_RUP2TC_UP2TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP2TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP3TC_SHIFT 9 +#define I40E_PRTDCB_RUP2TC_UP3TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP3TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP4TC_SHIFT 12 +#define I40E_PRTDCB_RUP2TC_UP4TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP4TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP5TC_SHIFT 15 +#define I40E_PRTDCB_RUP2TC_UP5TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP5TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP6TC_SHIFT 18 +#define I40E_PRTDCB_RUP2TC_UP6TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP6TC_SHIFT) +#define I40E_PRTDCB_RUP2TC_UP7TC_SHIFT 21 +#define I40E_PRTDCB_RUP2TC_UP7TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP7TC_SHIFT) +#define I40E_PRTDCB_TC2PFC 0x001C0980 /* Reset: CORER */ +#define I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT 0 +#define I40E_PRTDCB_TC2PFC_TC2PFC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT) +#define I40E_PRTDCB_TCMSTC(_i) (0x000A0040 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTDCB_TCMSTC_MAX_INDEX 7 +#define I40E_PRTDCB_TCMSTC_MSTC_SHIFT 0 +#define I40E_PRTDCB_TCMSTC_MSTC_MASK I40E_MASK(0xFFFFF, I40E_PRTDCB_TCMSTC_MSTC_SHIFT) +#define I40E_PRTDCB_TCPMC 0x000A21A0 /* Reset: CORER */ +#define I40E_PRTDCB_TCPMC_CPM_SHIFT 0 +#define I40E_PRTDCB_TCPMC_CPM_MASK I40E_MASK(0x1FFF, I40E_PRTDCB_TCPMC_CPM_SHIFT) +#define I40E_PRTDCB_TCPMC_LLTC_SHIFT 13 +#define I40E_PRTDCB_TCPMC_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TCPMC_LLTC_SHIFT) +#define I40E_PRTDCB_TCPMC_TCPM_MODE_SHIFT 30 +#define I40E_PRTDCB_TCPMC_TCPM_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_TCPMC_TCPM_MODE_SHIFT) +#define I40E_PRTDCB_TCWSTC(_i) (0x000A2040 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTDCB_TCWSTC_MAX_INDEX 7 +#define I40E_PRTDCB_TCWSTC_MSTC_SHIFT 0 +#define I40E_PRTDCB_TCWSTC_MSTC_MASK I40E_MASK(0xFFFFF, I40E_PRTDCB_TCWSTC_MSTC_SHIFT) +#define I40E_PRTDCB_TDPMC 0x000A0180 /* Reset: CORER */ +#define I40E_PRTDCB_TDPMC_DPM_SHIFT 0 +#define I40E_PRTDCB_TDPMC_DPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_TDPMC_DPM_SHIFT) +#define I40E_PRTDCB_TDPMC_TCPM_MODE_SHIFT 30 +#define I40E_PRTDCB_TDPMC_TCPM_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_TDPMC_TCPM_MODE_SHIFT) +#define I40E_PRTDCB_TETSC_TCB 0x000AE060 /* Reset: CORER */ +#define I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_SHIFT 0 +#define I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_MASK I40E_MASK(0x1, I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_SHIFT) +#define I40E_PRTDCB_TETSC_TCB_LLTC_SHIFT 8 +#define I40E_PRTDCB_TETSC_TCB_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TETSC_TCB_LLTC_SHIFT) +#define I40E_PRTDCB_TETSC_TPB 0x00098060 /* Reset: CORER */ +#define I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_SHIFT 0 +#define I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_MASK I40E_MASK(0x1, I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_SHIFT) +#define I40E_PRTDCB_TETSC_TPB_LLTC_SHIFT 8 +#define I40E_PRTDCB_TETSC_TPB_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TETSC_TPB_LLTC_SHIFT) +#define 
I40E_PRTDCB_TFCS 0x001E4560 /* Reset: GLOBR */ +#define I40E_PRTDCB_TFCS_TXOFF_SHIFT 0 +#define I40E_PRTDCB_TFCS_TXOFF_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF0_SHIFT 8 +#define I40E_PRTDCB_TFCS_TXOFF0_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF0_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF1_SHIFT 9 +#define I40E_PRTDCB_TFCS_TXOFF1_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF1_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF2_SHIFT 10 +#define I40E_PRTDCB_TFCS_TXOFF2_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF2_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF3_SHIFT 11 +#define I40E_PRTDCB_TFCS_TXOFF3_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF3_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF4_SHIFT 12 +#define I40E_PRTDCB_TFCS_TXOFF4_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF4_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF5_SHIFT 13 +#define I40E_PRTDCB_TFCS_TXOFF5_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF5_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF6_SHIFT 14 +#define I40E_PRTDCB_TFCS_TXOFF6_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF6_SHIFT) +#define I40E_PRTDCB_TFCS_TXOFF7_SHIFT 15 +#define I40E_PRTDCB_TFCS_TXOFF7_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF7_SHIFT) +#define I40E_PRTDCB_TPFCTS(_i) (0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset: GLOBR */ +#define I40E_PRTDCB_TPFCTS_MAX_INDEX 7 +#define I40E_PRTDCB_TPFCTS_PFCTIMER_SHIFT 0 +#define I40E_PRTDCB_TPFCTS_PFCTIMER_MASK I40E_MASK(0x3FFF, I40E_PRTDCB_TPFCTS_PFCTIMER_SHIFT) +#define I40E_GLFCOE_RCTL 0x00269B94 /* Reset: CORER */ +#define I40E_GLFCOE_RCTL_FCOEVER_SHIFT 0 +#define I40E_GLFCOE_RCTL_FCOEVER_MASK I40E_MASK(0xF, I40E_GLFCOE_RCTL_FCOEVER_SHIFT) +#define I40E_GLFCOE_RCTL_SAVBAD_SHIFT 4 +#define I40E_GLFCOE_RCTL_SAVBAD_MASK I40E_MASK(0x1, I40E_GLFCOE_RCTL_SAVBAD_SHIFT) +#define I40E_GLFCOE_RCTL_ICRC_SHIFT 5 +#define I40E_GLFCOE_RCTL_ICRC_MASK I40E_MASK(0x1, I40E_GLFCOE_RCTL_ICRC_SHIFT) +#define I40E_GLFCOE_RCTL_MAX_SIZE_SHIFT 16 +#define I40E_GLFCOE_RCTL_MAX_SIZE_MASK I40E_MASK(0x3FFF, I40E_GLFCOE_RCTL_MAX_SIZE_SHIFT) +#define I40E_GL_FWSTS 0x00083048 /* Reset: POR */ +#define I40E_GL_FWSTS_FWS0B_SHIFT 0 +#define I40E_GL_FWSTS_FWS0B_MASK I40E_MASK(0xFF, I40E_GL_FWSTS_FWS0B_SHIFT) +#define I40E_GL_FWSTS_FWRI_SHIFT 9 +#define I40E_GL_FWSTS_FWRI_MASK I40E_MASK(0x1, I40E_GL_FWSTS_FWRI_SHIFT) +#define I40E_GL_FWSTS_FWS1B_SHIFT 16 +#define I40E_GL_FWSTS_FWS1B_MASK I40E_MASK(0xFF, I40E_GL_FWSTS_FWS1B_SHIFT) +#define I40E_GLGEN_CLKSTAT 0x000B8184 /* Reset: POR */ +#define I40E_GLGEN_CLKSTAT_CLKMODE_SHIFT 0 +#define I40E_GLGEN_CLKSTAT_CLKMODE_MASK I40E_MASK(0x1, I40E_GLGEN_CLKSTAT_CLKMODE_SHIFT) +#define I40E_GLGEN_CLKSTAT_U_CLK_SPEED_SHIFT 4 +#define I40E_GLGEN_CLKSTAT_U_CLK_SPEED_MASK I40E_MASK(0x3, I40E_GLGEN_CLKSTAT_U_CLK_SPEED_SHIFT) +#define I40E_GLGEN_CLKSTAT_P0_CLK_SPEED_SHIFT 8 +#define I40E_GLGEN_CLKSTAT_P0_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P0_CLK_SPEED_SHIFT) +#define I40E_GLGEN_CLKSTAT_P1_CLK_SPEED_SHIFT 12 +#define I40E_GLGEN_CLKSTAT_P1_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P1_CLK_SPEED_SHIFT) +#define I40E_GLGEN_CLKSTAT_P2_CLK_SPEED_SHIFT 16 +#define I40E_GLGEN_CLKSTAT_P2_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P2_CLK_SPEED_SHIFT) +#define I40E_GLGEN_CLKSTAT_P3_CLK_SPEED_SHIFT 20 +#define I40E_GLGEN_CLKSTAT_P3_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P3_CLK_SPEED_SHIFT) +#define I40E_GLGEN_GPIO_CTL(_i) (0x00088100 + ((_i) * 4)) /* _i=0...29 */ /* Reset: POR */ +#define I40E_GLGEN_GPIO_CTL_MAX_INDEX 29 +#define I40E_GLGEN_GPIO_CTL_PRT_NUM_SHIFT 0 +#define 
I40E_GLGEN_GPIO_CTL_PRT_NUM_MASK I40E_MASK(0x3, I40E_GLGEN_GPIO_CTL_PRT_NUM_SHIFT) +#define I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_SHIFT 3 +#define I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_SHIFT) +#define I40E_GLGEN_GPIO_CTL_PIN_DIR_SHIFT 4 +#define I40E_GLGEN_GPIO_CTL_PIN_DIR_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_PIN_DIR_SHIFT) +#define I40E_GLGEN_GPIO_CTL_TRI_CTL_SHIFT 5 +#define I40E_GLGEN_GPIO_CTL_TRI_CTL_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_TRI_CTL_SHIFT) +#define I40E_GLGEN_GPIO_CTL_OUT_CTL_SHIFT 6 +#define I40E_GLGEN_GPIO_CTL_OUT_CTL_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_OUT_CTL_SHIFT) +#define I40E_GLGEN_GPIO_CTL_PIN_FUNC_SHIFT 7 +#define I40E_GLGEN_GPIO_CTL_PIN_FUNC_MASK I40E_MASK(0x7, I40E_GLGEN_GPIO_CTL_PIN_FUNC_SHIFT) +#define I40E_GLGEN_GPIO_CTL_LED_INVRT_SHIFT 10 +#define I40E_GLGEN_GPIO_CTL_LED_INVRT_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_LED_INVRT_SHIFT) +#define I40E_GLGEN_GPIO_CTL_LED_BLINK_SHIFT 11 +#define I40E_GLGEN_GPIO_CTL_LED_BLINK_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_LED_BLINK_SHIFT) +#define I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT 12 +#define I40E_GLGEN_GPIO_CTL_LED_MODE_MASK I40E_MASK(0x1F, I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT) +#define I40E_GLGEN_GPIO_CTL_INT_MODE_SHIFT 17 +#define I40E_GLGEN_GPIO_CTL_INT_MODE_MASK I40E_MASK(0x3, I40E_GLGEN_GPIO_CTL_INT_MODE_SHIFT) +#define I40E_GLGEN_GPIO_CTL_OUT_DEFAULT_SHIFT 19 +#define I40E_GLGEN_GPIO_CTL_OUT_DEFAULT_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_OUT_DEFAULT_SHIFT) +#define I40E_GLGEN_GPIO_CTL_PHY_PIN_NAME_SHIFT 20 +#define I40E_GLGEN_GPIO_CTL_PHY_PIN_NAME_MASK I40E_MASK(0x3F, I40E_GLGEN_GPIO_CTL_PHY_PIN_NAME_SHIFT) +#define I40E_GLGEN_GPIO_SET 0x00088184 /* Reset: POR */ +#define I40E_GLGEN_GPIO_SET_GPIO_INDX_SHIFT 0 +#define I40E_GLGEN_GPIO_SET_GPIO_INDX_MASK I40E_MASK(0x1F, I40E_GLGEN_GPIO_SET_GPIO_INDX_SHIFT) +#define I40E_GLGEN_GPIO_SET_SDP_DATA_SHIFT 5 +#define I40E_GLGEN_GPIO_SET_SDP_DATA_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_SET_SDP_DATA_SHIFT) +#define I40E_GLGEN_GPIO_SET_DRIVE_SDP_SHIFT 6 +#define I40E_GLGEN_GPIO_SET_DRIVE_SDP_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_SET_DRIVE_SDP_SHIFT) +#define I40E_GLGEN_GPIO_STAT 0x0008817C /* Reset: POR */ +#define I40E_GLGEN_GPIO_STAT_GPIO_VALUE_SHIFT 0 +#define I40E_GLGEN_GPIO_STAT_GPIO_VALUE_MASK I40E_MASK(0x3FFFFFFF, I40E_GLGEN_GPIO_STAT_GPIO_VALUE_SHIFT) +#define I40E_GLGEN_GPIO_TRANSIT 0x00088180 /* Reset: POR */ +#define I40E_GLGEN_GPIO_TRANSIT_GPIO_TRANSITION_SHIFT 0 +#define I40E_GLGEN_GPIO_TRANSIT_GPIO_TRANSITION_MASK I40E_MASK(0x3FFFFFFF, I40E_GLGEN_GPIO_TRANSIT_GPIO_TRANSITION_SHIFT) +#define I40E_GLGEN_I2CCMD(_i) (0x000881E0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_GLGEN_I2CCMD_MAX_INDEX 3 +#define I40E_GLGEN_I2CCMD_DATA_SHIFT 0 +#define I40E_GLGEN_I2CCMD_DATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_I2CCMD_DATA_SHIFT) +#define I40E_GLGEN_I2CCMD_REGADD_SHIFT 16 +#define I40E_GLGEN_I2CCMD_REGADD_MASK I40E_MASK(0xFF, I40E_GLGEN_I2CCMD_REGADD_SHIFT) +#define I40E_GLGEN_I2CCMD_PHYADD_SHIFT 24 +#define I40E_GLGEN_I2CCMD_PHYADD_MASK I40E_MASK(0x7, I40E_GLGEN_I2CCMD_PHYADD_SHIFT) +#define I40E_GLGEN_I2CCMD_OP_SHIFT 27 +#define I40E_GLGEN_I2CCMD_OP_MASK I40E_MASK(0x1, I40E_GLGEN_I2CCMD_OP_SHIFT) +#define I40E_GLGEN_I2CCMD_RESET_SHIFT 28 +#define I40E_GLGEN_I2CCMD_RESET_MASK I40E_MASK(0x1, I40E_GLGEN_I2CCMD_RESET_SHIFT) +#define I40E_GLGEN_I2CCMD_R_SHIFT 29 +#define I40E_GLGEN_I2CCMD_R_MASK I40E_MASK(0x1, I40E_GLGEN_I2CCMD_R_SHIFT) +#define I40E_GLGEN_I2CCMD_E_SHIFT 31 +#define I40E_GLGEN_I2CCMD_E_MASK I40E_MASK(0x1, 
I40E_GLGEN_I2CCMD_E_SHIFT) +#define I40E_GLGEN_I2CPARAMS(_i) (0x000881AC + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_GLGEN_I2CPARAMS_MAX_INDEX 3 +#define I40E_GLGEN_I2CPARAMS_WRITE_TIME_SHIFT 0 +#define I40E_GLGEN_I2CPARAMS_WRITE_TIME_MASK I40E_MASK(0x1F, I40E_GLGEN_I2CPARAMS_WRITE_TIME_SHIFT) +#define I40E_GLGEN_I2CPARAMS_READ_TIME_SHIFT 5 +#define I40E_GLGEN_I2CPARAMS_READ_TIME_MASK I40E_MASK(0x7, I40E_GLGEN_I2CPARAMS_READ_TIME_SHIFT) +#define I40E_GLGEN_I2CPARAMS_I2CBB_EN_SHIFT 8 +#define I40E_GLGEN_I2CPARAMS_I2CBB_EN_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_I2CBB_EN_SHIFT) +#define I40E_GLGEN_I2CPARAMS_CLK_SHIFT 9 +#define I40E_GLGEN_I2CPARAMS_CLK_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_SHIFT) +#define I40E_GLGEN_I2CPARAMS_DATA_OUT_SHIFT 10 +#define I40E_GLGEN_I2CPARAMS_DATA_OUT_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_DATA_OUT_SHIFT) +#define I40E_GLGEN_I2CPARAMS_DATA_OE_N_SHIFT 11 +#define I40E_GLGEN_I2CPARAMS_DATA_OE_N_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_DATA_OE_N_SHIFT) +#define I40E_GLGEN_I2CPARAMS_DATA_IN_SHIFT 12 +#define I40E_GLGEN_I2CPARAMS_DATA_IN_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_DATA_IN_SHIFT) +#define I40E_GLGEN_I2CPARAMS_CLK_OE_N_SHIFT 13 +#define I40E_GLGEN_I2CPARAMS_CLK_OE_N_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_OE_N_SHIFT) +#define I40E_GLGEN_I2CPARAMS_CLK_IN_SHIFT 14 +#define I40E_GLGEN_I2CPARAMS_CLK_IN_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_IN_SHIFT) +#define I40E_GLGEN_I2CPARAMS_CLK_STRETCH_DIS_SHIFT 15 +#define I40E_GLGEN_I2CPARAMS_CLK_STRETCH_DIS_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_STRETCH_DIS_SHIFT) +#define I40E_GLGEN_I2CPARAMS_I2C_DATA_ORDER_SHIFT 31 +#define I40E_GLGEN_I2CPARAMS_I2C_DATA_ORDER_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_I2C_DATA_ORDER_SHIFT) +#define I40E_GLGEN_LED_CTL 0x00088178 /* Reset: POR */ +#define I40E_GLGEN_LED_CTL_GLOBAL_BLINK_MODE_SHIFT 0 +#define I40E_GLGEN_LED_CTL_GLOBAL_BLINK_MODE_MASK I40E_MASK(0x1, I40E_GLGEN_LED_CTL_GLOBAL_BLINK_MODE_SHIFT) +#define I40E_GLGEN_MDIO_CTRL(_i) (0x000881D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_GLGEN_MDIO_CTRL_MAX_INDEX 3 +#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD2_SHIFT 0 +#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD2_MASK I40E_MASK(0x1FFFF, I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD2_SHIFT) +#define I40E_GLGEN_MDIO_CTRL_CONTMDC_SHIFT 17 +#define I40E_GLGEN_MDIO_CTRL_CONTMDC_MASK I40E_MASK(0x1, I40E_GLGEN_MDIO_CTRL_CONTMDC_SHIFT) +#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD1_SHIFT 18 +#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD1_MASK I40E_MASK(0x3FFF, I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD1_SHIFT) +#define I40E_GLGEN_MDIO_I2C_SEL(_i) (0x000881C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_GLGEN_MDIO_I2C_SEL_MAX_INDEX 3 +#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_I2C_SEL_SHIFT 0 +#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_I2C_SEL_MASK I40E_MASK(0x1, I40E_GLGEN_MDIO_I2C_SEL_MDIO_I2C_SEL_SHIFT) +#define I40E_GLGEN_MDIO_I2C_SEL_PHY_PORT_NUM_SHIFT 1 +#define I40E_GLGEN_MDIO_I2C_SEL_PHY_PORT_NUM_MASK I40E_MASK(0xF, I40E_GLGEN_MDIO_I2C_SEL_PHY_PORT_NUM_SHIFT) +#define I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_SHIFT 5 +#define I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_SHIFT) +#define I40E_GLGEN_MDIO_I2C_SEL_PHY1_ADDRESS_SHIFT 10 +#define I40E_GLGEN_MDIO_I2C_SEL_PHY1_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY1_ADDRESS_SHIFT) +#define I40E_GLGEN_MDIO_I2C_SEL_PHY2_ADDRESS_SHIFT 15 +#define I40E_GLGEN_MDIO_I2C_SEL_PHY2_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY2_ADDRESS_SHIFT) 
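[Note: every register in this header is described the same way: an address #define, with its reset domain noted in the trailing comment, followed by a _SHIFT/_MASK pair per field, where I40E_MASK() from i40e_type.h produces the mask already shifted into position. A hypothetical helper, not part of the diff, showing the read side of that idiom with the rd32() accessor from i40e_osdep.h earlier in this patch:

static inline u32
example_read_gpio_values(struct i40e_hw *hw)
{
	u32 reg = rd32(hw, I40E_GLGEN_GPIO_STAT);

	/* Mask the field out of the register word, then shift the
	 * pre-shifted mask's field down to bit 0. */
	return (reg & I40E_GLGEN_GPIO_STAT_GPIO_VALUE_MASK) >>
	       I40E_GLGEN_GPIO_STAT_GPIO_VALUE_SHIFT;
}
]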
+#define I40E_GLGEN_MDIO_I2C_SEL_PHY3_ADDRESS_SHIFT 20 +#define I40E_GLGEN_MDIO_I2C_SEL_PHY3_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY3_ADDRESS_SHIFT) +#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_IF_MODE_SHIFT 25 +#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_IF_MODE_MASK I40E_MASK(0xF, I40E_GLGEN_MDIO_I2C_SEL_MDIO_IF_MODE_SHIFT) +#define I40E_GLGEN_MDIO_I2C_SEL_EN_FAST_MODE_SHIFT 31 +#define I40E_GLGEN_MDIO_I2C_SEL_EN_FAST_MODE_MASK I40E_MASK(0x1, I40E_GLGEN_MDIO_I2C_SEL_EN_FAST_MODE_SHIFT) +#define I40E_GLGEN_MSCA(_i) (0x0008818C + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_GLGEN_MSCA_MAX_INDEX 3 +#define I40E_GLGEN_MSCA_MDIADD_SHIFT 0 +#define I40E_GLGEN_MSCA_MDIADD_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSCA_MDIADD_SHIFT) +#define I40E_GLGEN_MSCA_DEVADD_SHIFT 16 +#define I40E_GLGEN_MSCA_DEVADD_MASK I40E_MASK(0x1F, I40E_GLGEN_MSCA_DEVADD_SHIFT) +#define I40E_GLGEN_MSCA_PHYADD_SHIFT 21 +#define I40E_GLGEN_MSCA_PHYADD_MASK I40E_MASK(0x1F, I40E_GLGEN_MSCA_PHYADD_SHIFT) +#define I40E_GLGEN_MSCA_OPCODE_SHIFT 26 +#define I40E_GLGEN_MSCA_OPCODE_MASK I40E_MASK(0x3, I40E_GLGEN_MSCA_OPCODE_SHIFT) +#define I40E_GLGEN_MSCA_STCODE_SHIFT 28 +#define I40E_GLGEN_MSCA_STCODE_MASK I40E_MASK(0x3, I40E_GLGEN_MSCA_STCODE_SHIFT) +#define I40E_GLGEN_MSCA_MDICMD_SHIFT 30 +#define I40E_GLGEN_MSCA_MDICMD_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_MDICMD_SHIFT) +#define I40E_GLGEN_MSCA_MDIINPROGEN_SHIFT 31 +#define I40E_GLGEN_MSCA_MDIINPROGEN_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_MDIINPROGEN_SHIFT) +#define I40E_GLGEN_MSRWD(_i) (0x0008819C + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ +#define I40E_GLGEN_MSRWD_MAX_INDEX 3 +#define I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT 0 +#define I40E_GLGEN_MSRWD_MDIWRDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT) +#define I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT 16 +#define I40E_GLGEN_MSRWD_MDIRDDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT) +#define I40E_GLGEN_PCIFCNCNT 0x001C0AB4 /* Reset: PCIR */ +#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT 0 +#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT) +#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT 16 +#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_MASK I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT) +#define I40E_GLGEN_RSTAT 0x000B8188 /* Reset: POR */ +#define I40E_GLGEN_RSTAT_DEVSTATE_SHIFT 0 +#define I40E_GLGEN_RSTAT_DEVSTATE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_DEVSTATE_SHIFT) +#define I40E_GLGEN_RSTAT_RESET_TYPE_SHIFT 2 +#define I40E_GLGEN_RSTAT_RESET_TYPE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_RESET_TYPE_SHIFT) +#define I40E_GLGEN_RSTAT_CORERCNT_SHIFT 4 +#define I40E_GLGEN_RSTAT_CORERCNT_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_CORERCNT_SHIFT) +#define I40E_GLGEN_RSTAT_GLOBRCNT_SHIFT 6 +#define I40E_GLGEN_RSTAT_GLOBRCNT_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_GLOBRCNT_SHIFT) +#define I40E_GLGEN_RSTAT_EMPRCNT_SHIFT 8 +#define I40E_GLGEN_RSTAT_EMPRCNT_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_EMPRCNT_SHIFT) +#define I40E_GLGEN_RSTAT_TIME_TO_RST_SHIFT 10 +#define I40E_GLGEN_RSTAT_TIME_TO_RST_MASK I40E_MASK(0x3F, I40E_GLGEN_RSTAT_TIME_TO_RST_SHIFT) +#define I40E_GLGEN_RSTCTL 0x000B8180 /* Reset: POR */ +#define I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT 0 +#define I40E_GLGEN_RSTCTL_GRSTDEL_MASK I40E_MASK(0x3F, I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT) +#define I40E_GLGEN_RSTCTL_ECC_RST_ENA_SHIFT 8 +#define I40E_GLGEN_RSTCTL_ECC_RST_ENA_MASK I40E_MASK(0x1, I40E_GLGEN_RSTCTL_ECC_RST_ENA_SHIFT) +#define I40E_GLGEN_RSTENA_EMP 0x000B818C /* Reset: POR */ +#define I40E_GLGEN_RSTENA_EMP_EMP_RST_ENA_SHIFT 0 +#define 
I40E_GLGEN_RSTENA_EMP_EMP_RST_ENA_MASK I40E_MASK(0x1, I40E_GLGEN_RSTENA_EMP_EMP_RST_ENA_SHIFT)
+#define I40E_GLGEN_RTRIG 0x000B8190 /* Reset: CORER */
+#define I40E_GLGEN_RTRIG_CORER_SHIFT 0
+#define I40E_GLGEN_RTRIG_CORER_MASK I40E_MASK(0x1, I40E_GLGEN_RTRIG_CORER_SHIFT)
+#define I40E_GLGEN_RTRIG_GLOBR_SHIFT 1
+#define I40E_GLGEN_RTRIG_GLOBR_MASK I40E_MASK(0x1, I40E_GLGEN_RTRIG_GLOBR_SHIFT)
+#define I40E_GLGEN_RTRIG_EMPFWR_SHIFT 2
+#define I40E_GLGEN_RTRIG_EMPFWR_MASK I40E_MASK(0x1, I40E_GLGEN_RTRIG_EMPFWR_SHIFT)
+#define I40E_GLGEN_STAT 0x000B612C /* Reset: POR */
+#define I40E_GLGEN_STAT_HWRSVD0_SHIFT 0
+#define I40E_GLGEN_STAT_HWRSVD0_MASK I40E_MASK(0x3, I40E_GLGEN_STAT_HWRSVD0_SHIFT)
+#define I40E_GLGEN_STAT_DCBEN_SHIFT 2
+#define I40E_GLGEN_STAT_DCBEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_DCBEN_SHIFT)
+#define I40E_GLGEN_STAT_VTEN_SHIFT 3
+#define I40E_GLGEN_STAT_VTEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_VTEN_SHIFT)
+#define I40E_GLGEN_STAT_FCOEN_SHIFT 4
+#define I40E_GLGEN_STAT_FCOEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_FCOEN_SHIFT)
+#define I40E_GLGEN_STAT_EVBEN_SHIFT 5
+#define I40E_GLGEN_STAT_EVBEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_EVBEN_SHIFT)
+#define I40E_GLGEN_STAT_HWRSVD1_SHIFT 6
+#define I40E_GLGEN_STAT_HWRSVD1_MASK I40E_MASK(0x3, I40E_GLGEN_STAT_HWRSVD1_SHIFT)
+#define I40E_GLGEN_VFLRSTAT(_i) (0x00092600 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */
+#define I40E_GLGEN_VFLRSTAT_MAX_INDEX 3
+#define I40E_GLGEN_VFLRSTAT_VFLRE_SHIFT 0
+#define I40E_GLGEN_VFLRSTAT_VFLRE_MASK I40E_MASK(0xFFFFFFFF, I40E_GLGEN_VFLRSTAT_VFLRE_SHIFT)
+#define I40E_GLVFGEN_TIMER 0x000881BC /* Reset: CORER */
+#define I40E_GLVFGEN_TIMER_GTIME_SHIFT 0
+#define I40E_GLVFGEN_TIMER_GTIME_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVFGEN_TIMER_GTIME_SHIFT)
+#define I40E_PFGEN_CTRL 0x00092400 /* Reset: PFR */
+#define I40E_PFGEN_CTRL_PFSWR_SHIFT 0
+#define I40E_PFGEN_CTRL_PFSWR_MASK I40E_MASK(0x1, I40E_PFGEN_CTRL_PFSWR_SHIFT)
+#define I40E_PFGEN_DRUN 0x00092500 /* Reset: CORER */
+#define I40E_PFGEN_DRUN_DRVUNLD_SHIFT 0
+#define I40E_PFGEN_DRUN_DRVUNLD_MASK I40E_MASK(0x1, I40E_PFGEN_DRUN_DRVUNLD_SHIFT)
+#define I40E_PFGEN_PORTNUM 0x001C0480 /* Reset: CORER */
+#define I40E_PFGEN_PORTNUM_PORT_NUM_SHIFT 0
+#define I40E_PFGEN_PORTNUM_PORT_NUM_MASK I40E_MASK(0x3, I40E_PFGEN_PORTNUM_PORT_NUM_SHIFT)
+#define I40E_PFGEN_STATE 0x00088000 /* Reset: CORER */
+#define I40E_PFGEN_STATE_RESERVED_0_SHIFT 0
+#define I40E_PFGEN_STATE_RESERVED_0_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_RESERVED_0_SHIFT)
+#define I40E_PFGEN_STATE_PFFCEN_SHIFT 1
+#define I40E_PFGEN_STATE_PFFCEN_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_PFFCEN_SHIFT)
+#define I40E_PFGEN_STATE_PFLINKEN_SHIFT 2
+#define I40E_PFGEN_STATE_PFLINKEN_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_PFLINKEN_SHIFT)
+#define I40E_PFGEN_STATE_PFSCEN_SHIFT 3
+#define I40E_PFGEN_STATE_PFSCEN_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_PFSCEN_SHIFT)
+#define I40E_PRTGEN_CNF 0x000B8120 /* Reset: POR */
+#define I40E_PRTGEN_CNF_PORT_DIS_SHIFT 0
+#define I40E_PRTGEN_CNF_PORT_DIS_MASK I40E_MASK(0x1, I40E_PRTGEN_CNF_PORT_DIS_SHIFT)
+#define I40E_PRTGEN_CNF_ALLOW_PORT_DIS_SHIFT 1
+#define I40E_PRTGEN_CNF_ALLOW_PORT_DIS_MASK I40E_MASK(0x1, I40E_PRTGEN_CNF_ALLOW_PORT_DIS_SHIFT)
+#define I40E_PRTGEN_CNF_EMP_PORT_DIS_SHIFT 2
+#define I40E_PRTGEN_CNF_EMP_PORT_DIS_MASK I40E_MASK(0x1, I40E_PRTGEN_CNF_EMP_PORT_DIS_SHIFT)
+#define I40E_PRTGEN_CNF2 0x000B8160 /* Reset: POR */
+#define I40E_PRTGEN_CNF2_ACTIVATE_PORT_LINK_SHIFT 0
+#define I40E_PRTGEN_CNF2_ACTIVATE_PORT_LINK_MASK I40E_MASK(0x1, I40E_PRTGEN_CNF2_ACTIVATE_PORT_LINK_SHIFT)
+#define I40E_PRTGEN_STATUS 0x000B8100 /* Reset: POR */
+#define I40E_PRTGEN_STATUS_PORT_VALID_SHIFT 0
+#define I40E_PRTGEN_STATUS_PORT_VALID_MASK I40E_MASK(0x1, I40E_PRTGEN_STATUS_PORT_VALID_SHIFT)
+#define I40E_PRTGEN_STATUS_PORT_ACTIVE_SHIFT 1
+#define I40E_PRTGEN_STATUS_PORT_ACTIVE_MASK I40E_MASK(0x1, I40E_PRTGEN_STATUS_PORT_ACTIVE_SHIFT)
+#define I40E_VFGEN_RSTAT1(_VF) (0x00074400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
+#define I40E_VFGEN_RSTAT1_MAX_INDEX 127
+#define I40E_VFGEN_RSTAT1_VFR_STATE_SHIFT 0
+#define I40E_VFGEN_RSTAT1_VFR_STATE_MASK I40E_MASK(0x3, I40E_VFGEN_RSTAT1_VFR_STATE_SHIFT)
+#define I40E_VPGEN_VFRSTAT(_VF) (0x00091C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+#define I40E_VPGEN_VFRSTAT_MAX_INDEX 127
+#define I40E_VPGEN_VFRSTAT_VFRD_SHIFT 0
+#define I40E_VPGEN_VFRSTAT_VFRD_MASK I40E_MASK(0x1, I40E_VPGEN_VFRSTAT_VFRD_SHIFT)
+#define I40E_VPGEN_VFRTRIG(_VF) (0x00091800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+#define I40E_VPGEN_VFRTRIG_MAX_INDEX 127
+#define I40E_VPGEN_VFRTRIG_VFSWR_SHIFT 0
+#define I40E_VPGEN_VFRTRIG_VFSWR_MASK I40E_MASK(0x1, I40E_VPGEN_VFRTRIG_VFSWR_SHIFT)
+#define I40E_VSIGEN_RSTAT(_VSI) (0x00090800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: CORER */
+#define I40E_VSIGEN_RSTAT_MAX_INDEX 383
+#define I40E_VSIGEN_RSTAT_VMRD_SHIFT 0
+#define I40E_VSIGEN_RSTAT_VMRD_MASK I40E_MASK(0x1, I40E_VSIGEN_RSTAT_VMRD_SHIFT)
+#define I40E_VSIGEN_RTRIG(_VSI) (0x00090000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: CORER */
+#define I40E_VSIGEN_RTRIG_MAX_INDEX 383
+#define I40E_VSIGEN_RTRIG_VMSWR_SHIFT 0
+#define I40E_VSIGEN_RTRIG_VMSWR_MASK I40E_MASK(0x1, I40E_VSIGEN_RTRIG_VMSWR_SHIFT)
+#define I40E_GLHMC_FCOEDDPBASE(_i) (0x000C6600 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FCOEDDPBASE_MAX_INDEX 15
+#define I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_SHIFT 0
+#define I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_SHIFT)
+#define I40E_GLHMC_FCOEDDPCNT(_i) (0x000C6700 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FCOEDDPCNT_MAX_INDEX 15
+#define I40E_GLHMC_FCOEDDPCNT_FPMFCOEDDPCNT_SHIFT 0
+#define I40E_GLHMC_FCOEDDPCNT_FPMFCOEDDPCNT_MASK I40E_MASK(0xFFFFF, I40E_GLHMC_FCOEDDPCNT_FPMFCOEDDPCNT_SHIFT)
+#define I40E_GLHMC_FCOEDDPOBJSZ 0x000C2010 /* Reset: CORER */
+#define I40E_GLHMC_FCOEDDPOBJSZ_PMFCOEDDPOBJSZ_SHIFT 0
+#define I40E_GLHMC_FCOEDDPOBJSZ_PMFCOEDDPOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_FCOEDDPOBJSZ_PMFCOEDDPOBJSZ_SHIFT)
+#define I40E_GLHMC_FCOEFBASE(_i) (0x000C6800 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FCOEFBASE_MAX_INDEX 15
+#define I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_SHIFT 0
+#define I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_SHIFT)
+#define I40E_GLHMC_FCOEFCNT(_i) (0x000C6900 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FCOEFCNT_MAX_INDEX 15
+#define I40E_GLHMC_FCOEFCNT_FPMFCOEFCNT_SHIFT 0
+#define I40E_GLHMC_FCOEFCNT_FPMFCOEFCNT_MASK I40E_MASK(0x7FFFFF, I40E_GLHMC_FCOEFCNT_FPMFCOEFCNT_SHIFT)
+#define I40E_GLHMC_FCOEFMAX 0x000C20D0 /* Reset: CORER */
+#define I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_SHIFT 0
+#define I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_MASK I40E_MASK(0xFFFF, I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_SHIFT)
+#define I40E_GLHMC_FCOEFOBJSZ 0x000C2018 /* Reset: CORER */
+#define I40E_GLHMC_FCOEFOBJSZ_PMFCOEFOBJSZ_SHIFT 0
+#define I40E_GLHMC_FCOEFOBJSZ_PMFCOEFOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_FCOEFOBJSZ_PMFCOEFOBJSZ_SHIFT)
+#define I40E_GLHMC_FCOEMAX 0x000C2014 /* Reset: CORER */
+#define I40E_GLHMC_FCOEMAX_PMFCOEMAX_SHIFT 0
+#define I40E_GLHMC_FCOEMAX_PMFCOEMAX_MASK I40E_MASK(0x1FFF, I40E_GLHMC_FCOEMAX_PMFCOEMAX_SHIFT)
+#define I40E_GLHMC_FSIAVBASE(_i) (0x000C5600 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FSIAVBASE_MAX_INDEX 15
+#define I40E_GLHMC_FSIAVBASE_FPMFSIAVBASE_SHIFT 0
+#define I40E_GLHMC_FSIAVBASE_FPMFSIAVBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FSIAVBASE_FPMFSIAVBASE_SHIFT)
+#define I40E_GLHMC_FSIAVCNT(_i) (0x000C5700 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FSIAVCNT_MAX_INDEX 15
+#define I40E_GLHMC_FSIAVCNT_FPMFSIAVCNT_SHIFT 0
+#define I40E_GLHMC_FSIAVCNT_FPMFSIAVCNT_MASK I40E_MASK(0x1FFFFFFF, I40E_GLHMC_FSIAVCNT_FPMFSIAVCNT_SHIFT)
+#define I40E_GLHMC_FSIAVCNT_RSVD_SHIFT 29
+#define I40E_GLHMC_FSIAVCNT_RSVD_MASK I40E_MASK(0x7, I40E_GLHMC_FSIAVCNT_RSVD_SHIFT)
+#define I40E_GLHMC_FSIAVMAX 0x000C2068 /* Reset: CORER */
+#define I40E_GLHMC_FSIAVMAX_PMFSIAVMAX_SHIFT 0
+#define I40E_GLHMC_FSIAVMAX_PMFSIAVMAX_MASK I40E_MASK(0x1FFFF, I40E_GLHMC_FSIAVMAX_PMFSIAVMAX_SHIFT)
+#define I40E_GLHMC_FSIAVOBJSZ 0x000C2064 /* Reset: CORER */
+#define I40E_GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_SHIFT 0
+#define I40E_GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_SHIFT)
+#define I40E_GLHMC_FSIMCBASE(_i) (0x000C6000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FSIMCBASE_MAX_INDEX 15
+#define I40E_GLHMC_FSIMCBASE_FPMFSIMCBASE_SHIFT 0
+#define I40E_GLHMC_FSIMCBASE_FPMFSIMCBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FSIMCBASE_FPMFSIMCBASE_SHIFT)
+#define I40E_GLHMC_FSIMCCNT(_i) (0x000C6100 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_FSIMCCNT_MAX_INDEX 15
+#define I40E_GLHMC_FSIMCCNT_FPMFSIMCSZ_SHIFT 0
+#define I40E_GLHMC_FSIMCCNT_FPMFSIMCSZ_MASK I40E_MASK(0x1FFFFFFF, I40E_GLHMC_FSIMCCNT_FPMFSIMCSZ_SHIFT)
+#define I40E_GLHMC_FSIMCMAX 0x000C2060 /* Reset: CORER */
+#define I40E_GLHMC_FSIMCMAX_PMFSIMCMAX_SHIFT 0
+#define I40E_GLHMC_FSIMCMAX_PMFSIMCMAX_MASK I40E_MASK(0x3FFF, I40E_GLHMC_FSIMCMAX_PMFSIMCMAX_SHIFT)
+#define I40E_GLHMC_FSIMCOBJSZ 0x000C205c /* Reset: CORER */
+#define I40E_GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_SHIFT 0
+#define I40E_GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_SHIFT)
+#define I40E_GLHMC_LANQMAX 0x000C2008 /* Reset: CORER */
+#define I40E_GLHMC_LANQMAX_PMLANQMAX_SHIFT 0
+#define I40E_GLHMC_LANQMAX_PMLANQMAX_MASK I40E_MASK(0x7FF, I40E_GLHMC_LANQMAX_PMLANQMAX_SHIFT)
+#define I40E_GLHMC_LANRXBASE(_i) (0x000C6400 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_LANRXBASE_MAX_INDEX 15
+#define I40E_GLHMC_LANRXBASE_FPMLANRXBASE_SHIFT 0
+#define I40E_GLHMC_LANRXBASE_FPMLANRXBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_LANRXBASE_FPMLANRXBASE_SHIFT)
+#define I40E_GLHMC_LANRXCNT(_i) (0x000C6500 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_LANRXCNT_MAX_INDEX 15
+#define I40E_GLHMC_LANRXCNT_FPMLANRXCNT_SHIFT 0
+#define I40E_GLHMC_LANRXCNT_FPMLANRXCNT_MASK I40E_MASK(0x7FF, I40E_GLHMC_LANRXCNT_FPMLANRXCNT_SHIFT)
+#define I40E_GLHMC_LANRXOBJSZ 0x000C200c /* Reset: CORER */
+#define I40E_GLHMC_LANRXOBJSZ_PMLANRXOBJSZ_SHIFT 0
+#define I40E_GLHMC_LANRXOBJSZ_PMLANRXOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_LANRXOBJSZ_PMLANRXOBJSZ_SHIFT)
+#define I40E_GLHMC_LANTXBASE(_i) (0x000C6200 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_LANTXBASE_MAX_INDEX 15
+#define I40E_GLHMC_LANTXBASE_FPMLANTXBASE_SHIFT 0
+#define I40E_GLHMC_LANTXBASE_FPMLANTXBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_LANTXBASE_FPMLANTXBASE_SHIFT)
+#define I40E_GLHMC_LANTXBASE_RSVD_SHIFT 24
+#define I40E_GLHMC_LANTXBASE_RSVD_MASK I40E_MASK(0xFF, I40E_GLHMC_LANTXBASE_RSVD_SHIFT)
+#define I40E_GLHMC_LANTXCNT(_i) (0x000C6300 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_LANTXCNT_MAX_INDEX 15
+#define I40E_GLHMC_LANTXCNT_FPMLANTXCNT_SHIFT 0
+#define I40E_GLHMC_LANTXCNT_FPMLANTXCNT_MASK I40E_MASK(0x7FF, I40E_GLHMC_LANTXCNT_FPMLANTXCNT_SHIFT)
+#define I40E_GLHMC_LANTXOBJSZ 0x000C2004 /* Reset: CORER */
+#define I40E_GLHMC_LANTXOBJSZ_PMLANTXOBJSZ_SHIFT 0
+#define I40E_GLHMC_LANTXOBJSZ_PMLANTXOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_LANTXOBJSZ_PMLANTXOBJSZ_SHIFT)
+#define I40E_GLHMC_PFASSIGN(_i) (0x000C0c00 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_PFASSIGN_MAX_INDEX 15
+#define I40E_GLHMC_PFASSIGN_PMFCNPFASSIGN_SHIFT 0
+#define I40E_GLHMC_PFASSIGN_PMFCNPFASSIGN_MASK I40E_MASK(0xF, I40E_GLHMC_PFASSIGN_PMFCNPFASSIGN_SHIFT)
+#define I40E_GLHMC_SDPART(_i) (0x000C0800 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
+#define I40E_GLHMC_SDPART_MAX_INDEX 15
+#define I40E_GLHMC_SDPART_PMSDBASE_SHIFT 0
+#define I40E_GLHMC_SDPART_PMSDBASE_MASK I40E_MASK(0xFFF, I40E_GLHMC_SDPART_PMSDBASE_SHIFT)
+#define I40E_GLHMC_SDPART_PMSDSIZE_SHIFT 16
+#define I40E_GLHMC_SDPART_PMSDSIZE_MASK I40E_MASK(0x1FFF, I40E_GLHMC_SDPART_PMSDSIZE_SHIFT)
+#define I40E_PFHMC_ERRORDATA 0x000C0500 /* Reset: PFR */
+#define I40E_PFHMC_ERRORDATA_HMC_ERROR_DATA_SHIFT 0
+#define I40E_PFHMC_ERRORDATA_HMC_ERROR_DATA_MASK I40E_MASK(0x3FFFFFFF, I40E_PFHMC_ERRORDATA_HMC_ERROR_DATA_SHIFT)
+#define I40E_PFHMC_ERRORINFO 0x000C0400 /* Reset: PFR */
+#define I40E_PFHMC_ERRORINFO_PMF_INDEX_SHIFT 0
+#define I40E_PFHMC_ERRORINFO_PMF_INDEX_MASK I40E_MASK(0x1F, I40E_PFHMC_ERRORINFO_PMF_INDEX_SHIFT)
+#define I40E_PFHMC_ERRORINFO_PMF_ISVF_SHIFT 7
+#define I40E_PFHMC_ERRORINFO_PMF_ISVF_MASK I40E_MASK(0x1, I40E_PFHMC_ERRORINFO_PMF_ISVF_SHIFT)
+#define I40E_PFHMC_ERRORINFO_HMC_ERROR_TYPE_SHIFT 8
+#define I40E_PFHMC_ERRORINFO_HMC_ERROR_TYPE_MASK I40E_MASK(0xF, I40E_PFHMC_ERRORINFO_HMC_ERROR_TYPE_SHIFT)
+#define I40E_PFHMC_ERRORINFO_HMC_OBJECT_TYPE_SHIFT 16
+#define I40E_PFHMC_ERRORINFO_HMC_OBJECT_TYPE_MASK I40E_MASK(0x1F, I40E_PFHMC_ERRORINFO_HMC_OBJECT_TYPE_SHIFT)
+#define I40E_PFHMC_ERRORINFO_ERROR_DETECTED_SHIFT 31
+#define I40E_PFHMC_ERRORINFO_ERROR_DETECTED_MASK I40E_MASK(0x1, I40E_PFHMC_ERRORINFO_ERROR_DETECTED_SHIFT)
+#define I40E_PFHMC_PDINV 0x000C0300 /* Reset: PFR */
+#define I40E_PFHMC_PDINV_PMSDIDX_SHIFT 0
+#define I40E_PFHMC_PDINV_PMSDIDX_MASK I40E_MASK(0xFFF, I40E_PFHMC_PDINV_PMSDIDX_SHIFT)
+#define I40E_PFHMC_PDINV_PMPDIDX_SHIFT 16
+#define I40E_PFHMC_PDINV_PMPDIDX_MASK I40E_MASK(0x1FF, I40E_PFHMC_PDINV_PMPDIDX_SHIFT)
+#define I40E_PFHMC_SDCMD 0x000C0000 /* Reset: PFR */
+#define I40E_PFHMC_SDCMD_PMSDIDX_SHIFT 0
+#define I40E_PFHMC_SDCMD_PMSDIDX_MASK I40E_MASK(0xFFF, I40E_PFHMC_SDCMD_PMSDIDX_SHIFT)
+#define I40E_PFHMC_SDCMD_PMSDWR_SHIFT 31
+#define I40E_PFHMC_SDCMD_PMSDWR_MASK I40E_MASK(0x1, I40E_PFHMC_SDCMD_PMSDWR_SHIFT)
+#define I40E_PFHMC_SDDATAHIGH 0x000C0200 /* Reset: PFR */
+#define I40E_PFHMC_SDDATAHIGH_PMSDDATAHIGH_SHIFT 0
+#define I40E_PFHMC_SDDATAHIGH_PMSDDATAHIGH_MASK I40E_MASK(0xFFFFFFFF, I40E_PFHMC_SDDATAHIGH_PMSDDATAHIGH_SHIFT)
+#define I40E_PFHMC_SDDATALOW 0x000C0100 /* Reset: PFR */
+#define I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT 0
+#define I40E_PFHMC_SDDATALOW_PMSDVALID_MASK I40E_MASK(0x1, I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT)
+#define I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT 1
+#define I40E_PFHMC_SDDATALOW_PMSDTYPE_MASK I40E_MASK(0x1, I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT)
+#define I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT 2
+#define I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_MASK I40E_MASK(0x3FF, I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT)
+#define I40E_PFHMC_SDDATALOW_PMSDDATALOW_SHIFT 12
+#define I40E_PFHMC_SDDATALOW_PMSDDATALOW_MASK I40E_MASK(0xFFFFF, I40E_PFHMC_SDDATALOW_PMSDDATALOW_SHIFT)
+#define I40E_GL_GP_FUSE(_i) (0x0009400C + ((_i) * 4)) /* _i=0...28 */ /* Reset: POR */
+#define I40E_GL_GP_FUSE_MAX_INDEX 28
+#define I40E_GL_GP_FUSE_GL_GP_FUSE_SHIFT 0
+#define I40E_GL_GP_FUSE_GL_GP_FUSE_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_GP_FUSE_GL_GP_FUSE_SHIFT)
+#define I40E_GL_UFUSE 0x00094008 /* Reset: POR */
+#define I40E_GL_UFUSE_FOUR_PORT_ENABLE_SHIFT 1
+#define I40E_GL_UFUSE_FOUR_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_GL_UFUSE_FOUR_PORT_ENABLE_SHIFT)
+#define I40E_GL_UFUSE_NIC_ID_SHIFT 2
+#define I40E_GL_UFUSE_NIC_ID_MASK I40E_MASK(0x1, I40E_GL_UFUSE_NIC_ID_SHIFT)
+#define I40E_GL_UFUSE_ULT_LOCKOUT_SHIFT 10
+#define I40E_GL_UFUSE_ULT_LOCKOUT_MASK I40E_MASK(0x1, I40E_GL_UFUSE_ULT_LOCKOUT_SHIFT)
+#define I40E_GL_UFUSE_CLS_LOCKOUT_SHIFT 11
+#define I40E_GL_UFUSE_CLS_LOCKOUT_MASK I40E_MASK(0x1, I40E_GL_UFUSE_CLS_LOCKOUT_SHIFT)
+#define I40E_EMPINT_GPIO_ENA 0x00088188 /* Reset: POR */
+#define I40E_EMPINT_GPIO_ENA_GPIO0_ENA_SHIFT 0
+#define I40E_EMPINT_GPIO_ENA_GPIO0_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO0_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO1_ENA_SHIFT 1
+#define I40E_EMPINT_GPIO_ENA_GPIO1_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO1_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO2_ENA_SHIFT 2
+#define I40E_EMPINT_GPIO_ENA_GPIO2_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO2_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO3_ENA_SHIFT 3
+#define I40E_EMPINT_GPIO_ENA_GPIO3_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO3_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO4_ENA_SHIFT 4
+#define I40E_EMPINT_GPIO_ENA_GPIO4_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO4_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO5_ENA_SHIFT 5
+#define I40E_EMPINT_GPIO_ENA_GPIO5_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO5_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO6_ENA_SHIFT 6
+#define I40E_EMPINT_GPIO_ENA_GPIO6_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO6_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO7_ENA_SHIFT 7
+#define I40E_EMPINT_GPIO_ENA_GPIO7_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO7_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO8_ENA_SHIFT 8
+#define I40E_EMPINT_GPIO_ENA_GPIO8_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO8_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO9_ENA_SHIFT 9
+#define I40E_EMPINT_GPIO_ENA_GPIO9_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO9_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO10_ENA_SHIFT 10
+#define I40E_EMPINT_GPIO_ENA_GPIO10_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO10_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO11_ENA_SHIFT 11
+#define I40E_EMPINT_GPIO_ENA_GPIO11_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO11_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO12_ENA_SHIFT 12
+#define I40E_EMPINT_GPIO_ENA_GPIO12_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO12_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO13_ENA_SHIFT 13
+#define I40E_EMPINT_GPIO_ENA_GPIO13_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO13_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO14_ENA_SHIFT 14
+#define I40E_EMPINT_GPIO_ENA_GPIO14_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO14_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO15_ENA_SHIFT 15
+#define I40E_EMPINT_GPIO_ENA_GPIO15_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO15_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO16_ENA_SHIFT 16
+#define I40E_EMPINT_GPIO_ENA_GPIO16_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO16_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO17_ENA_SHIFT 17
+#define I40E_EMPINT_GPIO_ENA_GPIO17_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO17_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO18_ENA_SHIFT 18
+#define I40E_EMPINT_GPIO_ENA_GPIO18_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO18_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO19_ENA_SHIFT 19
+#define I40E_EMPINT_GPIO_ENA_GPIO19_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO19_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO20_ENA_SHIFT 20
+#define I40E_EMPINT_GPIO_ENA_GPIO20_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO20_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO21_ENA_SHIFT 21
+#define I40E_EMPINT_GPIO_ENA_GPIO21_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO21_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO22_ENA_SHIFT 22
+#define I40E_EMPINT_GPIO_ENA_GPIO22_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO22_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO23_ENA_SHIFT 23
+#define I40E_EMPINT_GPIO_ENA_GPIO23_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO23_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO24_ENA_SHIFT 24
+#define I40E_EMPINT_GPIO_ENA_GPIO24_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO24_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO25_ENA_SHIFT 25
+#define I40E_EMPINT_GPIO_ENA_GPIO25_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO25_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO26_ENA_SHIFT 26
+#define I40E_EMPINT_GPIO_ENA_GPIO26_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO26_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO27_ENA_SHIFT 27
+#define I40E_EMPINT_GPIO_ENA_GPIO27_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO27_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO28_ENA_SHIFT 28
+#define I40E_EMPINT_GPIO_ENA_GPIO28_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO28_ENA_SHIFT)
+#define I40E_EMPINT_GPIO_ENA_GPIO29_ENA_SHIFT 29
+#define I40E_EMPINT_GPIO_ENA_GPIO29_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO29_ENA_SHIFT)
+#define I40E_PFGEN_PORTMDIO_NUM 0x0003F100 /* Reset: CORER */
+#define I40E_PFGEN_PORTMDIO_NUM_PORT_NUM_SHIFT 0
+#define I40E_PFGEN_PORTMDIO_NUM_PORT_NUM_MASK I40E_MASK(0x3, I40E_PFGEN_PORTMDIO_NUM_PORT_NUM_SHIFT)
+#define I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_SHIFT 4
+#define I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_MASK I40E_MASK(0x1, I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_SHIFT)
+#define I40E_PFINT_AEQCTL 0x00038700 /* Reset: CORER */
+#define I40E_PFINT_AEQCTL_MSIX_INDX_SHIFT 0
+#define I40E_PFINT_AEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_PFINT_AEQCTL_MSIX_INDX_SHIFT)
+#define I40E_PFINT_AEQCTL_ITR_INDX_SHIFT 11
+#define I40E_PFINT_AEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_AEQCTL_ITR_INDX_SHIFT)
+#define I40E_PFINT_AEQCTL_MSIX0_INDX_SHIFT 13
+#define I40E_PFINT_AEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_PFINT_AEQCTL_MSIX0_INDX_SHIFT)
+#define I40E_PFINT_AEQCTL_CAUSE_ENA_SHIFT 30
+#define I40E_PFINT_AEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_PFINT_AEQCTL_CAUSE_ENA_SHIFT)
+#define I40E_PFINT_AEQCTL_INTEVENT_SHIFT 31
+#define I40E_PFINT_AEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_PFINT_AEQCTL_INTEVENT_SHIFT)
+#define I40E_PFINT_CEQCTL(_INTPF) (0x00036800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: CORER */
+#define I40E_PFINT_CEQCTL_MAX_INDEX 511
+#define I40E_PFINT_CEQCTL_MSIX_INDX_SHIFT 0
+#define I40E_PFINT_CEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_PFINT_CEQCTL_MSIX_INDX_SHIFT)
+#define I40E_PFINT_CEQCTL_ITR_INDX_SHIFT 11
+#define I40E_PFINT_CEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_CEQCTL_ITR_INDX_SHIFT)
+#define I40E_PFINT_CEQCTL_MSIX0_INDX_SHIFT 13
+#define I40E_PFINT_CEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_PFINT_CEQCTL_MSIX0_INDX_SHIFT)
+#define I40E_PFINT_CEQCTL_NEXTQ_INDX_SHIFT 16
+#define I40E_PFINT_CEQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_PFINT_CEQCTL_NEXTQ_INDX_SHIFT)
+#define I40E_PFINT_CEQCTL_NEXTQ_TYPE_SHIFT 27
+#define I40E_PFINT_CEQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_PFINT_CEQCTL_NEXTQ_TYPE_SHIFT)
+#define I40E_PFINT_CEQCTL_CAUSE_ENA_SHIFT 30
+#define I40E_PFINT_CEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_PFINT_CEQCTL_CAUSE_ENA_SHIFT)
+#define I40E_PFINT_CEQCTL_INTEVENT_SHIFT 31
+#define I40E_PFINT_CEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_PFINT_CEQCTL_INTEVENT_SHIFT)
+#define I40E_PFINT_DYN_CTL0 0x00038480 /* Reset: PFR */
+#define I40E_PFINT_DYN_CTL0_INTENA_SHIFT 0
+#define I40E_PFINT_DYN_CTL0_INTENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_INTENA_SHIFT)
+#define I40E_PFINT_DYN_CTL0_CLEARPBA_SHIFT 1
+#define I40E_PFINT_DYN_CTL0_CLEARPBA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_CLEARPBA_SHIFT)
+#define I40E_PFINT_DYN_CTL0_SWINT_TRIG_SHIFT 2
+#define I40E_PFINT_DYN_CTL0_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_SWINT_TRIG_SHIFT)
+#define I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT 3
+#define I40E_PFINT_DYN_CTL0_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT)
+#define I40E_PFINT_DYN_CTL0_INTERVAL_SHIFT 5
+#define I40E_PFINT_DYN_CTL0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_DYN_CTL0_INTERVAL_SHIFT)
+#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT 24
+#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT)
+#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_SHIFT 25
+#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTL0_SW_ITR_INDX_SHIFT)
+#define I40E_PFINT_DYN_CTL0_INTENA_MSK_SHIFT 31
+#define I40E_PFINT_DYN_CTL0_INTENA_MSK_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_INTENA_MSK_SHIFT)
+#define I40E_PFINT_DYN_CTLN(_INTPF) (0x00034800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */
+#define I40E_PFINT_DYN_CTLN_MAX_INDEX 511
+#define I40E_PFINT_DYN_CTLN_INTENA_SHIFT 0
+#define I40E_PFINT_DYN_CTLN_INTENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_INTENA_SHIFT)
+#define I40E_PFINT_DYN_CTLN_CLEARPBA_SHIFT 1
+#define I40E_PFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_CLEARPBA_SHIFT)
+#define I40E_PFINT_DYN_CTLN_SWINT_TRIG_SHIFT 2
+#define I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_SWINT_TRIG_SHIFT)
+#define I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT 3
+#define I40E_PFINT_DYN_CTLN_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT)
+#define I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT 5
+#define I40E_PFINT_DYN_CTLN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT)
+#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT 24
+#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT)
+#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_SHIFT 25
+#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTLN_SW_ITR_INDX_SHIFT)
+#define I40E_PFINT_DYN_CTLN_INTENA_MSK_SHIFT 31
+#define I40E_PFINT_DYN_CTLN_INTENA_MSK_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_INTENA_MSK_SHIFT)
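
A reading aid, not part of the patch itself: every register field in this header is published as a _SHIFT/_MASK pair, and I40E_MASK(mask, shift) expands to the mask value pre-shifted into position, so field access is plain C bit arithmetic. A minimal sketch, assuming the rd32() accessor and struct i40e_hw from the driver's i40e_osdep.h/i40e_type.h are in scope; the helper name is hypothetical:

/* Illustrative only; assumes <stdint.h>, i40e_register.h and i40e_osdep.h. */
static inline uint16_t
example_pf_vector_itr_interval(struct i40e_hw *hw, int vector)
{
	/* Read the per-vector dynamic interrupt control register. */
	uint32_t reg = rd32(hw, I40E_PFINT_DYN_CTLN(vector));

	/* INTERVAL is the 12-bit field at bits 16:5 (shift 5, mask 0xFFF). */
	return (uint16_t)((reg & I40E_PFINT_DYN_CTLN_INTERVAL_MASK) >>
			  I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT);
}

The same extract pattern applies to every _SHIFT/_MASK pair below.
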
+#define I40E_PFINT_GPIO_ENA 0x00088080 /* Reset: CORER */
+#define I40E_PFINT_GPIO_ENA_GPIO0_ENA_SHIFT 0
+#define I40E_PFINT_GPIO_ENA_GPIO0_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO0_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO1_ENA_SHIFT 1
+#define I40E_PFINT_GPIO_ENA_GPIO1_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO1_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO2_ENA_SHIFT 2
+#define I40E_PFINT_GPIO_ENA_GPIO2_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO2_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO3_ENA_SHIFT 3
+#define I40E_PFINT_GPIO_ENA_GPIO3_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO3_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO4_ENA_SHIFT 4
+#define I40E_PFINT_GPIO_ENA_GPIO4_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO4_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO5_ENA_SHIFT 5
+#define I40E_PFINT_GPIO_ENA_GPIO5_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO5_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO6_ENA_SHIFT 6
+#define I40E_PFINT_GPIO_ENA_GPIO6_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO6_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO7_ENA_SHIFT 7
+#define I40E_PFINT_GPIO_ENA_GPIO7_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO7_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO8_ENA_SHIFT 8
+#define I40E_PFINT_GPIO_ENA_GPIO8_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO8_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO9_ENA_SHIFT 9
+#define I40E_PFINT_GPIO_ENA_GPIO9_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO9_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO10_ENA_SHIFT 10
+#define I40E_PFINT_GPIO_ENA_GPIO10_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO10_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO11_ENA_SHIFT 11
+#define I40E_PFINT_GPIO_ENA_GPIO11_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO11_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO12_ENA_SHIFT 12
+#define I40E_PFINT_GPIO_ENA_GPIO12_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO12_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO13_ENA_SHIFT 13
+#define I40E_PFINT_GPIO_ENA_GPIO13_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO13_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO14_ENA_SHIFT 14
+#define I40E_PFINT_GPIO_ENA_GPIO14_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO14_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO15_ENA_SHIFT 15
+#define I40E_PFINT_GPIO_ENA_GPIO15_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO15_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO16_ENA_SHIFT 16
+#define I40E_PFINT_GPIO_ENA_GPIO16_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO16_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO17_ENA_SHIFT 17
+#define I40E_PFINT_GPIO_ENA_GPIO17_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO17_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO18_ENA_SHIFT 18
+#define I40E_PFINT_GPIO_ENA_GPIO18_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO18_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO19_ENA_SHIFT 19
+#define I40E_PFINT_GPIO_ENA_GPIO19_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO19_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO20_ENA_SHIFT 20
+#define I40E_PFINT_GPIO_ENA_GPIO20_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO20_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO21_ENA_SHIFT 21
+#define I40E_PFINT_GPIO_ENA_GPIO21_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO21_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO22_ENA_SHIFT 22
+#define I40E_PFINT_GPIO_ENA_GPIO22_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO22_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO23_ENA_SHIFT 23
+#define I40E_PFINT_GPIO_ENA_GPIO23_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO23_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO24_ENA_SHIFT 24
+#define I40E_PFINT_GPIO_ENA_GPIO24_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO24_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO25_ENA_SHIFT 25
+#define I40E_PFINT_GPIO_ENA_GPIO25_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO25_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO26_ENA_SHIFT 26
+#define I40E_PFINT_GPIO_ENA_GPIO26_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO26_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO27_ENA_SHIFT 27
+#define I40E_PFINT_GPIO_ENA_GPIO27_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO27_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO28_ENA_SHIFT 28
+#define I40E_PFINT_GPIO_ENA_GPIO28_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO28_ENA_SHIFT)
+#define I40E_PFINT_GPIO_ENA_GPIO29_ENA_SHIFT 29
+#define I40E_PFINT_GPIO_ENA_GPIO29_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO29_ENA_SHIFT)
+#define I40E_PFINT_ICR0 0x00038780 /* Reset: CORER */
+#define I40E_PFINT_ICR0_INTEVENT_SHIFT 0
+#define I40E_PFINT_ICR0_INTEVENT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_INTEVENT_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_0_SHIFT 1
+#define I40E_PFINT_ICR0_QUEUE_0_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_0_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_1_SHIFT 2
+#define I40E_PFINT_ICR0_QUEUE_1_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_1_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_2_SHIFT 3
+#define I40E_PFINT_ICR0_QUEUE_2_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_2_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_3_SHIFT 4
+#define I40E_PFINT_ICR0_QUEUE_3_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_3_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_4_SHIFT 5
+#define I40E_PFINT_ICR0_QUEUE_4_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_4_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_5_SHIFT 6
+#define I40E_PFINT_ICR0_QUEUE_5_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_5_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_6_SHIFT 7
+#define I40E_PFINT_ICR0_QUEUE_6_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_6_SHIFT)
+#define I40E_PFINT_ICR0_QUEUE_7_SHIFT 8
+#define I40E_PFINT_ICR0_QUEUE_7_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_7_SHIFT)
+#define I40E_PFINT_ICR0_ECC_ERR_SHIFT 16
+#define I40E_PFINT_ICR0_ECC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ECC_ERR_SHIFT)
+#define I40E_PFINT_ICR0_MAL_DETECT_SHIFT 19
+#define I40E_PFINT_ICR0_MAL_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_MAL_DETECT_SHIFT)
+#define I40E_PFINT_ICR0_GRST_SHIFT 20
+#define I40E_PFINT_ICR0_GRST_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_GRST_SHIFT)
+#define I40E_PFINT_ICR0_PCI_EXCEPTION_SHIFT 21
+#define I40E_PFINT_ICR0_PCI_EXCEPTION_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_PCI_EXCEPTION_SHIFT)
+#define I40E_PFINT_ICR0_GPIO_SHIFT 22
+#define I40E_PFINT_ICR0_GPIO_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_GPIO_SHIFT)
+#define I40E_PFINT_ICR0_TIMESYNC_SHIFT 23
+#define I40E_PFINT_ICR0_TIMESYNC_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_TIMESYNC_SHIFT)
+#define I40E_PFINT_ICR0_STORM_DETECT_SHIFT 24
+#define I40E_PFINT_ICR0_STORM_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_STORM_DETECT_SHIFT)
+#define I40E_PFINT_ICR0_LINK_STAT_CHANGE_SHIFT 25
+#define I40E_PFINT_ICR0_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_LINK_STAT_CHANGE_SHIFT)
+#define I40E_PFINT_ICR0_HMC_ERR_SHIFT 26
+#define I40E_PFINT_ICR0_HMC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_HMC_ERR_SHIFT)
+#define I40E_PFINT_ICR0_PE_CRITERR_SHIFT 28
+#define I40E_PFINT_ICR0_PE_CRITERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_PE_CRITERR_SHIFT)
+#define I40E_PFINT_ICR0_VFLR_SHIFT 29
+#define I40E_PFINT_ICR0_VFLR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_VFLR_SHIFT)
+#define I40E_PFINT_ICR0_ADMINQ_SHIFT 30
+#define I40E_PFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ADMINQ_SHIFT)
+#define I40E_PFINT_ICR0_SWINT_SHIFT 31
+#define I40E_PFINT_ICR0_SWINT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_SWINT_SHIFT)
+#define I40E_PFINT_ICR0_ENA 0x00038800 /* Reset: CORER */
+#define I40E_PFINT_ICR0_ENA_ECC_ERR_SHIFT 16
+#define I40E_PFINT_ICR0_ENA_ECC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_ECC_ERR_SHIFT)
+#define I40E_PFINT_ICR0_ENA_MAL_DETECT_SHIFT 19
+#define I40E_PFINT_ICR0_ENA_MAL_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_MAL_DETECT_SHIFT)
+#define I40E_PFINT_ICR0_ENA_GRST_SHIFT 20
+#define I40E_PFINT_ICR0_ENA_GRST_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_GRST_SHIFT)
+#define I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_SHIFT 21
+#define I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_SHIFT)
+#define I40E_PFINT_ICR0_ENA_GPIO_SHIFT 22
+#define I40E_PFINT_ICR0_ENA_GPIO_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_GPIO_SHIFT)
+#define I40E_PFINT_ICR0_ENA_TIMESYNC_SHIFT 23
+#define I40E_PFINT_ICR0_ENA_TIMESYNC_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_TIMESYNC_SHIFT)
+#define I40E_PFINT_ICR0_ENA_STORM_DETECT_SHIFT 24
+#define I40E_PFINT_ICR0_ENA_STORM_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_STORM_DETECT_SHIFT)
+#define I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT 25
+#define I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT)
+#define I40E_PFINT_ICR0_ENA_HMC_ERR_SHIFT 26
+#define I40E_PFINT_ICR0_ENA_HMC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_HMC_ERR_SHIFT)
+#define I40E_PFINT_ICR0_ENA_PE_CRITERR_SHIFT 28
+#define I40E_PFINT_ICR0_ENA_PE_CRITERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_PE_CRITERR_SHIFT)
+#define I40E_PFINT_ICR0_ENA_VFLR_SHIFT 29
+#define I40E_PFINT_ICR0_ENA_VFLR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_VFLR_SHIFT)
+#define I40E_PFINT_ICR0_ENA_ADMINQ_SHIFT 30
+#define I40E_PFINT_ICR0_ENA_ADMINQ_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_ADMINQ_SHIFT)
+#define I40E_PFINT_ICR0_ENA_RSVD_SHIFT 31
+#define I40E_PFINT_ICR0_ENA_RSVD_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_RSVD_SHIFT)
+#define I40E_PFINT_ITR0(_i) (0x00038000 + ((_i) * 128)) /* _i=0...2 */ /* Reset: PFR */
+#define I40E_PFINT_ITR0_MAX_INDEX 2
+#define I40E_PFINT_ITR0_INTERVAL_SHIFT 0
+#define I40E_PFINT_ITR0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_ITR0_INTERVAL_SHIFT)
+#define I40E_PFINT_ITRN(_i, _INTPF) (0x00030000 + ((_i) * 2048 + (_INTPF) * 4)) /* _i=0...2, _INTPF=0...511 */ /* Reset: PFR */
+#define I40E_PFINT_ITRN_MAX_INDEX 2
+#define I40E_PFINT_ITRN_INTERVAL_SHIFT 0
+#define I40E_PFINT_ITRN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_ITRN_INTERVAL_SHIFT)
+#define I40E_PFINT_LNKLST0 0x00038500 /* Reset: PFR */
+#define I40E_PFINT_LNKLST0_FIRSTQ_INDX_SHIFT 0
+#define I40E_PFINT_LNKLST0_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_PFINT_LNKLST0_FIRSTQ_INDX_SHIFT)
+#define I40E_PFINT_LNKLST0_FIRSTQ_TYPE_SHIFT 11
+#define I40E_PFINT_LNKLST0_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_PFINT_LNKLST0_FIRSTQ_TYPE_SHIFT)
+#define I40E_PFINT_LNKLSTN(_INTPF) (0x00035000 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */
+#define I40E_PFINT_LNKLSTN_MAX_INDEX 511
+#define I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT 0
+#define I40E_PFINT_LNKLSTN_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT)
+#define I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT 11
+#define I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)
+#define I40E_PFINT_RATE0 0x00038580 /* Reset: PFR */
+#define I40E_PFINT_RATE0_INTERVAL_SHIFT 0
+#define I40E_PFINT_RATE0_INTERVAL_MASK I40E_MASK(0x3F, I40E_PFINT_RATE0_INTERVAL_SHIFT)
+#define I40E_PFINT_RATE0_INTRL_ENA_SHIFT 6
+#define I40E_PFINT_RATE0_INTRL_ENA_MASK I40E_MASK(0x1, I40E_PFINT_RATE0_INTRL_ENA_SHIFT)
+#define I40E_PFINT_RATEN(_INTPF) (0x00035800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */
+#define I40E_PFINT_RATEN_MAX_INDEX 511
+#define I40E_PFINT_RATEN_INTERVAL_SHIFT 0
+#define I40E_PFINT_RATEN_INTERVAL_MASK I40E_MASK(0x3F, I40E_PFINT_RATEN_INTERVAL_SHIFT)
+#define I40E_PFINT_RATEN_INTRL_ENA_SHIFT 6
+#define I40E_PFINT_RATEN_INTRL_ENA_MASK I40E_MASK(0x1, I40E_PFINT_RATEN_INTRL_ENA_SHIFT)
+#define I40E_PFINT_STAT_CTL0 0x00038400 /* Reset: PFR */
+#define I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT 2
+#define I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT)
+#define I40E_QINT_RQCTL(_Q) (0x0003A000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */
+#define I40E_QINT_RQCTL_MAX_INDEX 1535
+#define I40E_QINT_RQCTL_MSIX_INDX_SHIFT 0
+#define I40E_QINT_RQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_QINT_RQCTL_MSIX_INDX_SHIFT)
+#define I40E_QINT_RQCTL_ITR_INDX_SHIFT 11
+#define I40E_QINT_RQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_QINT_RQCTL_ITR_INDX_SHIFT)
+#define I40E_QINT_RQCTL_MSIX0_INDX_SHIFT 13
+#define I40E_QINT_RQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_QINT_RQCTL_MSIX0_INDX_SHIFT)
+#define I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT 16
+#define I40E_QINT_RQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT)
+#define I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT 27
+#define I40E_QINT_RQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT)
+#define I40E_QINT_RQCTL_CAUSE_ENA_SHIFT 30
+#define I40E_QINT_RQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_QINT_RQCTL_CAUSE_ENA_SHIFT)
+#define I40E_QINT_RQCTL_INTEVENT_SHIFT 31
+#define I40E_QINT_RQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_QINT_RQCTL_INTEVENT_SHIFT)
+#define I40E_QINT_TQCTL(_Q) (0x0003C000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */
+#define I40E_QINT_TQCTL_MAX_INDEX 1535
+#define I40E_QINT_TQCTL_MSIX_INDX_SHIFT 0
+#define I40E_QINT_TQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_QINT_TQCTL_MSIX_INDX_SHIFT)
+#define I40E_QINT_TQCTL_ITR_INDX_SHIFT 11
+#define I40E_QINT_TQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_QINT_TQCTL_ITR_INDX_SHIFT)
+#define I40E_QINT_TQCTL_MSIX0_INDX_SHIFT 13
+#define I40E_QINT_TQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_QINT_TQCTL_MSIX0_INDX_SHIFT)
+#define I40E_QINT_TQCTL_NEXTQ_INDX_SHIFT 16
+#define I40E_QINT_TQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_QINT_TQCTL_NEXTQ_INDX_SHIFT)
+#define I40E_QINT_TQCTL_NEXTQ_TYPE_SHIFT 27
+#define I40E_QINT_TQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_QINT_TQCTL_NEXTQ_TYPE_SHIFT)
+#define I40E_QINT_TQCTL_CAUSE_ENA_SHIFT 30
+#define I40E_QINT_TQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_QINT_TQCTL_CAUSE_ENA_SHIFT)
+#define I40E_QINT_TQCTL_INTEVENT_SHIFT 31
+#define I40E_QINT_TQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_QINT_TQCTL_INTEVENT_SHIFT)
+#define I40E_VFINT_DYN_CTL0(_VF) (0x0002A400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
+#define I40E_VFINT_DYN_CTL0_MAX_INDEX 127
+#define I40E_VFINT_DYN_CTL0_INTENA_SHIFT 0
+#define I40E_VFINT_DYN_CTL0_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_INTENA_SHIFT)
+#define I40E_VFINT_DYN_CTL0_CLEARPBA_SHIFT 1
+#define I40E_VFINT_DYN_CTL0_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_CLEARPBA_SHIFT)
+#define I40E_VFINT_DYN_CTL0_SWINT_TRIG_SHIFT 2
+#define I40E_VFINT_DYN_CTL0_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_SWINT_TRIG_SHIFT)
+#define I40E_VFINT_DYN_CTL0_ITR_INDX_SHIFT 3
+#define I40E_VFINT_DYN_CTL0_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL0_ITR_INDX_SHIFT)
+#define I40E_VFINT_DYN_CTL0_INTERVAL_SHIFT 5
+#define I40E_VFINT_DYN_CTL0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTL0_INTERVAL_SHIFT)
+#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT 24
+#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT)
+#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_SHIFT 25
+#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL0_SW_ITR_INDX_SHIFT)
+#define I40E_VFINT_DYN_CTL0_INTENA_MSK_SHIFT 31
+#define I40E_VFINT_DYN_CTL0_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_INTENA_MSK_SHIFT)
+#define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
+#define I40E_VFINT_DYN_CTLN_MAX_INDEX 511
+#define I40E_VFINT_DYN_CTLN_INTENA_SHIFT 0
+#define I40E_VFINT_DYN_CTLN_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_INTENA_SHIFT)
+#define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1
+#define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT)
+#define I40E_VFINT_DYN_CTLN_SWINT_TRIG_SHIFT 2
+#define I40E_VFINT_DYN_CTLN_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_SWINT_TRIG_SHIFT)
+#define I40E_VFINT_DYN_CTLN_ITR_INDX_SHIFT 3
+#define I40E_VFINT_DYN_CTLN_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN_ITR_INDX_SHIFT)
+#define I40E_VFINT_DYN_CTLN_INTERVAL_SHIFT 5
+#define I40E_VFINT_DYN_CTLN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTLN_INTERVAL_SHIFT)
+#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT 24
+#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT)
+#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_SHIFT 25
+#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN_SW_ITR_INDX_SHIFT)
+#define I40E_VFINT_DYN_CTLN_INTENA_MSK_SHIFT 31
+#define I40E_VFINT_DYN_CTLN_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_INTENA_MSK_SHIFT)
+#define I40E_VFINT_ICR0(_VF) (0x0002BC00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+#define I40E_VFINT_ICR0_MAX_INDEX 127
+#define I40E_VFINT_ICR0_INTEVENT_SHIFT 0
+#define I40E_VFINT_ICR0_INTEVENT_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_INTEVENT_SHIFT)
+#define I40E_VFINT_ICR0_QUEUE_0_SHIFT 1
+#define I40E_VFINT_ICR0_QUEUE_0_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_0_SHIFT)
+#define I40E_VFINT_ICR0_QUEUE_1_SHIFT 2
+#define I40E_VFINT_ICR0_QUEUE_1_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_1_SHIFT)
+#define I40E_VFINT_ICR0_QUEUE_2_SHIFT 3
+#define I40E_VFINT_ICR0_QUEUE_2_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_2_SHIFT)
+#define I40E_VFINT_ICR0_QUEUE_3_SHIFT 4
+#define I40E_VFINT_ICR0_QUEUE_3_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_3_SHIFT)
+#define I40E_VFINT_ICR0_LINK_STAT_CHANGE_SHIFT 25
+#define I40E_VFINT_ICR0_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_LINK_STAT_CHANGE_SHIFT)
+#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30
+#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT)
+#define I40E_VFINT_ICR0_SWINT_SHIFT 31
+#define I40E_VFINT_ICR0_SWINT_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_SWINT_SHIFT)
+#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+#define I40E_VFINT_ICR0_ENA_MAX_INDEX 127
+#define I40E_VFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT 25
+#define I40E_VFINT_ICR0_ENA_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT)
+#define I40E_VFINT_ICR0_ENA_ADMINQ_SHIFT 30
+#define I40E_VFINT_ICR0_ENA_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA_ADMINQ_SHIFT)
+#define I40E_VFINT_ICR0_ENA_RSVD_SHIFT 31
+#define I40E_VFINT_ICR0_ENA_RSVD_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA_RSVD_SHIFT)
+#define I40E_VFINT_ITR0(_i, _VF) (0x00028000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...2, _VF=0...127 */ /* Reset: VFR */
+#define I40E_VFINT_ITR0_MAX_INDEX 2
+#define I40E_VFINT_ITR0_INTERVAL_SHIFT 0
+#define I40E_VFINT_ITR0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITR0_INTERVAL_SHIFT)
+#define I40E_VFINT_ITRN(_i, _INTVF) (0x00020000 + ((_i) * 2048 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...511 */ /* Reset: VFR */
+#define I40E_VFINT_ITRN_MAX_INDEX 2
+#define I40E_VFINT_ITRN_INTERVAL_SHIFT 0
+#define I40E_VFINT_ITRN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITRN_INTERVAL_SHIFT)
+#define I40E_VFINT_STAT_CTL0(_VF) (0x0002A000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
+#define I40E_VFINT_STAT_CTL0_MAX_INDEX 127
+#define I40E_VFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT 2
+#define I40E_VFINT_STAT_CTL0_OTHER_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT)
+#define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
+#define I40E_VPINT_AEQCTL_MAX_INDEX 127
+#define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0
+#define I40E_VPINT_AEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT)
+#define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11
+#define I40E_VPINT_AEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_VPINT_AEQCTL_ITR_INDX_SHIFT)
+#define I40E_VPINT_AEQCTL_MSIX0_INDX_SHIFT 13
+#define I40E_VPINT_AEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_VPINT_AEQCTL_MSIX0_INDX_SHIFT)
+#define I40E_VPINT_AEQCTL_CAUSE_ENA_SHIFT 30
+#define I40E_VPINT_AEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_VPINT_AEQCTL_CAUSE_ENA_SHIFT)
+#define I40E_VPINT_AEQCTL_INTEVENT_SHIFT 31
+#define I40E_VPINT_AEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_VPINT_AEQCTL_INTEVENT_SHIFT)
+#define I40E_VPINT_CEQCTL(_INTVF) (0x00026800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: CORER */
+#define I40E_VPINT_CEQCTL_MAX_INDEX 511
+#define I40E_VPINT_CEQCTL_MSIX_INDX_SHIFT 0
+#define I40E_VPINT_CEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_VPINT_CEQCTL_MSIX_INDX_SHIFT)
+#define I40E_VPINT_CEQCTL_ITR_INDX_SHIFT 11
+#define I40E_VPINT_CEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_VPINT_CEQCTL_ITR_INDX_SHIFT)
+#define I40E_VPINT_CEQCTL_MSIX0_INDX_SHIFT 13
+#define I40E_VPINT_CEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_VPINT_CEQCTL_MSIX0_INDX_SHIFT)
+#define I40E_VPINT_CEQCTL_NEXTQ_INDX_SHIFT 16
+#define I40E_VPINT_CEQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_VPINT_CEQCTL_NEXTQ_INDX_SHIFT)
+#define I40E_VPINT_CEQCTL_NEXTQ_TYPE_SHIFT 27
+#define I40E_VPINT_CEQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_VPINT_CEQCTL_NEXTQ_TYPE_SHIFT)
+#define I40E_VPINT_CEQCTL_CAUSE_ENA_SHIFT 30
+#define I40E_VPINT_CEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_VPINT_CEQCTL_CAUSE_ENA_SHIFT)
+#define I40E_VPINT_CEQCTL_INTEVENT_SHIFT 31
+#define I40E_VPINT_CEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_VPINT_CEQCTL_INTEVENT_SHIFT)
+#define I40E_VPINT_LNKLST0(_VF) (0x0002A800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
+#define I40E_VPINT_LNKLST0_MAX_INDEX 127
+#define I40E_VPINT_LNKLST0_FIRSTQ_INDX_SHIFT 0
+#define I40E_VPINT_LNKLST0_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_VPINT_LNKLST0_FIRSTQ_INDX_SHIFT)
+#define I40E_VPINT_LNKLST0_FIRSTQ_TYPE_SHIFT 11
+#define I40E_VPINT_LNKLST0_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_VPINT_LNKLST0_FIRSTQ_TYPE_SHIFT)
+#define I40E_VPINT_LNKLSTN(_INTVF) (0x00025000 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
+#define I40E_VPINT_LNKLSTN_MAX_INDEX 511
+#define I40E_VPINT_LNKLSTN_FIRSTQ_INDX_SHIFT 0
+#define I40E_VPINT_LNKLSTN_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_VPINT_LNKLSTN_FIRSTQ_INDX_SHIFT)
+#define I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_SHIFT 11
+#define I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)
+#define I40E_VPINT_RATE0(_VF) (0x0002AC00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
+#define I40E_VPINT_RATE0_MAX_INDEX 127
+#define I40E_VPINT_RATE0_INTERVAL_SHIFT 0
+#define I40E_VPINT_RATE0_INTERVAL_MASK I40E_MASK(0x3F, I40E_VPINT_RATE0_INTERVAL_SHIFT)
+#define I40E_VPINT_RATE0_INTRL_ENA_SHIFT 6
+#define I40E_VPINT_RATE0_INTRL_ENA_MASK I40E_MASK(0x1, I40E_VPINT_RATE0_INTRL_ENA_SHIFT)
+#define I40E_VPINT_RATEN(_INTVF) (0x00025800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
+#define I40E_VPINT_RATEN_MAX_INDEX 511
+#define I40E_VPINT_RATEN_INTERVAL_SHIFT 0
+#define I40E_VPINT_RATEN_INTERVAL_MASK I40E_MASK(0x3F, I40E_VPINT_RATEN_INTERVAL_SHIFT)
+#define I40E_VPINT_RATEN_INTRL_ENA_SHIFT 6
+#define I40E_VPINT_RATEN_INTRL_ENA_MASK I40E_MASK(0x1, I40E_VPINT_RATEN_INTRL_ENA_SHIFT)
+#define I40E_GL_RDPU_CNTRL 0x00051060 /* Reset: CORER */
+#define I40E_GL_RDPU_CNTRL_RX_PAD_EN_SHIFT 0
+#define I40E_GL_RDPU_CNTRL_RX_PAD_EN_MASK I40E_MASK(0x1, I40E_GL_RDPU_CNTRL_RX_PAD_EN_SHIFT)
+#define I40E_GL_RDPU_CNTRL_ECO_SHIFT 1
+#define I40E_GL_RDPU_CNTRL_ECO_MASK I40E_MASK(0x7FFFFFFF, I40E_GL_RDPU_CNTRL_ECO_SHIFT)
+#define I40E_GLLAN_RCTL_0 0x0012A500 /* Reset: CORER */
+#define I40E_GLLAN_RCTL_0_PXE_MODE_SHIFT 0
+#define I40E_GLLAN_RCTL_0_PXE_MODE_MASK I40E_MASK(0x1, I40E_GLLAN_RCTL_0_PXE_MODE_SHIFT)
+#define I40E_GLLAN_TSOMSK_F 0x000442D8 /* Reset: CORER */
+#define I40E_GLLAN_TSOMSK_F_TCPMSKF_SHIFT 0
+#define I40E_GLLAN_TSOMSK_F_TCPMSKF_MASK I40E_MASK(0xFFF, I40E_GLLAN_TSOMSK_F_TCPMSKF_SHIFT)
+#define I40E_GLLAN_TSOMSK_L 0x000442E0 /* Reset: CORER */
+#define I40E_GLLAN_TSOMSK_L_TCPMSKL_SHIFT 0
+#define I40E_GLLAN_TSOMSK_L_TCPMSKL_MASK I40E_MASK(0xFFF, I40E_GLLAN_TSOMSK_L_TCPMSKL_SHIFT)
+#define I40E_GLLAN_TSOMSK_M 0x000442DC /* Reset: CORER */
+#define I40E_GLLAN_TSOMSK_M_TCPMSKM_SHIFT 0
+#define I40E_GLLAN_TSOMSK_M_TCPMSKM_MASK I40E_MASK(0xFFF, I40E_GLLAN_TSOMSK_M_TCPMSKM_SHIFT)
+#define I40E_GLLAN_TXPRE_QDIS(_i) (0x000e6500 + ((_i) * 4)) /* _i=0...11 */ /* Reset: CORER */
+#define I40E_GLLAN_TXPRE_QDIS_MAX_INDEX 11
+#define I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT 0
+#define I40E_GLLAN_TXPRE_QDIS_QINDX_MASK I40E_MASK(0x7FF, I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT)
+#define I40E_GLLAN_TXPRE_QDIS_QDIS_STAT_SHIFT 16
+#define I40E_GLLAN_TXPRE_QDIS_QDIS_STAT_MASK I40E_MASK(0x1, I40E_GLLAN_TXPRE_QDIS_QDIS_STAT_SHIFT)
+#define I40E_GLLAN_TXPRE_QDIS_SET_QDIS_SHIFT 30
+#define I40E_GLLAN_TXPRE_QDIS_SET_QDIS_MASK I40E_MASK(0x1, I40E_GLLAN_TXPRE_QDIS_SET_QDIS_SHIFT)
+#define I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_SHIFT 31
+#define I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_MASK I40E_MASK(0x1, I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_SHIFT)
+#define I40E_PFLAN_QALLOC 0x001C0400 /* Reset: CORER */
+#define I40E_PFLAN_QALLOC_FIRSTQ_SHIFT 0
+#define I40E_PFLAN_QALLOC_FIRSTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_FIRSTQ_SHIFT)
+#define I40E_PFLAN_QALLOC_LASTQ_SHIFT 16
+#define I40E_PFLAN_QALLOC_LASTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_LASTQ_SHIFT)
+#define I40E_PFLAN_QALLOC_VALID_SHIFT 31
+#define I40E_PFLAN_QALLOC_VALID_MASK I40E_MASK(0x1, I40E_PFLAN_QALLOC_VALID_SHIFT)
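
Another hedged sketch, not part of the patch: PFLAN_QALLOC above reports the contiguous LAN queue range owned by this PF, valid only once the VALID bit is set. The helper name and error convention are illustrative; rd32() is assumed from i40e_osdep.h:

/* Illustrative only; assumes <stdint.h>, i40e_register.h and i40e_osdep.h. */
static int
example_pf_queue_range(struct i40e_hw *hw, uint16_t *first, uint16_t *last)
{
	uint32_t reg = rd32(hw, I40E_PFLAN_QALLOC);

	if (!(reg & I40E_PFLAN_QALLOC_VALID_MASK))
		return -1; /* queue allocation not (yet) valid */
	*first = (uint16_t)((reg & I40E_PFLAN_QALLOC_FIRSTQ_MASK) >>
			    I40E_PFLAN_QALLOC_FIRSTQ_SHIFT);
	*last = (uint16_t)((reg & I40E_PFLAN_QALLOC_LASTQ_MASK) >>
			   I40E_PFLAN_QALLOC_LASTQ_SHIFT);
	return 0;
}
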
+#define I40E_QRX_ENA(_Q) (0x00120000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: PFR */
+#define I40E_QRX_ENA_MAX_INDEX 1535
+#define I40E_QRX_ENA_QENA_REQ_SHIFT 0
+#define I40E_QRX_ENA_QENA_REQ_MASK I40E_MASK(0x1, I40E_QRX_ENA_QENA_REQ_SHIFT)
+#define I40E_QRX_ENA_FAST_QDIS_SHIFT 1
+#define I40E_QRX_ENA_FAST_QDIS_MASK I40E_MASK(0x1, I40E_QRX_ENA_FAST_QDIS_SHIFT)
+#define I40E_QRX_ENA_QENA_STAT_SHIFT 2
+#define I40E_QRX_ENA_QENA_STAT_MASK I40E_MASK(0x1, I40E_QRX_ENA_QENA_STAT_SHIFT)
+#define I40E_QRX_TAIL(_Q) (0x00128000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */
+#define I40E_QRX_TAIL_MAX_INDEX 1535
+#define I40E_QRX_TAIL_TAIL_SHIFT 0
+#define I40E_QRX_TAIL_TAIL_MASK I40E_MASK(0x1FFF, I40E_QRX_TAIL_TAIL_SHIFT)
+#define I40E_QTX_CTL(_Q) (0x00104000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */
+#define I40E_QTX_CTL_MAX_INDEX 1535
+#define I40E_QTX_CTL_PFVF_Q_SHIFT 0
+#define I40E_QTX_CTL_PFVF_Q_MASK I40E_MASK(0x3, I40E_QTX_CTL_PFVF_Q_SHIFT)
+#define I40E_QTX_CTL_PF_INDX_SHIFT 2
+#define I40E_QTX_CTL_PF_INDX_MASK I40E_MASK(0xF, I40E_QTX_CTL_PF_INDX_SHIFT)
+#define I40E_QTX_CTL_VFVM_INDX_SHIFT 7
+#define I40E_QTX_CTL_VFVM_INDX_MASK I40E_MASK(0x1FF, I40E_QTX_CTL_VFVM_INDX_SHIFT)
+#define I40E_QTX_ENA(_Q) (0x00100000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: PFR */
+#define I40E_QTX_ENA_MAX_INDEX 1535
+#define I40E_QTX_ENA_QENA_REQ_SHIFT 0
+#define I40E_QTX_ENA_QENA_REQ_MASK I40E_MASK(0x1, I40E_QTX_ENA_QENA_REQ_SHIFT)
+#define I40E_QTX_ENA_FAST_QDIS_SHIFT 1
+#define I40E_QTX_ENA_FAST_QDIS_MASK I40E_MASK(0x1, I40E_QTX_ENA_FAST_QDIS_SHIFT)
+#define I40E_QTX_ENA_QENA_STAT_SHIFT 2
+#define I40E_QTX_ENA_QENA_STAT_MASK I40E_MASK(0x1, I40E_QTX_ENA_QENA_STAT_SHIFT)
+#define I40E_QTX_HEAD(_Q) (0x000E4000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */
+#define I40E_QTX_HEAD_MAX_INDEX 1535
+#define I40E_QTX_HEAD_HEAD_SHIFT 0
+#define I40E_QTX_HEAD_HEAD_MASK I40E_MASK(0x1FFF, I40E_QTX_HEAD_HEAD_SHIFT)
+#define I40E_QTX_HEAD_RS_PENDING_SHIFT 16
+#define I40E_QTX_HEAD_RS_PENDING_MASK I40E_MASK(0x1, I40E_QTX_HEAD_RS_PENDING_SHIFT)
+#define I40E_QTX_TAIL(_Q) (0x00108000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: PFR */
+#define I40E_QTX_TAIL_MAX_INDEX 1535
+#define I40E_QTX_TAIL_TAIL_SHIFT 0
+#define I40E_QTX_TAIL_TAIL_MASK I40E_MASK(0x1FFF, I40E_QTX_TAIL_TAIL_SHIFT)
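
One more illustration, again not part of the patch: the indexed macros just above, such as I40E_QRX_TAIL(_Q), compute a per-queue register address from a base and a 4-byte stride, with _MAX_INDEX giving the legal bound; the tail registers act as doorbells that the PMD writes after posting descriptors. A sketch with a hypothetical helper name, assuming wr32() from i40e_osdep.h:

/* Illustrative only; assumes <stdint.h>, i40e_register.h and i40e_osdep.h. */
static inline void
example_bump_rx_tail(struct i40e_hw *hw, uint16_t queue, uint16_t tail)
{
	if (queue > I40E_QRX_TAIL_MAX_INDEX)
		return; /* out of range for this device */
	/* TAIL is a 13-bit field at bits 12:0 of the doorbell register. */
	wr32(hw, I40E_QRX_TAIL(queue), tail & I40E_QRX_TAIL_TAIL_MASK);
}
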
+#define I40E_VPLAN_MAPENA(_VF) (0x00074000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
+#define I40E_VPLAN_MAPENA_MAX_INDEX 127
+#define I40E_VPLAN_MAPENA_TXRX_ENA_SHIFT 0
+#define I40E_VPLAN_MAPENA_TXRX_ENA_MASK I40E_MASK(0x1, I40E_VPLAN_MAPENA_TXRX_ENA_SHIFT)
+#define I40E_VPLAN_QTABLE(_i, _VF) (0x00070000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: VFR */
+#define I40E_VPLAN_QTABLE_MAX_INDEX 15
+#define I40E_VPLAN_QTABLE_QINDEX_SHIFT 0
+#define I40E_VPLAN_QTABLE_QINDEX_MASK I40E_MASK(0x7FF, I40E_VPLAN_QTABLE_QINDEX_SHIFT)
+#define I40E_VSILAN_QBASE(_VSI) (0x0020C800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */
+#define I40E_VSILAN_QBASE_MAX_INDEX 383
+#define I40E_VSILAN_QBASE_VSIBASE_SHIFT 0
+#define I40E_VSILAN_QBASE_VSIBASE_MASK I40E_MASK(0x7FF, I40E_VSILAN_QBASE_VSIBASE_SHIFT)
+#define I40E_VSILAN_QBASE_VSIQTABLE_ENA_SHIFT 11
+#define I40E_VSILAN_QBASE_VSIQTABLE_ENA_MASK I40E_MASK(0x1, I40E_VSILAN_QBASE_VSIQTABLE_ENA_SHIFT)
+#define I40E_VSILAN_QTABLE(_i, _VSI) (0x00200000 + ((_i) * 2048 + (_VSI) * 4)) /* _i=0...7, _VSI=0...383 */ /* Reset: PFR */
+#define I40E_VSILAN_QTABLE_MAX_INDEX 7
+#define I40E_VSILAN_QTABLE_QINDEX_0_SHIFT 0
+#define I40E_VSILAN_QTABLE_QINDEX_0_MASK I40E_MASK(0x7FF, I40E_VSILAN_QTABLE_QINDEX_0_SHIFT)
+#define I40E_VSILAN_QTABLE_QINDEX_1_SHIFT 16
+#define I40E_VSILAN_QTABLE_QINDEX_1_MASK I40E_MASK(0x7FF, I40E_VSILAN_QTABLE_QINDEX_1_SHIFT)
+#define I40E_PRTGL_SAH 0x001E2140 /* Reset: GLOBR */
+#define I40E_PRTGL_SAH_FC_SAH_SHIFT 0
+#define I40E_PRTGL_SAH_FC_SAH_MASK I40E_MASK(0xFFFF, I40E_PRTGL_SAH_FC_SAH_SHIFT)
+#define I40E_PRTGL_SAH_MFS_SHIFT 16
+#define I40E_PRTGL_SAH_MFS_MASK I40E_MASK(0xFFFF, I40E_PRTGL_SAH_MFS_SHIFT)
+#define I40E_PRTGL_SAL 0x001E2120 /* Reset: GLOBR */
+#define I40E_PRTGL_SAL_FC_SAL_SHIFT 0
+#define I40E_PRTGL_SAL_FC_SAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTGL_SAL_FC_SAL_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP 0x001E30E0 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP 0x001E3260 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP 0x001E32E0 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL 0x001E3360 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3110 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3120 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE 0x001E30C0 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_MASK I40E_MASK(0x1FF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1 0x001E3140 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2 0x001E3150 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE 0x001E30D0 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_MASK I40E_MASK(0x1FF, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i) (0x001E3370 + ((_i) * 16)) /* _i=0...8 */ /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3400 + ((_i) * 16)) /* _i=0...8 */ /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART1 0x001E34B0 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_SHIFT)
+#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART2 0x001E34C0 /* Reset: GLOBR */
+#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_SHIFT 0
+#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A 0x0008C480 /* Reset: GLOBR */
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE3_SHIFT 0
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE3_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE2_SHIFT 2
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE2_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE1_SHIFT 4
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE1_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE0_SHIFT 6
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE0_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE3_SHIFT 8
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE3_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE2_SHIFT 10
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE2_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE1_SHIFT 12
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE1_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE0_SHIFT 14
+#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE0_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B 0x0008C484 /* Reset: GLOBR */
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE3_SHIFT 0
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE3_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE2_SHIFT 2
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE2_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE1_SHIFT 4
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE1_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE0_SHIFT 6
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE0_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE3_SHIFT 8
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE3_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE2_SHIFT 10
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE2_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE1_SHIFT 12
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE1_SHIFT)
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE0_SHIFT 14
+#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE0_SHIFT)
+#define I40E_GL_FWRESETCNT 0x00083100 /* Reset: POR */
+#define I40E_GL_FWRESETCNT_FWRESETCNT_SHIFT 0
+#define I40E_GL_FWRESETCNT_FWRESETCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FWRESETCNT_FWRESETCNT_SHIFT)
+#define I40E_GL_MNG_FWSM 0x000B6134 /* Reset: POR */
+#define I40E_GL_MNG_FWSM_FW_MODES_SHIFT 0
+#define I40E_GL_MNG_FWSM_FW_MODES_MASK I40E_MASK(0x3, I40E_GL_MNG_FWSM_FW_MODES_SHIFT)
+#define I40E_GL_MNG_FWSM_EEP_RELOAD_IND_SHIFT 10
+#define I40E_GL_MNG_FWSM_EEP_RELOAD_IND_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_EEP_RELOAD_IND_SHIFT)
+#define I40E_GL_MNG_FWSM_CRC_ERROR_MODULE_SHIFT 11
+#define I40E_GL_MNG_FWSM_CRC_ERROR_MODULE_MASK I40E_MASK(0xF, I40E_GL_MNG_FWSM_CRC_ERROR_MODULE_SHIFT)
+#define I40E_GL_MNG_FWSM_FW_STATUS_VALID_SHIFT 15
+#define I40E_GL_MNG_FWSM_FW_STATUS_VALID_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_FW_STATUS_VALID_SHIFT)
+#define I40E_GL_MNG_FWSM_RESET_CNT_SHIFT 16
+#define I40E_GL_MNG_FWSM_RESET_CNT_MASK I40E_MASK(0x7, I40E_GL_MNG_FWSM_RESET_CNT_SHIFT)
+#define I40E_GL_MNG_FWSM_EXT_ERR_IND_SHIFT 19
+#define I40E_GL_MNG_FWSM_EXT_ERR_IND_MASK I40E_MASK(0x3F, I40E_GL_MNG_FWSM_EXT_ERR_IND_SHIFT)
+#define I40E_GL_MNG_FWSM_PHY_SERDES0_CONFIG_ERR_SHIFT 26
+#define I40E_GL_MNG_FWSM_PHY_SERDES0_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES0_CONFIG_ERR_SHIFT)
+#define I40E_GL_MNG_FWSM_PHY_SERDES1_CONFIG_ERR_SHIFT 27
+#define I40E_GL_MNG_FWSM_PHY_SERDES1_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES1_CONFIG_ERR_SHIFT)
+#define I40E_GL_MNG_FWSM_PHY_SERDES2_CONFIG_ERR_SHIFT 28
+#define I40E_GL_MNG_FWSM_PHY_SERDES2_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES2_CONFIG_ERR_SHIFT)
+#define I40E_GL_MNG_FWSM_PHY_SERDES3_CONFIG_ERR_SHIFT 29
+#define I40E_GL_MNG_FWSM_PHY_SERDES3_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES3_CONFIG_ERR_SHIFT)
+#define I40E_GL_MNG_HWARB_CTRL 0x000B6130 /* Reset: POR */
+#define I40E_GL_MNG_HWARB_CTRL_NCSI_ARB_EN_SHIFT 0
+#define I40E_GL_MNG_HWARB_CTRL_NCSI_ARB_EN_MASK I40E_MASK(0x1, I40E_GL_MNG_HWARB_CTRL_NCSI_ARB_EN_SHIFT)
+#define I40E_PRT_MNG_FTFT_DATA(_i) (0x000852A0 + ((_i) * 32)) /* _i=0...31 */ /* Reset: POR */
+#define I40E_PRT_MNG_FTFT_DATA_MAX_INDEX 31
+#define I40E_PRT_MNG_FTFT_DATA_DWORD_SHIFT 0
+#define I40E_PRT_MNG_FTFT_DATA_DWORD_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_FTFT_DATA_DWORD_SHIFT)
+#define I40E_PRT_MNG_FTFT_LENGTH 0x00085260 /* Reset: POR */
+#define I40E_PRT_MNG_FTFT_LENGTH_LENGTH_SHIFT 0
+#define I40E_PRT_MNG_FTFT_LENGTH_LENGTH_MASK I40E_MASK(0xFF, I40E_PRT_MNG_FTFT_LENGTH_LENGTH_SHIFT)
+#define I40E_PRT_MNG_FTFT_MASK(_i) (0x00085160 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */
+#define I40E_PRT_MNG_FTFT_MASK_MAX_INDEX 7
+#define I40E_PRT_MNG_FTFT_MASK_MASK_SHIFT 0
+#define I40E_PRT_MNG_FTFT_MASK_MASK_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_FTFT_MASK_MASK_SHIFT)
+#define I40E_PRT_MNG_MANC 0x00256A20 /* Reset: POR */
+#define I40E_PRT_MNG_MANC_FLOW_CONTROL_DISCARD_SHIFT 0
+#define I40E_PRT_MNG_MANC_FLOW_CONTROL_DISCARD_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_FLOW_CONTROL_DISCARD_SHIFT)
+#define I40E_PRT_MNG_MANC_NCSI_DISCARD_SHIFT 1
+#define I40E_PRT_MNG_MANC_NCSI_DISCARD_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_NCSI_DISCARD_SHIFT)
+#define I40E_PRT_MNG_MANC_RCV_TCO_EN_SHIFT 17
+#define I40E_PRT_MNG_MANC_RCV_TCO_EN_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_RCV_TCO_EN_SHIFT)
+#define I40E_PRT_MNG_MANC_RCV_ALL_SHIFT 19
+#define I40E_PRT_MNG_MANC_RCV_ALL_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_RCV_ALL_SHIFT)
+#define I40E_PRT_MNG_MANC_FIXED_NET_TYPE_SHIFT 25
+#define I40E_PRT_MNG_MANC_FIXED_NET_TYPE_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_FIXED_NET_TYPE_SHIFT)
+#define I40E_PRT_MNG_MANC_NET_TYPE_SHIFT 26
+#define I40E_PRT_MNG_MANC_NET_TYPE_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_NET_TYPE_SHIFT)
+#define I40E_PRT_MNG_MANC_EN_BMC2OS_SHIFT 28
+#define I40E_PRT_MNG_MANC_EN_BMC2OS_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_EN_BMC2OS_SHIFT)
+#define I40E_PRT_MNG_MANC_EN_BMC2NET_SHIFT 29
+#define I40E_PRT_MNG_MANC_EN_BMC2NET_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_EN_BMC2NET_SHIFT)
+#define I40E_PRT_MNG_MAVTV(_i) (0x00255900 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */
+#define I40E_PRT_MNG_MAVTV_MAX_INDEX 7
+#define I40E_PRT_MNG_MAVTV_VID_SHIFT 0
+#define I40E_PRT_MNG_MAVTV_VID_MASK I40E_MASK(0xFFF, I40E_PRT_MNG_MAVTV_VID_SHIFT)
+#define I40E_PRT_MNG_MDEF(_i) (0x00255D00 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */
+#define I40E_PRT_MNG_MDEF_MAX_INDEX 7
+#define I40E_PRT_MNG_MDEF_MAC_EXACT_AND_SHIFT 0
+#define I40E_PRT_MNG_MDEF_MAC_EXACT_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_MAC_EXACT_AND_SHIFT)
+#define I40E_PRT_MNG_MDEF_BROADCAST_AND_SHIFT 4
+#define I40E_PRT_MNG_MDEF_BROADCAST_AND_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_BROADCAST_AND_SHIFT)
+#define I40E_PRT_MNG_MDEF_VLAN_AND_SHIFT 5
+#define I40E_PRT_MNG_MDEF_VLAN_AND_MASK I40E_MASK(0xFF, I40E_PRT_MNG_MDEF_VLAN_AND_SHIFT)
+#define I40E_PRT_MNG_MDEF_IPV4_ADDRESS_AND_SHIFT 13
+#define I40E_PRT_MNG_MDEF_IPV4_ADDRESS_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_IPV4_ADDRESS_AND_SHIFT)
+#define I40E_PRT_MNG_MDEF_IPV6_ADDRESS_AND_SHIFT 17
+#define I40E_PRT_MNG_MDEF_IPV6_ADDRESS_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_IPV6_ADDRESS_AND_SHIFT)
+#define I40E_PRT_MNG_MDEF_MAC_EXACT_OR_SHIFT 21
+#define I40E_PRT_MNG_MDEF_MAC_EXACT_OR_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_MAC_EXACT_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_BROADCAST_OR_SHIFT 25
+#define I40E_PRT_MNG_MDEF_BROADCAST_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_BROADCAST_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_MULTICAST_AND_SHIFT 26
+#define I40E_PRT_MNG_MDEF_MULTICAST_AND_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_MULTICAST_AND_SHIFT)
+#define I40E_PRT_MNG_MDEF_ARP_REQUEST_OR_SHIFT 27
+#define I40E_PRT_MNG_MDEF_ARP_REQUEST_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_ARP_REQUEST_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_ARP_RESPONSE_OR_SHIFT 28
+#define I40E_PRT_MNG_MDEF_ARP_RESPONSE_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_ARP_RESPONSE_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_SHIFT 29
+#define I40E_PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_PORT_0X298_OR_SHIFT 30
+#define I40E_PRT_MNG_MDEF_PORT_0X298_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_PORT_0X298_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_PORT_0X26F_OR_SHIFT 31
+#define I40E_PRT_MNG_MDEF_PORT_0X26F_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_PORT_0X26F_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT(_i) (0x00255F00 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */
+#define I40E_PRT_MNG_MDEF_EXT_MAX_INDEX 7
+#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_SHIFT 0
+#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_SHIFT 4
+#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_FLEX_PORT_OR_SHIFT 8
+#define I40E_PRT_MNG_MDEF_EXT_FLEX_PORT_OR_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MDEF_EXT_FLEX_PORT_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_FLEX_TCO_SHIFT 24
+#define I40E_PRT_MNG_MDEF_EXT_FLEX_TCO_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_FLEX_TCO_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_SHIFT 25
+#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_SHIFT 26
+#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_SHIFT 27
+#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_ICMP_OR_SHIFT 28
+#define I40E_PRT_MNG_MDEF_EXT_ICMP_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_ICMP_OR_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_MLD_SHIFT 29
+#define I40E_PRT_MNG_MDEF_EXT_MLD_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_MLD_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_SHIFT 30
+#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_SHIFT)
+#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_SHIFT 31
+#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_SHIFT)
+#define I40E_PRT_MNG_MDEFVSI(_i) (0x00256580 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */
+#define I40E_PRT_MNG_MDEFVSI_MAX_INDEX 3
+#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2N_SHIFT 0
+#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2N_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MDEFVSI_MDEFVSI_2N_SHIFT)
+#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2NP1_SHIFT 16
+#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2NP1_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MDEFVSI_MDEFVSI_2NP1_SHIFT)
+#define I40E_PRT_MNG_METF(_i) (0x00256780 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */
+#define I40E_PRT_MNG_METF_MAX_INDEX 3
+#define I40E_PRT_MNG_METF_ETYPE_SHIFT 0
+#define I40E_PRT_MNG_METF_ETYPE_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_METF_ETYPE_SHIFT)
+#define I40E_PRT_MNG_METF_POLARITY_SHIFT 30
+#define I40E_PRT_MNG_METF_POLARITY_MASK I40E_MASK(0x1, I40E_PRT_MNG_METF_POLARITY_SHIFT)
+#define I40E_PRT_MNG_MFUTP(_i) (0x00254E00 + ((_i) * 32)) /* _i=0...15 */ /* Reset: POR */
+#define I40E_PRT_MNG_MFUTP_MAX_INDEX 15
+#define I40E_PRT_MNG_MFUTP_MFUTP_N_SHIFT 0
+#define I40E_PRT_MNG_MFUTP_MFUTP_N_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MFUTP_MFUTP_N_SHIFT)
+#define I40E_PRT_MNG_MFUTP_UDP_SHIFT 16
+#define I40E_PRT_MNG_MFUTP_UDP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MFUTP_UDP_SHIFT)
+#define I40E_PRT_MNG_MFUTP_TCP_SHIFT 17
+#define I40E_PRT_MNG_MFUTP_TCP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MFUTP_TCP_SHIFT)
+#define I40E_PRT_MNG_MFUTP_SOURCE_DESTINATION_SHIFT 18
+#define I40E_PRT_MNG_MFUTP_SOURCE_DESTINATION_MASK I40E_MASK(0x1, I40E_PRT_MNG_MFUTP_SOURCE_DESTINATION_SHIFT)
+#define I40E_PRT_MNG_MIPAF4(_i) (0x00256280 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */
+#define I40E_PRT_MNG_MIPAF4_MAX_INDEX 3
+#define I40E_PRT_MNG_MIPAF4_MIPAF_SHIFT 0
+#define I40E_PRT_MNG_MIPAF4_MIPAF_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_MIPAF4_MIPAF_SHIFT)
+#define I40E_PRT_MNG_MIPAF6(_i) (0x00254200 + ((_i) * 32)) /* _i=0...15 */ /* Reset: POR */
+#define I40E_PRT_MNG_MIPAF6_MAX_INDEX 15
+#define I40E_PRT_MNG_MIPAF6_MIPAF_SHIFT 0
+#define I40E_PRT_MNG_MIPAF6_MIPAF_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_MIPAF6_MIPAF_SHIFT)
+#define I40E_PRT_MNG_MMAH(_i) (0x00256380 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */
+#define I40E_PRT_MNG_MMAH_MAX_INDEX 3
+#define I40E_PRT_MNG_MMAH_MMAH_SHIFT 0
+#define I40E_PRT_MNG_MMAH_MMAH_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MMAH_MMAH_SHIFT)
+#define I40E_PRT_MNG_MMAL(_i) (0x00256480 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */
+#define I40E_PRT_MNG_MMAL_MAX_INDEX 3
+#define I40E_PRT_MNG_MMAL_MMAL_SHIFT 0
+#define I40E_PRT_MNG_MMAL_MMAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_MMAL_MMAL_SHIFT)
+#define I40E_PRT_MNG_MNGONLY 0x00256A60 /* Reset: POR */
+#define I40E_PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_SHIFT 0
+#define I40E_PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_MASK I40E_MASK(0xFF, I40E_PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_SHIFT)
+#define I40E_PRT_MNG_MSFM 0x00256AA0 /* Reset: POR */
+#define I40E_PRT_MNG_MSFM_PORT_26F_UDP_SHIFT 0
+#define I40E_PRT_MNG_MSFM_PORT_26F_UDP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_26F_UDP_SHIFT)
+#define I40E_PRT_MNG_MSFM_PORT_26F_TCP_SHIFT 1
+#define I40E_PRT_MNG_MSFM_PORT_26F_TCP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_26F_TCP_SHIFT)
+#define I40E_PRT_MNG_MSFM_PORT_298_UDP_SHIFT 2
+#define I40E_PRT_MNG_MSFM_PORT_298_UDP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_298_UDP_SHIFT)
+#define I40E_PRT_MNG_MSFM_PORT_298_TCP_SHIFT 3
+#define I40E_PRT_MNG_MSFM_PORT_298_TCP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_298_TCP_SHIFT)
+#define I40E_PRT_MNG_MSFM_IPV6_0_MASK_SHIFT 4
+#define I40E_PRT_MNG_MSFM_IPV6_0_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_0_MASK_SHIFT)
+#define I40E_PRT_MNG_MSFM_IPV6_1_MASK_SHIFT 5
+#define I40E_PRT_MNG_MSFM_IPV6_1_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_1_MASK_SHIFT)
+#define I40E_PRT_MNG_MSFM_IPV6_2_MASK_SHIFT 6
+#define I40E_PRT_MNG_MSFM_IPV6_2_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_2_MASK_SHIFT)
+#define I40E_PRT_MNG_MSFM_IPV6_3_MASK_SHIFT 7
+#define I40E_PRT_MNG_MSFM_IPV6_3_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_3_MASK_SHIFT)
+#define I40E_MSIX_PBA(_i) (0x00001000 + ((_i) * 4)) /* _i=0...5 */ /* Reset: FLR */
+#define I40E_MSIX_PBA_MAX_INDEX 5
+#define I40E_MSIX_PBA_PENBIT_SHIFT 0
+#define I40E_MSIX_PBA_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_MSIX_PBA_PENBIT_SHIFT)
+#define I40E_MSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */
+#define I40E_MSIX_TADD_MAX_INDEX 128
+#define I40E_MSIX_TADD_MSIXTADD10_SHIFT 0
+#define I40E_MSIX_TADD_MSIXTADD10_MASK I40E_MASK(0x3,
I40E_MSIX_TADD_MSIXTADD10_SHIFT) +#define I40E_MSIX_TADD_MSIXTADD_SHIFT 2 +#define I40E_MSIX_TADD_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_MSIX_TADD_MSIXTADD_SHIFT) +#define I40E_MSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */ +#define I40E_MSIX_TMSG_MAX_INDEX 128 +#define I40E_MSIX_TMSG_MSIXTMSG_SHIFT 0 +#define I40E_MSIX_TMSG_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_MSIX_TMSG_MSIXTMSG_SHIFT) +#define I40E_MSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */ +#define I40E_MSIX_TUADD_MAX_INDEX 128 +#define I40E_MSIX_TUADD_MSIXTUADD_SHIFT 0 +#define I40E_MSIX_TUADD_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_MSIX_TUADD_MSIXTUADD_SHIFT) +#define I40E_MSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */ +#define I40E_MSIX_TVCTRL_MAX_INDEX 128 +#define I40E_MSIX_TVCTRL_MASK_SHIFT 0 +#define I40E_MSIX_TVCTRL_MASK_MASK I40E_MASK(0x1, I40E_MSIX_TVCTRL_MASK_SHIFT) +#define I40E_VFMSIX_PBA1(_i) (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */ +#define I40E_VFMSIX_PBA1_MAX_INDEX 19 +#define I40E_VFMSIX_PBA1_PENBIT_SHIFT 0 +#define I40E_VFMSIX_PBA1_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_PBA1_PENBIT_SHIFT) +#define I40E_VFMSIX_TADD1(_i) (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TADD1_MAX_INDEX 639 +#define I40E_VFMSIX_TADD1_MSIXTADD10_SHIFT 0 +#define I40E_VFMSIX_TADD1_MSIXTADD10_MASK I40E_MASK(0x3, I40E_VFMSIX_TADD1_MSIXTADD10_SHIFT) +#define I40E_VFMSIX_TADD1_MSIXTADD_SHIFT 2 +#define I40E_VFMSIX_TADD1_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_VFMSIX_TADD1_MSIXTADD_SHIFT) +#define I40E_VFMSIX_TMSG1(_i) (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TMSG1_MAX_INDEX 639 +#define I40E_VFMSIX_TMSG1_MSIXTMSG_SHIFT 0 +#define I40E_VFMSIX_TMSG1_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TMSG1_MSIXTMSG_SHIFT) +#define I40E_VFMSIX_TUADD1(_i) (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TUADD1_MAX_INDEX 639 +#define I40E_VFMSIX_TUADD1_MSIXTUADD_SHIFT 0 +#define I40E_VFMSIX_TUADD1_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TUADD1_MSIXTUADD_SHIFT) +#define I40E_VFMSIX_TVCTRL1(_i) (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TVCTRL1_MAX_INDEX 639 +#define I40E_VFMSIX_TVCTRL1_MASK_SHIFT 0 +#define I40E_VFMSIX_TVCTRL1_MASK_MASK I40E_MASK(0x1, I40E_VFMSIX_TVCTRL1_MASK_SHIFT) +#define I40E_GLNVM_FLA 0x000B6108 /* Reset: POR */ +#define I40E_GLNVM_FLA_FL_SCK_SHIFT 0 +#define I40E_GLNVM_FLA_FL_SCK_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_SCK_SHIFT) +#define I40E_GLNVM_FLA_FL_CE_SHIFT 1 +#define I40E_GLNVM_FLA_FL_CE_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_CE_SHIFT) +#define I40E_GLNVM_FLA_FL_SI_SHIFT 2 +#define I40E_GLNVM_FLA_FL_SI_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_SI_SHIFT) +#define I40E_GLNVM_FLA_FL_SO_SHIFT 3 +#define I40E_GLNVM_FLA_FL_SO_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_SO_SHIFT) +#define I40E_GLNVM_FLA_FL_REQ_SHIFT 4 +#define I40E_GLNVM_FLA_FL_REQ_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_REQ_SHIFT) +#define I40E_GLNVM_FLA_FL_GNT_SHIFT 5 +#define I40E_GLNVM_FLA_FL_GNT_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_GNT_SHIFT) +#define I40E_GLNVM_FLA_LOCKED_SHIFT 6 +#define I40E_GLNVM_FLA_LOCKED_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_LOCKED_SHIFT) +#define I40E_GLNVM_FLA_FL_SADDR_SHIFT 18 +#define I40E_GLNVM_FLA_FL_SADDR_MASK I40E_MASK(0x7FF, I40E_GLNVM_FLA_FL_SADDR_SHIFT) +#define I40E_GLNVM_FLA_FL_BUSY_SHIFT 30 +#define I40E_GLNVM_FLA_FL_BUSY_MASK I40E_MASK(0x1, 
I40E_GLNVM_FLA_FL_BUSY_SHIFT) +#define I40E_GLNVM_FLA_FL_DER_SHIFT 31 +#define I40E_GLNVM_FLA_FL_DER_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_DER_SHIFT) +#define I40E_GLNVM_FLASHID 0x000B6104 /* Reset: POR */ +#define I40E_GLNVM_FLASHID_FLASHID_SHIFT 0 +#define I40E_GLNVM_FLASHID_FLASHID_MASK I40E_MASK(0xFFFFFF, I40E_GLNVM_FLASHID_FLASHID_SHIFT) +#define I40E_GLNVM_FLASHID_FLEEP_PERF_SHIFT 31 +#define I40E_GLNVM_FLASHID_FLEEP_PERF_MASK I40E_MASK(0x1, I40E_GLNVM_FLASHID_FLEEP_PERF_SHIFT) +#define I40E_GLNVM_GENS 0x000B6100 /* Reset: POR */ +#define I40E_GLNVM_GENS_NVM_PRES_SHIFT 0 +#define I40E_GLNVM_GENS_NVM_PRES_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_NVM_PRES_SHIFT) +#define I40E_GLNVM_GENS_SR_SIZE_SHIFT 5 +#define I40E_GLNVM_GENS_SR_SIZE_MASK I40E_MASK(0x7, I40E_GLNVM_GENS_SR_SIZE_SHIFT) +#define I40E_GLNVM_GENS_BANK1VAL_SHIFT 8 +#define I40E_GLNVM_GENS_BANK1VAL_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_BANK1VAL_SHIFT) +#define I40E_GLNVM_GENS_ALT_PRST_SHIFT 23 +#define I40E_GLNVM_GENS_ALT_PRST_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_ALT_PRST_SHIFT) +#define I40E_GLNVM_GENS_FL_AUTO_RD_SHIFT 25 +#define I40E_GLNVM_GENS_FL_AUTO_RD_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_FL_AUTO_RD_SHIFT) +#define I40E_GLNVM_PROTCSR(_i) (0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset: POR */ +#define I40E_GLNVM_PROTCSR_MAX_INDEX 59 +#define I40E_GLNVM_PROTCSR_ADDR_BLOCK_SHIFT 0 +#define I40E_GLNVM_PROTCSR_ADDR_BLOCK_MASK I40E_MASK(0xFFFFFF, I40E_GLNVM_PROTCSR_ADDR_BLOCK_SHIFT) +#define I40E_GLNVM_SRCTL 0x000B6110 /* Reset: POR */ +#define I40E_GLNVM_SRCTL_SRBUSY_SHIFT 0 +#define I40E_GLNVM_SRCTL_SRBUSY_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_SRBUSY_SHIFT) +#define I40E_GLNVM_SRCTL_ADDR_SHIFT 14 +#define I40E_GLNVM_SRCTL_ADDR_MASK I40E_MASK(0x7FFF, I40E_GLNVM_SRCTL_ADDR_SHIFT) +#define I40E_GLNVM_SRCTL_WRITE_SHIFT 29 +#define I40E_GLNVM_SRCTL_WRITE_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_WRITE_SHIFT) +#define I40E_GLNVM_SRCTL_START_SHIFT 30 +#define I40E_GLNVM_SRCTL_START_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_START_SHIFT) +#define I40E_GLNVM_SRCTL_DONE_SHIFT 31 +#define I40E_GLNVM_SRCTL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_DONE_SHIFT) +#define I40E_GLNVM_SRDATA 0x000B6114 /* Reset: POR */ +#define I40E_GLNVM_SRDATA_WRDATA_SHIFT 0 +#define I40E_GLNVM_SRDATA_WRDATA_MASK I40E_MASK(0xFFFF, I40E_GLNVM_SRDATA_WRDATA_SHIFT) +#define I40E_GLNVM_SRDATA_RDDATA_SHIFT 16 +#define I40E_GLNVM_SRDATA_RDDATA_MASK I40E_MASK(0xFFFF, I40E_GLNVM_SRDATA_RDDATA_SHIFT) +#define I40E_GLNVM_ULD 0x000B6008 /* Reset: POR */ +#define I40E_GLNVM_ULD_CONF_PCIR_DONE_SHIFT 0 +#define I40E_GLNVM_ULD_CONF_PCIR_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIR_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_PCIRTL_DONE_SHIFT 1 +#define I40E_GLNVM_ULD_CONF_PCIRTL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIRTL_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_LCB_DONE_SHIFT 2 +#define I40E_GLNVM_ULD_CONF_LCB_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_LCB_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_CORE_DONE_SHIFT 3 +#define I40E_GLNVM_ULD_CONF_CORE_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_CORE_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_GLOBAL_DONE_SHIFT 4 +#define I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_GLOBAL_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_POR_DONE_SHIFT 5 +#define I40E_GLNVM_ULD_CONF_POR_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_POR_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_PCIE_ANA_DONE_SHIFT 6 +#define I40E_GLNVM_ULD_CONF_PCIE_ANA_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIE_ANA_DONE_SHIFT) +#define 
I40E_GLNVM_ULD_CONF_PHY_ANA_DONE_SHIFT 7 +#define I40E_GLNVM_ULD_CONF_PHY_ANA_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PHY_ANA_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_EMP_DONE_SHIFT 8 +#define I40E_GLNVM_ULD_CONF_EMP_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_EMP_DONE_SHIFT) +#define I40E_GLNVM_ULD_CONF_PCIALT_DONE_SHIFT 9 +#define I40E_GLNVM_ULD_CONF_PCIALT_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIALT_DONE_SHIFT) +#define I40E_GLPCI_BYTCTH 0x0009C484 /* Reset: PCIR */ +#define I40E_GLPCI_BYTCTH_PCI_COUNT_BW_BCT_SHIFT 0 +#define I40E_GLPCI_BYTCTH_PCI_COUNT_BW_BCT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_BYTCTH_PCI_COUNT_BW_BCT_SHIFT) +#define I40E_GLPCI_BYTCTL 0x0009C488 /* Reset: PCIR */ +#define I40E_GLPCI_BYTCTL_PCI_COUNT_BW_BCT_SHIFT 0 +#define I40E_GLPCI_BYTCTL_PCI_COUNT_BW_BCT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_BYTCTL_PCI_COUNT_BW_BCT_SHIFT) +#define I40E_GLPCI_CAPCTRL 0x000BE4A4 /* Reset: PCIR */ +#define I40E_GLPCI_CAPCTRL_VPD_EN_SHIFT 0 +#define I40E_GLPCI_CAPCTRL_VPD_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPCTRL_VPD_EN_SHIFT) +#define I40E_GLPCI_CAPSUP 0x000BE4A8 /* Reset: PCIR */ +#define I40E_GLPCI_CAPSUP_PCIE_VER_SHIFT 0 +#define I40E_GLPCI_CAPSUP_PCIE_VER_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_PCIE_VER_SHIFT) +#define I40E_GLPCI_CAPSUP_LTR_EN_SHIFT 2 +#define I40E_GLPCI_CAPSUP_LTR_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_LTR_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_TPH_EN_SHIFT 3 +#define I40E_GLPCI_CAPSUP_TPH_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_TPH_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_ARI_EN_SHIFT 4 +#define I40E_GLPCI_CAPSUP_ARI_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ARI_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_IOV_EN_SHIFT 5 +#define I40E_GLPCI_CAPSUP_IOV_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_IOV_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_ACS_EN_SHIFT 6 +#define I40E_GLPCI_CAPSUP_ACS_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ACS_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_SEC_EN_SHIFT 7 +#define I40E_GLPCI_CAPSUP_SEC_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_SEC_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_ECRC_GEN_EN_SHIFT 16 +#define I40E_GLPCI_CAPSUP_ECRC_GEN_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ECRC_GEN_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_ECRC_CHK_EN_SHIFT 17 +#define I40E_GLPCI_CAPSUP_ECRC_CHK_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ECRC_CHK_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_IDO_EN_SHIFT 18 +#define I40E_GLPCI_CAPSUP_IDO_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_IDO_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_MSI_MASK_SHIFT 19 +#define I40E_GLPCI_CAPSUP_MSI_MASK_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_MSI_MASK_SHIFT) +#define I40E_GLPCI_CAPSUP_CSR_CONF_EN_SHIFT 20 +#define I40E_GLPCI_CAPSUP_CSR_CONF_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_CSR_CONF_EN_SHIFT) +#define I40E_GLPCI_CAPSUP_LOAD_SUBSYS_ID_SHIFT 30 +#define I40E_GLPCI_CAPSUP_LOAD_SUBSYS_ID_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_LOAD_SUBSYS_ID_SHIFT) +#define I40E_GLPCI_CAPSUP_LOAD_DEV_ID_SHIFT 31 +#define I40E_GLPCI_CAPSUP_LOAD_DEV_ID_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_LOAD_DEV_ID_SHIFT) +#define I40E_GLPCI_CNF 0x000BE4C0 /* Reset: POR */ +#define I40E_GLPCI_CNF_FLEX10_SHIFT 1 +#define I40E_GLPCI_CNF_FLEX10_MASK I40E_MASK(0x1, I40E_GLPCI_CNF_FLEX10_SHIFT) +#define I40E_GLPCI_CNF_WAKE_PIN_EN_SHIFT 2 +#define I40E_GLPCI_CNF_WAKE_PIN_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CNF_WAKE_PIN_EN_SHIFT) +#define I40E_GLPCI_CNF2 0x000BE494 /* Reset: PCIR */ +#define I40E_GLPCI_CNF2_RO_DIS_SHIFT 0 +#define I40E_GLPCI_CNF2_RO_DIS_MASK I40E_MASK(0x1, I40E_GLPCI_CNF2_RO_DIS_SHIFT) +#define I40E_GLPCI_CNF2_CACHELINE_SIZE_SHIFT 1 +#define 
I40E_GLPCI_CNF2_CACHELINE_SIZE_MASK I40E_MASK(0x1, I40E_GLPCI_CNF2_CACHELINE_SIZE_SHIFT) +#define I40E_GLPCI_CNF2_MSI_X_PF_N_SHIFT 2 +#define I40E_GLPCI_CNF2_MSI_X_PF_N_MASK I40E_MASK(0x7FF, I40E_GLPCI_CNF2_MSI_X_PF_N_SHIFT) +#define I40E_GLPCI_CNF2_MSI_X_VF_N_SHIFT 13 +#define I40E_GLPCI_CNF2_MSI_X_VF_N_MASK I40E_MASK(0x7FF, I40E_GLPCI_CNF2_MSI_X_VF_N_SHIFT) +#define I40E_GLPCI_DREVID 0x0009C480 /* Reset: PCIR */ +#define I40E_GLPCI_DREVID_DEFAULT_REVID_SHIFT 0 +#define I40E_GLPCI_DREVID_DEFAULT_REVID_MASK I40E_MASK(0xFF, I40E_GLPCI_DREVID_DEFAULT_REVID_SHIFT) +#define I40E_GLPCI_GSCL_1 0x0009C48C /* Reset: PCIR */ +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_0_SHIFT 0 +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_0_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_0_SHIFT) +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_1_SHIFT 1 +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_1_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_1_SHIFT) +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_2_SHIFT 2 +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_2_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_2_SHIFT) +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_3_SHIFT 3 +#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_3_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_3_SHIFT) +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_0_SHIFT 4 +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_0_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_0_SHIFT) +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_1_SHIFT 5 +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_1_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_1_SHIFT) +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_2_SHIFT 6 +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_2_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_2_SHIFT) +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_3_SHIFT 7 +#define I40E_GLPCI_GSCL_1_LBC_ENABLE_3_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_3_SHIFT) +#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EN_SHIFT 8 +#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EN_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EN_SHIFT) +#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EV_SHIFT 9 +#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EV_MASK I40E_MASK(0x1F, I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EV_SHIFT) +#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EN_SHIFT 14 +#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EN_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EN_SHIFT) +#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EV_SHIFT 15 +#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EV_MASK I40E_MASK(0x1F, I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EV_SHIFT) +#define I40E_GLPCI_GSCL_1_GIO_64_BIT_EN_SHIFT 28 +#define I40E_GLPCI_GSCL_1_GIO_64_BIT_EN_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_64_BIT_EN_SHIFT) +#define I40E_GLPCI_GSCL_1_GIO_COUNT_RESET_SHIFT 29 +#define I40E_GLPCI_GSCL_1_GIO_COUNT_RESET_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_RESET_SHIFT) +#define I40E_GLPCI_GSCL_1_GIO_COUNT_STOP_SHIFT 30 +#define I40E_GLPCI_GSCL_1_GIO_COUNT_STOP_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_STOP_SHIFT) +#define I40E_GLPCI_GSCL_1_GIO_COUNT_START_SHIFT 31 +#define I40E_GLPCI_GSCL_1_GIO_COUNT_START_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_START_SHIFT) +#define I40E_GLPCI_GSCL_2 0x0009C490 /* Reset: PCIR */ +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_0_SHIFT 0 +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_0_MASK I40E_MASK(0xFF, I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_0_SHIFT) +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_1_SHIFT 8 +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_1_MASK I40E_MASK(0xFF, I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_1_SHIFT) +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_2_SHIFT 16 +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_2_MASK I40E_MASK(0xFF, 
I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_2_SHIFT) +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_3_SHIFT 24 +#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_3_MASK I40E_MASK(0xFF, I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_3_SHIFT) +#define I40E_GLPCI_GSCL_5_8(_i) (0x0009C494 + ((_i) * 4)) /* _i=0...3 */ /* Reset: PCIR */ +#define I40E_GLPCI_GSCL_5_8_MAX_INDEX 3 +#define I40E_GLPCI_GSCL_5_8_LBC_THRESHOLD_N_SHIFT 0 +#define I40E_GLPCI_GSCL_5_8_LBC_THRESHOLD_N_MASK I40E_MASK(0xFFFF, I40E_GLPCI_GSCL_5_8_LBC_THRESHOLD_N_SHIFT) +#define I40E_GLPCI_GSCL_5_8_LBC_TIMER_N_SHIFT 16 +#define I40E_GLPCI_GSCL_5_8_LBC_TIMER_N_MASK I40E_MASK(0xFFFF, I40E_GLPCI_GSCL_5_8_LBC_TIMER_N_SHIFT) +#define I40E_GLPCI_GSCN_0_3(_i) (0x0009C4A4 + ((_i) * 4)) /* _i=0...3 */ /* Reset: PCIR */ +#define I40E_GLPCI_GSCN_0_3_MAX_INDEX 3 +#define I40E_GLPCI_GSCN_0_3_EVENT_COUNTER_SHIFT 0 +#define I40E_GLPCI_GSCN_0_3_EVENT_COUNTER_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_GSCN_0_3_EVENT_COUNTER_SHIFT) +#define I40E_GLPCI_LATCT 0x0009C4B4 /* Reset: PCIR */ +#define I40E_GLPCI_LATCT_PCI_COUNT_LAT_CT_SHIFT 0 +#define I40E_GLPCI_LATCT_PCI_COUNT_LAT_CT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_LATCT_PCI_COUNT_LAT_CT_SHIFT) +#define I40E_GLPCI_LBARCTRL 0x000BE484 /* Reset: POR */ +#define I40E_GLPCI_LBARCTRL_PREFBAR_SHIFT 0 +#define I40E_GLPCI_LBARCTRL_PREFBAR_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_PREFBAR_SHIFT) +#define I40E_GLPCI_LBARCTRL_BAR32_SHIFT 1 +#define I40E_GLPCI_LBARCTRL_BAR32_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_BAR32_SHIFT) +#define I40E_GLPCI_LBARCTRL_FLASH_EXPOSE_SHIFT 3 +#define I40E_GLPCI_LBARCTRL_FLASH_EXPOSE_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_FLASH_EXPOSE_SHIFT) +#define I40E_GLPCI_LBARCTRL_RSVD_4_SHIFT 4 +#define I40E_GLPCI_LBARCTRL_RSVD_4_MASK I40E_MASK(0x3, I40E_GLPCI_LBARCTRL_RSVD_4_SHIFT) +#define I40E_GLPCI_LBARCTRL_FL_SIZE_SHIFT 6 +#define I40E_GLPCI_LBARCTRL_FL_SIZE_MASK I40E_MASK(0x7, I40E_GLPCI_LBARCTRL_FL_SIZE_SHIFT) +#define I40E_GLPCI_LBARCTRL_RSVD_10_SHIFT 10 +#define I40E_GLPCI_LBARCTRL_RSVD_10_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_RSVD_10_SHIFT) +#define I40E_GLPCI_LBARCTRL_EXROM_SIZE_SHIFT 11 +#define I40E_GLPCI_LBARCTRL_EXROM_SIZE_MASK I40E_MASK(0x7, I40E_GLPCI_LBARCTRL_EXROM_SIZE_SHIFT) +#define I40E_GLPCI_LINKCAP 0x000BE4AC /* Reset: PCIR */ +#define I40E_GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_SHIFT 0 +#define I40E_GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_MASK I40E_MASK(0x3F, I40E_GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_SHIFT) +#define I40E_GLPCI_LINKCAP_MAX_PAYLOAD_SHIFT 6 +#define I40E_GLPCI_LINKCAP_MAX_PAYLOAD_MASK I40E_MASK(0x7, I40E_GLPCI_LINKCAP_MAX_PAYLOAD_SHIFT) +#define I40E_GLPCI_LINKCAP_MAX_LINK_WIDTH_SHIFT 9 +#define I40E_GLPCI_LINKCAP_MAX_LINK_WIDTH_MASK I40E_MASK(0xF, I40E_GLPCI_LINKCAP_MAX_LINK_WIDTH_SHIFT) +#define I40E_GLPCI_PCIERR 0x000BE4FC /* Reset: PCIR */ +#define I40E_GLPCI_PCIERR_PCIE_ERR_REP_SHIFT 0 +#define I40E_GLPCI_PCIERR_PCIE_ERR_REP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_PCIERR_PCIE_ERR_REP_SHIFT) +#define I40E_GLPCI_PKTCT 0x0009C4BC /* Reset: PCIR */ +#define I40E_GLPCI_PKTCT_PCI_COUNT_BW_PCT_SHIFT 0 +#define I40E_GLPCI_PKTCT_PCI_COUNT_BW_PCT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_PKTCT_PCI_COUNT_BW_PCT_SHIFT) +#define I40E_GLPCI_PM_MUX_NPQ 0x0009C4F4 /* Reset: PCIR */ +#define I40E_GLPCI_PM_MUX_NPQ_NPQ_NUM_PORT_SEL_SHIFT 0 +#define I40E_GLPCI_PM_MUX_NPQ_NPQ_NUM_PORT_SEL_MASK I40E_MASK(0x7, I40E_GLPCI_PM_MUX_NPQ_NPQ_NUM_PORT_SEL_SHIFT) +#define I40E_GLPCI_PM_MUX_NPQ_INNER_NPQ_SEL_SHIFT 16 +#define I40E_GLPCI_PM_MUX_NPQ_INNER_NPQ_SEL_MASK I40E_MASK(0x1F, I40E_GLPCI_PM_MUX_NPQ_INNER_NPQ_SEL_SHIFT) +#define 
I40E_GLPCI_PM_MUX_PFB 0x0009C4F0 /* Reset: PCIR */ +#define I40E_GLPCI_PM_MUX_PFB_PFB_PORT_SEL_SHIFT 0 +#define I40E_GLPCI_PM_MUX_PFB_PFB_PORT_SEL_MASK I40E_MASK(0x1F, I40E_GLPCI_PM_MUX_PFB_PFB_PORT_SEL_SHIFT) +#define I40E_GLPCI_PM_MUX_PFB_INNER_PORT_SEL_SHIFT 16 +#define I40E_GLPCI_PM_MUX_PFB_INNER_PORT_SEL_MASK I40E_MASK(0x7, I40E_GLPCI_PM_MUX_PFB_INNER_PORT_SEL_SHIFT) +#define I40E_GLPCI_PMSUP 0x000BE4B0 /* Reset: PCIR */ +#define I40E_GLPCI_PMSUP_ASPM_SUP_SHIFT 0 +#define I40E_GLPCI_PMSUP_ASPM_SUP_MASK I40E_MASK(0x3, I40E_GLPCI_PMSUP_ASPM_SUP_SHIFT) +#define I40E_GLPCI_PMSUP_L0S_EXIT_LAT_SHIFT 2 +#define I40E_GLPCI_PMSUP_L0S_EXIT_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L0S_EXIT_LAT_SHIFT) +#define I40E_GLPCI_PMSUP_L1_EXIT_LAT_SHIFT 5 +#define I40E_GLPCI_PMSUP_L1_EXIT_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L1_EXIT_LAT_SHIFT) +#define I40E_GLPCI_PMSUP_L0S_ACC_LAT_SHIFT 8 +#define I40E_GLPCI_PMSUP_L0S_ACC_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L0S_ACC_LAT_SHIFT) +#define I40E_GLPCI_PMSUP_L1_ACC_LAT_SHIFT 11 +#define I40E_GLPCI_PMSUP_L1_ACC_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L1_ACC_LAT_SHIFT) +#define I40E_GLPCI_PMSUP_SLOT_CLK_SHIFT 14 +#define I40E_GLPCI_PMSUP_SLOT_CLK_MASK I40E_MASK(0x1, I40E_GLPCI_PMSUP_SLOT_CLK_SHIFT) +#define I40E_GLPCI_PMSUP_OBFF_SUP_SHIFT 15 +#define I40E_GLPCI_PMSUP_OBFF_SUP_MASK I40E_MASK(0x3, I40E_GLPCI_PMSUP_OBFF_SUP_SHIFT) +#define I40E_GLPCI_PQ_MAX_USED_SPC 0x0009C4EC /* Reset: PCIR */ +#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_12_SHIFT 0 +#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_12_MASK I40E_MASK(0xFF, I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_12_SHIFT) +#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_13_SHIFT 8 +#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_13_MASK I40E_MASK(0xFF, I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_13_SHIFT) +#define I40E_GLPCI_PWRDATA 0x000BE490 /* Reset: PCIR */ +#define I40E_GLPCI_PWRDATA_D0_POWER_SHIFT 0 +#define I40E_GLPCI_PWRDATA_D0_POWER_MASK I40E_MASK(0xFF, I40E_GLPCI_PWRDATA_D0_POWER_SHIFT) +#define I40E_GLPCI_PWRDATA_COMM_POWER_SHIFT 8 +#define I40E_GLPCI_PWRDATA_COMM_POWER_MASK I40E_MASK(0xFF, I40E_GLPCI_PWRDATA_COMM_POWER_SHIFT) +#define I40E_GLPCI_PWRDATA_D3_POWER_SHIFT 16 +#define I40E_GLPCI_PWRDATA_D3_POWER_MASK I40E_MASK(0xFF, I40E_GLPCI_PWRDATA_D3_POWER_SHIFT) +#define I40E_GLPCI_PWRDATA_DATA_SCALE_SHIFT 24 +#define I40E_GLPCI_PWRDATA_DATA_SCALE_MASK I40E_MASK(0x3, I40E_GLPCI_PWRDATA_DATA_SCALE_SHIFT) +#define I40E_GLPCI_REVID 0x000BE4B4 /* Reset: PCIR */ +#define I40E_GLPCI_REVID_NVM_REVID_SHIFT 0 +#define I40E_GLPCI_REVID_NVM_REVID_MASK I40E_MASK(0xFF, I40E_GLPCI_REVID_NVM_REVID_SHIFT) +#define I40E_GLPCI_SERH 0x000BE49C /* Reset: PCIR */ +#define I40E_GLPCI_SERH_SER_NUM_H_SHIFT 0 +#define I40E_GLPCI_SERH_SER_NUM_H_MASK I40E_MASK(0xFFFF, I40E_GLPCI_SERH_SER_NUM_H_SHIFT) +#define I40E_GLPCI_SERL 0x000BE498 /* Reset: PCIR */ +#define I40E_GLPCI_SERL_SER_NUM_L_SHIFT 0 +#define I40E_GLPCI_SERL_SER_NUM_L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_SERL_SER_NUM_L_SHIFT) +#define I40E_GLPCI_SPARE_BITS_0 0x0009C4F8 /* Reset: PCIR */ +#define I40E_GLPCI_SPARE_BITS_0_SPARE_BITS_SHIFT 0 +#define I40E_GLPCI_SPARE_BITS_0_SPARE_BITS_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_SPARE_BITS_0_SPARE_BITS_SHIFT) +#define I40E_GLPCI_SPARE_BITS_1 0x0009C4FC /* Reset: PCIR */ +#define I40E_GLPCI_SPARE_BITS_1_SPARE_BITS_SHIFT 0 +#define I40E_GLPCI_SPARE_BITS_1_SPARE_BITS_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_SPARE_BITS_1_SPARE_BITS_SHIFT) +#define I40E_GLPCI_SUBVENID 
0x000BE48C /* Reset: PCIR */ +#define I40E_GLPCI_SUBVENID_SUB_VEN_ID_SHIFT 0 +#define I40E_GLPCI_SUBVENID_SUB_VEN_ID_MASK I40E_MASK(0xFFFF, I40E_GLPCI_SUBVENID_SUB_VEN_ID_SHIFT) +#define I40E_GLPCI_UPADD 0x000BE4F8 /* Reset: PCIR */ +#define I40E_GLPCI_UPADD_ADDRESS_SHIFT 1 +#define I40E_GLPCI_UPADD_ADDRESS_MASK I40E_MASK(0x7FFFFFFF, I40E_GLPCI_UPADD_ADDRESS_SHIFT) +#define I40E_GLPCI_VENDORID 0x000BE518 /* Reset: PCIR */ +#define I40E_GLPCI_VENDORID_VENDORID_SHIFT 0 +#define I40E_GLPCI_VENDORID_VENDORID_MASK I40E_MASK(0xFFFF, I40E_GLPCI_VENDORID_VENDORID_SHIFT) +#define I40E_GLPCI_VFSUP 0x000BE4B8 /* Reset: PCIR */ +#define I40E_GLPCI_VFSUP_VF_PREFETCH_SHIFT 0 +#define I40E_GLPCI_VFSUP_VF_PREFETCH_MASK I40E_MASK(0x1, I40E_GLPCI_VFSUP_VF_PREFETCH_SHIFT) +#define I40E_GLPCI_VFSUP_VR_BAR_TYPE_SHIFT 1 +#define I40E_GLPCI_VFSUP_VR_BAR_TYPE_MASK I40E_MASK(0x1, I40E_GLPCI_VFSUP_VR_BAR_TYPE_SHIFT) +#define I40E_PF_FUNC_RID 0x0009C000 /* Reset: PCIR */ +#define I40E_PF_FUNC_RID_FUNCTION_NUMBER_SHIFT 0 +#define I40E_PF_FUNC_RID_FUNCTION_NUMBER_MASK I40E_MASK(0x7, I40E_PF_FUNC_RID_FUNCTION_NUMBER_SHIFT) +#define I40E_PF_FUNC_RID_DEVICE_NUMBER_SHIFT 3 +#define I40E_PF_FUNC_RID_DEVICE_NUMBER_MASK I40E_MASK(0x1F, I40E_PF_FUNC_RID_DEVICE_NUMBER_SHIFT) +#define I40E_PF_FUNC_RID_BUS_NUMBER_SHIFT 8 +#define I40E_PF_FUNC_RID_BUS_NUMBER_MASK I40E_MASK(0xFF, I40E_PF_FUNC_RID_BUS_NUMBER_SHIFT) +#define I40E_PF_PCI_CIAA 0x0009C080 /* Reset: FLR */ +#define I40E_PF_PCI_CIAA_ADDRESS_SHIFT 0 +#define I40E_PF_PCI_CIAA_ADDRESS_MASK I40E_MASK(0xFFF, I40E_PF_PCI_CIAA_ADDRESS_SHIFT) +#define I40E_PF_PCI_CIAA_VF_NUM_SHIFT 12 +#define I40E_PF_PCI_CIAA_VF_NUM_MASK I40E_MASK(0x7F, I40E_PF_PCI_CIAA_VF_NUM_SHIFT) +#define I40E_PF_PCI_CIAD 0x0009C100 /* Reset: FLR */ +#define I40E_PF_PCI_CIAD_DATA_SHIFT 0 +#define I40E_PF_PCI_CIAD_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_PCI_CIAD_DATA_SHIFT) +#define I40E_PFPCI_CLASS 0x000BE400 /* Reset: PCIR */ +#define I40E_PFPCI_CLASS_STORAGE_CLASS_SHIFT 0 +#define I40E_PFPCI_CLASS_STORAGE_CLASS_MASK I40E_MASK(0x1, I40E_PFPCI_CLASS_STORAGE_CLASS_SHIFT) +#define I40E_PFPCI_CLASS_RESERVED_1_SHIFT 1 +#define I40E_PFPCI_CLASS_RESERVED_1_MASK I40E_MASK(0x1, I40E_PFPCI_CLASS_RESERVED_1_SHIFT) +#define I40E_PFPCI_CLASS_PF_IS_LAN_SHIFT 2 +#define I40E_PFPCI_CLASS_PF_IS_LAN_MASK I40E_MASK(0x1, I40E_PFPCI_CLASS_PF_IS_LAN_SHIFT) +#define I40E_PFPCI_CNF 0x000BE000 /* Reset: PCIR */ +#define I40E_PFPCI_CNF_MSI_EN_SHIFT 2 +#define I40E_PFPCI_CNF_MSI_EN_MASK I40E_MASK(0x1, I40E_PFPCI_CNF_MSI_EN_SHIFT) +#define I40E_PFPCI_CNF_EXROM_DIS_SHIFT 3 +#define I40E_PFPCI_CNF_EXROM_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_CNF_EXROM_DIS_SHIFT) +#define I40E_PFPCI_CNF_IO_BAR_SHIFT 4 +#define I40E_PFPCI_CNF_IO_BAR_MASK I40E_MASK(0x1, I40E_PFPCI_CNF_IO_BAR_SHIFT) +#define I40E_PFPCI_CNF_INT_PIN_SHIFT 5 +#define I40E_PFPCI_CNF_INT_PIN_MASK I40E_MASK(0x3, I40E_PFPCI_CNF_INT_PIN_SHIFT) +#define I40E_PFPCI_DEVID 0x000BE080 /* Reset: PCIR */ +#define I40E_PFPCI_DEVID_PF_DEV_ID_SHIFT 0 +#define I40E_PFPCI_DEVID_PF_DEV_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_DEVID_PF_DEV_ID_SHIFT) +#define I40E_PFPCI_DEVID_VF_DEV_ID_SHIFT 16 +#define I40E_PFPCI_DEVID_VF_DEV_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_DEVID_VF_DEV_ID_SHIFT) +#define I40E_PFPCI_FACTPS 0x0009C180 /* Reset: FLR */ +#define I40E_PFPCI_FACTPS_FUNC_POWER_STATE_SHIFT 0 +#define I40E_PFPCI_FACTPS_FUNC_POWER_STATE_MASK I40E_MASK(0x3, I40E_PFPCI_FACTPS_FUNC_POWER_STATE_SHIFT) +#define I40E_PFPCI_FACTPS_FUNC_AUX_EN_SHIFT 3 +#define I40E_PFPCI_FACTPS_FUNC_AUX_EN_MASK I40E_MASK(0x1, 
I40E_PFPCI_FACTPS_FUNC_AUX_EN_SHIFT) +#define I40E_PFPCI_FUNC 0x000BE200 /* Reset: POR */ +#define I40E_PFPCI_FUNC_FUNC_DIS_SHIFT 0 +#define I40E_PFPCI_FUNC_FUNC_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC_FUNC_DIS_SHIFT) +#define I40E_PFPCI_FUNC_ALLOW_FUNC_DIS_SHIFT 1 +#define I40E_PFPCI_FUNC_ALLOW_FUNC_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC_ALLOW_FUNC_DIS_SHIFT) +#define I40E_PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_SHIFT 2 +#define I40E_PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_SHIFT) +#define I40E_PFPCI_FUNC2 0x000BE180 /* Reset: PCIR */ +#define I40E_PFPCI_FUNC2_EMP_FUNC_DIS_SHIFT 0 +#define I40E_PFPCI_FUNC2_EMP_FUNC_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC2_EMP_FUNC_DIS_SHIFT) +#define I40E_PFPCI_ICAUSE 0x0009C200 /* Reset: PFR */ +#define I40E_PFPCI_ICAUSE_PCIE_ERR_CAUSE_SHIFT 0 +#define I40E_PFPCI_ICAUSE_PCIE_ERR_CAUSE_MASK I40E_MASK(0xFFFFFFFF, I40E_PFPCI_ICAUSE_PCIE_ERR_CAUSE_SHIFT) +#define I40E_PFPCI_IENA 0x0009C280 /* Reset: PFR */ +#define I40E_PFPCI_IENA_PCIE_ERR_EN_SHIFT 0 +#define I40E_PFPCI_IENA_PCIE_ERR_EN_MASK I40E_MASK(0xFFFFFFFF, I40E_PFPCI_IENA_PCIE_ERR_EN_SHIFT) +#define I40E_PFPCI_PF_FLUSH_DONE 0x0009C800 /* Reset: PCIR */ +#define I40E_PFPCI_PF_FLUSH_DONE_FLUSH_DONE_SHIFT 0 +#define I40E_PFPCI_PF_FLUSH_DONE_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_PF_FLUSH_DONE_FLUSH_DONE_SHIFT) +#define I40E_PFPCI_PM 0x000BE300 /* Reset: POR */ +#define I40E_PFPCI_PM_PME_EN_SHIFT 0 +#define I40E_PFPCI_PM_PME_EN_MASK I40E_MASK(0x1, I40E_PFPCI_PM_PME_EN_SHIFT) +#define I40E_PFPCI_STATUS1 0x000BE280 /* Reset: POR */ +#define I40E_PFPCI_STATUS1_FUNC_VALID_SHIFT 0 +#define I40E_PFPCI_STATUS1_FUNC_VALID_MASK I40E_MASK(0x1, I40E_PFPCI_STATUS1_FUNC_VALID_SHIFT) +#define I40E_PFPCI_SUBSYSID 0x000BE100 /* Reset: PCIR */ +#define I40E_PFPCI_SUBSYSID_PF_SUBSYS_ID_SHIFT 0 +#define I40E_PFPCI_SUBSYSID_PF_SUBSYS_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_SUBSYSID_PF_SUBSYS_ID_SHIFT) +#define I40E_PFPCI_SUBSYSID_VF_SUBSYS_ID_SHIFT 16 +#define I40E_PFPCI_SUBSYSID_VF_SUBSYS_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_SUBSYSID_VF_SUBSYS_ID_SHIFT) +#define I40E_PFPCI_VF_FLUSH_DONE 0x0000E400 /* Reset: PCIR */ +#define I40E_PFPCI_VF_FLUSH_DONE_FLUSH_DONE_SHIFT 0 +#define I40E_PFPCI_VF_FLUSH_DONE_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_VF_FLUSH_DONE_FLUSH_DONE_SHIFT) +#define I40E_PFPCI_VF_FLUSH_DONE1(_VF) (0x0009C600 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: PCIR */ +#define I40E_PFPCI_VF_FLUSH_DONE1_MAX_INDEX 127 +#define I40E_PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_SHIFT 0 +#define I40E_PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_SHIFT) +#define I40E_PFPCI_VM_FLUSH_DONE 0x0009C880 /* Reset: PCIR */ +#define I40E_PFPCI_VM_FLUSH_DONE_FLUSH_DONE_SHIFT 0 +#define I40E_PFPCI_VM_FLUSH_DONE_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_VM_FLUSH_DONE_FLUSH_DONE_SHIFT) +#define I40E_PFPCI_VMINDEX 0x0009C300 /* Reset: PCIR */ +#define I40E_PFPCI_VMINDEX_VMINDEX_SHIFT 0 +#define I40E_PFPCI_VMINDEX_VMINDEX_MASK I40E_MASK(0x1FF, I40E_PFPCI_VMINDEX_VMINDEX_SHIFT) +#define I40E_PFPCI_VMPEND 0x0009C380 /* Reset: PCIR */ +#define I40E_PFPCI_VMPEND_PENDING_SHIFT 0 +#define I40E_PFPCI_VMPEND_PENDING_MASK I40E_MASK(0x1, I40E_PFPCI_VMPEND_PENDING_SHIFT) +#define I40E_PRTPM_EEE_STAT 0x001E4320 /* Reset: GLOBR */ +#define I40E_PRTPM_EEE_STAT_EEE_NEG_SHIFT 29 +#define I40E_PRTPM_EEE_STAT_EEE_NEG_MASK I40E_MASK(0x1, I40E_PRTPM_EEE_STAT_EEE_NEG_SHIFT) +#define I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT 30 +#define I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_MASK I40E_MASK(0x1, 
I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT) +#define I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT 31 +#define I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_MASK I40E_MASK(0x1, I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT) +#define I40E_PRTPM_EEEC 0x001E4380 /* Reset: GLOBR */ +#define I40E_PRTPM_EEEC_TW_WAKE_MIN_SHIFT 16 +#define I40E_PRTPM_EEEC_TW_WAKE_MIN_MASK I40E_MASK(0x3F, I40E_PRTPM_EEEC_TW_WAKE_MIN_SHIFT) +#define I40E_PRTPM_EEEC_TX_LU_LPI_DLY_SHIFT 24 +#define I40E_PRTPM_EEEC_TX_LU_LPI_DLY_MASK I40E_MASK(0x3, I40E_PRTPM_EEEC_TX_LU_LPI_DLY_SHIFT) +#define I40E_PRTPM_EEEC_TEEE_DLY_SHIFT 26 +#define I40E_PRTPM_EEEC_TEEE_DLY_MASK I40E_MASK(0x3F, I40E_PRTPM_EEEC_TEEE_DLY_SHIFT) +#define I40E_PRTPM_EEEFWD 0x001E4400 /* Reset: GLOBR */ +#define I40E_PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_SHIFT 31 +#define I40E_PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_MASK I40E_MASK(0x1, I40E_PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_SHIFT) +#define I40E_PRTPM_EEER 0x001E4360 /* Reset: GLOBR */ +#define I40E_PRTPM_EEER_TW_SYSTEM_SHIFT 0 +#define I40E_PRTPM_EEER_TW_SYSTEM_MASK I40E_MASK(0xFFFF, I40E_PRTPM_EEER_TW_SYSTEM_SHIFT) +#define I40E_PRTPM_EEER_TX_LPI_EN_SHIFT 16 +#define I40E_PRTPM_EEER_TX_LPI_EN_MASK I40E_MASK(0x1, I40E_PRTPM_EEER_TX_LPI_EN_SHIFT) +#define I40E_PRTPM_EEETXC 0x001E43E0 /* Reset: GLOBR */ +#define I40E_PRTPM_EEETXC_TW_PHY_SHIFT 0 +#define I40E_PRTPM_EEETXC_TW_PHY_MASK I40E_MASK(0xFFFF, I40E_PRTPM_EEETXC_TW_PHY_SHIFT) +#define I40E_PRTPM_GC 0x000B8140 /* Reset: POR */ +#define I40E_PRTPM_GC_EMP_LINK_ON_SHIFT 0 +#define I40E_PRTPM_GC_EMP_LINK_ON_MASK I40E_MASK(0x1, I40E_PRTPM_GC_EMP_LINK_ON_SHIFT) +#define I40E_PRTPM_GC_MNG_VETO_SHIFT 1 +#define I40E_PRTPM_GC_MNG_VETO_MASK I40E_MASK(0x1, I40E_PRTPM_GC_MNG_VETO_SHIFT) +#define I40E_PRTPM_GC_RATD_SHIFT 2 +#define I40E_PRTPM_GC_RATD_MASK I40E_MASK(0x1, I40E_PRTPM_GC_RATD_SHIFT) +#define I40E_PRTPM_GC_LCDMP_SHIFT 3 +#define I40E_PRTPM_GC_LCDMP_MASK I40E_MASK(0x1, I40E_PRTPM_GC_LCDMP_SHIFT) +#define I40E_PRTPM_GC_LPLU_ASSERTED_SHIFT 31 +#define I40E_PRTPM_GC_LPLU_ASSERTED_MASK I40E_MASK(0x1, I40E_PRTPM_GC_LPLU_ASSERTED_SHIFT) +#define I40E_PRTPM_RLPIC 0x001E43A0 /* Reset: GLOBR */ +#define I40E_PRTPM_RLPIC_ERLPIC_SHIFT 0 +#define I40E_PRTPM_RLPIC_ERLPIC_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTPM_RLPIC_ERLPIC_SHIFT) +#define I40E_PRTPM_TLPIC 0x001E43C0 /* Reset: GLOBR */ +#define I40E_PRTPM_TLPIC_ETLPIC_SHIFT 0 +#define I40E_PRTPM_TLPIC_ETLPIC_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTPM_TLPIC_ETLPIC_SHIFT) +#define I40E_GLRPB_DPSS 0x000AC828 /* Reset: CORER */ +#define I40E_GLRPB_DPSS_DPS_TCN_SHIFT 0 +#define I40E_GLRPB_DPSS_DPS_TCN_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_DPSS_DPS_TCN_SHIFT) +#define I40E_GLRPB_GHW 0x000AC830 /* Reset: CORER */ +#define I40E_GLRPB_GHW_GHW_SHIFT 0 +#define I40E_GLRPB_GHW_GHW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_GHW_GHW_SHIFT) +#define I40E_GLRPB_GLW 0x000AC834 /* Reset: CORER */ +#define I40E_GLRPB_GLW_GLW_SHIFT 0 +#define I40E_GLRPB_GLW_GLW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_GLW_GLW_SHIFT) +#define I40E_GLRPB_PHW 0x000AC844 /* Reset: CORER */ +#define I40E_GLRPB_PHW_PHW_SHIFT 0 +#define I40E_GLRPB_PHW_PHW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_PHW_PHW_SHIFT) +#define I40E_GLRPB_PLW 0x000AC848 /* Reset: CORER */ +#define I40E_GLRPB_PLW_PLW_SHIFT 0 +#define I40E_GLRPB_PLW_PLW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_PLW_PLW_SHIFT) +#define I40E_PRTRPB_DHW(_i) (0x000AC100 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_DHW_MAX_INDEX 7 +#define I40E_PRTRPB_DHW_DHW_TCN_SHIFT 0 +#define I40E_PRTRPB_DHW_DHW_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DHW_DHW_TCN_SHIFT) 
+#define I40E_PRTRPB_DLW(_i) (0x000AC220 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_DLW_MAX_INDEX 7 +#define I40E_PRTRPB_DLW_DLW_TCN_SHIFT 0 +#define I40E_PRTRPB_DLW_DLW_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DLW_DLW_TCN_SHIFT) +#define I40E_PRTRPB_DPS(_i) (0x000AC320 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_DPS_MAX_INDEX 7 +#define I40E_PRTRPB_DPS_DPS_TCN_SHIFT 0 +#define I40E_PRTRPB_DPS_DPS_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DPS_DPS_TCN_SHIFT) +#define I40E_PRTRPB_SHT(_i) (0x000AC480 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_SHT_MAX_INDEX 7 +#define I40E_PRTRPB_SHT_SHT_TCN_SHIFT 0 +#define I40E_PRTRPB_SHT_SHT_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SHT_SHT_TCN_SHIFT) +#define I40E_PRTRPB_SHW 0x000AC580 /* Reset: CORER */ +#define I40E_PRTRPB_SHW_SHW_SHIFT 0 +#define I40E_PRTRPB_SHW_SHW_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SHW_SHW_SHIFT) +#define I40E_PRTRPB_SLT(_i) (0x000AC5A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_PRTRPB_SLT_MAX_INDEX 7 +#define I40E_PRTRPB_SLT_SLT_TCN_SHIFT 0 +#define I40E_PRTRPB_SLT_SLT_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SLT_SLT_TCN_SHIFT) +#define I40E_PRTRPB_SLW 0x000AC6A0 /* Reset: CORER */ +#define I40E_PRTRPB_SLW_SLW_SHIFT 0 +#define I40E_PRTRPB_SLW_SLW_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SLW_SLW_SHIFT) +#define I40E_PRTRPB_SPS 0x000AC7C0 /* Reset: CORER */ +#define I40E_PRTRPB_SPS_SPS_SHIFT 0 +#define I40E_PRTRPB_SPS_SPS_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SPS_SPS_SHIFT) +#define I40E_GLQF_CTL 0x00269BA4 /* Reset: CORER */ +#define I40E_GLQF_CTL_HTOEP_SHIFT 1 +#define I40E_GLQF_CTL_HTOEP_MASK I40E_MASK(0x1, I40E_GLQF_CTL_HTOEP_SHIFT) +#define I40E_GLQF_CTL_HTOEP_FCOE_SHIFT 2 +#define I40E_GLQF_CTL_HTOEP_FCOE_MASK I40E_MASK(0x1, I40E_GLQF_CTL_HTOEP_FCOE_SHIFT) +#define I40E_GLQF_CTL_PCNT_ALLOC_SHIFT 3 +#define I40E_GLQF_CTL_PCNT_ALLOC_MASK I40E_MASK(0x7, I40E_GLQF_CTL_PCNT_ALLOC_SHIFT) +#define I40E_GLQF_CTL_FD_AUTO_PCTYPE_SHIFT 6 +#define I40E_GLQF_CTL_FD_AUTO_PCTYPE_MASK I40E_MASK(0x1, I40E_GLQF_CTL_FD_AUTO_PCTYPE_SHIFT) +#define I40E_GLQF_CTL_RSVD_SHIFT 7 +#define I40E_GLQF_CTL_RSVD_MASK I40E_MASK(0x1, I40E_GLQF_CTL_RSVD_SHIFT) +#define I40E_GLQF_CTL_MAXPEBLEN_SHIFT 8 +#define I40E_GLQF_CTL_MAXPEBLEN_MASK I40E_MASK(0x7, I40E_GLQF_CTL_MAXPEBLEN_SHIFT) +#define I40E_GLQF_CTL_MAXFCBLEN_SHIFT 11 +#define I40E_GLQF_CTL_MAXFCBLEN_MASK I40E_MASK(0x7, I40E_GLQF_CTL_MAXFCBLEN_SHIFT) +#define I40E_GLQF_CTL_MAXFDBLEN_SHIFT 14 +#define I40E_GLQF_CTL_MAXFDBLEN_MASK I40E_MASK(0x7, I40E_GLQF_CTL_MAXFDBLEN_SHIFT) +#define I40E_GLQF_CTL_FDBEST_SHIFT 17 +#define I40E_GLQF_CTL_FDBEST_MASK I40E_MASK(0xFF, I40E_GLQF_CTL_FDBEST_SHIFT) +#define I40E_GLQF_CTL_PROGPRIO_SHIFT 25 +#define I40E_GLQF_CTL_PROGPRIO_MASK I40E_MASK(0x1, I40E_GLQF_CTL_PROGPRIO_SHIFT) +#define I40E_GLQF_CTL_INVALPRIO_SHIFT 26 +#define I40E_GLQF_CTL_INVALPRIO_MASK I40E_MASK(0x1, I40E_GLQF_CTL_INVALPRIO_SHIFT) +#define I40E_GLQF_CTL_IGNORE_IP_SHIFT 27 +#define I40E_GLQF_CTL_IGNORE_IP_MASK I40E_MASK(0x1, I40E_GLQF_CTL_IGNORE_IP_SHIFT) +#define I40E_GLQF_FDCNT_0 0x00269BAC /* Reset: CORER */ +#define I40E_GLQF_FDCNT_0_GUARANT_CNT_SHIFT 0 +#define I40E_GLQF_FDCNT_0_GUARANT_CNT_MASK I40E_MASK(0x1FFF, I40E_GLQF_FDCNT_0_GUARANT_CNT_SHIFT) +#define I40E_GLQF_FDCNT_0_BESTCNT_SHIFT 13 +#define I40E_GLQF_FDCNT_0_BESTCNT_MASK I40E_MASK(0x1FFF, I40E_GLQF_FDCNT_0_BESTCNT_SHIFT) +#define I40E_GLQF_HKEY(_i) (0x00270140 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */ +#define I40E_GLQF_HKEY_MAX_INDEX 12 +#define 
I40E_GLQF_HKEY_KEY_0_SHIFT 0 +#define I40E_GLQF_HKEY_KEY_0_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_0_SHIFT) +#define I40E_GLQF_HKEY_KEY_1_SHIFT 8 +#define I40E_GLQF_HKEY_KEY_1_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_1_SHIFT) +#define I40E_GLQF_HKEY_KEY_2_SHIFT 16 +#define I40E_GLQF_HKEY_KEY_2_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_2_SHIFT) +#define I40E_GLQF_HKEY_KEY_3_SHIFT 24 +#define I40E_GLQF_HKEY_KEY_3_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_3_SHIFT) +#define I40E_GLQF_HSYM(_i) (0x00269D00 + ((_i) * 4)) /* _i=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_HSYM_MAX_INDEX 63 +#define I40E_GLQF_HSYM_SYMH_ENA_SHIFT 0 +#define I40E_GLQF_HSYM_SYMH_ENA_MASK I40E_MASK(0x1, I40E_GLQF_HSYM_SYMH_ENA_SHIFT) +#define I40E_GLQF_PCNT(_i) (0x00266800 + ((_i) * 4)) /* _i=0...511 */ /* Reset: CORER */ +#define I40E_GLQF_PCNT_MAX_INDEX 511 +#define I40E_GLQF_PCNT_PCNT_SHIFT 0 +#define I40E_GLQF_PCNT_PCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_PCNT_PCNT_SHIFT) +#define I40E_GLQF_SWAP(_i, _j) (0x00267E00 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...63 */ /* Reset: CORER */ +#define I40E_GLQF_SWAP_MAX_INDEX 1 +#define I40E_GLQF_SWAP_OFF0_SRC0_SHIFT 0 +#define I40E_GLQF_SWAP_OFF0_SRC0_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF0_SRC0_SHIFT) +#define I40E_GLQF_SWAP_OFF0_SRC1_SHIFT 6 +#define I40E_GLQF_SWAP_OFF0_SRC1_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF0_SRC1_SHIFT) +#define I40E_GLQF_SWAP_FLEN0_SHIFT 12 +#define I40E_GLQF_SWAP_FLEN0_MASK I40E_MASK(0xF, I40E_GLQF_SWAP_FLEN0_SHIFT) +#define I40E_GLQF_SWAP_OFF1_SRC0_SHIFT 16 +#define I40E_GLQF_SWAP_OFF1_SRC0_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF1_SRC0_SHIFT) +#define I40E_GLQF_SWAP_OFF1_SRC1_SHIFT 22 +#define I40E_GLQF_SWAP_OFF1_SRC1_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF1_SRC1_SHIFT) +#define I40E_GLQF_SWAP_FLEN1_SHIFT 28 +#define I40E_GLQF_SWAP_FLEN1_MASK I40E_MASK(0xF, I40E_GLQF_SWAP_FLEN1_SHIFT) +#define I40E_PFQF_CTL_0 0x001C0AC0 /* Reset: CORER */ +#define I40E_PFQF_CTL_0_PEHSIZE_SHIFT 0 +#define I40E_PFQF_CTL_0_PEHSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_PEHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PEDSIZE_SHIFT 5 +#define I40E_PFQF_CTL_0_PEDSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_PEDSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PFFCHSIZE_SHIFT 10 +#define I40E_PFQF_CTL_0_PFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_PFFCHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_PFFCDSIZE_SHIFT 14 +#define I40E_PFQF_CTL_0_PFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_PFFCDSIZE_SHIFT) +#define I40E_PFQF_CTL_0_HASHLUTSIZE_SHIFT 16 +#define I40E_PFQF_CTL_0_HASHLUTSIZE_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_HASHLUTSIZE_SHIFT) +#define I40E_PFQF_CTL_0_FD_ENA_SHIFT 17 +#define I40E_PFQF_CTL_0_FD_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_FD_ENA_SHIFT) +#define I40E_PFQF_CTL_0_ETYPE_ENA_SHIFT 18 +#define I40E_PFQF_CTL_0_ETYPE_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_ETYPE_ENA_SHIFT) +#define I40E_PFQF_CTL_0_MACVLAN_ENA_SHIFT 19 +#define I40E_PFQF_CTL_0_MACVLAN_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_MACVLAN_ENA_SHIFT) +#define I40E_PFQF_CTL_0_VFFCHSIZE_SHIFT 20 +#define I40E_PFQF_CTL_0_VFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_VFFCHSIZE_SHIFT) +#define I40E_PFQF_CTL_0_VFFCDSIZE_SHIFT 24 +#define I40E_PFQF_CTL_0_VFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_VFFCDSIZE_SHIFT) +#define I40E_PFQF_CTL_1 0x00245D80 /* Reset: CORER */ +#define I40E_PFQF_CTL_1_CLEARFDTABLE_SHIFT 0 +#define I40E_PFQF_CTL_1_CLEARFDTABLE_MASK I40E_MASK(0x1, I40E_PFQF_CTL_1_CLEARFDTABLE_SHIFT) +#define I40E_PFQF_FDALLOC 0x00246280 /* Reset: CORER */ +#define I40E_PFQF_FDALLOC_FDALLOC_SHIFT 0 +#define 
I40E_PFQF_FDALLOC_FDALLOC_MASK I40E_MASK(0xFF, I40E_PFQF_FDALLOC_FDALLOC_SHIFT) +#define I40E_PFQF_FDALLOC_FDBEST_SHIFT 8 +#define I40E_PFQF_FDALLOC_FDBEST_MASK I40E_MASK(0xFF, I40E_PFQF_FDALLOC_FDBEST_SHIFT) +#define I40E_PFQF_FDSTAT 0x00246380 /* Reset: CORER */ +#define I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT 0 +#define I40E_PFQF_FDSTAT_GUARANT_CNT_MASK I40E_MASK(0x1FFF, I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT) +#define I40E_PFQF_FDSTAT_BEST_CNT_SHIFT 16 +#define I40E_PFQF_FDSTAT_BEST_CNT_MASK I40E_MASK(0x1FFF, I40E_PFQF_FDSTAT_BEST_CNT_SHIFT) +#define I40E_PFQF_HENA(_i) (0x00245900 + ((_i) * 128)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_PFQF_HENA_MAX_INDEX 1 +#define I40E_PFQF_HENA_PTYPE_ENA_SHIFT 0 +#define I40E_PFQF_HENA_PTYPE_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_PFQF_HENA_PTYPE_ENA_SHIFT) +#define I40E_PFQF_HKEY(_i) (0x00244800 + ((_i) * 128)) /* _i=0...12 */ /* Reset: CORER */ +#define I40E_PFQF_HKEY_MAX_INDEX 12 +#define I40E_PFQF_HKEY_KEY_0_SHIFT 0 +#define I40E_PFQF_HKEY_KEY_0_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_0_SHIFT) +#define I40E_PFQF_HKEY_KEY_1_SHIFT 8 +#define I40E_PFQF_HKEY_KEY_1_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_1_SHIFT) +#define I40E_PFQF_HKEY_KEY_2_SHIFT 16 +#define I40E_PFQF_HKEY_KEY_2_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_2_SHIFT) +#define I40E_PFQF_HKEY_KEY_3_SHIFT 24 +#define I40E_PFQF_HKEY_KEY_3_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_3_SHIFT) +#define I40E_PFQF_HLUT(_i) (0x00240000 + ((_i) * 128)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_PFQF_HLUT_MAX_INDEX 127 +#define I40E_PFQF_HLUT_LUT0_SHIFT 0 +#define I40E_PFQF_HLUT_LUT0_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT0_SHIFT) +#define I40E_PFQF_HLUT_LUT1_SHIFT 8 +#define I40E_PFQF_HLUT_LUT1_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT1_SHIFT) +#define I40E_PFQF_HLUT_LUT2_SHIFT 16 +#define I40E_PFQF_HLUT_LUT2_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT2_SHIFT) +#define I40E_PFQF_HLUT_LUT3_SHIFT 24 +#define I40E_PFQF_HLUT_LUT3_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT3_SHIFT) +#define I40E_PRTQF_CTL_0 0x00256E60 /* Reset: CORER */ +#define I40E_PRTQF_CTL_0_HSYM_ENA_SHIFT 0 +#define I40E_PRTQF_CTL_0_HSYM_ENA_MASK I40E_MASK(0x1, I40E_PRTQF_CTL_0_HSYM_ENA_SHIFT) +#define I40E_PRTQF_FD_FLXINSET(_i) (0x00253800 + ((_i) * 32)) /* _i=0...63 */ /* Reset: CORER */ +#define I40E_PRTQF_FD_FLXINSET_MAX_INDEX 63 +#define I40E_PRTQF_FD_FLXINSET_INSET_SHIFT 0 +#define I40E_PRTQF_FD_FLXINSET_INSET_MASK I40E_MASK(0xFF, I40E_PRTQF_FD_FLXINSET_INSET_SHIFT) +#define I40E_PRTQF_FD_MSK(_i, _j) (0x00252000 + ((_i) * 64 + (_j) * 32)) /* _i=0...63, _j=0...1 */ /* Reset: CORER */ +#define I40E_PRTQF_FD_MSK_MAX_INDEX 63 +#define I40E_PRTQF_FD_MSK_MASK_SHIFT 0 +#define I40E_PRTQF_FD_MSK_MASK_MASK I40E_MASK(0xFFFF, I40E_PRTQF_FD_MSK_MASK_SHIFT) +#define I40E_PRTQF_FD_MSK_OFFSET_SHIFT 16 +#define I40E_PRTQF_FD_MSK_OFFSET_MASK I40E_MASK(0x3F, I40E_PRTQF_FD_MSK_OFFSET_SHIFT) +#define I40E_PRTQF_FLX_PIT(_i) (0x00255200 + ((_i) * 32)) /* _i=0...8 */ /* Reset: CORER */ +#define I40E_PRTQF_FLX_PIT_MAX_INDEX 8 +#define I40E_PRTQF_FLX_PIT_SOURCE_OFF_SHIFT 0 +#define I40E_PRTQF_FLX_PIT_SOURCE_OFF_MASK I40E_MASK(0x1F, I40E_PRTQF_FLX_PIT_SOURCE_OFF_SHIFT) +#define I40E_PRTQF_FLX_PIT_FSIZE_SHIFT 5 +#define I40E_PRTQF_FLX_PIT_FSIZE_MASK I40E_MASK(0x1F, I40E_PRTQF_FLX_PIT_FSIZE_SHIFT) +#define I40E_PRTQF_FLX_PIT_DEST_OFF_SHIFT 10 +#define I40E_PRTQF_FLX_PIT_DEST_OFF_MASK I40E_MASK(0x3F, I40E_PRTQF_FLX_PIT_DEST_OFF_SHIFT) +#define I40E_VFQF_HENA1(_i, _VF) (0x00230800 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...1, _VF=0...127 */ /* Reset: CORER */ +#define 
I40E_VFQF_HENA1_MAX_INDEX 1 +#define I40E_VFQF_HENA1_PTYPE_ENA_SHIFT 0 +#define I40E_VFQF_HENA1_PTYPE_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_VFQF_HENA1_PTYPE_ENA_SHIFT) +#define I40E_VFQF_HKEY1(_i, _VF) (0x00228000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...12, _VF=0...127 */ /* Reset: CORER */ +#define I40E_VFQF_HKEY1_MAX_INDEX 12 +#define I40E_VFQF_HKEY1_KEY_0_SHIFT 0 +#define I40E_VFQF_HKEY1_KEY_0_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_0_SHIFT) +#define I40E_VFQF_HKEY1_KEY_1_SHIFT 8 +#define I40E_VFQF_HKEY1_KEY_1_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_1_SHIFT) +#define I40E_VFQF_HKEY1_KEY_2_SHIFT 16 +#define I40E_VFQF_HKEY1_KEY_2_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_2_SHIFT) +#define I40E_VFQF_HKEY1_KEY_3_SHIFT 24 +#define I40E_VFQF_HKEY1_KEY_3_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_3_SHIFT) +#define I40E_VFQF_HLUT1(_i, _VF) (0x00220000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: CORER */ +#define I40E_VFQF_HLUT1_MAX_INDEX 15 +#define I40E_VFQF_HLUT1_LUT0_SHIFT 0 +#define I40E_VFQF_HLUT1_LUT0_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT0_SHIFT) +#define I40E_VFQF_HLUT1_LUT1_SHIFT 8 +#define I40E_VFQF_HLUT1_LUT1_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT1_SHIFT) +#define I40E_VFQF_HLUT1_LUT2_SHIFT 16 +#define I40E_VFQF_HLUT1_LUT2_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT2_SHIFT) +#define I40E_VFQF_HLUT1_LUT3_SHIFT 24 +#define I40E_VFQF_HLUT1_LUT3_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT3_SHIFT) +#define I40E_VFQF_HREGION1(_i, _VF) (0x0022E000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...7, _VF=0...127 */ /* Reset: CORER */ +#define I40E_VFQF_HREGION1_MAX_INDEX 7 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_0_SHIFT 0 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_0_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_0_SHIFT) +#define I40E_VFQF_HREGION1_REGION_0_SHIFT 1 +#define I40E_VFQF_HREGION1_REGION_0_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_0_SHIFT) +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_1_SHIFT 4 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_1_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_1_SHIFT) +#define I40E_VFQF_HREGION1_REGION_1_SHIFT 5 +#define I40E_VFQF_HREGION1_REGION_1_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_1_SHIFT) +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_2_SHIFT 8 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_2_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_2_SHIFT) +#define I40E_VFQF_HREGION1_REGION_2_SHIFT 9 +#define I40E_VFQF_HREGION1_REGION_2_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_2_SHIFT) +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_3_SHIFT 12 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_3_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_3_SHIFT) +#define I40E_VFQF_HREGION1_REGION_3_SHIFT 13 +#define I40E_VFQF_HREGION1_REGION_3_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_3_SHIFT) +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_4_SHIFT 16 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_4_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_4_SHIFT) +#define I40E_VFQF_HREGION1_REGION_4_SHIFT 17 +#define I40E_VFQF_HREGION1_REGION_4_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_4_SHIFT) +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_5_SHIFT 20 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_5_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_5_SHIFT) +#define I40E_VFQF_HREGION1_REGION_5_SHIFT 21 +#define I40E_VFQF_HREGION1_REGION_5_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_5_SHIFT) +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_6_SHIFT 24 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_6_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_6_SHIFT) +#define 
I40E_VFQF_HREGION1_REGION_6_SHIFT 25 +#define I40E_VFQF_HREGION1_REGION_6_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_6_SHIFT) +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_7_SHIFT 28 +#define I40E_VFQF_HREGION1_OVERRIDE_ENA_7_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_7_SHIFT) +#define I40E_VFQF_HREGION1_REGION_7_SHIFT 29 +#define I40E_VFQF_HREGION1_REGION_7_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_7_SHIFT) +#define I40E_VPQF_CTL(_VF) (0x001C0000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ +#define I40E_VPQF_CTL_MAX_INDEX 127 +#define I40E_VPQF_CTL_PEHSIZE_SHIFT 0 +#define I40E_VPQF_CTL_PEHSIZE_MASK I40E_MASK(0x1F, I40E_VPQF_CTL_PEHSIZE_SHIFT) +#define I40E_VPQF_CTL_PEDSIZE_SHIFT 5 +#define I40E_VPQF_CTL_PEDSIZE_MASK I40E_MASK(0x1F, I40E_VPQF_CTL_PEDSIZE_SHIFT) +#define I40E_VPQF_CTL_FCHSIZE_SHIFT 10 +#define I40E_VPQF_CTL_FCHSIZE_MASK I40E_MASK(0xF, I40E_VPQF_CTL_FCHSIZE_SHIFT) +#define I40E_VPQF_CTL_FCDSIZE_SHIFT 14 +#define I40E_VPQF_CTL_FCDSIZE_MASK I40E_MASK(0x3, I40E_VPQF_CTL_FCDSIZE_SHIFT) +#define I40E_VSIQF_CTL(_VSI) (0x0020D800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ +#define I40E_VSIQF_CTL_MAX_INDEX 383 +#define I40E_VSIQF_CTL_FCOE_ENA_SHIFT 0 +#define I40E_VSIQF_CTL_FCOE_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_FCOE_ENA_SHIFT) +#define I40E_VSIQF_CTL_PETCP_ENA_SHIFT 1 +#define I40E_VSIQF_CTL_PETCP_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PETCP_ENA_SHIFT) +#define I40E_VSIQF_CTL_PEUUDP_ENA_SHIFT 2 +#define I40E_VSIQF_CTL_PEUUDP_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEUUDP_ENA_SHIFT) +#define I40E_VSIQF_CTL_PEMUDP_ENA_SHIFT 3 +#define I40E_VSIQF_CTL_PEMUDP_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEMUDP_ENA_SHIFT) +#define I40E_VSIQF_CTL_PEUFRAG_ENA_SHIFT 4 +#define I40E_VSIQF_CTL_PEUFRAG_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEUFRAG_ENA_SHIFT) +#define I40E_VSIQF_CTL_PEMFRAG_ENA_SHIFT 5 +#define I40E_VSIQF_CTL_PEMFRAG_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEMFRAG_ENA_SHIFT) +#define I40E_VSIQF_TCREGION(_i, _VSI) (0x00206000 + ((_i) * 2048 + (_VSI) * 4)) /* _i=0...3, _VSI=0...383 */ /* Reset: PFR */ +#define I40E_VSIQF_TCREGION_MAX_INDEX 3 +#define I40E_VSIQF_TCREGION_TC_OFFSET_SHIFT 0 +#define I40E_VSIQF_TCREGION_TC_OFFSET_MASK I40E_MASK(0x1FF, I40E_VSIQF_TCREGION_TC_OFFSET_SHIFT) +#define I40E_VSIQF_TCREGION_TC_SIZE_SHIFT 9 +#define I40E_VSIQF_TCREGION_TC_SIZE_MASK I40E_MASK(0x7, I40E_VSIQF_TCREGION_TC_SIZE_SHIFT) +#define I40E_VSIQF_TCREGION_TC_OFFSET2_SHIFT 16 +#define I40E_VSIQF_TCREGION_TC_OFFSET2_MASK I40E_MASK(0x1FF, I40E_VSIQF_TCREGION_TC_OFFSET2_SHIFT) +#define I40E_VSIQF_TCREGION_TC_SIZE2_SHIFT 25 +#define I40E_VSIQF_TCREGION_TC_SIZE2_MASK I40E_MASK(0x7, I40E_VSIQF_TCREGION_TC_SIZE2_SHIFT) +#define I40E_GL_FCOECRC(_i) (0x00314d80 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOECRC_MAX_INDEX 143 +#define I40E_GL_FCOECRC_FCOECRC_SHIFT 0 +#define I40E_GL_FCOECRC_FCOECRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOECRC_FCOECRC_SHIFT) +#define I40E_GL_FCOEDDPC(_i) (0x00314480 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDDPC_MAX_INDEX 143 +#define I40E_GL_FCOEDDPC_FCOEDDPC_SHIFT 0 +#define I40E_GL_FCOEDDPC_FCOEDDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDDPC_FCOEDDPC_SHIFT) +#define I40E_GL_FCOEDIFEC(_i) (0x00318480 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDIFEC_MAX_INDEX 143 +#define I40E_GL_FCOEDIFEC_FCOEDIFRC_SHIFT 0 +#define I40E_GL_FCOEDIFEC_FCOEDIFRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIFEC_FCOEDIFRC_SHIFT) +#define I40E_GL_FCOEDIFTCL(_i) (0x00354000 + ((_i) * 8)) /* 
_i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDIFTCL_MAX_INDEX 143 +#define I40E_GL_FCOEDIFTCL_FCOEDIFTC_SHIFT 0 +#define I40E_GL_FCOEDIFTCL_FCOEDIFTC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIFTCL_FCOEDIFTC_SHIFT) +#define I40E_GL_FCOEDIXEC(_i) (0x0034c000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDIXEC_MAX_INDEX 143 +#define I40E_GL_FCOEDIXEC_FCOEDIXEC_SHIFT 0 +#define I40E_GL_FCOEDIXEC_FCOEDIXEC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIXEC_FCOEDIXEC_SHIFT) +#define I40E_GL_FCOEDIXVC(_i) (0x00350000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDIXVC_MAX_INDEX 143 +#define I40E_GL_FCOEDIXVC_FCOEDIXVC_SHIFT 0 +#define I40E_GL_FCOEDIXVC_FCOEDIXVC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIXVC_FCOEDIXVC_SHIFT) +#define I40E_GL_FCOEDWRCH(_i) (0x00320004 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDWRCH_MAX_INDEX 143 +#define I40E_GL_FCOEDWRCH_FCOEDWRCH_SHIFT 0 +#define I40E_GL_FCOEDWRCH_FCOEDWRCH_MASK I40E_MASK(0xFFFF, I40E_GL_FCOEDWRCH_FCOEDWRCH_SHIFT) +#define I40E_GL_FCOEDWRCL(_i) (0x00320000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDWRCL_MAX_INDEX 143 +#define I40E_GL_FCOEDWRCL_FCOEDWRCL_SHIFT 0 +#define I40E_GL_FCOEDWRCL_FCOEDWRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDWRCL_FCOEDWRCL_SHIFT) +#define I40E_GL_FCOEDWTCH(_i) (0x00348084 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDWTCH_MAX_INDEX 143 +#define I40E_GL_FCOEDWTCH_FCOEDWTCH_SHIFT 0 +#define I40E_GL_FCOEDWTCH_FCOEDWTCH_MASK I40E_MASK(0xFFFF, I40E_GL_FCOEDWTCH_FCOEDWTCH_SHIFT) +#define I40E_GL_FCOEDWTCL(_i) (0x00348080 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEDWTCL_MAX_INDEX 143 +#define I40E_GL_FCOEDWTCL_FCOEDWTCL_SHIFT 0 +#define I40E_GL_FCOEDWTCL_FCOEDWTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDWTCL_FCOEDWTCL_SHIFT) +#define I40E_GL_FCOELAST(_i) (0x00314000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOELAST_MAX_INDEX 143 +#define I40E_GL_FCOELAST_FCOELAST_SHIFT 0 +#define I40E_GL_FCOELAST_FCOELAST_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOELAST_FCOELAST_SHIFT) +#define I40E_GL_FCOEPRC(_i) (0x00315200 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEPRC_MAX_INDEX 143 +#define I40E_GL_FCOEPRC_FCOEPRC_SHIFT 0 +#define I40E_GL_FCOEPRC_FCOEPRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEPRC_FCOEPRC_SHIFT) +#define I40E_GL_FCOEPTC(_i) (0x00344C00 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOEPTC_MAX_INDEX 143 +#define I40E_GL_FCOEPTC_FCOEPTC_SHIFT 0 +#define I40E_GL_FCOEPTC_FCOEPTC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEPTC_FCOEPTC_SHIFT) +#define I40E_GL_FCOERPDC(_i) (0x00324000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_FCOERPDC_MAX_INDEX 143 +#define I40E_GL_FCOERPDC_FCOERPDC_SHIFT 0 +#define I40E_GL_FCOERPDC_FCOERPDC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOERPDC_FCOERPDC_SHIFT) +#define I40E_GL_RXERR1_L(_i) (0x00318000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_RXERR1_L_MAX_INDEX 143 +#define I40E_GL_RXERR1_L_FCOEDIFRC_SHIFT 0 +#define I40E_GL_RXERR1_L_FCOEDIFRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1_L_FCOEDIFRC_SHIFT) +#define I40E_GL_RXERR2_L(_i) (0x0031c000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ +#define I40E_GL_RXERR2_L_MAX_INDEX 143 +#define I40E_GL_RXERR2_L_FCOEDIXAC_SHIFT 0 +#define I40E_GL_RXERR2_L_FCOEDIXAC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR2_L_FCOEDIXAC_SHIFT) +#define I40E_GLPRT_BPRCH(_i) (0x003005E4 + ((_i) * 8)) /* 
_i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_BPRCH_MAX_INDEX 3 +#define I40E_GLPRT_BPRCH_UPRCH_SHIFT 0 +#define I40E_GLPRT_BPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_BPRCH_UPRCH_SHIFT) +#define I40E_GLPRT_BPRCL(_i) (0x003005E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_BPRCL_MAX_INDEX 3 +#define I40E_GLPRT_BPRCL_UPRCH_SHIFT 0 +#define I40E_GLPRT_BPRCL_UPRCH_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_BPRCL_UPRCH_SHIFT) +#define I40E_GLPRT_BPTCH(_i) (0x00300A04 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_BPTCH_MAX_INDEX 3 +#define I40E_GLPRT_BPTCH_UPRCH_SHIFT 0 +#define I40E_GLPRT_BPTCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_BPTCH_UPRCH_SHIFT) +#define I40E_GLPRT_BPTCL(_i) (0x00300A00 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_BPTCL_MAX_INDEX 3 +#define I40E_GLPRT_BPTCL_UPRCH_SHIFT 0 +#define I40E_GLPRT_BPTCL_UPRCH_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_BPTCL_UPRCH_SHIFT) +#define I40E_GLPRT_CRCERRS(_i) (0x00300080 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_CRCERRS_MAX_INDEX 3 +#define I40E_GLPRT_CRCERRS_CRCERRS_SHIFT 0 +#define I40E_GLPRT_CRCERRS_CRCERRS_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_CRCERRS_CRCERRS_SHIFT) +#define I40E_GLPRT_GORCH(_i) (0x00300004 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_GORCH_MAX_INDEX 3 +#define I40E_GLPRT_GORCH_GORCH_SHIFT 0 +#define I40E_GLPRT_GORCH_GORCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_GORCH_GORCH_SHIFT) +#define I40E_GLPRT_GORCL(_i) (0x00300000 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_GORCL_MAX_INDEX 3 +#define I40E_GLPRT_GORCL_GORCL_SHIFT 0 +#define I40E_GLPRT_GORCL_GORCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_GORCL_GORCL_SHIFT) +#define I40E_GLPRT_GOTCH(_i) (0x00300684 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_GOTCH_MAX_INDEX 3 +#define I40E_GLPRT_GOTCH_GOTCH_SHIFT 0 +#define I40E_GLPRT_GOTCH_GOTCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_GOTCH_GOTCH_SHIFT) +#define I40E_GLPRT_GOTCL(_i) (0x00300680 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_GOTCL_MAX_INDEX 3 +#define I40E_GLPRT_GOTCL_GOTCL_SHIFT 0 +#define I40E_GLPRT_GOTCL_GOTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_GOTCL_GOTCL_SHIFT) +#define I40E_GLPRT_ILLERRC(_i) (0x003000E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_ILLERRC_MAX_INDEX 3 +#define I40E_GLPRT_ILLERRC_ILLERRC_SHIFT 0 +#define I40E_GLPRT_ILLERRC_ILLERRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_ILLERRC_ILLERRC_SHIFT) +#define I40E_GLPRT_LDPC(_i) (0x00300620 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_LDPC_MAX_INDEX 3 +#define I40E_GLPRT_LDPC_LDPC_SHIFT 0 +#define I40E_GLPRT_LDPC_LDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LDPC_LDPC_SHIFT) +#define I40E_GLPRT_LXOFFRXC(_i) (0x00300160 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_LXOFFRXC_MAX_INDEX 3 +#define I40E_GLPRT_LXOFFRXC_LXOFFRXCNT_SHIFT 0 +#define I40E_GLPRT_LXOFFRXC_LXOFFRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXOFFRXC_LXOFFRXCNT_SHIFT) +#define I40E_GLPRT_LXOFFTXC(_i) (0x003009A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_LXOFFTXC_MAX_INDEX 3 +#define I40E_GLPRT_LXOFFTXC_LXOFFTXC_SHIFT 0 +#define I40E_GLPRT_LXOFFTXC_LXOFFTXC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXOFFTXC_LXOFFTXC_SHIFT) +#define I40E_GLPRT_LXONRXC(_i) (0x00300140 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_LXONRXC_MAX_INDEX 3 +#define I40E_GLPRT_LXONRXC_LXONRXCNT_SHIFT 0 +#define 
I40E_GLPRT_LXONRXC_LXONRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXONRXC_LXONRXCNT_SHIFT) +#define I40E_GLPRT_LXONTXC(_i) (0x00300980 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_LXONTXC_MAX_INDEX 3 +#define I40E_GLPRT_LXONTXC_LXONTXC_SHIFT 0 +#define I40E_GLPRT_LXONTXC_LXONTXC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXONTXC_LXONTXC_SHIFT) +#define I40E_GLPRT_MLFC(_i) (0x00300020 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_MLFC_MAX_INDEX 3 +#define I40E_GLPRT_MLFC_MLFC_SHIFT 0 +#define I40E_GLPRT_MLFC_MLFC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MLFC_MLFC_SHIFT) +#define I40E_GLPRT_MPRCH(_i) (0x003005C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_MPRCH_MAX_INDEX 3 +#define I40E_GLPRT_MPRCH_MPRCH_SHIFT 0 +#define I40E_GLPRT_MPRCH_MPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_MPRCH_MPRCH_SHIFT) +#define I40E_GLPRT_MPRCL(_i) (0x003005C0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_MPRCL_MAX_INDEX 3 +#define I40E_GLPRT_MPRCL_MPRCL_SHIFT 0 +#define I40E_GLPRT_MPRCL_MPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MPRCL_MPRCL_SHIFT) +#define I40E_GLPRT_MPTCH(_i) (0x003009E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_MPTCH_MAX_INDEX 3 +#define I40E_GLPRT_MPTCH_MPTCH_SHIFT 0 +#define I40E_GLPRT_MPTCH_MPTCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_MPTCH_MPTCH_SHIFT) +#define I40E_GLPRT_MPTCL(_i) (0x003009E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_MPTCL_MAX_INDEX 3 +#define I40E_GLPRT_MPTCL_MPTCL_SHIFT 0 +#define I40E_GLPRT_MPTCL_MPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MPTCL_MPTCL_SHIFT) +#define I40E_GLPRT_MRFC(_i) (0x00300040 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_MRFC_MAX_INDEX 3 +#define I40E_GLPRT_MRFC_MRFC_SHIFT 0 +#define I40E_GLPRT_MRFC_MRFC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MRFC_MRFC_SHIFT) +#define I40E_GLPRT_PRC1023H(_i) (0x00300504 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC1023H_MAX_INDEX 3 +#define I40E_GLPRT_PRC1023H_PRC1023H_SHIFT 0 +#define I40E_GLPRT_PRC1023H_PRC1023H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC1023H_PRC1023H_SHIFT) +#define I40E_GLPRT_PRC1023L(_i) (0x00300500 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC1023L_MAX_INDEX 3 +#define I40E_GLPRT_PRC1023L_PRC1023L_SHIFT 0 +#define I40E_GLPRT_PRC1023L_PRC1023L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC1023L_PRC1023L_SHIFT) +#define I40E_GLPRT_PRC127H(_i) (0x003004A4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC127H_MAX_INDEX 3 +#define I40E_GLPRT_PRC127H_PRC127H_SHIFT 0 +#define I40E_GLPRT_PRC127H_PRC127H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC127H_PRC127H_SHIFT) +#define I40E_GLPRT_PRC127L(_i) (0x003004A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC127L_MAX_INDEX 3 +#define I40E_GLPRT_PRC127L_PRC127L_SHIFT 0 +#define I40E_GLPRT_PRC127L_PRC127L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC127L_PRC127L_SHIFT) +#define I40E_GLPRT_PRC1522H(_i) (0x00300524 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC1522H_MAX_INDEX 3 +#define I40E_GLPRT_PRC1522H_PRC1522H_SHIFT 0 +#define I40E_GLPRT_PRC1522H_PRC1522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC1522H_PRC1522H_SHIFT) +#define I40E_GLPRT_PRC1522L(_i) (0x00300520 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC1522L_MAX_INDEX 3 +#define I40E_GLPRT_PRC1522L_PRC1522L_SHIFT 0 +#define I40E_GLPRT_PRC1522L_PRC1522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC1522L_PRC1522L_SHIFT) +#define 
I40E_GLPRT_PRC255H(_i) (0x003004C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC255H_MAX_INDEX 3 +#define I40E_GLPRT_PRC255H_PRTPRC255H_SHIFT 0 +#define I40E_GLPRT_PRC255H_PRTPRC255H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC255H_PRTPRC255H_SHIFT) +#define I40E_GLPRT_PRC255L(_i) (0x003004C0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC255L_MAX_INDEX 3 +#define I40E_GLPRT_PRC255L_PRC255L_SHIFT 0 +#define I40E_GLPRT_PRC255L_PRC255L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC255L_PRC255L_SHIFT) +#define I40E_GLPRT_PRC511H(_i) (0x003004E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC511H_MAX_INDEX 3 +#define I40E_GLPRT_PRC511H_PRC511H_SHIFT 0 +#define I40E_GLPRT_PRC511H_PRC511H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC511H_PRC511H_SHIFT) +#define I40E_GLPRT_PRC511L(_i) (0x003004E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC511L_MAX_INDEX 3 +#define I40E_GLPRT_PRC511L_PRC511L_SHIFT 0 +#define I40E_GLPRT_PRC511L_PRC511L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC511L_PRC511L_SHIFT) +#define I40E_GLPRT_PRC64H(_i) (0x00300484 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC64H_MAX_INDEX 3 +#define I40E_GLPRT_PRC64H_PRC64H_SHIFT 0 +#define I40E_GLPRT_PRC64H_PRC64H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC64H_PRC64H_SHIFT) +#define I40E_GLPRT_PRC64L(_i) (0x00300480 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC64L_MAX_INDEX 3 +#define I40E_GLPRT_PRC64L_PRC64L_SHIFT 0 +#define I40E_GLPRT_PRC64L_PRC64L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC64L_PRC64L_SHIFT) +#define I40E_GLPRT_PRC9522H(_i) (0x00300544 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC9522H_MAX_INDEX 3 +#define I40E_GLPRT_PRC9522H_PRC1522H_SHIFT 0 +#define I40E_GLPRT_PRC9522H_PRC1522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC9522H_PRC1522H_SHIFT) +#define I40E_GLPRT_PRC9522L(_i) (0x00300540 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PRC9522L_MAX_INDEX 3 +#define I40E_GLPRT_PRC9522L_PRC1522L_SHIFT 0 +#define I40E_GLPRT_PRC9522L_PRC1522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC9522L_PRC1522L_SHIFT) +#define I40E_GLPRT_PTC1023H(_i) (0x00300724 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC1023H_MAX_INDEX 3 +#define I40E_GLPRT_PTC1023H_PTC1023H_SHIFT 0 +#define I40E_GLPRT_PTC1023H_PTC1023H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC1023H_PTC1023H_SHIFT) +#define I40E_GLPRT_PTC1023L(_i) (0x00300720 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC1023L_MAX_INDEX 3 +#define I40E_GLPRT_PTC1023L_PTC1023L_SHIFT 0 +#define I40E_GLPRT_PTC1023L_PTC1023L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC1023L_PTC1023L_SHIFT) +#define I40E_GLPRT_PTC127H(_i) (0x003006C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC127H_MAX_INDEX 3 +#define I40E_GLPRT_PTC127H_PTC127H_SHIFT 0 +#define I40E_GLPRT_PTC127H_PTC127H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC127H_PTC127H_SHIFT) +#define I40E_GLPRT_PTC127L(_i) (0x003006C0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC127L_MAX_INDEX 3 +#define I40E_GLPRT_PTC127L_PTC127L_SHIFT 0 +#define I40E_GLPRT_PTC127L_PTC127L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC127L_PTC127L_SHIFT) +#define I40E_GLPRT_PTC1522H(_i) (0x00300744 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC1522H_MAX_INDEX 3 +#define I40E_GLPRT_PTC1522H_PTC1522H_SHIFT 0 +#define I40E_GLPRT_PTC1522H_PTC1522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC1522H_PTC1522H_SHIFT) +#define 
I40E_GLPRT_PTC1522L(_i) (0x00300740 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC1522L_MAX_INDEX 3 +#define I40E_GLPRT_PTC1522L_PTC1522L_SHIFT 0 +#define I40E_GLPRT_PTC1522L_PTC1522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC1522L_PTC1522L_SHIFT) +#define I40E_GLPRT_PTC255H(_i) (0x003006E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC255H_MAX_INDEX 3 +#define I40E_GLPRT_PTC255H_PTC255H_SHIFT 0 +#define I40E_GLPRT_PTC255H_PTC255H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC255H_PTC255H_SHIFT) +#define I40E_GLPRT_PTC255L(_i) (0x003006E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC255L_MAX_INDEX 3 +#define I40E_GLPRT_PTC255L_PTC255L_SHIFT 0 +#define I40E_GLPRT_PTC255L_PTC255L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC255L_PTC255L_SHIFT) +#define I40E_GLPRT_PTC511H(_i) (0x00300704 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC511H_MAX_INDEX 3 +#define I40E_GLPRT_PTC511H_PTC511H_SHIFT 0 +#define I40E_GLPRT_PTC511H_PTC511H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC511H_PTC511H_SHIFT) +#define I40E_GLPRT_PTC511L(_i) (0x00300700 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC511L_MAX_INDEX 3 +#define I40E_GLPRT_PTC511L_PTC511L_SHIFT 0 +#define I40E_GLPRT_PTC511L_PTC511L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC511L_PTC511L_SHIFT) +#define I40E_GLPRT_PTC64H(_i) (0x003006A4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC64H_MAX_INDEX 3 +#define I40E_GLPRT_PTC64H_PTC64H_SHIFT 0 +#define I40E_GLPRT_PTC64H_PTC64H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC64H_PTC64H_SHIFT) +#define I40E_GLPRT_PTC64L(_i) (0x003006A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC64L_MAX_INDEX 3 +#define I40E_GLPRT_PTC64L_PTC64L_SHIFT 0 +#define I40E_GLPRT_PTC64L_PTC64L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC64L_PTC64L_SHIFT) +#define I40E_GLPRT_PTC9522H(_i) (0x00300764 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC9522H_MAX_INDEX 3 +#define I40E_GLPRT_PTC9522H_PTC9522H_SHIFT 0 +#define I40E_GLPRT_PTC9522H_PTC9522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC9522H_PTC9522H_SHIFT) +#define I40E_GLPRT_PTC9522L(_i) (0x00300760 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_PTC9522L_MAX_INDEX 3 +#define I40E_GLPRT_PTC9522L_PTC9522L_SHIFT 0 +#define I40E_GLPRT_PTC9522L_PTC9522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC9522L_PTC9522L_SHIFT) +#define I40E_GLPRT_PXOFFRXC(_i, _j) (0x00300280 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ +#define I40E_GLPRT_PXOFFRXC_MAX_INDEX 3 +#define I40E_GLPRT_PXOFFRXC_PRPXOFFRXCNT_SHIFT 0 +#define I40E_GLPRT_PXOFFRXC_PRPXOFFRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXOFFRXC_PRPXOFFRXCNT_SHIFT) +#define I40E_GLPRT_PXOFFTXC(_i, _j) (0x00300880 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ +#define I40E_GLPRT_PXOFFTXC_MAX_INDEX 3 +#define I40E_GLPRT_PXOFFTXC_PRPXOFFTXCNT_SHIFT 0 +#define I40E_GLPRT_PXOFFTXC_PRPXOFFTXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXOFFTXC_PRPXOFFTXCNT_SHIFT) +#define I40E_GLPRT_PXONRXC(_i, _j) (0x00300180 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ +#define I40E_GLPRT_PXONRXC_MAX_INDEX 3 +#define I40E_GLPRT_PXONRXC_PRPXONRXCNT_SHIFT 0 +#define I40E_GLPRT_PXONRXC_PRPXONRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXONRXC_PRPXONRXCNT_SHIFT) +#define I40E_GLPRT_PXONTXC(_i, _j) (0x00300780 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ +#define I40E_GLPRT_PXONTXC_MAX_INDEX 3 
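/*
 * Editor's note: a minimal sketch, not part of the original patch, of how
 * the two-dimensional register macros above expand. The per-priority pause
 * counters use a stride of 8 bytes per port (_i) and 32 bytes per priority
 * (_j), so each macro is plain address arithmetic over the base offset.
 */
#include <assert.h>
static void i40e_pxoff_addr_example(void)
{
	/* port 1, priority 2: 0x00300280 + 1*8 + 2*32 == 0x003002C8 */
	assert(I40E_GLPRT_PXOFFRXC(1, 2) == 0x003002C8);
}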
+#define I40E_GLPRT_PXONTXC_PRPXONTXC_SHIFT 0 +#define I40E_GLPRT_PXONTXC_PRPXONTXC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXONTXC_PRPXONTXC_SHIFT) +#define I40E_GLPRT_RDPC(_i) (0x00300600 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_RDPC_MAX_INDEX 3 +#define I40E_GLPRT_RDPC_RDPC_SHIFT 0 +#define I40E_GLPRT_RDPC_RDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RDPC_RDPC_SHIFT) +#define I40E_GLPRT_RFC(_i) (0x00300560 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_RFC_MAX_INDEX 3 +#define I40E_GLPRT_RFC_RFC_SHIFT 0 +#define I40E_GLPRT_RFC_RFC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RFC_RFC_SHIFT) +#define I40E_GLPRT_RJC(_i) (0x00300580 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_RJC_MAX_INDEX 3 +#define I40E_GLPRT_RJC_RJC_SHIFT 0 +#define I40E_GLPRT_RJC_RJC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RJC_RJC_SHIFT) +#define I40E_GLPRT_RLEC(_i) (0x003000A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_RLEC_MAX_INDEX 3 +#define I40E_GLPRT_RLEC_RLEC_SHIFT 0 +#define I40E_GLPRT_RLEC_RLEC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RLEC_RLEC_SHIFT) +#define I40E_GLPRT_ROC(_i) (0x00300120 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_ROC_MAX_INDEX 3 +#define I40E_GLPRT_ROC_ROC_SHIFT 0 +#define I40E_GLPRT_ROC_ROC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_ROC_ROC_SHIFT) +#define I40E_GLPRT_RUC(_i) (0x00300100 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_RUC_MAX_INDEX 3 +#define I40E_GLPRT_RUC_RUC_SHIFT 0 +#define I40E_GLPRT_RUC_RUC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RUC_RUC_SHIFT) +#define I40E_GLPRT_RUPP(_i) (0x00300660 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_RUPP_MAX_INDEX 3 +#define I40E_GLPRT_RUPP_RUPP_SHIFT 0 +#define I40E_GLPRT_RUPP_RUPP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RUPP_RUPP_SHIFT) +#define I40E_GLPRT_RXON2OFFCNT(_i, _j) (0x00300380 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ +#define I40E_GLPRT_RXON2OFFCNT_MAX_INDEX 3 +#define I40E_GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_SHIFT 0 +#define I40E_GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_SHIFT) +#define I40E_GLPRT_TDOLD(_i) (0x00300A20 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_TDOLD_MAX_INDEX 3 +#define I40E_GLPRT_TDOLD_GLPRT_TDOLD_SHIFT 0 +#define I40E_GLPRT_TDOLD_GLPRT_TDOLD_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_TDOLD_GLPRT_TDOLD_SHIFT) +#define I40E_GLPRT_TDPC(_i) (0x00375400 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_TDPC_MAX_INDEX 3 +#define I40E_GLPRT_TDPC_TDPC_SHIFT 0 +#define I40E_GLPRT_TDPC_TDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_TDPC_TDPC_SHIFT) +#define I40E_GLPRT_UPRCH(_i) (0x003005A4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_UPRCH_MAX_INDEX 3 +#define I40E_GLPRT_UPRCH_UPRCH_SHIFT 0 +#define I40E_GLPRT_UPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_UPRCH_UPRCH_SHIFT) +#define I40E_GLPRT_UPRCL(_i) (0x003005A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_UPRCL_MAX_INDEX 3 +#define I40E_GLPRT_UPRCL_UPRCL_SHIFT 0 +#define I40E_GLPRT_UPRCL_UPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_UPRCL_UPRCL_SHIFT) +#define I40E_GLPRT_UPTCH(_i) (0x003009C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_GLPRT_UPTCH_MAX_INDEX 3 +#define I40E_GLPRT_UPTCH_UPTCH_SHIFT 0 +#define I40E_GLPRT_UPTCH_UPTCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_UPTCH_UPTCH_SHIFT) +#define I40E_GLPRT_UPTCL(_i) (0x003009C0 + ((_i) * 8)) /* _i=0...3 */ 
/* Reset: CORER */ +#define I40E_GLPRT_UPTCL_MAX_INDEX 3 +#define I40E_GLPRT_UPTCL_VUPTCH_SHIFT 0 +#define I40E_GLPRT_UPTCL_VUPTCH_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_UPTCL_VUPTCH_SHIFT) +#define I40E_GLSW_BPRCH(_i) (0x00370104 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_BPRCH_MAX_INDEX 15 +#define I40E_GLSW_BPRCH_BPRCH_SHIFT 0 +#define I40E_GLSW_BPRCH_BPRCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_BPRCH_BPRCH_SHIFT) +#define I40E_GLSW_BPRCL(_i) (0x00370100 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_BPRCL_MAX_INDEX 15 +#define I40E_GLSW_BPRCL_BPRCL_SHIFT 0 +#define I40E_GLSW_BPRCL_BPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_BPRCL_BPRCL_SHIFT) +#define I40E_GLSW_BPTCH(_i) (0x00340104 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_BPTCH_MAX_INDEX 15 +#define I40E_GLSW_BPTCH_BPTCH_SHIFT 0 +#define I40E_GLSW_BPTCH_BPTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_BPTCH_BPTCH_SHIFT) +#define I40E_GLSW_BPTCL(_i) (0x00340100 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_BPTCL_MAX_INDEX 15 +#define I40E_GLSW_BPTCL_BPTCL_SHIFT 0 +#define I40E_GLSW_BPTCL_BPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_BPTCL_BPTCL_SHIFT) +#define I40E_GLSW_GORCH(_i) (0x0035C004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_GORCH_MAX_INDEX 15 +#define I40E_GLSW_GORCH_GORCH_SHIFT 0 +#define I40E_GLSW_GORCH_GORCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_GORCH_GORCH_SHIFT) +#define I40E_GLSW_GORCL(_i) (0x0035c000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_GORCL_MAX_INDEX 15 +#define I40E_GLSW_GORCL_GORCL_SHIFT 0 +#define I40E_GLSW_GORCL_GORCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_GORCL_GORCL_SHIFT) +#define I40E_GLSW_GOTCH(_i) (0x0032C004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_GOTCH_MAX_INDEX 15 +#define I40E_GLSW_GOTCH_GOTCH_SHIFT 0 +#define I40E_GLSW_GOTCH_GOTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_GOTCH_GOTCH_SHIFT) +#define I40E_GLSW_GOTCL(_i) (0x0032c000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_GOTCL_MAX_INDEX 15 +#define I40E_GLSW_GOTCL_GOTCL_SHIFT 0 +#define I40E_GLSW_GOTCL_GOTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_GOTCL_GOTCL_SHIFT) +#define I40E_GLSW_MPRCH(_i) (0x00370084 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_MPRCH_MAX_INDEX 15 +#define I40E_GLSW_MPRCH_MPRCH_SHIFT 0 +#define I40E_GLSW_MPRCH_MPRCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_MPRCH_MPRCH_SHIFT) +#define I40E_GLSW_MPRCL(_i) (0x00370080 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_MPRCL_MAX_INDEX 15 +#define I40E_GLSW_MPRCL_MPRCL_SHIFT 0 +#define I40E_GLSW_MPRCL_MPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_MPRCL_MPRCL_SHIFT) +#define I40E_GLSW_MPTCH(_i) (0x00340084 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_MPTCH_MAX_INDEX 15 +#define I40E_GLSW_MPTCH_MPTCH_SHIFT 0 +#define I40E_GLSW_MPTCH_MPTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_MPTCH_MPTCH_SHIFT) +#define I40E_GLSW_MPTCL(_i) (0x00340080 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_MPTCL_MAX_INDEX 15 +#define I40E_GLSW_MPTCL_MPTCL_SHIFT 0 +#define I40E_GLSW_MPTCL_MPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_MPTCL_MPTCL_SHIFT) +#define I40E_GLSW_RUPP(_i) (0x00370180 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_RUPP_MAX_INDEX 15 +#define I40E_GLSW_RUPP_RUPP_SHIFT 0 +#define I40E_GLSW_RUPP_RUPP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_RUPP_RUPP_SHIFT) +#define I40E_GLSW_TDPC(_i) (0x00348000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ 
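/*
 * Editor's note: a minimal sketch, not part of the original patch, showing
 * how the statistics counters above, which are split across a 32-bit low
 * ("...L") and a 16-bit high ("...H") register, combine into one 48-bit
 * value. The rd32() read helper is a hypothetical stand-in for the
 * driver's register accessor; struct i40e_hw comes from i40e_type.h.
 */
static inline uint64_t
i40e_read_stat48(struct i40e_hw *hw, uint32_t lo_reg, uint32_t hi_reg)
{
	uint64_t lo = rd32(hw, lo_reg);		/* full 32-bit low word */
	uint64_t hi = rd32(hw, hi_reg) & I40E_GLPRT_GORCH_GORCH_MASK;
	return (hi << 32) | lo;	/* e.g. GORCL/GORCH: good octets received */
}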
+#define I40E_GLSW_TDPC_MAX_INDEX 15 +#define I40E_GLSW_TDPC_TDPC_SHIFT 0 +#define I40E_GLSW_TDPC_TDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_TDPC_TDPC_SHIFT) +#define I40E_GLSW_UPRCH(_i) (0x00370004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_UPRCH_MAX_INDEX 15 +#define I40E_GLSW_UPRCH_UPRCH_SHIFT 0 +#define I40E_GLSW_UPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_UPRCH_UPRCH_SHIFT) +#define I40E_GLSW_UPRCL(_i) (0x00370000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_UPRCL_MAX_INDEX 15 +#define I40E_GLSW_UPRCL_UPRCL_SHIFT 0 +#define I40E_GLSW_UPRCL_UPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_UPRCL_UPRCL_SHIFT) +#define I40E_GLSW_UPTCH(_i) (0x00340004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_UPTCH_MAX_INDEX 15 +#define I40E_GLSW_UPTCH_UPTCH_SHIFT 0 +#define I40E_GLSW_UPTCH_UPTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_UPTCH_UPTCH_SHIFT) +#define I40E_GLSW_UPTCL(_i) (0x00340000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_GLSW_UPTCL_MAX_INDEX 15 +#define I40E_GLSW_UPTCL_UPTCL_SHIFT 0 +#define I40E_GLSW_UPTCL_UPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_UPTCL_UPTCL_SHIFT) +#define I40E_GLV_BPRCH(_i) (0x0036D804 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_BPRCH_MAX_INDEX 383 +#define I40E_GLV_BPRCH_BPRCH_SHIFT 0 +#define I40E_GLV_BPRCH_BPRCH_MASK I40E_MASK(0xFFFF, I40E_GLV_BPRCH_BPRCH_SHIFT) +#define I40E_GLV_BPRCL(_i) (0x0036d800 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_BPRCL_MAX_INDEX 383 +#define I40E_GLV_BPRCL_BPRCL_SHIFT 0 +#define I40E_GLV_BPRCL_BPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_BPRCL_BPRCL_SHIFT) +#define I40E_GLV_BPTCH(_i) (0x0033D804 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_BPTCH_MAX_INDEX 383 +#define I40E_GLV_BPTCH_BPTCH_SHIFT 0 +#define I40E_GLV_BPTCH_BPTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_BPTCH_BPTCH_SHIFT) +#define I40E_GLV_BPTCL(_i) (0x0033d800 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_BPTCL_MAX_INDEX 383 +#define I40E_GLV_BPTCL_BPTCL_SHIFT 0 +#define I40E_GLV_BPTCL_BPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_BPTCL_BPTCL_SHIFT) +#define I40E_GLV_GORCH(_i) (0x00358004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_GORCH_MAX_INDEX 383 +#define I40E_GLV_GORCH_GORCH_SHIFT 0 +#define I40E_GLV_GORCH_GORCH_MASK I40E_MASK(0xFFFF, I40E_GLV_GORCH_GORCH_SHIFT) +#define I40E_GLV_GORCL(_i) (0x00358000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_GORCL_MAX_INDEX 383 +#define I40E_GLV_GORCL_GORCL_SHIFT 0 +#define I40E_GLV_GORCL_GORCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_GORCL_GORCL_SHIFT) +#define I40E_GLV_GOTCH(_i) (0x00328004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_GOTCH_MAX_INDEX 383 +#define I40E_GLV_GOTCH_GOTCH_SHIFT 0 +#define I40E_GLV_GOTCH_GOTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_GOTCH_GOTCH_SHIFT) +#define I40E_GLV_GOTCL(_i) (0x00328000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_GOTCL_MAX_INDEX 383 +#define I40E_GLV_GOTCL_GOTCL_SHIFT 0 +#define I40E_GLV_GOTCL_GOTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_GOTCL_GOTCL_SHIFT) +#define I40E_GLV_MPRCH(_i) (0x0036CC04 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_MPRCH_MAX_INDEX 383 +#define I40E_GLV_MPRCH_MPRCH_SHIFT 0 +#define I40E_GLV_MPRCH_MPRCH_MASK I40E_MASK(0xFFFF, I40E_GLV_MPRCH_MPRCH_SHIFT) +#define I40E_GLV_MPRCL(_i) (0x0036cc00 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_MPRCL_MAX_INDEX 383 +#define 
I40E_GLV_MPRCL_MPRCL_SHIFT 0 +#define I40E_GLV_MPRCL_MPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_MPRCL_MPRCL_SHIFT) +#define I40E_GLV_MPTCH(_i) (0x0033CC04 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_MPTCH_MAX_INDEX 383 +#define I40E_GLV_MPTCH_MPTCH_SHIFT 0 +#define I40E_GLV_MPTCH_MPTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_MPTCH_MPTCH_SHIFT) +#define I40E_GLV_MPTCL(_i) (0x0033cc00 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_MPTCL_MAX_INDEX 383 +#define I40E_GLV_MPTCL_MPTCL_SHIFT 0 +#define I40E_GLV_MPTCL_MPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_MPTCL_MPTCL_SHIFT) +#define I40E_GLV_RDPC(_i) (0x00310000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_RDPC_MAX_INDEX 383 +#define I40E_GLV_RDPC_RDPC_SHIFT 0 +#define I40E_GLV_RDPC_RDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_RDPC_RDPC_SHIFT) +#define I40E_GLV_RUPP(_i) (0x0036E400 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_RUPP_MAX_INDEX 383 +#define I40E_GLV_RUPP_RUPP_SHIFT 0 +#define I40E_GLV_RUPP_RUPP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_RUPP_RUPP_SHIFT) +#define I40E_GLV_TEPC(_VSI) (0x00344000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_TEPC_MAX_INDEX 383 +#define I40E_GLV_TEPC_TEPC_SHIFT 0 +#define I40E_GLV_TEPC_TEPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_TEPC_TEPC_SHIFT) +#define I40E_GLV_UPRCH(_i) (0x0036C004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_UPRCH_MAX_INDEX 383 +#define I40E_GLV_UPRCH_UPRCH_SHIFT 0 +#define I40E_GLV_UPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLV_UPRCH_UPRCH_SHIFT) +#define I40E_GLV_UPRCL(_i) (0x0036c000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_UPRCL_MAX_INDEX 383 +#define I40E_GLV_UPRCL_UPRCL_SHIFT 0 +#define I40E_GLV_UPRCL_UPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_UPRCL_UPRCL_SHIFT) +#define I40E_GLV_UPTCH(_i) (0x0033C004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_UPTCH_MAX_INDEX 383 +#define I40E_GLV_UPTCH_GLVUPTCH_SHIFT 0 +#define I40E_GLV_UPTCH_GLVUPTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_UPTCH_GLVUPTCH_SHIFT) +#define I40E_GLV_UPTCL(_i) (0x0033c000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ +#define I40E_GLV_UPTCL_MAX_INDEX 383 +#define I40E_GLV_UPTCL_UPTCL_SHIFT 0 +#define I40E_GLV_UPTCL_UPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_UPTCL_UPTCL_SHIFT) +#define I40E_GLVEBTC_RBCH(_i, _j) (0x00364004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_RBCH_MAX_INDEX 7 +#define I40E_GLVEBTC_RBCH_TCBCH_SHIFT 0 +#define I40E_GLVEBTC_RBCH_TCBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_RBCH_TCBCH_SHIFT) +#define I40E_GLVEBTC_RBCL(_i, _j) (0x00364000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_RBCL_MAX_INDEX 7 +#define I40E_GLVEBTC_RBCL_TCBCL_SHIFT 0 +#define I40E_GLVEBTC_RBCL_TCBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_RBCL_TCBCL_SHIFT) +#define I40E_GLVEBTC_RPCH(_i, _j) (0x00368004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_RPCH_MAX_INDEX 7 +#define I40E_GLVEBTC_RPCH_TCPCH_SHIFT 0 +#define I40E_GLVEBTC_RPCH_TCPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_RPCH_TCPCH_SHIFT) +#define I40E_GLVEBTC_RPCL(_i, _j) (0x00368000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_RPCL_MAX_INDEX 7 +#define I40E_GLVEBTC_RPCL_TCPCL_SHIFT 0 +#define I40E_GLVEBTC_RPCL_TCPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_RPCL_TCPCL_SHIFT) +#define I40E_GLVEBTC_TBCH(_i, _j) (0x00334004 + ((_i) * 
8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_TBCH_MAX_INDEX 7 +#define I40E_GLVEBTC_TBCH_TCBCH_SHIFT 0 +#define I40E_GLVEBTC_TBCH_TCBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_TBCH_TCBCH_SHIFT) +#define I40E_GLVEBTC_TBCL(_i, _j) (0x00334000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_TBCL_MAX_INDEX 7 +#define I40E_GLVEBTC_TBCL_TCBCL_SHIFT 0 +#define I40E_GLVEBTC_TBCL_TCBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_TBCL_TCBCL_SHIFT) +#define I40E_GLVEBTC_TPCH(_i, _j) (0x00338004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_TPCH_MAX_INDEX 7 +#define I40E_GLVEBTC_TPCH_TCPCH_SHIFT 0 +#define I40E_GLVEBTC_TPCH_TCPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_TPCH_TCPCH_SHIFT) +#define I40E_GLVEBTC_TPCL(_i, _j) (0x00338000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ +#define I40E_GLVEBTC_TPCL_MAX_INDEX 7 +#define I40E_GLVEBTC_TPCL_TCPCL_SHIFT 0 +#define I40E_GLVEBTC_TPCL_TCPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_TPCL_TCPCL_SHIFT) +#define I40E_GLVEBVL_BPCH(_i) (0x00374804 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_BPCH_MAX_INDEX 127 +#define I40E_GLVEBVL_BPCH_VLBPCH_SHIFT 0 +#define I40E_GLVEBVL_BPCH_VLBPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_BPCH_VLBPCH_SHIFT) +#define I40E_GLVEBVL_BPCL(_i) (0x00374800 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_BPCL_MAX_INDEX 127 +#define I40E_GLVEBVL_BPCL_VLBPCL_SHIFT 0 +#define I40E_GLVEBVL_BPCL_VLBPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_BPCL_VLBPCL_SHIFT) +#define I40E_GLVEBVL_GORCH(_i) (0x00360004 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_GORCH_MAX_INDEX 127 +#define I40E_GLVEBVL_GORCH_VLBCH_SHIFT 0 +#define I40E_GLVEBVL_GORCH_VLBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_GORCH_VLBCH_SHIFT) +#define I40E_GLVEBVL_GORCL(_i) (0x00360000 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_GORCL_MAX_INDEX 127 +#define I40E_GLVEBVL_GORCL_VLBCL_SHIFT 0 +#define I40E_GLVEBVL_GORCL_VLBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_GORCL_VLBCL_SHIFT) +#define I40E_GLVEBVL_GOTCH(_i) (0x00330004 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_GOTCH_MAX_INDEX 127 +#define I40E_GLVEBVL_GOTCH_VLBCH_SHIFT 0 +#define I40E_GLVEBVL_GOTCH_VLBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_GOTCH_VLBCH_SHIFT) +#define I40E_GLVEBVL_GOTCL(_i) (0x00330000 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_GOTCL_MAX_INDEX 127 +#define I40E_GLVEBVL_GOTCL_VLBCL_SHIFT 0 +#define I40E_GLVEBVL_GOTCL_VLBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_GOTCL_VLBCL_SHIFT) +#define I40E_GLVEBVL_MPCH(_i) (0x00374404 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_MPCH_MAX_INDEX 127 +#define I40E_GLVEBVL_MPCH_VLMPCH_SHIFT 0 +#define I40E_GLVEBVL_MPCH_VLMPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_MPCH_VLMPCH_SHIFT) +#define I40E_GLVEBVL_MPCL(_i) (0x00374400 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_MPCL_MAX_INDEX 127 +#define I40E_GLVEBVL_MPCL_VLMPCL_SHIFT 0 +#define I40E_GLVEBVL_MPCL_VLMPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_MPCL_VLMPCL_SHIFT) +#define I40E_GLVEBVL_UPCH(_i) (0x00374004 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_UPCH_MAX_INDEX 127 +#define I40E_GLVEBVL_UPCH_VLUPCH_SHIFT 0 +#define I40E_GLVEBVL_UPCH_VLUPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_UPCH_VLUPCH_SHIFT) +#define I40E_GLVEBVL_UPCL(_i) (0x00374000 + ((_i) * 
8)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_GLVEBVL_UPCL_MAX_INDEX 127 +#define I40E_GLVEBVL_UPCL_VLUPCL_SHIFT 0 +#define I40E_GLVEBVL_UPCL_VLUPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_UPCL_VLUPCL_SHIFT) +#define I40E_GL_MTG_FLU_MSK_H 0x00269F4C /* Reset: CORER */ +#define I40E_GL_MTG_FLU_MSK_H_MASK_HIGH_SHIFT 0 +#define I40E_GL_MTG_FLU_MSK_H_MASK_HIGH_MASK I40E_MASK(0xFFFF, I40E_GL_MTG_FLU_MSK_H_MASK_HIGH_SHIFT) +#define I40E_GL_SWR_DEF_ACT(_i) (0x00270200 + ((_i) * 4)) /* _i=0...35 */ /* Reset: CORER */ +#define I40E_GL_SWR_DEF_ACT_MAX_INDEX 35 +#define I40E_GL_SWR_DEF_ACT_DEF_ACTION_SHIFT 0 +#define I40E_GL_SWR_DEF_ACT_DEF_ACTION_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_DEF_ACT_DEF_ACTION_SHIFT) +#define I40E_GL_SWR_DEF_ACT_EN(_i) (0x0026CFB8 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_GL_SWR_DEF_ACT_EN_MAX_INDEX 1 +#define I40E_GL_SWR_DEF_ACT_EN_DEF_ACT_EN_BITMAP_SHIFT 0 +#define I40E_GL_SWR_DEF_ACT_EN_DEF_ACT_EN_BITMAP_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_DEF_ACT_EN_DEF_ACT_EN_BITMAP_SHIFT) +#define I40E_PRTTSYN_ADJ 0x001E4280 /* Reset: GLOBR */ +#define I40E_PRTTSYN_ADJ_TSYNADJ_SHIFT 0 +#define I40E_PRTTSYN_ADJ_TSYNADJ_MASK I40E_MASK(0x7FFFFFFF, I40E_PRTTSYN_ADJ_TSYNADJ_SHIFT) +#define I40E_PRTTSYN_ADJ_SIGN_SHIFT 31 +#define I40E_PRTTSYN_ADJ_SIGN_MASK I40E_MASK(0x1, I40E_PRTTSYN_ADJ_SIGN_SHIFT) +#define I40E_PRTTSYN_AUX_0(_i) (0x001E42A0 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ +#define I40E_PRTTSYN_AUX_0_MAX_INDEX 1 +#define I40E_PRTTSYN_AUX_0_OUT_ENA_SHIFT 0 +#define I40E_PRTTSYN_AUX_0_OUT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_0_OUT_ENA_SHIFT) +#define I40E_PRTTSYN_AUX_0_OUTMOD_SHIFT 1 +#define I40E_PRTTSYN_AUX_0_OUTMOD_MASK I40E_MASK(0x3, I40E_PRTTSYN_AUX_0_OUTMOD_SHIFT) +#define I40E_PRTTSYN_AUX_0_OUTLVL_SHIFT 3 +#define I40E_PRTTSYN_AUX_0_OUTLVL_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_0_OUTLVL_SHIFT) +#define I40E_PRTTSYN_AUX_0_PULSEW_SHIFT 8 +#define I40E_PRTTSYN_AUX_0_PULSEW_MASK I40E_MASK(0xF, I40E_PRTTSYN_AUX_0_PULSEW_SHIFT) +#define I40E_PRTTSYN_AUX_0_EVNTLVL_SHIFT 16 +#define I40E_PRTTSYN_AUX_0_EVNTLVL_MASK I40E_MASK(0x3, I40E_PRTTSYN_AUX_0_EVNTLVL_SHIFT) +#define I40E_PRTTSYN_AUX_1(_i) (0x001E42E0 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ +#define I40E_PRTTSYN_AUX_1_MAX_INDEX 1 +#define I40E_PRTTSYN_AUX_1_INSTNT_SHIFT 0 +#define I40E_PRTTSYN_AUX_1_INSTNT_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_1_INSTNT_SHIFT) +#define I40E_PRTTSYN_AUX_1_SAMPLE_TIME_SHIFT 1 +#define I40E_PRTTSYN_AUX_1_SAMPLE_TIME_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_1_SAMPLE_TIME_SHIFT) +#define I40E_PRTTSYN_CLKO(_i) (0x001E4240 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ +#define I40E_PRTTSYN_CLKO_MAX_INDEX 1 +#define I40E_PRTTSYN_CLKO_TSYNCLKO_SHIFT 0 +#define I40E_PRTTSYN_CLKO_TSYNCLKO_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_CLKO_TSYNCLKO_SHIFT) +#define I40E_PRTTSYN_CTL0 0x001E4200 /* Reset: GLOBR */ +#define I40E_PRTTSYN_CTL0_CLEAR_TSYNTIMER_SHIFT 0 +#define I40E_PRTTSYN_CTL0_CLEAR_TSYNTIMER_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_CLEAR_TSYNTIMER_SHIFT) +#define I40E_PRTTSYN_CTL0_TXTIME_INT_ENA_SHIFT 1 +#define I40E_PRTTSYN_CTL0_TXTIME_INT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_TXTIME_INT_ENA_SHIFT) +#define I40E_PRTTSYN_CTL0_EVENT_INT_ENA_SHIFT 2 +#define I40E_PRTTSYN_CTL0_EVENT_INT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_EVENT_INT_ENA_SHIFT) +#define I40E_PRTTSYN_CTL0_TGT_INT_ENA_SHIFT 3 +#define I40E_PRTTSYN_CTL0_TGT_INT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_TGT_INT_ENA_SHIFT) +#define I40E_PRTTSYN_CTL0_PF_ID_SHIFT 8 +#define 
I40E_PRTTSYN_CTL0_PF_ID_MASK I40E_MASK(0xF, I40E_PRTTSYN_CTL0_PF_ID_SHIFT) +#define I40E_PRTTSYN_CTL0_TSYNACT_SHIFT 12 +#define I40E_PRTTSYN_CTL0_TSYNACT_MASK I40E_MASK(0x3, I40E_PRTTSYN_CTL0_TSYNACT_SHIFT) +#define I40E_PRTTSYN_CTL0_TSYNENA_SHIFT 31 +#define I40E_PRTTSYN_CTL0_TSYNENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_TSYNENA_SHIFT) +#define I40E_PRTTSYN_CTL1 0x00085020 /* Reset: CORER */ +#define I40E_PRTTSYN_CTL1_V1MESSTYPE0_SHIFT 0 +#define I40E_PRTTSYN_CTL1_V1MESSTYPE0_MASK I40E_MASK(0xFF, I40E_PRTTSYN_CTL1_V1MESSTYPE0_SHIFT) +#define I40E_PRTTSYN_CTL1_V1MESSTYPE1_SHIFT 8 +#define I40E_PRTTSYN_CTL1_V1MESSTYPE1_MASK I40E_MASK(0xFF, I40E_PRTTSYN_CTL1_V1MESSTYPE1_SHIFT) +#define I40E_PRTTSYN_CTL1_V2MESSTYPE0_SHIFT 16 +#define I40E_PRTTSYN_CTL1_V2MESSTYPE0_MASK I40E_MASK(0xF, I40E_PRTTSYN_CTL1_V2MESSTYPE0_SHIFT) +#define I40E_PRTTSYN_CTL1_V2MESSTYPE1_SHIFT 20 +#define I40E_PRTTSYN_CTL1_V2MESSTYPE1_MASK I40E_MASK(0xF, I40E_PRTTSYN_CTL1_V2MESSTYPE1_SHIFT) +#define I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT 24 +#define I40E_PRTTSYN_CTL1_TSYNTYPE_MASK I40E_MASK(0x3, I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT) +#define I40E_PRTTSYN_CTL1_UDP_ENA_SHIFT 26 +#define I40E_PRTTSYN_CTL1_UDP_ENA_MASK I40E_MASK(0x3, I40E_PRTTSYN_CTL1_UDP_ENA_SHIFT) +#define I40E_PRTTSYN_CTL1_TSYNENA_SHIFT 31 +#define I40E_PRTTSYN_CTL1_TSYNENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL1_TSYNENA_SHIFT) +#define I40E_PRTTSYN_EVNT_H(_i) (0x001E40C0 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ +#define I40E_PRTTSYN_EVNT_H_MAX_INDEX 1 +#define I40E_PRTTSYN_EVNT_H_TSYNEVNT_H_SHIFT 0 +#define I40E_PRTTSYN_EVNT_H_TSYNEVNT_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_EVNT_H_TSYNEVNT_H_SHIFT) +#define I40E_PRTTSYN_EVNT_L(_i) (0x001E4080 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ +#define I40E_PRTTSYN_EVNT_L_MAX_INDEX 1 +#define I40E_PRTTSYN_EVNT_L_TSYNEVNT_L_SHIFT 0 +#define I40E_PRTTSYN_EVNT_L_TSYNEVNT_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_EVNT_L_TSYNEVNT_L_SHIFT) +#define I40E_PRTTSYN_INC_H 0x001E4060 /* Reset: GLOBR */ +#define I40E_PRTTSYN_INC_H_TSYNINC_H_SHIFT 0 +#define I40E_PRTTSYN_INC_H_TSYNINC_H_MASK I40E_MASK(0x3F, I40E_PRTTSYN_INC_H_TSYNINC_H_SHIFT) +#define I40E_PRTTSYN_INC_L 0x001E4040 /* Reset: GLOBR */ +#define I40E_PRTTSYN_INC_L_TSYNINC_L_SHIFT 0 +#define I40E_PRTTSYN_INC_L_TSYNINC_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_INC_L_TSYNINC_L_SHIFT) +#define I40E_PRTTSYN_RXTIME_H(_i) (0x00085040 + ((_i) * 32)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_PRTTSYN_RXTIME_H_MAX_INDEX 3 +#define I40E_PRTTSYN_RXTIME_H_RXTIEM_H_SHIFT 0 +#define I40E_PRTTSYN_RXTIME_H_RXTIEM_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_RXTIME_H_RXTIEM_H_SHIFT) +#define I40E_PRTTSYN_RXTIME_L(_i) (0x000850C0 + ((_i) * 32)) /* _i=0...3 */ /* Reset: CORER */ +#define I40E_PRTTSYN_RXTIME_L_MAX_INDEX 3 +#define I40E_PRTTSYN_RXTIME_L_RXTIEM_L_SHIFT 0 +#define I40E_PRTTSYN_RXTIME_L_RXTIEM_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_RXTIME_L_RXTIEM_L_SHIFT) +#define I40E_PRTTSYN_STAT_0 0x001E4220 /* Reset: GLOBR */ +#define I40E_PRTTSYN_STAT_0_EVENT0_SHIFT 0 +#define I40E_PRTTSYN_STAT_0_EVENT0_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_EVENT0_SHIFT) +#define I40E_PRTTSYN_STAT_0_EVENT1_SHIFT 1 +#define I40E_PRTTSYN_STAT_0_EVENT1_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_EVENT1_SHIFT) +#define I40E_PRTTSYN_STAT_0_TGT0_SHIFT 2 +#define I40E_PRTTSYN_STAT_0_TGT0_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_TGT0_SHIFT) +#define I40E_PRTTSYN_STAT_0_TGT1_SHIFT 3 +#define I40E_PRTTSYN_STAT_0_TGT1_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_TGT1_SHIFT) +#define 
I40E_PRTTSYN_STAT_0_TXTIME_SHIFT 4 +#define I40E_PRTTSYN_STAT_0_TXTIME_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_TXTIME_SHIFT) +#define I40E_PRTTSYN_STAT_1 0x00085140 /* Reset: CORER */ +#define I40E_PRTTSYN_STAT_1_RXT0_SHIFT 0 +#define I40E_PRTTSYN_STAT_1_RXT0_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT0_SHIFT) +#define I40E_PRTTSYN_STAT_1_RXT1_SHIFT 1 +#define I40E_PRTTSYN_STAT_1_RXT1_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT1_SHIFT) +#define I40E_PRTTSYN_STAT_1_RXT2_SHIFT 2 +#define I40E_PRTTSYN_STAT_1_RXT2_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT2_SHIFT) +#define I40E_PRTTSYN_STAT_1_RXT3_SHIFT 3 +#define I40E_PRTTSYN_STAT_1_RXT3_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT3_SHIFT) +#define I40E_PRTTSYN_TGT_H(_i) (0x001E4180 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ +#define I40E_PRTTSYN_TGT_H_MAX_INDEX 1 +#define I40E_PRTTSYN_TGT_H_TSYNTGTT_H_SHIFT 0 +#define I40E_PRTTSYN_TGT_H_TSYNTGTT_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TGT_H_TSYNTGTT_H_SHIFT) +#define I40E_PRTTSYN_TGT_L(_i) (0x001E4140 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ +#define I40E_PRTTSYN_TGT_L_MAX_INDEX 1 +#define I40E_PRTTSYN_TGT_L_TSYNTGTT_L_SHIFT 0 +#define I40E_PRTTSYN_TGT_L_TSYNTGTT_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TGT_L_TSYNTGTT_L_SHIFT) +#define I40E_PRTTSYN_TIME_H 0x001E4120 /* Reset: GLOBR */ +#define I40E_PRTTSYN_TIME_H_TSYNTIME_H_SHIFT 0 +#define I40E_PRTTSYN_TIME_H_TSYNTIME_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TIME_H_TSYNTIME_H_SHIFT) +#define I40E_PRTTSYN_TIME_L 0x001E4100 /* Reset: GLOBR */ +#define I40E_PRTTSYN_TIME_L_TSYNTIME_L_SHIFT 0 +#define I40E_PRTTSYN_TIME_L_TSYNTIME_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TIME_L_TSYNTIME_L_SHIFT) +#define I40E_PRTTSYN_TXTIME_H 0x001E41E0 /* Reset: GLOBR */ +#define I40E_PRTTSYN_TXTIME_H_TXTIEM_H_SHIFT 0 +#define I40E_PRTTSYN_TXTIME_H_TXTIEM_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TXTIME_H_TXTIEM_H_SHIFT) +#define I40E_PRTTSYN_TXTIME_L 0x001E41C0 /* Reset: GLOBR */ +#define I40E_PRTTSYN_TXTIME_L_TXTIEM_L_SHIFT 0 +#define I40E_PRTTSYN_TXTIME_L_TXTIEM_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TXTIME_L_TXTIEM_L_SHIFT) +#define I40E_GLSCD_QUANTA 0x000B2080 /* Reset: CORER */ +#define I40E_GLSCD_QUANTA_TSCDQUANTA_SHIFT 0 +#define I40E_GLSCD_QUANTA_TSCDQUANTA_MASK I40E_MASK(0x7, I40E_GLSCD_QUANTA_TSCDQUANTA_SHIFT) +#define I40E_GL_MDET_RX 0x0012A510 /* Reset: CORER */ +#define I40E_GL_MDET_RX_FUNCTION_SHIFT 0 +#define I40E_GL_MDET_RX_FUNCTION_MASK I40E_MASK(0xFF, I40E_GL_MDET_RX_FUNCTION_SHIFT) +#define I40E_GL_MDET_RX_EVENT_SHIFT 8 +#define I40E_GL_MDET_RX_EVENT_MASK I40E_MASK(0x1FF, I40E_GL_MDET_RX_EVENT_SHIFT) +#define I40E_GL_MDET_RX_QUEUE_SHIFT 17 +#define I40E_GL_MDET_RX_QUEUE_MASK I40E_MASK(0x3FFF, I40E_GL_MDET_RX_QUEUE_SHIFT) +#define I40E_GL_MDET_RX_VALID_SHIFT 31 +#define I40E_GL_MDET_RX_VALID_MASK I40E_MASK(0x1, I40E_GL_MDET_RX_VALID_SHIFT) +#define I40E_GL_MDET_TX 0x000E6480 /* Reset: CORER */ +#define I40E_GL_MDET_TX_QUEUE_SHIFT 0 +#define I40E_GL_MDET_TX_QUEUE_MASK I40E_MASK(0xFFF, I40E_GL_MDET_TX_QUEUE_SHIFT) +#define I40E_GL_MDET_TX_VF_NUM_SHIFT 12 +#define I40E_GL_MDET_TX_VF_NUM_MASK I40E_MASK(0x1FF, I40E_GL_MDET_TX_VF_NUM_SHIFT) +#define I40E_GL_MDET_TX_PF_NUM_SHIFT 21 +#define I40E_GL_MDET_TX_PF_NUM_MASK I40E_MASK(0xF, I40E_GL_MDET_TX_PF_NUM_SHIFT) +#define I40E_GL_MDET_TX_EVENT_SHIFT 25 +#define I40E_GL_MDET_TX_EVENT_MASK I40E_MASK(0x1F, I40E_GL_MDET_TX_EVENT_SHIFT) +#define I40E_GL_MDET_TX_VALID_SHIFT 31 +#define I40E_GL_MDET_TX_VALID_MASK I40E_MASK(0x1, I40E_GL_MDET_TX_VALID_SHIFT) +#define I40E_PF_MDET_RX 
0x0012A400 /* Reset: CORER */ +#define I40E_PF_MDET_RX_VALID_SHIFT 0 +#define I40E_PF_MDET_RX_VALID_MASK I40E_MASK(0x1, I40E_PF_MDET_RX_VALID_SHIFT) +#define I40E_PF_MDET_TX 0x000E6400 /* Reset: CORER */ +#define I40E_PF_MDET_TX_VALID_SHIFT 0 +#define I40E_PF_MDET_TX_VALID_MASK I40E_MASK(0x1, I40E_PF_MDET_TX_VALID_SHIFT) +#define I40E_PF_VT_PFALLOC 0x001C0500 /* Reset: CORER */ +#define I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT 0 +#define I40E_PF_VT_PFALLOC_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_LASTVF_SHIFT 8 +#define I40E_PF_VT_PFALLOC_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_LASTVF_SHIFT) +#define I40E_PF_VT_PFALLOC_VALID_SHIFT 31 +#define I40E_PF_VT_PFALLOC_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_VALID_SHIFT) +#define I40E_VP_MDET_RX(_VF) (0x0012A000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_VP_MDET_RX_MAX_INDEX 127 +#define I40E_VP_MDET_RX_VALID_SHIFT 0 +#define I40E_VP_MDET_RX_VALID_MASK I40E_MASK(0x1, I40E_VP_MDET_RX_VALID_SHIFT) +#define I40E_VP_MDET_TX(_VF) (0x000E6000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ +#define I40E_VP_MDET_TX_MAX_INDEX 127 +#define I40E_VP_MDET_TX_VALID_SHIFT 0 +#define I40E_VP_MDET_TX_VALID_MASK I40E_MASK(0x1, I40E_VP_MDET_TX_VALID_SHIFT) +#define I40E_GLPM_WUMC 0x0006C800 /* Reset: POR */ +#define I40E_GLPM_WUMC_NOTCO_SHIFT 0 +#define I40E_GLPM_WUMC_NOTCO_MASK I40E_MASK(0x1, I40E_GLPM_WUMC_NOTCO_SHIFT) +#define I40E_GLPM_WUMC_SRST_PIN_VAL_SHIFT 1 +#define I40E_GLPM_WUMC_SRST_PIN_VAL_MASK I40E_MASK(0x1, I40E_GLPM_WUMC_SRST_PIN_VAL_SHIFT) +#define I40E_GLPM_WUMC_ROL_MODE_SHIFT 2 +#define I40E_GLPM_WUMC_ROL_MODE_MASK I40E_MASK(0x1, I40E_GLPM_WUMC_ROL_MODE_SHIFT) +#define I40E_GLPM_WUMC_RESERVED_4_SHIFT 3 +#define I40E_GLPM_WUMC_RESERVED_4_MASK I40E_MASK(0x1FFF, I40E_GLPM_WUMC_RESERVED_4_SHIFT) +#define I40E_GLPM_WUMC_MNG_WU_PF_SHIFT 16 +#define I40E_GLPM_WUMC_MNG_WU_PF_MASK I40E_MASK(0xFFFF, I40E_GLPM_WUMC_MNG_WU_PF_SHIFT) +#define I40E_PFPM_APM 0x000B8080 /* Reset: POR */ +#define I40E_PFPM_APM_APME_SHIFT 0 +#define I40E_PFPM_APM_APME_MASK I40E_MASK(0x1, I40E_PFPM_APM_APME_SHIFT) +#define I40E_PFPM_FHFT_LENGTH(_i) (0x0006A000 + ((_i) * 128)) /* _i=0...7 */ /* Reset: POR */ +#define I40E_PFPM_FHFT_LENGTH_MAX_INDEX 7 +#define I40E_PFPM_FHFT_LENGTH_LENGTH_SHIFT 0 +#define I40E_PFPM_FHFT_LENGTH_LENGTH_MASK I40E_MASK(0xFF, I40E_PFPM_FHFT_LENGTH_LENGTH_SHIFT) +#define I40E_PFPM_WUC 0x0006B200 /* Reset: POR */ +#define I40E_PFPM_WUC_EN_APM_D0_SHIFT 5 +#define I40E_PFPM_WUC_EN_APM_D0_MASK I40E_MASK(0x1, I40E_PFPM_WUC_EN_APM_D0_SHIFT) +#define I40E_PFPM_WUFC 0x0006B400 /* Reset: POR */ +#define I40E_PFPM_WUFC_LNKC_SHIFT 0 +#define I40E_PFPM_WUFC_LNKC_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_LNKC_SHIFT) +#define I40E_PFPM_WUFC_MAG_SHIFT 1 +#define I40E_PFPM_WUFC_MAG_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_MAG_SHIFT) +#define I40E_PFPM_WUFC_MNG_SHIFT 3 +#define I40E_PFPM_WUFC_MNG_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_MNG_SHIFT) +#define I40E_PFPM_WUFC_FLX0_ACT_SHIFT 4 +#define I40E_PFPM_WUFC_FLX0_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX0_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX1_ACT_SHIFT 5 +#define I40E_PFPM_WUFC_FLX1_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX1_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX2_ACT_SHIFT 6 +#define I40E_PFPM_WUFC_FLX2_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX2_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX3_ACT_SHIFT 7 +#define I40E_PFPM_WUFC_FLX3_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX3_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX4_ACT_SHIFT 8 +#define I40E_PFPM_WUFC_FLX4_ACT_MASK 
I40E_MASK(0x1, I40E_PFPM_WUFC_FLX4_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX5_ACT_SHIFT 9 +#define I40E_PFPM_WUFC_FLX5_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX5_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX6_ACT_SHIFT 10 +#define I40E_PFPM_WUFC_FLX6_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX6_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX7_ACT_SHIFT 11 +#define I40E_PFPM_WUFC_FLX7_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX7_ACT_SHIFT) +#define I40E_PFPM_WUFC_FLX0_SHIFT 16 +#define I40E_PFPM_WUFC_FLX0_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX0_SHIFT) +#define I40E_PFPM_WUFC_FLX1_SHIFT 17 +#define I40E_PFPM_WUFC_FLX1_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX1_SHIFT) +#define I40E_PFPM_WUFC_FLX2_SHIFT 18 +#define I40E_PFPM_WUFC_FLX2_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX2_SHIFT) +#define I40E_PFPM_WUFC_FLX3_SHIFT 19 +#define I40E_PFPM_WUFC_FLX3_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX3_SHIFT) +#define I40E_PFPM_WUFC_FLX4_SHIFT 20 +#define I40E_PFPM_WUFC_FLX4_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX4_SHIFT) +#define I40E_PFPM_WUFC_FLX5_SHIFT 21 +#define I40E_PFPM_WUFC_FLX5_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX5_SHIFT) +#define I40E_PFPM_WUFC_FLX6_SHIFT 22 +#define I40E_PFPM_WUFC_FLX6_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX6_SHIFT) +#define I40E_PFPM_WUFC_FLX7_SHIFT 23 +#define I40E_PFPM_WUFC_FLX7_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX7_SHIFT) +#define I40E_PFPM_WUFC_FW_RST_WK_SHIFT 31 +#define I40E_PFPM_WUFC_FW_RST_WK_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FW_RST_WK_SHIFT) +#define I40E_PFPM_WUS 0x0006B600 /* Reset: POR */ +#define I40E_PFPM_WUS_LNKC_SHIFT 0 +#define I40E_PFPM_WUS_LNKC_MASK I40E_MASK(0x1, I40E_PFPM_WUS_LNKC_SHIFT) +#define I40E_PFPM_WUS_MAG_SHIFT 1 +#define I40E_PFPM_WUS_MAG_MASK I40E_MASK(0x1, I40E_PFPM_WUS_MAG_SHIFT) +#define I40E_PFPM_WUS_PME_STATUS_SHIFT 2 +#define I40E_PFPM_WUS_PME_STATUS_MASK I40E_MASK(0x1, I40E_PFPM_WUS_PME_STATUS_SHIFT) +#define I40E_PFPM_WUS_MNG_SHIFT 3 +#define I40E_PFPM_WUS_MNG_MASK I40E_MASK(0x1, I40E_PFPM_WUS_MNG_SHIFT) +#define I40E_PFPM_WUS_FLX0_SHIFT 16 +#define I40E_PFPM_WUS_FLX0_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX0_SHIFT) +#define I40E_PFPM_WUS_FLX1_SHIFT 17 +#define I40E_PFPM_WUS_FLX1_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX1_SHIFT) +#define I40E_PFPM_WUS_FLX2_SHIFT 18 +#define I40E_PFPM_WUS_FLX2_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX2_SHIFT) +#define I40E_PFPM_WUS_FLX3_SHIFT 19 +#define I40E_PFPM_WUS_FLX3_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX3_SHIFT) +#define I40E_PFPM_WUS_FLX4_SHIFT 20 +#define I40E_PFPM_WUS_FLX4_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX4_SHIFT) +#define I40E_PFPM_WUS_FLX5_SHIFT 21 +#define I40E_PFPM_WUS_FLX5_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX5_SHIFT) +#define I40E_PFPM_WUS_FLX6_SHIFT 22 +#define I40E_PFPM_WUS_FLX6_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX6_SHIFT) +#define I40E_PFPM_WUS_FLX7_SHIFT 23 +#define I40E_PFPM_WUS_FLX7_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX7_SHIFT) +#define I40E_PFPM_WUS_FW_RST_WK_SHIFT 31 +#define I40E_PFPM_WUS_FW_RST_WK_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FW_RST_WK_SHIFT) +#define I40E_PRTPM_FHFHR 0x0006C000 /* Reset: POR */ +#define I40E_PRTPM_FHFHR_UNICAST_SHIFT 0 +#define I40E_PRTPM_FHFHR_UNICAST_MASK I40E_MASK(0x1, I40E_PRTPM_FHFHR_UNICAST_SHIFT) +#define I40E_PRTPM_FHFHR_MULTICAST_SHIFT 1 +#define I40E_PRTPM_FHFHR_MULTICAST_MASK I40E_MASK(0x1, I40E_PRTPM_FHFHR_MULTICAST_SHIFT) +#define I40E_PRTPM_SAH(_i) (0x001E44C0 + ((_i) * 32)) /* _i=0...3 */ /* Reset: PFR */ +#define I40E_PRTPM_SAH_MAX_INDEX 3 +#define I40E_PRTPM_SAH_PFPM_SAH_SHIFT 0 +#define I40E_PRTPM_SAH_PFPM_SAH_MASK I40E_MASK(0xFFFF, I40E_PRTPM_SAH_PFPM_SAH_SHIFT) 
+#define I40E_PRTPM_SAH_PF_NUM_SHIFT 26 +#define I40E_PRTPM_SAH_PF_NUM_MASK I40E_MASK(0xF, I40E_PRTPM_SAH_PF_NUM_SHIFT) +#define I40E_PRTPM_SAH_MC_MAG_EN_SHIFT 30 +#define I40E_PRTPM_SAH_MC_MAG_EN_MASK I40E_MASK(0x1, I40E_PRTPM_SAH_MC_MAG_EN_SHIFT) +#define I40E_PRTPM_SAH_AV_SHIFT 31 +#define I40E_PRTPM_SAH_AV_MASK I40E_MASK(0x1, I40E_PRTPM_SAH_AV_SHIFT) +#define I40E_PRTPM_SAL(_i) (0x001E4440 + ((_i) * 32)) /* _i=0...3 */ /* Reset: PFR */ +#define I40E_PRTPM_SAL_MAX_INDEX 3 +#define I40E_PRTPM_SAL_PFPM_SAL_SHIFT 0 +#define I40E_PRTPM_SAL_PFPM_SAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTPM_SAL_PFPM_SAL_SHIFT) +#define I40E_VF_ARQBAH1 0x00006000 /* Reset: EMPR */ +#define I40E_VF_ARQBAH1_ARQBAH_SHIFT 0 +#define I40E_VF_ARQBAH1_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAH1_ARQBAH_SHIFT) +#define I40E_VF_ARQBAL1 0x00006C00 /* Reset: EMPR */ +#define I40E_VF_ARQBAL1_ARQBAL_SHIFT 0 +#define I40E_VF_ARQBAL1_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAL1_ARQBAL_SHIFT) +#define I40E_VF_ARQH1 0x00007400 /* Reset: EMPR */ +#define I40E_VF_ARQH1_ARQH_SHIFT 0 +#define I40E_VF_ARQH1_ARQH_MASK I40E_MASK(0x3FF, I40E_VF_ARQH1_ARQH_SHIFT) +#define I40E_VF_ARQLEN1 0x00008000 /* Reset: EMPR */ +#define I40E_VF_ARQLEN1_ARQLEN_SHIFT 0 +#define I40E_VF_ARQLEN1_ARQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ARQLEN1_ARQLEN_SHIFT) +#define I40E_VF_ARQLEN1_ARQVFE_SHIFT 28 +#define I40E_VF_ARQLEN1_ARQVFE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQVFE_SHIFT) +#define I40E_VF_ARQLEN1_ARQOVFL_SHIFT 29 +#define I40E_VF_ARQLEN1_ARQOVFL_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQOVFL_SHIFT) +#define I40E_VF_ARQLEN1_ARQCRIT_SHIFT 30 +#define I40E_VF_ARQLEN1_ARQCRIT_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQCRIT_SHIFT) +#define I40E_VF_ARQLEN1_ARQENABLE_SHIFT 31 +#define I40E_VF_ARQLEN1_ARQENABLE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQENABLE_SHIFT) +#define I40E_VF_ARQT1 0x00007000 /* Reset: EMPR */ +#define I40E_VF_ARQT1_ARQT_SHIFT 0 +#define I40E_VF_ARQT1_ARQT_MASK I40E_MASK(0x3FF, I40E_VF_ARQT1_ARQT_SHIFT) +#define I40E_VF_ATQBAH1 0x00007800 /* Reset: EMPR */ +#define I40E_VF_ATQBAH1_ATQBAH_SHIFT 0 +#define I40E_VF_ATQBAH1_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAH1_ATQBAH_SHIFT) +#define I40E_VF_ATQBAL1 0x00007C00 /* Reset: EMPR */ +#define I40E_VF_ATQBAL1_ATQBAL_SHIFT 0 +#define I40E_VF_ATQBAL1_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAL1_ATQBAL_SHIFT) +#define I40E_VF_ATQH1 0x00006400 /* Reset: EMPR */ +#define I40E_VF_ATQH1_ATQH_SHIFT 0 +#define I40E_VF_ATQH1_ATQH_MASK I40E_MASK(0x3FF, I40E_VF_ATQH1_ATQH_SHIFT) +#define I40E_VF_ATQLEN1 0x00006800 /* Reset: EMPR */ +#define I40E_VF_ATQLEN1_ATQLEN_SHIFT 0 +#define I40E_VF_ATQLEN1_ATQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ATQLEN1_ATQLEN_SHIFT) +#define I40E_VF_ATQLEN1_ATQVFE_SHIFT 28 +#define I40E_VF_ATQLEN1_ATQVFE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQVFE_SHIFT) +#define I40E_VF_ATQLEN1_ATQOVFL_SHIFT 29 +#define I40E_VF_ATQLEN1_ATQOVFL_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQOVFL_SHIFT) +#define I40E_VF_ATQLEN1_ATQCRIT_SHIFT 30 +#define I40E_VF_ATQLEN1_ATQCRIT_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQCRIT_SHIFT) +#define I40E_VF_ATQLEN1_ATQENABLE_SHIFT 31 +#define I40E_VF_ATQLEN1_ATQENABLE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQENABLE_SHIFT) +#define I40E_VF_ATQT1 0x00008400 /* Reset: EMPR */ +#define I40E_VF_ATQT1_ATQT_SHIFT 0 +#define I40E_VF_ATQT1_ATQT_MASK I40E_MASK(0x3FF, I40E_VF_ATQT1_ATQT_SHIFT) +#define I40E_VFGEN_RSTAT 0x00008800 /* Reset: VFR */ +#define I40E_VFGEN_RSTAT_VFR_STATE_SHIFT 0 +#define I40E_VFGEN_RSTAT_VFR_STATE_MASK I40E_MASK(0x3, 
I40E_VFGEN_RSTAT_VFR_STATE_SHIFT) +#define I40E_VFINT_DYN_CTL01 0x00005C00 /* Reset: VFR */ +#define I40E_VFINT_DYN_CTL01_INTENA_SHIFT 0 +#define I40E_VFINT_DYN_CTL01_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_INTENA_SHIFT) +#define I40E_VFINT_DYN_CTL01_CLEARPBA_SHIFT 1 +#define I40E_VFINT_DYN_CTL01_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_CLEARPBA_SHIFT) +#define I40E_VFINT_DYN_CTL01_SWINT_TRIG_SHIFT 2 +#define I40E_VFINT_DYN_CTL01_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_SWINT_TRIG_SHIFT) +#define I40E_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3 +#define I40E_VFINT_DYN_CTL01_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL01_ITR_INDX_SHIFT) +#define I40E_VFINT_DYN_CTL01_INTERVAL_SHIFT 5 +#define I40E_VFINT_DYN_CTL01_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTL01_INTERVAL_SHIFT) +#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24 +#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT) +#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_SHIFT 25 +#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL01_SW_ITR_INDX_SHIFT) +#define I40E_VFINT_DYN_CTL01_INTENA_MSK_SHIFT 31 +#define I40E_VFINT_DYN_CTL01_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_INTENA_MSK_SHIFT) +#define I40E_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */ +#define I40E_VFINT_DYN_CTLN1_MAX_INDEX 15 +#define I40E_VFINT_DYN_CTLN1_INTENA_SHIFT 0 +#define I40E_VFINT_DYN_CTLN1_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_INTENA_SHIFT) +#define I40E_VFINT_DYN_CTLN1_CLEARPBA_SHIFT 1 +#define I40E_VFINT_DYN_CTLN1_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_CLEARPBA_SHIFT) +#define I40E_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2 +#define I40E_VFINT_DYN_CTLN1_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT) +#define I40E_VFINT_DYN_CTLN1_ITR_INDX_SHIFT 3 +#define I40E_VFINT_DYN_CTLN1_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN1_ITR_INDX_SHIFT) +#define I40E_VFINT_DYN_CTLN1_INTERVAL_SHIFT 5 +#define I40E_VFINT_DYN_CTLN1_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTLN1_INTERVAL_SHIFT) +#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24 +#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT) +#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT 25 +#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT) +#define I40E_VFINT_DYN_CTLN1_INTENA_MSK_SHIFT 31 +#define I40E_VFINT_DYN_CTLN1_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_INTENA_MSK_SHIFT) +#define I40E_VFINT_ICR0_ENA1 0x00005000 /* Reset: CORER */ +#define I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25 +#define I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT) +#define I40E_VFINT_ICR0_ENA1_ADMINQ_SHIFT 30 +#define I40E_VFINT_ICR0_ENA1_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_ADMINQ_SHIFT) +#define I40E_VFINT_ICR0_ENA1_RSVD_SHIFT 31 +#define I40E_VFINT_ICR0_ENA1_RSVD_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_RSVD_SHIFT) +#define I40E_VFINT_ICR01 0x00004800 /* Reset: CORER */ +#define I40E_VFINT_ICR01_INTEVENT_SHIFT 0 +#define I40E_VFINT_ICR01_INTEVENT_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_INTEVENT_SHIFT) +#define I40E_VFINT_ICR01_QUEUE_0_SHIFT 1 +#define I40E_VFINT_ICR01_QUEUE_0_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_0_SHIFT) +#define I40E_VFINT_ICR01_QUEUE_1_SHIFT 2 +#define I40E_VFINT_ICR01_QUEUE_1_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_1_SHIFT) 
+#define I40E_VFINT_ICR01_QUEUE_2_SHIFT 3 +#define I40E_VFINT_ICR01_QUEUE_2_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_2_SHIFT) +#define I40E_VFINT_ICR01_QUEUE_3_SHIFT 4 +#define I40E_VFINT_ICR01_QUEUE_3_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_3_SHIFT) +#define I40E_VFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25 +#define I40E_VFINT_ICR01_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_LINK_STAT_CHANGE_SHIFT) +#define I40E_VFINT_ICR01_ADMINQ_SHIFT 30 +#define I40E_VFINT_ICR01_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_ADMINQ_SHIFT) +#define I40E_VFINT_ICR01_SWINT_SHIFT 31 +#define I40E_VFINT_ICR01_SWINT_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_SWINT_SHIFT) +#define I40E_VFINT_ITR01(_i) (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */ +#define I40E_VFINT_ITR01_MAX_INDEX 2 +#define I40E_VFINT_ITR01_INTERVAL_SHIFT 0 +#define I40E_VFINT_ITR01_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITR01_INTERVAL_SHIFT) +#define I40E_VFINT_ITRN1(_i, _INTVF) (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */ +#define I40E_VFINT_ITRN1_MAX_INDEX 2 +#define I40E_VFINT_ITRN1_INTERVAL_SHIFT 0 +#define I40E_VFINT_ITRN1_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITRN1_INTERVAL_SHIFT) +#define I40E_VFINT_STAT_CTL01 0x00005400 /* Reset: VFR */ +#define I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2 +#define I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT) +#define I40E_QRX_TAIL1(_Q) (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_QRX_TAIL1_MAX_INDEX 15 +#define I40E_QRX_TAIL1_TAIL_SHIFT 0 +#define I40E_QRX_TAIL1_TAIL_MASK I40E_MASK(0x1FFF, I40E_QRX_TAIL1_TAIL_SHIFT) +#define I40E_QTX_TAIL1(_Q) (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */ +#define I40E_QTX_TAIL1_MAX_INDEX 15 +#define I40E_QTX_TAIL1_TAIL_SHIFT 0 +#define I40E_QTX_TAIL1_TAIL_MASK I40E_MASK(0x1FFF, I40E_QTX_TAIL1_TAIL_SHIFT) +#define I40E_VFMSIX_PBA 0x00002000 /* Reset: VFLR */ +#define I40E_VFMSIX_PBA_PENBIT_SHIFT 0 +#define I40E_VFMSIX_PBA_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_PBA_PENBIT_SHIFT) +#define I40E_VFMSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TADD_MAX_INDEX 16 +#define I40E_VFMSIX_TADD_MSIXTADD10_SHIFT 0 +#define I40E_VFMSIX_TADD_MSIXTADD10_MASK I40E_MASK(0x3, I40E_VFMSIX_TADD_MSIXTADD10_SHIFT) +#define I40E_VFMSIX_TADD_MSIXTADD_SHIFT 2 +#define I40E_VFMSIX_TADD_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_VFMSIX_TADD_MSIXTADD_SHIFT) +#define I40E_VFMSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TMSG_MAX_INDEX 16 +#define I40E_VFMSIX_TMSG_MSIXTMSG_SHIFT 0 +#define I40E_VFMSIX_TMSG_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TMSG_MSIXTMSG_SHIFT) +#define I40E_VFMSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TUADD_MAX_INDEX 16 +#define I40E_VFMSIX_TUADD_MSIXTUADD_SHIFT 0 +#define I40E_VFMSIX_TUADD_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TUADD_MSIXTUADD_SHIFT) +#define I40E_VFMSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ +#define I40E_VFMSIX_TVCTRL_MAX_INDEX 16 +#define I40E_VFMSIX_TVCTRL_MASK_SHIFT 0 +#define I40E_VFMSIX_TVCTRL_MASK_MASK I40E_MASK(0x1, I40E_VFMSIX_TVCTRL_MASK_SHIFT) +#define I40E_VFCM_PE_ERRDATA 0x0000DC00 /* Reset: VFR */ +#define I40E_VFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0 +#define I40E_VFCM_PE_ERRDATA_ERROR_CODE_MASK I40E_MASK(0xF, I40E_VFCM_PE_ERRDATA_ERROR_CODE_SHIFT) +#define 
I40E_VFCM_PE_ERRDATA_Q_TYPE_SHIFT 4 +#define I40E_VFCM_PE_ERRDATA_Q_TYPE_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRDATA_Q_TYPE_SHIFT) +#define I40E_VFCM_PE_ERRDATA_Q_NUM_SHIFT 8 +#define I40E_VFCM_PE_ERRDATA_Q_NUM_MASK I40E_MASK(0x3FFFF, I40E_VFCM_PE_ERRDATA_Q_NUM_SHIFT) +#define I40E_VFCM_PE_ERRINFO 0x0000D800 /* Reset: VFR */ +#define I40E_VFCM_PE_ERRINFO_ERROR_VALID_SHIFT 0 +#define I40E_VFCM_PE_ERRINFO_ERROR_VALID_MASK I40E_MASK(0x1, I40E_VFCM_PE_ERRINFO_ERROR_VALID_SHIFT) +#define I40E_VFCM_PE_ERRINFO_ERROR_INST_SHIFT 4 +#define I40E_VFCM_PE_ERRINFO_ERROR_INST_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRINFO_ERROR_INST_SHIFT) +#define I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8 +#define I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT) +#define I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16 +#define I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT) +#define I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24 +#define I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT) +#define I40E_VFQF_HENA(_i) (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ +#define I40E_VFQF_HENA_MAX_INDEX 1 +#define I40E_VFQF_HENA_PTYPE_ENA_SHIFT 0 +#define I40E_VFQF_HENA_PTYPE_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_VFQF_HENA_PTYPE_ENA_SHIFT) +#define I40E_VFQF_HKEY(_i) (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */ +#define I40E_VFQF_HKEY_MAX_INDEX 12 +#define I40E_VFQF_HKEY_KEY_0_SHIFT 0 +#define I40E_VFQF_HKEY_KEY_0_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_0_SHIFT) +#define I40E_VFQF_HKEY_KEY_1_SHIFT 8 +#define I40E_VFQF_HKEY_KEY_1_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_1_SHIFT) +#define I40E_VFQF_HKEY_KEY_2_SHIFT 16 +#define I40E_VFQF_HKEY_KEY_2_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_2_SHIFT) +#define I40E_VFQF_HKEY_KEY_3_SHIFT 24 +#define I40E_VFQF_HKEY_KEY_3_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_3_SHIFT) +#define I40E_VFQF_HLUT(_i) (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ +#define I40E_VFQF_HLUT_MAX_INDEX 15 +#define I40E_VFQF_HLUT_LUT0_SHIFT 0 +#define I40E_VFQF_HLUT_LUT0_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT0_SHIFT) +#define I40E_VFQF_HLUT_LUT1_SHIFT 8 +#define I40E_VFQF_HLUT_LUT1_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT1_SHIFT) +#define I40E_VFQF_HLUT_LUT2_SHIFT 16 +#define I40E_VFQF_HLUT_LUT2_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT2_SHIFT) +#define I40E_VFQF_HLUT_LUT3_SHIFT 24 +#define I40E_VFQF_HLUT_LUT3_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT3_SHIFT) +#define I40E_VFQF_HREGION(_i) (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ +#define I40E_VFQF_HREGION_MAX_INDEX 7 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_0_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_0_SHIFT) +#define I40E_VFQF_HREGION_REGION_0_SHIFT 1 +#define I40E_VFQF_HREGION_REGION_0_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_0_SHIFT) +#define I40E_VFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_1_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_1_SHIFT) +#define I40E_VFQF_HREGION_REGION_1_SHIFT 5 +#define I40E_VFQF_HREGION_REGION_1_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_1_SHIFT) +#define I40E_VFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_2_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_2_SHIFT) +#define I40E_VFQF_HREGION_REGION_2_SHIFT 9 +#define I40E_VFQF_HREGION_REGION_2_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_2_SHIFT) +#define 
I40E_VFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_3_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_3_SHIFT) +#define I40E_VFQF_HREGION_REGION_3_SHIFT 13 +#define I40E_VFQF_HREGION_REGION_3_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_3_SHIFT) +#define I40E_VFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_4_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_4_SHIFT) +#define I40E_VFQF_HREGION_REGION_4_SHIFT 17 +#define I40E_VFQF_HREGION_REGION_4_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_4_SHIFT) +#define I40E_VFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_5_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_5_SHIFT) +#define I40E_VFQF_HREGION_REGION_5_SHIFT 21 +#define I40E_VFQF_HREGION_REGION_5_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_5_SHIFT) +#define I40E_VFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_6_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_6_SHIFT) +#define I40E_VFQF_HREGION_REGION_6_SHIFT 25 +#define I40E_VFQF_HREGION_REGION_6_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_6_SHIFT) +#define I40E_VFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28 +#define I40E_VFQF_HREGION_OVERRIDE_ENA_7_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_7_SHIFT) +#define I40E_VFQF_HREGION_REGION_7_SHIFT 29 +#define I40E_VFQF_HREGION_REGION_7_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_7_SHIFT) +#endif diff --git a/drivers/net/i40e/base/i40e_status.h b/drivers/net/i40e/base/i40e_status.h new file mode 100644 index 0000000..2e693a3 --- /dev/null +++ b/drivers/net/i40e/base/i40e_status.h @@ -0,0 +1,107 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. 
+ +***************************************************************************/ + +#ifndef _I40E_STATUS_H_ +#define _I40E_STATUS_H_ + +/* Error Codes */ +enum i40e_status_code { + I40E_SUCCESS = 0, + I40E_ERR_NVM = -1, + I40E_ERR_NVM_CHECKSUM = -2, + I40E_ERR_PHY = -3, + I40E_ERR_CONFIG = -4, + I40E_ERR_PARAM = -5, + I40E_ERR_MAC_TYPE = -6, + I40E_ERR_UNKNOWN_PHY = -7, + I40E_ERR_LINK_SETUP = -8, + I40E_ERR_ADAPTER_STOPPED = -9, + I40E_ERR_INVALID_MAC_ADDR = -10, + I40E_ERR_DEVICE_NOT_SUPPORTED = -11, + I40E_ERR_MASTER_REQUESTS_PENDING = -12, + I40E_ERR_INVALID_LINK_SETTINGS = -13, + I40E_ERR_AUTONEG_NOT_COMPLETE = -14, + I40E_ERR_RESET_FAILED = -15, + I40E_ERR_SWFW_SYNC = -16, + I40E_ERR_NO_AVAILABLE_VSI = -17, + I40E_ERR_NO_MEMORY = -18, + I40E_ERR_BAD_PTR = -19, + I40E_ERR_RING_FULL = -20, + I40E_ERR_INVALID_PD_ID = -21, + I40E_ERR_INVALID_QP_ID = -22, + I40E_ERR_INVALID_CQ_ID = -23, + I40E_ERR_INVALID_CEQ_ID = -24, + I40E_ERR_INVALID_AEQ_ID = -25, + I40E_ERR_INVALID_SIZE = -26, + I40E_ERR_INVALID_ARP_INDEX = -27, + I40E_ERR_INVALID_FPM_FUNC_ID = -28, + I40E_ERR_QP_INVALID_MSG_SIZE = -29, + I40E_ERR_QP_TOOMANY_WRS_POSTED = -30, + I40E_ERR_INVALID_FRAG_COUNT = -31, + I40E_ERR_QUEUE_EMPTY = -32, + I40E_ERR_INVALID_ALIGNMENT = -33, + I40E_ERR_FLUSHED_QUEUE = -34, + I40E_ERR_INVALID_PUSH_PAGE_INDEX = -35, + I40E_ERR_INVALID_IMM_DATA_SIZE = -36, + I40E_ERR_TIMEOUT = -37, + I40E_ERR_OPCODE_MISMATCH = -38, + I40E_ERR_CQP_COMPL_ERROR = -39, + I40E_ERR_INVALID_VF_ID = -40, + I40E_ERR_INVALID_HMCFN_ID = -41, + I40E_ERR_BACKING_PAGE_ERROR = -42, + I40E_ERR_NO_PBLCHUNKS_AVAILABLE = -43, + I40E_ERR_INVALID_PBLE_INDEX = -44, + I40E_ERR_INVALID_SD_INDEX = -45, + I40E_ERR_INVALID_PAGE_DESC_INDEX = -46, + I40E_ERR_INVALID_SD_TYPE = -47, + I40E_ERR_MEMCPY_FAILED = -48, + I40E_ERR_INVALID_HMC_OBJ_INDEX = -49, + I40E_ERR_INVALID_HMC_OBJ_COUNT = -50, + I40E_ERR_INVALID_SRQ_ARM_LIMIT = -51, + I40E_ERR_SRQ_ENABLED = -52, + I40E_ERR_ADMIN_QUEUE_ERROR = -53, + I40E_ERR_ADMIN_QUEUE_TIMEOUT = -54, + I40E_ERR_BUF_TOO_SHORT = -55, + I40E_ERR_ADMIN_QUEUE_FULL = -56, + I40E_ERR_ADMIN_QUEUE_NO_WORK = -57, + I40E_ERR_BAD_IWARP_CQE = -58, + I40E_ERR_NVM_BLANK_MODE = -59, + I40E_ERR_NOT_IMPLEMENTED = -60, + I40E_ERR_PE_DOORBELL_NOT_ENABLED = -61, + I40E_ERR_DIAG_TEST_FAILED = -62, + I40E_ERR_NOT_READY = -63, + I40E_NOT_SUPPORTED = -64, + I40E_ERR_FIRMWARE_API_VERSION = -65, +}; + +#endif /* _I40E_STATUS_H_ */ diff --git a/drivers/net/i40e/base/i40e_type.h b/drivers/net/i40e/base/i40e_type.h new file mode 100644 index 0000000..bb87640 --- /dev/null +++ b/drivers/net/i40e/base/i40e_type.h @@ -0,0 +1,1425 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _I40E_TYPE_H_ +#define _I40E_TYPE_H_ + +#include "i40e_status.h" +#include "i40e_osdep.h" +#include "i40e_register.h" +#include "i40e_adminq.h" +#include "i40e_hmc.h" +#include "i40e_lan_hmc.h" + +#define UNREFERENCED_XPARAMETER +#define UNREFERENCED_1PARAMETER(_p) (_p); +#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q); +#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r); +#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s); +#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t); + +/* Vendor ID */ +#define I40E_INTEL_VENDOR_ID 0x8086 + +/* Device IDs */ +#define I40E_DEV_ID_SFP_XL710 0x1572 +#define I40E_DEV_ID_QEMU 0x1574 +#define I40E_DEV_ID_KX_A 0x157F +#define I40E_DEV_ID_KX_B 0x1580 +#define I40E_DEV_ID_KX_C 0x1581 +#define I40E_DEV_ID_QSFP_A 0x1583 +#define I40E_DEV_ID_QSFP_B 0x1584 +#define I40E_DEV_ID_QSFP_C 0x1585 +#define I40E_DEV_ID_10G_BASE_T 0x1586 +#define I40E_DEV_ID_VF 0x154C +#define I40E_DEV_ID_VF_HV 0x1571 + +#define i40e_is_40G_device(d) ((d) == I40E_DEV_ID_QSFP_A || \ + (d) == I40E_DEV_ID_QSFP_B || \ + (d) == I40E_DEV_ID_QSFP_C) + +#ifndef I40E_MASK +/* I40E_MASK is a macro used on 32 bit registers */ +#define I40E_MASK(mask, shift) (mask << shift) +#endif + +#define I40E_MAX_PF 16 +#define I40E_MAX_PF_VSI 64 +#define I40E_MAX_PF_QP 128 +#define I40E_MAX_VSI_QP 16 +#define I40E_MAX_VF_VSI 3 +#define I40E_MAX_CHAINED_RX_BUFFERS 5 +#define I40E_MAX_PF_UDP_OFFLOAD_PORTS 16 + +/* something less than 1 minute */ +#define I40E_HEARTBEAT_TIMEOUT (HZ * 50) + +/* Max default timeout in ms, */ +#define I40E_MAX_NVM_TIMEOUT 18000 + +/* Check whether address is multicast. */ +#define I40E_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01)) + +/* Check whether an address is broadcast. */ +#define I40E_IS_BROADCAST(address) \ + ((((u8 *)(address))[0] == ((u8)0xff)) && \ + (((u8 *)(address))[1] == ((u8)0xff))) + +/* Switch from ms to the 1usec global time (this is the GTIME resolution) */ +#define I40E_MS_TO_GTIME(time) ((time) * 1000) + +/* forward declaration */ +struct i40e_hw; +typedef void (*I40E_ADMINQ_CALLBACK)(struct i40e_hw *, struct i40e_aq_desc *); + +#define I40E_ETH_LENGTH_OF_ADDRESS 6 +/* Data type manipulation macros. */ +#define I40E_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF)) +#define I40E_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF)) + +#define I40E_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF)) +#define I40E_LO_WORD(x) ((u16)((x) & 0xFFFF)) + +#define I40E_HI_BYTE(x) ((u8)(((x) >> 8) & 0xFF)) +#define I40E_LO_BYTE(x) ((u8)((x) & 0xFF)) + +/* Number of Transmit Descriptors must be a multiple of 8. 
*/ +#define I40E_REQ_TX_DESCRIPTOR_MULTIPLE 8 +/* Number of Receive Descriptors must be a multiple of 32 if + * the number of descriptors is greater than 32. + */ +#define I40E_REQ_RX_DESCRIPTOR_MULTIPLE 32 + +#define I40E_DESC_UNUSED(R) \ + ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \ + (R)->next_to_clean - (R)->next_to_use - 1) + +/* bitfields for Tx queue mapping in QTX_CTL */ +#define I40E_QTX_CTL_VF_QUEUE 0x0 +#define I40E_QTX_CTL_VM_QUEUE 0x1 +#define I40E_QTX_CTL_PF_QUEUE 0x2 + +/* debug masks - set these bits in hw->debug_mask to control output */ +enum i40e_debug_mask { + I40E_DEBUG_INIT = 0x00000001, + I40E_DEBUG_RELEASE = 0x00000002, + + I40E_DEBUG_LINK = 0x00000010, + I40E_DEBUG_PHY = 0x00000020, + I40E_DEBUG_HMC = 0x00000040, + I40E_DEBUG_NVM = 0x00000080, + I40E_DEBUG_LAN = 0x00000100, + I40E_DEBUG_FLOW = 0x00000200, + I40E_DEBUG_DCB = 0x00000400, + I40E_DEBUG_DIAG = 0x00000800, + I40E_DEBUG_FD = 0x00001000, + + I40E_DEBUG_AQ_MESSAGE = 0x01000000, + I40E_DEBUG_AQ_DESCRIPTOR = 0x02000000, + I40E_DEBUG_AQ_DESC_BUFFER = 0x04000000, + I40E_DEBUG_AQ_COMMAND = 0x06000000, + I40E_DEBUG_AQ = 0x0F000000, + + I40E_DEBUG_USER = 0xF0000000, + + I40E_DEBUG_ALL = 0xFFFFFFFF +}; + +/* PCI Bus Info */ +#define I40E_PCI_LINK_STATUS 0xB2 +#define I40E_PCI_LINK_WIDTH 0x3F0 +#define I40E_PCI_LINK_WIDTH_1 0x10 +#define I40E_PCI_LINK_WIDTH_2 0x20 +#define I40E_PCI_LINK_WIDTH_4 0x40 +#define I40E_PCI_LINK_WIDTH_8 0x80 +#define I40E_PCI_LINK_SPEED 0xF +#define I40E_PCI_LINK_SPEED_2500 0x1 +#define I40E_PCI_LINK_SPEED_5000 0x2 +#define I40E_PCI_LINK_SPEED_8000 0x3 + +/* Memory types */ +enum i40e_memset_type { + I40E_NONDMA_MEM = 0, + I40E_DMA_MEM +}; + +/* Memcpy types */ +enum i40e_memcpy_type { + I40E_NONDMA_TO_NONDMA = 0, + I40E_NONDMA_TO_DMA, + I40E_DMA_TO_DMA, + I40E_DMA_TO_NONDMA +}; + +/* These are structs for managing the hardware information and the operations. + * The structures of function pointers are filled out at init time when we + * know for sure exactly which hardware we're working with. This gives us the + * flexibility of using the same main driver code but adapting to slightly + * different hardware needs as new parts are developed. For this architecture, + * the Firmware and AdminQ are intended to insulate the driver from most of the + * future changes, but these structures will also do part of the job. 
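(For reference, not part of this patch: a worked example of the I40E_DESC_UNUSED() ring accounting defined above. The ring_sketch struct is hypothetical, standing in for the driver's real queue structures, which supply the count, next_to_use and next_to_clean fields:

struct ring_sketch {
	u16 count;		/* total descriptors in the ring */
	u16 next_to_use;	/* next slot software will fill */
	u16 next_to_clean;	/* next slot software will reclaim */
};

/* clean ahead of use: unused = 0 + 10 - 4 - 1 = 5 */
struct ring_sketch a = { .count = 512, .next_to_use = 4, .next_to_clean = 10 };
/* use ahead of clean: unused = 512 + 4 - 10 - 1 = 505 */
struct ring_sketch b = { .count = 512, .next_to_use = 10, .next_to_clean = 4 };
/* I40E_DESC_UNUSED(&a) == 5 and I40E_DESC_UNUSED(&b) == 505; one slot is
 * always left unused, so next_to_use == next_to_clean unambiguously means
 * "ring empty" rather than "ring full". */)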
+ */ +enum i40e_mac_type { + I40E_MAC_UNKNOWN = 0, + I40E_MAC_X710, + I40E_MAC_XL710, + I40E_MAC_VF, + I40E_MAC_GENERIC, +}; + +enum i40e_media_type { + I40E_MEDIA_TYPE_UNKNOWN = 0, + I40E_MEDIA_TYPE_FIBER, + I40E_MEDIA_TYPE_BASET, + I40E_MEDIA_TYPE_BACKPLANE, + I40E_MEDIA_TYPE_CX4, + I40E_MEDIA_TYPE_DA, + I40E_MEDIA_TYPE_VIRTUAL +}; + +enum i40e_fc_mode { + I40E_FC_NONE = 0, + I40E_FC_RX_PAUSE, + I40E_FC_TX_PAUSE, + I40E_FC_FULL, + I40E_FC_PFC, + I40E_FC_DEFAULT +}; + +enum i40e_set_fc_aq_failures { + I40E_SET_FC_AQ_FAIL_NONE = 0, + I40E_SET_FC_AQ_FAIL_GET = 1, + I40E_SET_FC_AQ_FAIL_SET = 2, + I40E_SET_FC_AQ_FAIL_UPDATE = 4, + I40E_SET_FC_AQ_FAIL_SET_UPDATE = 6 +}; + +enum i40e_vsi_type { + I40E_VSI_MAIN = 0, + I40E_VSI_VMDQ1, + I40E_VSI_VMDQ2, + I40E_VSI_CTRL, + I40E_VSI_FCOE, + I40E_VSI_MIRROR, + I40E_VSI_SRIOV, + I40E_VSI_FDIR, + I40E_VSI_TYPE_UNKNOWN +}; + +enum i40e_queue_type { + I40E_QUEUE_TYPE_RX = 0, + I40E_QUEUE_TYPE_TX, + I40E_QUEUE_TYPE_PE_CEQ, + I40E_QUEUE_TYPE_UNKNOWN +}; + +struct i40e_link_status { + enum i40e_aq_phy_type phy_type; + enum i40e_aq_link_speed link_speed; + u8 link_info; + u8 an_info; + u8 ext_info; + u8 loopback; + bool an_enabled; + /* is Link Status Event notification to SW enabled */ + bool lse_enable; + u16 max_frame_size; + bool crc_enable; + u8 pacing; +}; + +struct i40e_phy_info { + struct i40e_link_status link_info; + struct i40e_link_status link_info_old; + u32 autoneg_advertised; + u32 phy_id; + u32 module_type; + bool get_link_info; + enum i40e_media_type media_type; +}; + +#define I40E_HW_CAP_MAX_GPIO 30 +#define I40E_HW_CAP_MDIO_PORT_MODE_MDIO 0 +#define I40E_HW_CAP_MDIO_PORT_MODE_I2C 1 + +/* Capabilities of a PF or a VF or the whole device */ +struct i40e_hw_capabilities { + u32 switch_mode; +#define I40E_NVM_IMAGE_TYPE_EVB 0x0 +#define I40E_NVM_IMAGE_TYPE_CLOUD 0x2 +#define I40E_NVM_IMAGE_TYPE_UDP_CLOUD 0x3 + + u32 management_mode; + u32 npar_enable; + u32 os2bmc; + u32 valid_functions; + bool sr_iov_1_1; + bool vmdq; + bool evb_802_1_qbg; /* Edge Virtual Bridging */ + bool evb_802_1_qbh; /* Bridge Port Extension */ + bool dcb; + bool fcoe; + bool mfp_mode_1; + bool mgmt_cem; + bool ieee_1588; + bool iwarp; + bool fd; + u32 fd_filters_guaranteed; + u32 fd_filters_best_effort; + bool rss; + u32 rss_table_size; + u32 rss_table_entry_width; + bool led[I40E_HW_CAP_MAX_GPIO]; + bool sdp[I40E_HW_CAP_MAX_GPIO]; + u32 nvm_image_type; + u32 num_flow_director_filters; + u32 num_vfs; + u32 vf_base_id; + u32 num_vsis; + u32 num_rx_qp; + u32 num_tx_qp; + u32 base_queue; + u32 num_msix_vectors; + u32 num_msix_vectors_vf; + u32 led_pin_num; + u32 sdp_pin_num; + u32 mdio_port_num; + u32 mdio_port_mode; + u8 rx_buf_chain_len; + u32 enabled_tcmap; + u32 maxtc; +}; + +struct i40e_mac_info { + enum i40e_mac_type type; + u8 addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u8 perm_addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u8 san_addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u8 port_addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u16 max_fcoeq; +}; + +enum i40e_aq_resources_ids { + I40E_NVM_RESOURCE_ID = 1 +}; + +enum i40e_aq_resource_access_type { + I40E_RESOURCE_READ = 1, + I40E_RESOURCE_WRITE +}; + +struct i40e_nvm_info { + u64 hw_semaphore_timeout; /* 2usec global time (GTIME resolution) */ + u64 hw_semaphore_wait; /* - || - */ + u32 timeout; /* [ms] */ + u16 sr_size; /* Shadow RAM size in words */ + bool blank_nvm_mode; /* is NVM empty (no FW present)*/ + u16 version; /* NVM package version */ + u32 eetrack; /* NVM data version */ +}; + +/* definitions used in NVM update support */ + +enum 
i40e_nvmupd_cmd { + I40E_NVMUPD_INVALID, + I40E_NVMUPD_READ_CON, + I40E_NVMUPD_READ_SNT, + I40E_NVMUPD_READ_LCB, + I40E_NVMUPD_READ_SA, + I40E_NVMUPD_WRITE_ERA, + I40E_NVMUPD_WRITE_CON, + I40E_NVMUPD_WRITE_SNT, + I40E_NVMUPD_WRITE_LCB, + I40E_NVMUPD_WRITE_SA, + I40E_NVMUPD_CSUM_CON, + I40E_NVMUPD_CSUM_SA, + I40E_NVMUPD_CSUM_LCB, +}; + +enum i40e_nvmupd_state { + I40E_NVMUPD_STATE_INIT, + I40E_NVMUPD_STATE_READING, + I40E_NVMUPD_STATE_WRITING +}; + +/* nvm_access definition and its masks/shifts need to be accessible to + * application, core driver, and shared code. Where is the right file? + */ +#define I40E_NVM_READ 0xB +#define I40E_NVM_WRITE 0xC + +#define I40E_NVM_MOD_PNT_MASK 0xFF + +#define I40E_NVM_TRANS_SHIFT 8 +#define I40E_NVM_TRANS_MASK (0xf << I40E_NVM_TRANS_SHIFT) +#define I40E_NVM_CON 0x0 +#define I40E_NVM_SNT 0x1 +#define I40E_NVM_LCB 0x2 +#define I40E_NVM_SA (I40E_NVM_SNT | I40E_NVM_LCB) +#define I40E_NVM_ERA 0x4 +#define I40E_NVM_CSUM 0x8 + +#define I40E_NVM_ADAPT_SHIFT 16 +#define I40E_NVM_ADAPT_MASK (0xffffULL << I40E_NVM_ADAPT_SHIFT) + +#define I40E_NVMUPD_MAX_DATA 4096 +#define I40E_NVMUPD_IFACE_TIMEOUT 2 /* seconds */ + +struct i40e_nvm_access { + u32 command; + u32 config; + u32 offset; /* in bytes */ + u32 data_size; /* in bytes */ + u8 data[1]; +}; + +/* PCI bus types */ +enum i40e_bus_type { + i40e_bus_type_unknown = 0, + i40e_bus_type_pci, + i40e_bus_type_pcix, + i40e_bus_type_pci_express, + i40e_bus_type_reserved +}; + +/* PCI bus speeds */ +enum i40e_bus_speed { + i40e_bus_speed_unknown = 0, + i40e_bus_speed_33 = 33, + i40e_bus_speed_66 = 66, + i40e_bus_speed_100 = 100, + i40e_bus_speed_120 = 120, + i40e_bus_speed_133 = 133, + i40e_bus_speed_2500 = 2500, + i40e_bus_speed_5000 = 5000, + i40e_bus_speed_8000 = 8000, + i40e_bus_speed_reserved +}; + +/* PCI bus widths */ +enum i40e_bus_width { + i40e_bus_width_unknown = 0, + i40e_bus_width_pcie_x1 = 1, + i40e_bus_width_pcie_x2 = 2, + i40e_bus_width_pcie_x4 = 4, + i40e_bus_width_pcie_x8 = 8, + i40e_bus_width_32 = 32, + i40e_bus_width_64 = 64, + i40e_bus_width_reserved +}; + +/* Bus parameters */ +struct i40e_bus_info { + enum i40e_bus_speed speed; + enum i40e_bus_width width; + enum i40e_bus_type type; + + u16 func; + u16 device; + u16 lan_id; +}; + +/* Flow control (FC) parameters */ +struct i40e_fc_info { + enum i40e_fc_mode current_mode; /* FC mode in effect */ + enum i40e_fc_mode requested_mode; /* FC mode requested by caller */ +}; + +#define I40E_MAX_TRAFFIC_CLASS 8 +#define I40E_MAX_USER_PRIORITY 8 +#define I40E_DCBX_MAX_APPS 32 +#define I40E_LLDPDU_SIZE 1500 + +/* IEEE 802.1Qaz ETS Configuration data */ +struct i40e_ieee_ets_config { + u8 willing; + u8 cbs; + u8 maxtcs; + u8 prioritytable[I40E_MAX_TRAFFIC_CLASS]; + u8 tcbwtable[I40E_MAX_TRAFFIC_CLASS]; + u8 tsatable[I40E_MAX_TRAFFIC_CLASS]; +}; + +/* IEEE 802.1Qaz ETS Recommendation data */ +struct i40e_ieee_ets_recommend { + u8 prioritytable[I40E_MAX_TRAFFIC_CLASS]; + u8 tcbwtable[I40E_MAX_TRAFFIC_CLASS]; + u8 tsatable[I40E_MAX_TRAFFIC_CLASS]; +}; + +/* IEEE 802.1Qaz PFC Configuration data */ +struct i40e_ieee_pfc_config { + u8 willing; + u8 mbc; + u8 pfccap; + u8 pfcenable; +}; + +/* IEEE 802.1Qaz Application Priority data */ +struct i40e_ieee_app_priority_table { + u8 priority; + u8 selector; + u16 protocolid; +}; + +struct i40e_dcbx_config { + u32 numapps; + struct i40e_ieee_ets_config etscfg; + struct i40e_ieee_ets_recommend etsrec; + struct i40e_ieee_pfc_config pfc; + struct i40e_ieee_app_priority_table app[I40E_DCBX_MAX_APPS]; +}; + +/* Port hardware 
description */ +struct i40e_hw { + u8 *hw_addr; + void *back; + + /* function pointer structs */ + struct i40e_phy_info phy; + struct i40e_mac_info mac; + struct i40e_bus_info bus; + struct i40e_nvm_info nvm; + struct i40e_fc_info fc; + + /* pci info */ + u16 device_id; + u16 vendor_id; + u16 subsystem_device_id; + u16 subsystem_vendor_id; + u8 revision_id; + u8 port; + bool adapter_stopped; + + /* capabilities for entire device and PCI func */ + struct i40e_hw_capabilities dev_caps; + struct i40e_hw_capabilities func_caps; + + /* Flow Director shared filter space */ + u16 fdir_shared_filter_count; + + /* device profile info */ + u8 pf_id; + u16 main_vsi_seid; + + /* Closest numa node to the device */ + u16 numa_node; + + /* Admin Queue info */ + struct i40e_adminq_info aq; + + /* state of nvm update process */ + enum i40e_nvmupd_state nvmupd_state; + + /* HMC info */ + struct i40e_hmc_info hmc; /* HMC info struct */ + + /* LLDP/DCBX Status */ + u16 dcbx_status; + + /* DCBX info */ + struct i40e_dcbx_config local_dcbx_config; + struct i40e_dcbx_config remote_dcbx_config; + + /* debug mask */ + u32 debug_mask; +}; + +struct i40e_driver_version { + u8 major_version; + u8 minor_version; + u8 build_version; + u8 subbuild_version; + u8 driver_string[32]; +}; + +/* RX Descriptors */ +union i40e_16byte_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + } read; + struct { + struct { + struct { + union { + __le16 mirroring_status; + __le16 fcoe_ctx_id; + } mirr_fcoe; + __le16 l2tag1; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + __le32 fd_id; /* Flow director filter id */ + __le32 fcoe_param; /* FCoE DDP Context id */ + } hi_dword; + } qword0; + struct { + /* ext status/error/pktype/length */ + __le64 status_error_len; + } qword1; + } wb; /* writeback */ +}; + +union i40e_32byte_rx_desc { + struct { + __le64 pkt_addr; /* Packet buffer address */ + __le64 hdr_addr; /* Header buffer address */ + /* bit 0 of hdr_buffer_addr is DD bit */ + __le64 rsvd1; + __le64 rsvd2; + } read; + struct { + struct { + struct { + union { + __le16 mirroring_status; + __le16 fcoe_ctx_id; + } mirr_fcoe; + __le16 l2tag1; + } lo_dword; + union { + __le32 rss; /* RSS Hash */ + __le32 fcoe_param; /* FCoE DDP Context id */ + /* Flow director filter id in case of + * Programming status desc WB + */ + __le32 fd_id; + } hi_dword; + } qword0; + struct { + /* status/error/pktype/length */ + __le64 status_error_len; + } qword1; + struct { + __le16 ext_status; /* extended status */ + __le16 rsvd; + __le16 l2tag2_1; + __le16 l2tag2_2; + } qword2; + struct { + union { + __le32 flex_bytes_lo; + __le32 pe_status; + } lo_dword; + union { + __le32 flex_bytes_hi; + __le32 fd_id; + } hi_dword; + } qword3; + } wb; /* writeback */ +}; + +#define I40E_RXD_QW0_MIRROR_STATUS_SHIFT 8 +#define I40E_RXD_QW0_MIRROR_STATUS_MASK (0x3FUL << \ + I40E_RXD_QW0_MIRROR_STATUS_SHIFT) +#define I40E_RXD_QW0_FCOEINDX_SHIFT 0 +#define I40E_RXD_QW0_FCOEINDX_MASK (0xFFFUL << \ + I40E_RXD_QW0_FCOEINDX_SHIFT) + +enum i40e_rx_desc_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_STATUS_DD_SHIFT = 0, + I40E_RX_DESC_STATUS_EOF_SHIFT = 1, + I40E_RX_DESC_STATUS_L2TAG1P_SHIFT = 2, + I40E_RX_DESC_STATUS_L3L4P_SHIFT = 3, + I40E_RX_DESC_STATUS_CRCP_SHIFT = 4, + I40E_RX_DESC_STATUS_TSYNINDX_SHIFT = 5, /* 2 BITS */ + I40E_RX_DESC_STATUS_TSYNVALID_SHIFT = 7, + I40E_RX_DESC_STATUS_PIF_SHIFT = 8, + I40E_RX_DESC_STATUS_UMBCAST_SHIFT = 9, /* 2 BITS */ + I40E_RX_DESC_STATUS_FLM_SHIFT 
= 11, + I40E_RX_DESC_STATUS_FLTSTAT_SHIFT = 12, /* 2 BITS */ + I40E_RX_DESC_STATUS_LPBK_SHIFT = 14, + I40E_RX_DESC_STATUS_IPV6EXADD_SHIFT = 15, + I40E_RX_DESC_STATUS_RESERVED_SHIFT = 16, /* 2 BITS */ + I40E_RX_DESC_STATUS_UDP_0_SHIFT = 18, + I40E_RX_DESC_STATUS_LAST /* this entry must be last!!! */ +}; + +#define I40E_RXD_QW1_STATUS_SHIFT 0 +#define I40E_RXD_QW1_STATUS_MASK (((1 << I40E_RX_DESC_STATUS_LAST) - 1) << \ + I40E_RXD_QW1_STATUS_SHIFT) + +#define I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT I40E_RX_DESC_STATUS_TSYNINDX_SHIFT +#define I40E_RXD_QW1_STATUS_TSYNINDX_MASK (0x3UL << \ + I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT) + +#define I40E_RXD_QW1_STATUS_TSYNVALID_SHIFT I40E_RX_DESC_STATUS_TSYNVALID_SHIFT +#define I40E_RXD_QW1_STATUS_TSYNVALID_MASK (0x1UL << \ + I40E_RXD_QW1_STATUS_TSYNVALID_SHIFT) + +#define I40E_RXD_QW1_STATUS_UMBCAST_SHIFT I40E_RX_DESC_STATUS_UMBCAST_SHIFT +#define I40E_RXD_QW1_STATUS_UMBCAST_MASK (0x3UL << \ + I40E_RXD_QW1_STATUS_UMBCAST_SHIFT) + +enum i40e_rx_desc_fltstat_values { + I40E_RX_DESC_FLTSTAT_NO_DATA = 0, + I40E_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? FD_ID : RSV */ + I40E_RX_DESC_FLTSTAT_RSV = 2, + I40E_RX_DESC_FLTSTAT_RSS_HASH = 3, +}; + +#define I40E_RXD_PACKET_TYPE_UNICAST 0 +#define I40E_RXD_PACKET_TYPE_MULTICAST 1 +#define I40E_RXD_PACKET_TYPE_BROADCAST 2 +#define I40E_RXD_PACKET_TYPE_MIRRORED 3 + +#define I40E_RXD_QW1_ERROR_SHIFT 19 +#define I40E_RXD_QW1_ERROR_MASK (0xFFUL << I40E_RXD_QW1_ERROR_SHIFT) + +enum i40e_rx_desc_error_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_ERROR_RXE_SHIFT = 0, + I40E_RX_DESC_ERROR_RECIPE_SHIFT = 1, + I40E_RX_DESC_ERROR_HBO_SHIFT = 2, + I40E_RX_DESC_ERROR_L3L4E_SHIFT = 3, /* 3 BITS */ + I40E_RX_DESC_ERROR_IPE_SHIFT = 3, + I40E_RX_DESC_ERROR_L4E_SHIFT = 4, + I40E_RX_DESC_ERROR_EIPE_SHIFT = 5, + I40E_RX_DESC_ERROR_OVERSIZE_SHIFT = 6, + I40E_RX_DESC_ERROR_PPRS_SHIFT = 7 +}; + +enum i40e_rx_desc_error_l3l4e_fcoe_masks { + I40E_RX_DESC_ERROR_L3L4E_NONE = 0, + I40E_RX_DESC_ERROR_L3L4E_PROT = 1, + I40E_RX_DESC_ERROR_L3L4E_FC = 2, + I40E_RX_DESC_ERROR_L3L4E_DMAC_ERR = 3, + I40E_RX_DESC_ERROR_L3L4E_DMAC_WARN = 4 +}; + +#define I40E_RXD_QW1_PTYPE_SHIFT 30 +#define I40E_RXD_QW1_PTYPE_MASK (0xFFULL << I40E_RXD_QW1_PTYPE_SHIFT) + +/* Packet type non-ip values */ +enum i40e_rx_l2_ptype { + I40E_RX_PTYPE_L2_RESERVED = 0, + I40E_RX_PTYPE_L2_MAC_PAY2 = 1, + I40E_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, + I40E_RX_PTYPE_L2_FIP_PAY2 = 3, + I40E_RX_PTYPE_L2_OUI_PAY2 = 4, + I40E_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, + I40E_RX_PTYPE_L2_LLDP_PAY2 = 6, + I40E_RX_PTYPE_L2_ECP_PAY2 = 7, + I40E_RX_PTYPE_L2_EVB_PAY2 = 8, + I40E_RX_PTYPE_L2_QCN_PAY2 = 9, + I40E_RX_PTYPE_L2_EAPOL_PAY2 = 10, + I40E_RX_PTYPE_L2_ARP = 11, + I40E_RX_PTYPE_L2_FCOE_PAY3 = 12, + I40E_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, + I40E_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, + I40E_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, + I40E_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, + I40E_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, + I40E_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, + I40E_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, + I40E_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, + I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, + I40E_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, + I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, + I40E_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, + I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 +}; + +struct i40e_rx_ptype_decoded { + u32 ptype:8; + u32 known:1; + u32 outer_ip:1; + u32 outer_ip_ver:1; + u32 outer_frag:1; + u32 tunnel_type:3; + u32 tunnel_end_prot:2; + u32 tunnel_end_frag:1; + u32 inner_prot:4; + u32 payload_layer:3; +}; + +enum
i40e_rx_ptype_outer_ip { + I40E_RX_PTYPE_OUTER_L2 = 0, + I40E_RX_PTYPE_OUTER_IP = 1 +}; + +enum i40e_rx_ptype_outer_ip_ver { + I40E_RX_PTYPE_OUTER_NONE = 0, + I40E_RX_PTYPE_OUTER_IPV4 = 0, + I40E_RX_PTYPE_OUTER_IPV6 = 1 +}; + +enum i40e_rx_ptype_outer_fragmented { + I40E_RX_PTYPE_NOT_FRAG = 0, + I40E_RX_PTYPE_FRAG = 1 +}; + +enum i40e_rx_ptype_tunnel_type { + I40E_RX_PTYPE_TUNNEL_NONE = 0, + I40E_RX_PTYPE_TUNNEL_IP_IP = 1, + I40E_RX_PTYPE_TUNNEL_IP_GRENAT = 2, + I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, + I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, +}; + +enum i40e_rx_ptype_tunnel_end_prot { + I40E_RX_PTYPE_TUNNEL_END_NONE = 0, + I40E_RX_PTYPE_TUNNEL_END_IPV4 = 1, + I40E_RX_PTYPE_TUNNEL_END_IPV6 = 2, +}; + +enum i40e_rx_ptype_inner_prot { + I40E_RX_PTYPE_INNER_PROT_NONE = 0, + I40E_RX_PTYPE_INNER_PROT_UDP = 1, + I40E_RX_PTYPE_INNER_PROT_TCP = 2, + I40E_RX_PTYPE_INNER_PROT_SCTP = 3, + I40E_RX_PTYPE_INNER_PROT_ICMP = 4, + I40E_RX_PTYPE_INNER_PROT_TIMESYNC = 5 +}; + +enum i40e_rx_ptype_payload_layer { + I40E_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, + I40E_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, + I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, + I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, +}; + +#define I40E_RX_PTYPE_BIT_MASK 0x0FFFFFFF +#define I40E_RX_PTYPE_SHIFT 56 + +#define I40E_RXD_QW1_LENGTH_PBUF_SHIFT 38 +#define I40E_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ + I40E_RXD_QW1_LENGTH_PBUF_SHIFT) + +#define I40E_RXD_QW1_LENGTH_HBUF_SHIFT 52 +#define I40E_RXD_QW1_LENGTH_HBUF_MASK (0x7FFULL << \ + I40E_RXD_QW1_LENGTH_HBUF_SHIFT) + +#define I40E_RXD_QW1_LENGTH_SPH_SHIFT 63 +#define I40E_RXD_QW1_LENGTH_SPH_MASK (0x1ULL << \ + I40E_RXD_QW1_LENGTH_SPH_SHIFT) + +#define I40E_RXD_QW1_NEXTP_SHIFT 38 +#define I40E_RXD_QW1_NEXTP_MASK (0x1FFFULL << I40E_RXD_QW1_NEXTP_SHIFT) + +#define I40E_RXD_QW2_EXT_STATUS_SHIFT 0 +#define I40E_RXD_QW2_EXT_STATUS_MASK (0xFFFFFUL << \ + I40E_RXD_QW2_EXT_STATUS_SHIFT) + +enum i40e_rx_desc_ext_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 0, + I40E_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT = 1, + I40E_RX_DESC_EXT_STATUS_FLEXBL_SHIFT = 2, /* 2 BITS */ + I40E_RX_DESC_EXT_STATUS_FLEXBH_SHIFT = 4, /* 2 BITS */ + I40E_RX_DESC_EXT_STATUS_FDLONGB_SHIFT = 9, + I40E_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT = 10, + I40E_RX_DESC_EXT_STATUS_PELONGB_SHIFT = 11, +}; + +#define I40E_RXD_QW2_L2TAG2_SHIFT 0 +#define I40E_RXD_QW2_L2TAG2_MASK (0xFFFFUL << I40E_RXD_QW2_L2TAG2_SHIFT) + +#define I40E_RXD_QW2_L2TAG3_SHIFT 16 +#define I40E_RXD_QW2_L2TAG3_MASK (0xFFFFUL << I40E_RXD_QW2_L2TAG3_SHIFT) + +enum i40e_rx_desc_pe_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_DESC_PE_STATUS_QPID_SHIFT = 0, /* 18 BITS */ + I40E_RX_DESC_PE_STATUS_L4PORT_SHIFT = 0, /* 16 BITS */ + I40E_RX_DESC_PE_STATUS_IPINDEX_SHIFT = 16, /* 8 BITS */ + I40E_RX_DESC_PE_STATUS_QPIDHIT_SHIFT = 24, + I40E_RX_DESC_PE_STATUS_APBVTHIT_SHIFT = 25, + I40E_RX_DESC_PE_STATUS_PORTV_SHIFT = 26, + I40E_RX_DESC_PE_STATUS_URG_SHIFT = 27, + I40E_RX_DESC_PE_STATUS_IPFRAG_SHIFT = 28, + I40E_RX_DESC_PE_STATUS_IPOPT_SHIFT = 29 +}; + +#define I40E_RX_PROG_STATUS_DESC_LENGTH_SHIFT 38 +#define I40E_RX_PROG_STATUS_DESC_LENGTH 0x2000000 + +#define I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT 2 +#define I40E_RX_PROG_STATUS_DESC_QW1_PROGID_MASK (0x7UL << \ + I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT) + +#define I40E_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT 0 +#define I40E_RX_PROG_STATUS_DESC_QW1_STATUS_MASK (0x7FFFUL << \ + I40E_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT) + +#define I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT 
19 +#define I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK (0x3FUL << \ + I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT) + +enum i40e_rx_prog_status_desc_status_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_PROG_STATUS_DESC_DD_SHIFT = 0, + I40E_RX_PROG_STATUS_DESC_PROG_ID_SHIFT = 2 /* 3 BITS */ +}; + +enum i40e_rx_prog_status_desc_prog_id_masks { + I40E_RX_PROG_STATUS_DESC_FD_FILTER_STATUS = 1, + I40E_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS = 2, + I40E_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS = 4, +}; + +enum i40e_rx_prog_status_desc_error_bits { + /* Note: These are predefined bit offsets */ + I40E_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT = 0, + I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT = 1, + I40E_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT = 2, + I40E_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT = 3 +}; + +#define I40E_TWO_BIT_MASK 0x3 +#define I40E_THREE_BIT_MASK 0x7 +#define I40E_FOUR_BIT_MASK 0xF +#define I40E_EIGHTEEN_BIT_MASK 0x3FFFF + +/* TX Descriptor */ +struct i40e_tx_desc { + __le64 buffer_addr; /* Address of descriptor's data buf */ + __le64 cmd_type_offset_bsz; +}; + +#define I40E_TXD_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_QW1_DTYPE_MASK (0xFUL << I40E_TXD_QW1_DTYPE_SHIFT) + +enum i40e_tx_desc_dtype_value { + I40E_TX_DESC_DTYPE_DATA = 0x0, + I40E_TX_DESC_DTYPE_NOP = 0x1, /* same as Context desc */ + I40E_TX_DESC_DTYPE_CONTEXT = 0x1, + I40E_TX_DESC_DTYPE_FCOE_CTX = 0x2, + I40E_TX_DESC_DTYPE_FILTER_PROG = 0x8, + I40E_TX_DESC_DTYPE_DDP_CTX = 0x9, + I40E_TX_DESC_DTYPE_FLEX_DATA = 0xB, + I40E_TX_DESC_DTYPE_FLEX_CTX_1 = 0xC, + I40E_TX_DESC_DTYPE_FLEX_CTX_2 = 0xD, + I40E_TX_DESC_DTYPE_DESC_DONE = 0xF +}; + +#define I40E_TXD_QW1_CMD_SHIFT 4 +#define I40E_TXD_QW1_CMD_MASK (0x3FFUL << I40E_TXD_QW1_CMD_SHIFT) + +enum i40e_tx_desc_cmd_bits { + I40E_TX_DESC_CMD_EOP = 0x0001, + I40E_TX_DESC_CMD_RS = 0x0002, + I40E_TX_DESC_CMD_ICRC = 0x0004, + I40E_TX_DESC_CMD_IL2TAG1 = 0x0008, + I40E_TX_DESC_CMD_DUMMY = 0x0010, + I40E_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */ + I40E_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */ + I40E_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */ + I40E_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */ + I40E_TX_DESC_CMD_FCOET = 0x0080, + I40E_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_N = 0x0000, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_T = 0x0100, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_NI = 0x0200, /* 2 BITS */ + I40E_TX_DESC_CMD_L4T_EOFT_EOF_A = 0x0300, /* 2 BITS */ +}; + +#define I40E_TXD_QW1_OFFSET_SHIFT 16 +#define I40E_TXD_QW1_OFFSET_MASK (0x3FFFFULL << \ + I40E_TXD_QW1_OFFSET_SHIFT) + +enum i40e_tx_desc_length_fields { + /* Note: These are predefined bit offsets */ + I40E_TX_DESC_LENGTH_MACLEN_SHIFT = 0, /* 7 BITS */ + I40E_TX_DESC_LENGTH_IPLEN_SHIFT = 7, /* 7 BITS */ + I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT = 14 /* 4 BITS */ +}; + +#define I40E_TXD_QW1_MACLEN_MASK (0x7FUL << I40E_TX_DESC_LENGTH_MACLEN_SHIFT) +#define I40E_TXD_QW1_IPLEN_MASK (0x7FUL << I40E_TX_DESC_LENGTH_IPLEN_SHIFT) +#define I40E_TXD_QW1_L4LEN_MASK (0xFUL << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) +#define I40E_TXD_QW1_FCLEN_MASK (0xFUL << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) + +#define I40E_TXD_QW1_TX_BUF_SZ_SHIFT 34 +#define I40E_TXD_QW1_TX_BUF_SZ_MASK (0x3FFFULL << \ + I40E_TXD_QW1_TX_BUF_SZ_SHIFT) + +#define I40E_TXD_QW1_L2TAG1_SHIFT 48 +#define I40E_TXD_QW1_L2TAG1_MASK (0xFFFFULL << 
I40E_TXD_QW1_L2TAG1_SHIFT) + +/* Context descriptors */ +struct i40e_tx_context_desc { + __le32 tunneling_params; + __le16 l2tag2; + __le16 rsvd; + __le64 type_cmd_tso_mss; +}; + +#define I40E_TXD_CTX_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_CTX_QW1_DTYPE_MASK (0xFUL << I40E_TXD_CTX_QW1_DTYPE_SHIFT) + +#define I40E_TXD_CTX_QW1_CMD_SHIFT 4 +#define I40E_TXD_CTX_QW1_CMD_MASK (0xFFFFUL << I40E_TXD_CTX_QW1_CMD_SHIFT) + +enum i40e_tx_ctx_desc_cmd_bits { + I40E_TX_CTX_DESC_TSO = 0x01, + I40E_TX_CTX_DESC_TSYN = 0x02, + I40E_TX_CTX_DESC_IL2TAG2 = 0x04, + I40E_TX_CTX_DESC_IL2TAG2_IL2H = 0x08, + I40E_TX_CTX_DESC_SWTCH_NOTAG = 0x00, + I40E_TX_CTX_DESC_SWTCH_UPLINK = 0x10, + I40E_TX_CTX_DESC_SWTCH_LOCAL = 0x20, + I40E_TX_CTX_DESC_SWTCH_VSI = 0x30, + I40E_TX_CTX_DESC_SWPE = 0x40 +}; + +#define I40E_TXD_CTX_QW1_TSO_LEN_SHIFT 30 +#define I40E_TXD_CTX_QW1_TSO_LEN_MASK (0x3FFFFULL << \ + I40E_TXD_CTX_QW1_TSO_LEN_SHIFT) + +#define I40E_TXD_CTX_QW1_MSS_SHIFT 50 +#define I40E_TXD_CTX_QW1_MSS_MASK (0x3FFFULL << \ + I40E_TXD_CTX_QW1_MSS_SHIFT) + +#define I40E_TXD_CTX_QW1_VSI_SHIFT 50 +#define I40E_TXD_CTX_QW1_VSI_MASK (0x1FFULL << I40E_TXD_CTX_QW1_VSI_SHIFT) + +#define I40E_TXD_CTX_QW0_EXT_IP_SHIFT 0 +#define I40E_TXD_CTX_QW0_EXT_IP_MASK (0x3ULL << \ + I40E_TXD_CTX_QW0_EXT_IP_SHIFT) + +enum i40e_tx_ctx_desc_eipt_offload { + I40E_TX_CTX_EXT_IP_NONE = 0x0, + I40E_TX_CTX_EXT_IP_IPV6 = 0x1, + I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2, + I40E_TX_CTX_EXT_IP_IPV4 = 0x3 +}; + +#define I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT 2 +#define I40E_TXD_CTX_QW0_EXT_IPLEN_MASK (0x3FULL << \ + I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT) + +#define I40E_TXD_CTX_QW0_NATT_SHIFT 9 +#define I40E_TXD_CTX_QW0_NATT_MASK (0x3ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) + +#define I40E_TXD_CTX_UDP_TUNNELING (0x1ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) +#define I40E_TXD_CTX_GRE_TUNNELING (0x2ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) + +#define I40E_TXD_CTX_QW0_EIP_NOINC_SHIFT 11 +#define I40E_TXD_CTX_QW0_EIP_NOINC_MASK (0x1ULL << \ + I40E_TXD_CTX_QW0_EIP_NOINC_SHIFT) + +#define I40E_TXD_CTX_EIP_NOINC_IPID_CONST I40E_TXD_CTX_QW0_EIP_NOINC_MASK + +#define I40E_TXD_CTX_QW0_NATLEN_SHIFT 12 +#define I40E_TXD_CTX_QW0_NATLEN_MASK (0X7FULL << \ + I40E_TXD_CTX_QW0_NATLEN_SHIFT) + +#define I40E_TXD_CTX_QW0_DECTTL_SHIFT 19 +#define I40E_TXD_CTX_QW0_DECTTL_MASK (0xFULL << \ + I40E_TXD_CTX_QW0_DECTTL_SHIFT) + +struct i40e_nop_desc { + __le64 rsvd; + __le64 dtype_cmd; +}; + +#define I40E_TXD_NOP_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_NOP_QW1_DTYPE_MASK (0xFUL << I40E_TXD_NOP_QW1_DTYPE_SHIFT) + +#define I40E_TXD_NOP_QW1_CMD_SHIFT 4 +#define I40E_TXD_NOP_QW1_CMD_MASK (0x7FUL << I40E_TXD_NOP_QW1_CMD_SHIFT) + +enum i40e_tx_nop_desc_cmd_bits { + /* Note: These are predefined bit offsets */ + I40E_TX_NOP_DESC_EOP_SHIFT = 0, + I40E_TX_NOP_DESC_RS_SHIFT = 1, + I40E_TX_NOP_DESC_RSV_SHIFT = 2 /* 5 bits */ +}; + +struct i40e_filter_program_desc { + __le32 qindex_flex_ptype_vsi; + __le32 rsvd; + __le32 dtype_cmd_cntindex; + __le32 fd_id; +}; +#define I40E_TXD_FLTR_QW0_QINDEX_SHIFT 0 +#define I40E_TXD_FLTR_QW0_QINDEX_MASK (0x7FFUL << \ + I40E_TXD_FLTR_QW0_QINDEX_SHIFT) +#define I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT 11 +#define I40E_TXD_FLTR_QW0_FLEXOFF_MASK (0x7UL << \ + I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT) +#define I40E_TXD_FLTR_QW0_PCTYPE_SHIFT 17 +#define I40E_TXD_FLTR_QW0_PCTYPE_MASK (0x3FUL << \ + I40E_TXD_FLTR_QW0_PCTYPE_SHIFT) + +/* Packet Classifier Types for filters */ +enum i40e_filter_pctype { + /* Note: Values 0-30 are reserved for future use */ + I40E_FILTER_PCTYPE_NONF_IPV4_UDP = 31, + /* Note: Value 32 is 
reserved for future use */ + I40E_FILTER_PCTYPE_NONF_IPV4_TCP = 33, + I40E_FILTER_PCTYPE_NONF_IPV4_SCTP = 34, + I40E_FILTER_PCTYPE_NONF_IPV4_OTHER = 35, + I40E_FILTER_PCTYPE_FRAG_IPV4 = 36, + /* Note: Values 37-40 are reserved for future use */ + I40E_FILTER_PCTYPE_NONF_IPV6_UDP = 41, + I40E_FILTER_PCTYPE_NONF_IPV6_TCP = 43, + I40E_FILTER_PCTYPE_NONF_IPV6_SCTP = 44, + I40E_FILTER_PCTYPE_NONF_IPV6_OTHER = 45, + I40E_FILTER_PCTYPE_FRAG_IPV6 = 46, + /* Note: Value 47 is reserved for future use */ + I40E_FILTER_PCTYPE_FCOE_OX = 48, + I40E_FILTER_PCTYPE_FCOE_RX = 49, + I40E_FILTER_PCTYPE_FCOE_OTHER = 50, + /* Note: Values 51-62 are reserved for future use */ + I40E_FILTER_PCTYPE_L2_PAYLOAD = 63, +}; + +enum i40e_filter_program_desc_dest { + I40E_FILTER_PROGRAM_DESC_DEST_DROP_PACKET = 0x0, + I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX = 0x1, + I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER = 0x2, +}; + +enum i40e_filter_program_desc_fd_status { + I40E_FILTER_PROGRAM_DESC_FD_STATUS_NONE = 0x0, + I40E_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID = 0x1, + I40E_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES = 0x2, + I40E_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES = 0x3, +}; + +#define I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT 23 +#define I40E_TXD_FLTR_QW0_DEST_VSI_MASK (0x1FFUL << \ + I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT) + +#define I40E_TXD_FLTR_QW1_DTYPE_SHIFT 0 +#define I40E_TXD_FLTR_QW1_DTYPE_MASK (0xFUL << I40E_TXD_FLTR_QW1_DTYPE_SHIFT) + +#define I40E_TXD_FLTR_QW1_CMD_SHIFT 4 +#define I40E_TXD_FLTR_QW1_CMD_MASK (0xFFFFULL << \ + I40E_TXD_FLTR_QW1_CMD_SHIFT) + +#define I40E_TXD_FLTR_QW1_PCMD_SHIFT (0x0ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_PCMD_MASK (0x7ULL << I40E_TXD_FLTR_QW1_PCMD_SHIFT) + +enum i40e_filter_program_desc_pcmd { + I40E_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE = 0x1, + I40E_FILTER_PROGRAM_DESC_PCMD_REMOVE = 0x2, +}; + +#define I40E_TXD_FLTR_QW1_DEST_SHIFT (0x3ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_DEST_MASK (0x3ULL << I40E_TXD_FLTR_QW1_DEST_SHIFT) + +#define I40E_TXD_FLTR_QW1_CNT_ENA_SHIFT (0x7ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_CNT_ENA_MASK (0x1ULL << \ + I40E_TXD_FLTR_QW1_CNT_ENA_SHIFT) + +#define I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT (0x9ULL + \ + I40E_TXD_FLTR_QW1_CMD_SHIFT) +#define I40E_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \ + I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT) + +#define I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT 20 +#define I40E_TXD_FLTR_QW1_CNTINDEX_MASK (0x1FFUL << \ + I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) + +enum i40e_filter_type { + I40E_FLOW_DIRECTOR_FLTR = 0, + I40E_PE_QUAD_HASH_FLTR = 1, + I40E_ETHERTYPE_FLTR, + I40E_FCOE_CTX_FLTR, + I40E_MAC_VLAN_FLTR, + I40E_HASH_FLTR +}; + +struct i40e_vsi_context { + u16 seid; + u16 uplink_seid; + u16 vsi_number; + u16 vsis_allocated; + u16 vsis_unallocated; + u16 flags; + u8 pf_num; + u8 vf_num; + u8 connection_type; + struct i40e_aqc_vsi_properties_data info; +}; + +struct i40e_veb_context { + u16 seid; + u16 uplink_seid; + u16 veb_number; + u16 vebs_allocated; + u16 vebs_unallocated; + u16 flags; + struct i40e_aqc_get_veb_parameters_completion info; +}; + +/* Statistics collected by each port, VSI, VEB, and S-channel */ +struct i40e_eth_stats { + u64 rx_bytes; /* gorc */ + u64 rx_unicast; /* uprc */ + u64 rx_multicast; /* mprc */ + u64 rx_broadcast; /* bprc */ + u64 rx_discards; /* rdpc */ + u64 rx_unknown_protocol; /* rupp */ + u64 tx_bytes; /* gotc */ + u64 tx_unicast; /* uptc */ + u64 tx_multicast; /* mptc */ + u64 tx_broadcast; /* bptc */ + u64 tx_discards; /* tdpc */ + u64 
tx_errors; /* tepc */ +}; + +/* Statistics collected by the MAC */ +struct i40e_hw_port_stats { + /* eth stats collected by the port */ + struct i40e_eth_stats eth; + + /* additional port specific stats */ + u64 tx_dropped_link_down; /* tdold */ + u64 crc_errors; /* crcerrs */ + u64 illegal_bytes; /* illerrc */ + u64 error_bytes; /* errbc */ + u64 mac_local_faults; /* mlfc */ + u64 mac_remote_faults; /* mrfc */ + u64 rx_length_errors; /* rlec */ + u64 link_xon_rx; /* lxonrxc */ + u64 link_xoff_rx; /* lxoffrxc */ + u64 priority_xon_rx[8]; /* pxonrxc[8] */ + u64 priority_xoff_rx[8]; /* pxoffrxc[8] */ + u64 link_xon_tx; /* lxontxc */ + u64 link_xoff_tx; /* lxofftxc */ + u64 priority_xon_tx[8]; /* pxontxc[8] */ + u64 priority_xoff_tx[8]; /* pxofftxc[8] */ + u64 priority_xon_2_xoff[8]; /* pxon2offc[8] */ + u64 rx_size_64; /* prc64 */ + u64 rx_size_127; /* prc127 */ + u64 rx_size_255; /* prc255 */ + u64 rx_size_511; /* prc511 */ + u64 rx_size_1023; /* prc1023 */ + u64 rx_size_1522; /* prc1522 */ + u64 rx_size_big; /* prc9522 */ + u64 rx_undersize; /* ruc */ + u64 rx_fragments; /* rfc */ + u64 rx_oversize; /* roc */ + u64 rx_jabber; /* rjc */ + u64 tx_size_64; /* ptc64 */ + u64 tx_size_127; /* ptc127 */ + u64 tx_size_255; /* ptc255 */ + u64 tx_size_511; /* ptc511 */ + u64 tx_size_1023; /* ptc1023 */ + u64 tx_size_1522; /* ptc1522 */ + u64 tx_size_big; /* ptc9522 */ + u64 mac_short_packet_dropped; /* mspdc */ + u64 checksum_error; /* xec */ + /* flow director stats */ + u64 fd_atr_match; + u64 fd_sb_match; + /* EEE LPI */ + u32 tx_lpi_status; + u32 rx_lpi_status; + u64 tx_lpi_count; /* etlpic */ + u64 rx_lpi_count; /* erlpic */ +}; + +/* Checksum and Shadow RAM pointers */ +#define I40E_SR_NVM_CONTROL_WORD 0x00 +#define I40E_SR_PCIE_ANALOG_CONFIG_PTR 0x03 +#define I40E_SR_PHY_ANALOG_CONFIG_PTR 0x04 +#define I40E_SR_OPTION_ROM_PTR 0x05 +#define I40E_SR_RO_PCIR_REGS_AUTO_LOAD_PTR 0x06 +#define I40E_SR_AUTO_GENERATED_POINTERS_PTR 0x07 +#define I40E_SR_PCIR_REGS_AUTO_LOAD_PTR 0x08 +#define I40E_SR_EMP_GLOBAL_MODULE_PTR 0x09 +#define I40E_SR_RO_PCIE_LCB_PTR 0x0A +#define I40E_SR_EMP_IMAGE_PTR 0x0B +#define I40E_SR_PE_IMAGE_PTR 0x0C +#define I40E_SR_CSR_PROTECTED_LIST_PTR 0x0D +#define I40E_SR_MNG_CONFIG_PTR 0x0E +#define I40E_SR_EMP_MODULE_PTR 0x0F +#define I40E_SR_PBA_BLOCK_PTR 0x16 +#define I40E_SR_BOOT_CONFIG_PTR 0x17 +#define I40E_SR_NVM_IMAGE_VERSION 0x18 +#define I40E_SR_NVM_WAKE_ON_LAN 0x19 +#define I40E_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR 0x27 +#define I40E_SR_PERMANENT_SAN_MAC_ADDRESS_PTR 0x28 +#define I40E_SR_NVM_EETRACK_LO 0x2D +#define I40E_SR_NVM_EETRACK_HI 0x2E +#define I40E_SR_VPD_PTR 0x2F +#define I40E_SR_PXE_SETUP_PTR 0x30 +#define I40E_SR_PXE_CONFIG_CUST_OPTIONS_PTR 0x31 +#define I40E_SR_SW_ETHERNET_MAC_ADDRESS_PTR 0x37 +#define I40E_SR_POR_REGS_AUTO_LOAD_PTR 0x38 +#define I40E_SR_EMPR_REGS_AUTO_LOAD_PTR 0x3A +#define I40E_SR_GLOBR_REGS_AUTO_LOAD_PTR 0x3B +#define I40E_SR_CORER_REGS_AUTO_LOAD_PTR 0x3C +#define I40E_SR_PCIE_ALT_AUTO_LOAD_PTR 0x3E +#define I40E_SR_SW_CHECKSUM_WORD 0x3F +#define I40E_SR_1ST_FREE_PROVISION_AREA_PTR 0x40 +#define I40E_SR_4TH_FREE_PROVISION_AREA_PTR 0x42 +#define I40E_SR_3RD_FREE_PROVISION_AREA_PTR 0x44 +#define I40E_SR_2ND_FREE_PROVISION_AREA_PTR 0x46 +#define I40E_SR_EMP_SR_SETTINGS_PTR 0x48 + +/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */ +#define I40E_SR_VPD_MODULE_MAX_SIZE 1024 +#define I40E_SR_PCIE_ALT_MODULE_MAX_SIZE 1024 +#define I40E_SR_CONTROL_WORD_1_SHIFT 0x06 +#define I40E_SR_CONTROL_WORD_1_MASK (0x03 << 
I40E_SR_CONTROL_WORD_1_SHIFT) + +/* Shadow RAM related */ +#define I40E_SR_SECTOR_SIZE_IN_WORDS 0x800 +#define I40E_SR_BUF_ALIGNMENT 4096 +#define I40E_SR_WORDS_IN_1KB 512 +/* Checksum should be calculated such that after adding all the words, + * including the checksum word itself, the sum should be 0xBABA. + */ +#define I40E_SR_SW_CHECKSUM_BASE 0xBABA + +#define I40E_SRRD_SRCTL_ATTEMPTS 100000 + +enum i40e_switch_element_types { + I40E_SWITCH_ELEMENT_TYPE_MAC = 1, + I40E_SWITCH_ELEMENT_TYPE_PF = 2, + I40E_SWITCH_ELEMENT_TYPE_VF = 3, + I40E_SWITCH_ELEMENT_TYPE_EMP = 4, + I40E_SWITCH_ELEMENT_TYPE_BMC = 6, + I40E_SWITCH_ELEMENT_TYPE_PE = 16, + I40E_SWITCH_ELEMENT_TYPE_VEB = 17, + I40E_SWITCH_ELEMENT_TYPE_PA = 18, + I40E_SWITCH_ELEMENT_TYPE_VSI = 19, +}; + +/* Supported EtherType filters */ +enum i40e_ether_type_index { + I40E_ETHER_TYPE_1588 = 0, + I40E_ETHER_TYPE_FIP = 1, + I40E_ETHER_TYPE_OUI_EXTENDED = 2, + I40E_ETHER_TYPE_MAC_CONTROL = 3, + I40E_ETHER_TYPE_LLDP = 4, + I40E_ETHER_TYPE_EVB_PROTOCOL1 = 5, + I40E_ETHER_TYPE_EVB_PROTOCOL2 = 6, + I40E_ETHER_TYPE_QCN_CNM = 7, + I40E_ETHER_TYPE_8021X = 8, + I40E_ETHER_TYPE_ARP = 9, + I40E_ETHER_TYPE_RSV1 = 10, + I40E_ETHER_TYPE_RSV2 = 11, +}; + +/* Filter context base size is 1K */ +#define I40E_HASH_FILTER_BASE_SIZE 1024 +/* Supported Hash filter values */ +enum i40e_hash_filter_size { + I40E_HASH_FILTER_SIZE_1K = 0, + I40E_HASH_FILTER_SIZE_2K = 1, + I40E_HASH_FILTER_SIZE_4K = 2, + I40E_HASH_FILTER_SIZE_8K = 3, + I40E_HASH_FILTER_SIZE_16K = 4, + I40E_HASH_FILTER_SIZE_32K = 5, + I40E_HASH_FILTER_SIZE_64K = 6, + I40E_HASH_FILTER_SIZE_128K = 7, + I40E_HASH_FILTER_SIZE_256K = 8, + I40E_HASH_FILTER_SIZE_512K = 9, + I40E_HASH_FILTER_SIZE_1M = 10, +}; + +/* DMA context base size is 0.5K */ +#define I40E_DMA_CNTX_BASE_SIZE 512 +/* Supported DMA context values */ +enum i40e_dma_cntx_size { + I40E_DMA_CNTX_SIZE_512 = 0, + I40E_DMA_CNTX_SIZE_1K = 1, + I40E_DMA_CNTX_SIZE_2K = 2, + I40E_DMA_CNTX_SIZE_4K = 3, + I40E_DMA_CNTX_SIZE_8K = 4, + I40E_DMA_CNTX_SIZE_16K = 5, + I40E_DMA_CNTX_SIZE_32K = 6, + I40E_DMA_CNTX_SIZE_64K = 7, + I40E_DMA_CNTX_SIZE_128K = 8, + I40E_DMA_CNTX_SIZE_256K = 9, +}; + +/* Supported Hash look up table (LUT) sizes */ +enum i40e_hash_lut_size { + I40E_HASH_LUT_SIZE_128 = 0, + I40E_HASH_LUT_SIZE_512 = 1, +}; + +/* Structure to hold a per PF filter control settings */ +struct i40e_filter_control_settings { + /* number of PE Quad Hash filter buckets */ + enum i40e_hash_filter_size pe_filt_num; + /* number of PE Quad Hash contexts */ + enum i40e_dma_cntx_size pe_cntx_num; + /* number of FCoE filter buckets */ + enum i40e_hash_filter_size fcoe_filt_num; + /* number of FCoE DDP contexts */ + enum i40e_dma_cntx_size fcoe_cntx_num; + /* size of the Hash LUT */ + enum i40e_hash_lut_size hash_lut_size; + /* enable FDIR filters for PF and its VFs */ + bool enable_fdir; + /* enable Ethertype filters for PF and its VFs */ + bool enable_ethtype; + /* enable MAC/VLAN filters for PF and its VFs */ + bool enable_macvlan; +}; + +/* Structure to hold device level control filter counts */ +struct i40e_control_filter_stats { + u16 mac_etype_used; /* Used perfect match MAC/EtherType filters */ + u16 etype_used; /* Used perfect EtherType filters */ + u16 mac_etype_free; /* Un-used perfect match MAC/EtherType filters */ + u16 etype_free; /* Un-used perfect EtherType filters */ +}; + +enum i40e_reset_type { + I40E_RESET_POR = 0, + I40E_RESET_CORER = 1, + I40E_RESET_GLOBR = 2, + I40E_RESET_EMPR = 3, +}; + +/* Offsets into Alternate Ram */ +#define 
I40E_ALT_STRUCT_FIRST_PF_OFFSET 0 /* in dwords */ +#define I40E_ALT_STRUCT_DWORDS_PER_PF 64 /* in dwords */ +#define I40E_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET 0xD /* in dwords */ +#define I40E_ALT_STRUCT_USER_PRIORITY_OFFSET 0xC /* in dwords */ +#define I40E_ALT_STRUCT_MIN_BW_OFFSET 0xE /* in dwords */ +#define I40E_ALT_STRUCT_MAX_BW_OFFSET 0xF /* in dwords */ + +/* Alternate Ram Bandwidth Masks */ +#define I40E_ALT_BW_VALUE_MASK 0xFF +#define I40E_ALT_BW_RELATIVE_MASK 0x40000000 +#define I40E_ALT_BW_VALID_MASK 0x80000000 + +/* RSS Hash Table Size */ +#define I40E_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000 +#endif /* _I40E_TYPE_H_ */ diff --git a/drivers/net/i40e/base/i40e_virtchnl.h b/drivers/net/i40e/base/i40e_virtchnl.h new file mode 100644 index 0000000..5257b5d --- /dev/null +++ b/drivers/net/i40e/base/i40e_virtchnl.h @@ -0,0 +1,373 @@ +/******************************************************************************* + +Copyright (c) 2013 - 2014, Intel Corporation +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of the Intel Corporation nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS +INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN +CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE +POSSIBILITY OF SUCH DAMAGE. + +***************************************************************************/ + +#ifndef _I40E_VIRTCHNL_H_ +#define _I40E_VIRTCHNL_H_ + +#include "i40e_type.h" + +/* Description: + * This header file describes the VF-PF communication protocol used + * by the various i40e drivers. + * + * Admin queue buffer usage: + * desc->opcode is always i40e_aqc_opc_send_msg_to_pf + * flags, retval, datalen, and data addr are all used normally. + * Firmware copies the cookie fields when sending messages between the PF and + * VF, but uses all other fields internally. Due to this limitation, we + * must send all messages as "indirect", i.e. using an external buffer. + * + * All the vsi indexes are relative to the VF. Each VF can have maximum of + * three VSIs. All the queue indexes are relative to the VSI. Each VF can + * have a maximum of sixteen queues for all of its VSIs. + * + * The PF is required to return a status code in v_retval for all messages + * except RESET_VF, which does not require any response. The return value is of + * i40e_status_code type, defined in the i40e_type.h. 
+ * + * In general, VF driver initialization should roughly follow the order of these + * opcodes. The VF driver must first validate the API version of the PF driver, + * then request a reset, then get resources, then configure queues and + * interrupts. After these operations are complete, the VF driver may start + * its queues, optionally add MAC and VLAN filters, and process traffic. + */ + +/* Opcodes for VF-PF communication. These are placed in the v_opcode field + * of the virtchnl_msg structure. + */ +enum i40e_virtchnl_ops { +/* VF sends req. to pf for the following + * ops. + */ + I40E_VIRTCHNL_OP_UNKNOWN = 0, + I40E_VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */ + I40E_VIRTCHNL_OP_RESET_VF, + I40E_VIRTCHNL_OP_GET_VF_RESOURCES, + I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE, + I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE, + I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES, + I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP, + I40E_VIRTCHNL_OP_ENABLE_QUEUES, + I40E_VIRTCHNL_OP_DISABLE_QUEUES, + I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS, + I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS, + I40E_VIRTCHNL_OP_ADD_VLAN, + I40E_VIRTCHNL_OP_DEL_VLAN, + I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, + I40E_VIRTCHNL_OP_GET_STATS, + I40E_VIRTCHNL_OP_FCOE, +/* PF sends status change events to vfs using + * the following op. + */ + I40E_VIRTCHNL_OP_EVENT, +}; + +/* Virtual channel message descriptor. This overlays the admin queue + * descriptor. All other data is passed in external buffers. + */ + +struct i40e_virtchnl_msg { + u8 pad[8]; /* AQ flags/opcode/len/retval fields */ + enum i40e_virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */ + enum i40e_status_code v_retval; /* ditto for desc->retval */ + u32 vfid; /* used by PF when sending to VF */ +}; + +/* Message descriptions and data structures.*/ + +/* I40E_VIRTCHNL_OP_VERSION + * VF posts its version number to the PF. PF responds with its version number + * in the same format, along with a return code. + * Reply from PF has its major/minor versions also in param0 and param1. + * If there is a major version mismatch, then the VF cannot operate. + * If there is a minor version mismatch, then the VF can operate but should + * add a warning to the system log. + * + * This enum element MUST always be specified as == 1, regardless of other + * changes in the API. The PF must always respond to this message without + * error regardless of version mismatch. + */ +#define I40E_VIRTCHNL_VERSION_MAJOR 1 +#define I40E_VIRTCHNL_VERSION_MINOR 0 +struct i40e_virtchnl_version_info { + u32 major; + u32 minor; +}; + +/* I40E_VIRTCHNL_OP_RESET_VF + * VF sends this request to PF with no parameters + * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register + * until reset completion is indicated. The admin queue must be reinitialized + * after this operation. + * + * When reset is complete, PF must ensure that all queues in all VSIs associated + * with the VF are stopped, all queue configurations in the HMC are set to 0, + * and all MAC and VLAN filters (except the default MAC address) on all VSIs + * are cleared. + */ + +/* I40E_VIRTCHNL_OP_GET_VF_RESOURCES + * VF sends this request to PF with no parameters + * PF responds with an indirect message containing + * i40e_virtchnl_vf_resource and one or more + * i40e_virtchnl_vsi_resource structures. 
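(For reference, not part of this patch: the version handshake that begins the init sequence described above, assuming the i40e_aq_send_msg_to_pf() helper the base driver provides for wrapping virtchnl messages in indirect admin queue commands; the function name vf_post_version_sketch is illustrative:

static enum i40e_status_code vf_post_version_sketch(struct i40e_hw *hw)
{
	struct i40e_virtchnl_version_info vvi = {
		.major = I40E_VIRTCHNL_VERSION_MAJOR,
		.minor = I40E_VIRTCHNL_VERSION_MINOR,
	};

	/* all virtchnl messages are sent "indirect", i.e. the struct
	 * travels in an external buffer attached to the descriptor */
	return i40e_aq_send_msg_to_pf(hw, I40E_VIRTCHNL_OP_VERSION,
				      I40E_SUCCESS, (u8 *)&vvi,
				      sizeof(vvi), NULL);
}

The PF replies with its own version_info; the VF must treat a major-version mismatch as fatal.)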
+ */ + +struct i40e_virtchnl_vsi_resource { + u16 vsi_id; + u16 num_queue_pairs; + enum i40e_vsi_type vsi_type; + u16 qset_handle; + u8 default_mac_addr[I40E_ETH_LENGTH_OF_ADDRESS]; +}; +/* VF offload flags */ +#define I40E_VIRTCHNL_VF_OFFLOAD_L2 0x00000001 +#define I40E_VIRTCHNL_VF_OFFLOAD_IWARP 0x00000002 +#define I40E_VIRTCHNL_VF_OFFLOAD_FCOE 0x00000004 +#define I40E_VIRTCHNL_VF_OFFLOAD_VLAN 0x00010000 + +struct i40e_virtchnl_vf_resource { + u16 num_vsis; + u16 num_queue_pairs; + u16 max_vectors; + u16 max_mtu; + + u32 vf_offload_flags; + u32 max_fcoe_contexts; + u32 max_fcoe_filters; + + struct i40e_virtchnl_vsi_resource vsi_res[1]; +}; + +/* I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE + * VF sends this message to set up parameters for one TX queue. + * External data buffer contains one instance of i40e_virtchnl_txq_info. + * PF configures requested queue and returns a status code. + */ + +/* Tx queue config info */ +struct i40e_virtchnl_txq_info { + u16 vsi_id; + u16 queue_id; + u16 ring_len; /* number of descriptors, multiple of 8 */ + u16 headwb_enabled; + u64 dma_ring_addr; + u64 dma_headwb_addr; +}; + +/* I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE + * VF sends this message to set up parameters for one RX queue. + * External data buffer contains one instance of i40e_virtchnl_rxq_info. + * PF configures requested queue and returns a status code. + */ + +/* Rx queue config info */ +struct i40e_virtchnl_rxq_info { + u16 vsi_id; + u16 queue_id; + u32 ring_len; /* number of descriptors, multiple of 32 */ + u16 hdr_size; + u16 splithdr_enabled; + u32 databuffer_size; + u32 max_pkt_size; + u64 dma_ring_addr; + enum i40e_hmc_obj_rx_hsplit_0 rx_split_pos; +}; + +/* I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES + * VF sends this message to set parameters for all active TX and RX queues + * associated with the specified VSI. + * PF configures queues and returns status. + * If the number of queues specified is greater than the number of queues + * associated with the VSI, an error is returned and no queues are configured. + */ +struct i40e_virtchnl_queue_pair_info { + /* NOTE: vsi_id and queue_id should be identical for both queues. */ + struct i40e_virtchnl_txq_info txq; + struct i40e_virtchnl_rxq_info rxq; +}; + +struct i40e_virtchnl_vsi_queue_config_info { + u16 vsi_id; + u16 num_queue_pairs; + struct i40e_virtchnl_queue_pair_info qpair[1]; +}; + +/* I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP + * VF uses this message to map vectors to queues. + * The rxq_map and txq_map fields are bitmaps used to indicate which queues + * are to be associated with the specified vector. + * The "other" causes are always mapped to vector 0. + * PF configures interrupt mapping and returns status. + */ +struct i40e_virtchnl_vector_map { + u16 vsi_id; + u16 vector_id; + u16 rxq_map; + u16 txq_map; + u16 rxitr_idx; + u16 txitr_idx; +}; + +struct i40e_virtchnl_irq_map_info { + u16 num_vectors; + struct i40e_virtchnl_vector_map vecmap[1]; +}; + +/* I40E_VIRTCHNL_OP_ENABLE_QUEUES + * I40E_VIRTCHNL_OP_DISABLE_QUEUES + * VF sends these message to enable or disable TX/RX queue pairs. + * The queues fields are bitmaps indicating which queues to act upon. + * (Currently, we only support 16 queues per VF, but we make the field + * u32 to allow for expansion.) + * PF performs requested action and returns status. 
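+ *
+ * Illustrative encoding (a sketch, not mandated by the protocol): to act
+ * on the first n queue pairs of a VSI using the structure defined below,
+ *
+ *	struct i40e_virtchnl_queue_select qs = {
+ *		.vsi_id = vsi_id,
+ *		.rx_queues = (1u << n) - 1,
+ *		.tx_queues = (1u << n) - 1,
+ *	};
+ *
+ * sent as the external buffer of I40E_VIRTCHNL_OP_ENABLE_QUEUES or
+ * I40E_VIRTCHNL_OP_DISABLE_QUEUES.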
+ */ +struct i40e_virtchnl_queue_select { + u16 vsi_id; + u16 pad; + u32 rx_queues; + u32 tx_queues; +}; + +/* I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS + * VF sends this message in order to add one or more unicast or multicast + * address filters for the specified VSI. + * PF adds the filters and returns status. + */ + +/* I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS + * VF sends this message in order to remove one or more unicast or multicast + * filters for the specified VSI. + * PF removes the filters and returns status. + */ + +struct i40e_virtchnl_ether_addr { + u8 addr[I40E_ETH_LENGTH_OF_ADDRESS]; + u8 pad[2]; +}; + +struct i40e_virtchnl_ether_addr_list { + u16 vsi_id; + u16 num_elements; + struct i40e_virtchnl_ether_addr list[1]; +}; + +/* I40E_VIRTCHNL_OP_ADD_VLAN + * VF sends this message to add one or more VLAN tag filters for receives. + * PF adds the filters and returns status. + * If a port VLAN is configured by the PF, this operation will return an + * error to the VF. + */ + +/* I40E_VIRTCHNL_OP_DEL_VLAN + * VF sends this message to remove one or more VLAN tag filters for receives. + * PF removes the filters and returns status. + * If a port VLAN is configured by the PF, this operation will return an + * error to the VF. + */ + +struct i40e_virtchnl_vlan_filter_list { + u16 vsi_id; + u16 num_elements; + u16 vlan_id[1]; +}; + +/* I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE + * VF sends VSI id and flags. + * PF returns status code in retval. + * Note: we assume that broadcast accept mode is always enabled. + */ +struct i40e_virtchnl_promisc_info { + u16 vsi_id; + u16 flags; +}; + +#define I40E_FLAG_VF_UNICAST_PROMISC 0x00000001 +#define I40E_FLAG_VF_MULTICAST_PROMISC 0x00000002 + +/* I40E_VIRTCHNL_OP_GET_STATS + * VF sends this message to request stats for the selected VSI. VF uses + * the i40e_virtchnl_queue_select struct to specify the VSI. The queue_id + * field is ignored by the PF. + * + * PF replies with struct i40e_eth_stats in an external buffer. + */ + +/* I40E_VIRTCHNL_OP_EVENT + * PF sends this message to inform the VF driver of events that may affect it. + * No direct response is expected from the VF, though it may generate other + * messages in response to this one. + */ +enum i40e_virtchnl_event_codes { + I40E_VIRTCHNL_EVENT_UNKNOWN = 0, + I40E_VIRTCHNL_EVENT_LINK_CHANGE, + I40E_VIRTCHNL_EVENT_RESET_IMPENDING, + I40E_VIRTCHNL_EVENT_PF_DRIVER_CLOSE, +}; +#define I40E_PF_EVENT_SEVERITY_INFO 0 +#define I40E_PF_EVENT_SEVERITY_ATTENTION 1 +#define I40E_PF_EVENT_SEVERITY_ACTION_REQUIRED 2 +#define I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM 255 + +struct i40e_virtchnl_pf_event { + enum i40e_virtchnl_event_codes event; + union { + struct { + enum i40e_aq_link_speed link_speed; + bool link_status; + } link_event; + } event_data; + + int severity; +}; + +/* VF reset states - these are written into the RSTAT register: + * I40E_VFGEN_RSTAT1 on the PF + * I40E_VFGEN_RSTAT on the VF + * When the PF initiates a reset, it writes 0 + * When the reset is complete, it writes 1 + * When the PF detects that the VF has recovered, it writes 2 + * VF checks this register periodically to determine if a reset has occurred, + * then polls it to know when the reset is complete. + * If either the PF or VF reads the register while the hardware + * is in a reset state, it will return DEADBEEF, which, when masked + * will result in 3. 
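+ *
+ * A polling sketch (timing and helper names are illustrative; the register
+ * and field macros are the ones from i40e_register.h):
+ *
+ *	u32 state;
+ *	do {
+ *		i40e_msec_delay(10);
+ *		state = (rd32(hw, I40E_VFGEN_RSTAT) &
+ *			 I40E_VFGEN_RSTAT_VFR_STATE_MASK) >>
+ *			I40E_VFGEN_RSTAT_VFR_STATE_SHIFT;
+ *	} while (state == I40E_VFR_INPROGRESS);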
+ */ +enum i40e_vfr_states { + I40E_VFR_INPROGRESS = 0, + I40E_VFR_COMPLETED, + I40E_VFR_VFACTIVE, + I40E_VFR_UNKNOWN, +}; + +#endif /* _I40E_VIRTCHNL_H_ */ diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c new file mode 100644 index 0000000..c0517e6 --- /dev/null +++ b/drivers/net/i40e/i40e_ethdev.c @@ -0,0 +1,5716 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "i40e_logs.h"
+#include "base/i40e_prototype.h"
+#include "base/i40e_adminq_cmd.h"
+#include "base/i40e_type.h"
+#include "i40e_ethdev.h"
+#include "i40e_rxtx.h"
+#include "i40e_pf.h"
+
+/* Maximum number of MAC addresses */
+#define I40E_NUM_MACADDR_MAX 64
+#define I40E_CLEAR_PXE_WAIT_MS 200
+
+/* Maximum number of capability elements */
+#define I40E_MAX_CAP_ELE_NUM 128
+
+/* Wait count and interval */
+#define I40E_CHK_Q_ENA_COUNT 1000
+#define I40E_CHK_Q_ENA_INTERVAL_US 1000
+
+/* Maximum number of VSIs */
+#define I40E_MAX_NUM_VSIS (384UL)
+
+/* Default queue interrupt throttling time in microseconds */
+#define I40E_ITR_INDEX_DEFAULT 0
+#define I40E_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define I40E_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */
+
+#define I40E_PRE_TX_Q_CFG_WAIT_US 10 /* 10 us */
+
+/* Mask of PF interrupt causes */
+#define I40E_PFINT_ICR0_ENA_MASK ( \
+	I40E_PFINT_ICR0_ENA_ECC_ERR_MASK | \
+	I40E_PFINT_ICR0_ENA_MAL_DETECT_MASK | \
+	I40E_PFINT_ICR0_ENA_GRST_MASK | \
+	I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_MASK | \
+	I40E_PFINT_ICR0_ENA_STORM_DETECT_MASK | \
+	I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_MASK | \
+	I40E_PFINT_ICR0_ENA_HMC_ERR_MASK | \
+	I40E_PFINT_ICR0_ENA_PE_CRITERR_MASK | \
+	I40E_PFINT_ICR0_ENA_VFLR_MASK | \
+	I40E_PFINT_ICR0_ENA_ADMINQ_MASK)
+
+#define I40E_FLOW_TYPES ( \
+	(1UL << RTE_ETH_FLOW_FRAG_IPV4) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV4_TCP) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV4_UDP) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV4_SCTP) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) | \
+	(1UL << RTE_ETH_FLOW_FRAG_IPV6) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV6_TCP) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV6_UDP) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV6_SCTP) | \
+	(1UL << RTE_ETH_FLOW_NONFRAG_IPV6_OTHER) | \
+	(1UL << RTE_ETH_FLOW_L2_PAYLOAD))
+
+static int eth_i40e_dev_init(struct rte_eth_dev *eth_dev);
+static int i40e_dev_configure(struct rte_eth_dev *dev);
+static int i40e_dev_start(struct rte_eth_dev *dev);
+static void i40e_dev_stop(struct rte_eth_dev *dev);
+static void i40e_dev_close(struct rte_eth_dev *dev);
+static void i40e_dev_promiscuous_enable(struct rte_eth_dev *dev);
+static void i40e_dev_promiscuous_disable(struct rte_eth_dev *dev);
+static void i40e_dev_allmulticast_enable(struct rte_eth_dev *dev);
+static void i40e_dev_allmulticast_disable(struct rte_eth_dev *dev);
+static int i40e_dev_set_link_up(struct rte_eth_dev *dev);
+static int i40e_dev_set_link_down(struct rte_eth_dev *dev);
+static void i40e_dev_stats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_stats *stats);
+static void i40e_dev_stats_reset(struct rte_eth_dev *dev);
+static int i40e_dev_queue_stats_mapping_set(struct rte_eth_dev *dev,
+					    uint16_t queue_id,
+					    uint8_t stat_idx,
+					    uint8_t is_rx);
+static void i40e_dev_info_get(struct rte_eth_dev *dev,
+			      struct rte_eth_dev_info *dev_info);
+static int i40e_vlan_filter_set(struct rte_eth_dev *dev,
+				uint16_t vlan_id,
+				int on);
+static void i40e_vlan_tpid_set(struct rte_eth_dev *dev, uint16_t tpid);
+static void i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask);
+static void i40e_vlan_strip_queue_set(struct rte_eth_dev *dev,
+				      uint16_t queue,
+				      int on);
+static int i40e_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on);
+static int i40e_dev_led_on(struct rte_eth_dev *dev);
+static int i40e_dev_led_off(struct rte_eth_dev
*dev); +static int i40e_flow_ctrl_set(struct rte_eth_dev *dev, + struct rte_eth_fc_conf *fc_conf); +static int i40e_priority_flow_ctrl_set(struct rte_eth_dev *dev, + struct rte_eth_pfc_conf *pfc_conf); +static void i40e_macaddr_add(struct rte_eth_dev *dev, + struct ether_addr *mac_addr, + uint32_t index, + uint32_t pool); +static void i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index); +static int i40e_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +static int i40e_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); + +static int i40e_get_cap(struct i40e_hw *hw); +static int i40e_pf_parameter_init(struct rte_eth_dev *dev); +static int i40e_pf_setup(struct i40e_pf *pf); +static int i40e_dev_rxtx_init(struct i40e_pf *pf); +static int i40e_vmdq_setup(struct rte_eth_dev *dev); +static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg, + bool offset_loaded, uint64_t *offset, uint64_t *stat); +static void i40e_stat_update_48(struct i40e_hw *hw, + uint32_t hireg, + uint32_t loreg, + bool offset_loaded, + uint64_t *offset, + uint64_t *stat); +static void i40e_pf_config_irq0(struct i40e_hw *hw); +static void i40e_dev_interrupt_handler( + __rte_unused struct rte_intr_handle *handle, void *param); +static int i40e_res_pool_init(struct i40e_res_pool_info *pool, + uint32_t base, uint32_t num); +static void i40e_res_pool_destroy(struct i40e_res_pool_info *pool); +static int i40e_res_pool_free(struct i40e_res_pool_info *pool, + uint32_t base); +static int i40e_res_pool_alloc(struct i40e_res_pool_info *pool, + uint16_t num); +static int i40e_dev_init_vlan(struct rte_eth_dev *dev); +static int i40e_veb_release(struct i40e_veb *veb); +static struct i40e_veb *i40e_veb_setup(struct i40e_pf *pf, + struct i40e_vsi *vsi); +static int i40e_pf_config_mq_rx(struct i40e_pf *pf); +static int i40e_vsi_config_double_vlan(struct i40e_vsi *vsi, int on); +static inline int i40e_find_all_vlan_for_mac(struct i40e_vsi *vsi, + struct i40e_macvlan_filter *mv_f, + int num, + struct ether_addr *addr); +static inline int i40e_find_all_mac_for_vlan(struct i40e_vsi *vsi, + struct i40e_macvlan_filter *mv_f, + int num, + uint16_t vlan); +static int i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi); +static int i40e_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +static int i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +static int i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev, + struct rte_eth_udp_tunnel *udp_tunnel); +static int i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev, + struct rte_eth_udp_tunnel *udp_tunnel); +static int i40e_ethertype_filter_set(struct i40e_pf *pf, + struct rte_eth_ethertype_filter *filter, + bool add); +static int i40e_ethertype_filter_handle(struct rte_eth_dev *dev, + enum rte_filter_op filter_op, + void *arg); +static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev, + enum rte_filter_type filter_type, + enum rte_filter_op filter_op, + void *arg); +static void i40e_configure_registers(struct i40e_hw *hw); +static void i40e_hw_init(struct i40e_hw *hw); + +static const struct rte_pci_id pci_id_i40e_map[] = { +#define RTE_PCI_DEV_ID_DECL_I40E(vend, dev) {RTE_PCI_DEVICE(vend, dev)}, +#include "rte_pci_dev_ids.h" +{ .vendor_id = 0, /* sentinel */ }, +}; + +static const struct eth_dev_ops i40e_eth_dev_ops = { + .dev_configure = i40e_dev_configure, + .dev_start = i40e_dev_start, + 
.dev_stop = i40e_dev_stop,
+	.dev_close = i40e_dev_close,
+	.promiscuous_enable = i40e_dev_promiscuous_enable,
+	.promiscuous_disable = i40e_dev_promiscuous_disable,
+	.allmulticast_enable = i40e_dev_allmulticast_enable,
+	.allmulticast_disable = i40e_dev_allmulticast_disable,
+	.dev_set_link_up = i40e_dev_set_link_up,
+	.dev_set_link_down = i40e_dev_set_link_down,
+	.link_update = i40e_dev_link_update,
+	.stats_get = i40e_dev_stats_get,
+	.stats_reset = i40e_dev_stats_reset,
+	.queue_stats_mapping_set = i40e_dev_queue_stats_mapping_set,
+	.dev_infos_get = i40e_dev_info_get,
+	.vlan_filter_set = i40e_vlan_filter_set,
+	.vlan_tpid_set = i40e_vlan_tpid_set,
+	.vlan_offload_set = i40e_vlan_offload_set,
+	.vlan_strip_queue_set = i40e_vlan_strip_queue_set,
+	.vlan_pvid_set = i40e_vlan_pvid_set,
+	.rx_queue_start = i40e_dev_rx_queue_start,
+	.rx_queue_stop = i40e_dev_rx_queue_stop,
+	.tx_queue_start = i40e_dev_tx_queue_start,
+	.tx_queue_stop = i40e_dev_tx_queue_stop,
+	.rx_queue_setup = i40e_dev_rx_queue_setup,
+	.rx_queue_release = i40e_dev_rx_queue_release,
+	.rx_queue_count = i40e_dev_rx_queue_count,
+	.rx_descriptor_done = i40e_dev_rx_descriptor_done,
+	.tx_queue_setup = i40e_dev_tx_queue_setup,
+	.tx_queue_release = i40e_dev_tx_queue_release,
+	.dev_led_on = i40e_dev_led_on,
+	.dev_led_off = i40e_dev_led_off,
+	.flow_ctrl_set = i40e_flow_ctrl_set,
+	.priority_flow_ctrl_set = i40e_priority_flow_ctrl_set,
+	.mac_addr_add = i40e_macaddr_add,
+	.mac_addr_remove = i40e_macaddr_remove,
+	.reta_update = i40e_dev_rss_reta_update,
+	.reta_query = i40e_dev_rss_reta_query,
+	.rss_hash_update = i40e_dev_rss_hash_update,
+	.rss_hash_conf_get = i40e_dev_rss_hash_conf_get,
+	.udp_tunnel_add = i40e_dev_udp_tunnel_add,
+	.udp_tunnel_del = i40e_dev_udp_tunnel_del,
+	.filter_ctrl = i40e_dev_filter_ctrl,
+};
+
+static struct eth_driver rte_i40e_pmd = {
+	{
+		.name = "rte_i40e_pmd",
+		.id_table = pci_id_i40e_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+	},
+	.eth_dev_init = eth_i40e_dev_init,
+	.dev_private_size = sizeof(struct i40e_adapter),
+};
+
+static inline int
+i40e_align_floor(int n)
+{
+	if (n == 0)
+		return 0;
+	return (1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n)));
+}
+
+static inline int
+rte_i40e_dev_atomic_read_link_status(struct rte_eth_dev *dev,
+				     struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = link;
+	struct rte_eth_link *src = &(dev->data->dev_link);
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+static inline int
+rte_i40e_dev_atomic_write_link_status(struct rte_eth_dev *dev,
+				      struct rte_eth_link *link)
+{
+	struct rte_eth_link *dst = &(dev->data->dev_link);
+	struct rte_eth_link *src = link;
+
+	if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst,
+				*(uint64_t *)src) == 0)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI i40e devices.
+ */
+static int
+rte_i40e_pmd_init(const char *name __rte_unused,
+		  const char *params __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+	rte_eth_driver_register(&rte_i40e_pmd);
+
+	return 0;
+}
+
+static struct rte_driver rte_i40e_driver = {
+	.type = PMD_PDEV,
+	.init = rte_i40e_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_i40e_driver);
+
+/*
+ * Initialize registers for flexible payload, which should be set by NVM.
+ * This should be removed from code once it is fixed in NVM.
+ */ +#ifndef I40E_GLQF_ORT +#define I40E_GLQF_ORT(_i) (0x00268900 + ((_i) * 4)) +#endif +#ifndef I40E_GLQF_PIT +#define I40E_GLQF_PIT(_i) (0x00268C80 + ((_i) * 4)) +#endif + +static inline void i40e_flex_payload_reg_init(struct i40e_hw *hw) +{ + I40E_WRITE_REG(hw, I40E_GLQF_ORT(18), 0x00000030); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(19), 0x00000030); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(26), 0x0000002B); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(30), 0x0000002B); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(33), 0x000000E0); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(34), 0x000000E3); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(35), 0x000000E6); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(20), 0x00000031); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(23), 0x00000031); + I40E_WRITE_REG(hw, I40E_GLQF_ORT(63), 0x0000002D); + + /* GLQF_PIT Registers */ + I40E_WRITE_REG(hw, I40E_GLQF_PIT(16), 0x00007480); + I40E_WRITE_REG(hw, I40E_GLQF_PIT(17), 0x00007440); +} + +static int +eth_i40e_dev_init(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev; + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vsi *vsi; + int ret; + uint32_t len; + uint8_t aq_fail = 0; + + PMD_INIT_FUNC_TRACE(); + + dev->dev_ops = &i40e_eth_dev_ops; + dev->rx_pkt_burst = i40e_recv_pkts; + dev->tx_pkt_burst = i40e_xmit_pkts; + + /* for secondary processes, we don't initialise any further as primary + * has already done this work. Only check we don't need a different + * RX function */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY){ + if (dev->data->scattered_rx) + dev->rx_pkt_burst = i40e_recv_scattered_pkts; + return 0; + } + pci_dev = dev->pci_dev; + pf->adapter = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + pf->adapter->eth_dev = dev; + pf->dev_data = dev->data; + + hw->back = I40E_PF_TO_ADAPTER(pf); + hw->hw_addr = (uint8_t *)(pci_dev->mem_resource[0].addr); + if (!hw->hw_addr) { + PMD_INIT_LOG(ERR, "Hardware is not available, " + "as address is NULL"); + return -ENODEV; + } + + hw->vendor_id = pci_dev->id.vendor_id; + hw->device_id = pci_dev->id.device_id; + hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; + hw->subsystem_device_id = pci_dev->id.subsystem_device_id; + hw->bus.device = pci_dev->addr.devid; + hw->bus.func = pci_dev->addr.function; + + /* Make sure all is clean before doing PF reset */ + i40e_clear_hw(hw); + + /* Initialize the hardware */ + i40e_hw_init(hw); + + /* Reset here to make sure all is clean for each PF */ + ret = i40e_pf_reset(hw); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to reset pf: %d", ret); + return ret; + } + + /* Initialize the shared code (base driver) */ + ret = i40e_init_shared_code(hw); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to init shared code (base driver): %d", ret); + return ret; + } + + /* + * To work around the NVM issue,initialize registers + * for flexible payload by software. + * It should be removed once issues are fixed in NVM. 
+ */ + i40e_flex_payload_reg_init(hw); + + /* Initialize the parameters for adminq */ + i40e_init_adminq_parameter(hw); + ret = i40e_init_adminq(hw); + if (ret != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to init adminq: %d", ret); + return -EIO; + } + PMD_INIT_LOG(INFO, "FW %d.%d API %d.%d NVM %02d.%02d.%02d eetrack %04x", + hw->aq.fw_maj_ver, hw->aq.fw_min_ver, + hw->aq.api_maj_ver, hw->aq.api_min_ver, + ((hw->nvm.version >> 12) & 0xf), + ((hw->nvm.version >> 4) & 0xff), + (hw->nvm.version & 0xf), hw->nvm.eetrack); + + /* Disable LLDP */ + ret = i40e_aq_stop_lldp(hw, true, NULL); + if (ret != I40E_SUCCESS) /* Its failure can be ignored */ + PMD_INIT_LOG(INFO, "Failed to stop lldp"); + + /* Clear PXE mode */ + i40e_clear_pxe_mode(hw); + + /* + * On X710, performance number is far from the expectation on recent + * firmware versions. The fix for this issue may not be integrated in + * the following firmware version. So the workaround in software driver + * is needed. It needs to modify the initial values of 3 internal only + * registers. Note that the workaround can be removed when it is fixed + * in firmware in the future. + */ + i40e_configure_registers(hw); + + /* Get hw capabilities */ + ret = i40e_get_cap(hw); + if (ret != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to get capabilities: %d", ret); + goto err_get_capabilities; + } + + /* Initialize parameters for PF */ + ret = i40e_pf_parameter_init(dev); + if (ret != 0) { + PMD_INIT_LOG(ERR, "Failed to do parameter init: %d", ret); + goto err_parameter_init; + } + + /* Initialize the queue management */ + ret = i40e_res_pool_init(&pf->qp_pool, 0, hw->func_caps.num_tx_qp); + if (ret < 0) { + PMD_INIT_LOG(ERR, "Failed to init queue pool"); + goto err_qp_pool_init; + } + ret = i40e_res_pool_init(&pf->msix_pool, 1, + hw->func_caps.num_msix_vectors - 1); + if (ret < 0) { + PMD_INIT_LOG(ERR, "Failed to init MSIX pool"); + goto err_msix_pool_init; + } + + /* Initialize lan hmc */ + ret = i40e_init_lan_hmc(hw, hw->func_caps.num_tx_qp, + hw->func_caps.num_rx_qp, 0, 0); + if (ret != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to init lan hmc: %d", ret); + goto err_init_lan_hmc; + } + + /* Configure lan hmc */ + ret = i40e_configure_lan_hmc(hw, I40E_HMC_MODEL_DIRECT_ONLY); + if (ret != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "Failed to configure lan hmc: %d", ret); + goto err_configure_lan_hmc; + } + + /* Get and check the mac address */ + i40e_get_mac_addr(hw, hw->mac.addr); + if (i40e_validate_mac_addr(hw->mac.addr) != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "mac address is not valid"); + ret = -EIO; + goto err_get_mac_addr; + } + /* Copy the permanent MAC address */ + ether_addr_copy((struct ether_addr *) hw->mac.addr, + (struct ether_addr *) hw->mac.perm_addr); + + /* Disable flow control */ + hw->fc.requested_mode = I40E_FC_NONE; + i40e_set_fc(hw, &aq_fail, TRUE); + + /* PF setup, which includes VSI setup */ + ret = i40e_pf_setup(pf); + if (ret) { + PMD_INIT_LOG(ERR, "Failed to setup pf switch: %d", ret); + goto err_setup_pf_switch; + } + + vsi = pf->main_vsi; + + /* Disable double vlan by default */ + i40e_vsi_config_double_vlan(vsi, FALSE); + + if (!vsi->max_macaddrs) + len = ETHER_ADDR_LEN; + else + len = ETHER_ADDR_LEN * vsi->max_macaddrs; + + /* Should be after VSI initialized */ + dev->data->mac_addrs = rte_zmalloc("i40e", len, 0); + if (!dev->data->mac_addrs) { + PMD_INIT_LOG(ERR, "Failed to allocated memory " + "for storing mac address"); + goto err_mac_alloc; + } + ether_addr_copy((struct ether_addr *)hw->mac.perm_addr, + 
&dev->data->mac_addrs[0]); + + /* initialize pf host driver to setup SRIOV resource if applicable */ + i40e_pf_host_init(dev); + + /* register callback func to eal lib */ + rte_intr_callback_register(&(pci_dev->intr_handle), + i40e_dev_interrupt_handler, (void *)dev); + + /* configure and enable device interrupt */ + i40e_pf_config_irq0(hw); + i40e_pf_enable_irq0(hw); + + /* enable uio intr after callback register */ + rte_intr_enable(&(pci_dev->intr_handle)); + + return 0; + +err_mac_alloc: + i40e_vsi_release(pf->main_vsi); +err_setup_pf_switch: +err_get_mac_addr: +err_configure_lan_hmc: + (void)i40e_shutdown_lan_hmc(hw); +err_init_lan_hmc: + i40e_res_pool_destroy(&pf->msix_pool); +err_msix_pool_init: + i40e_res_pool_destroy(&pf->qp_pool); +err_qp_pool_init: +err_parameter_init: +err_get_capabilities: + (void)i40e_shutdown_adminq(hw); + + return ret; +} + +static int +i40e_dev_configure(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode; + int ret; + + if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_PERFECT) { + ret = i40e_fdir_setup(pf); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to setup flow director."); + return -ENOTSUP; + } + ret = i40e_fdir_configure(dev); + if (ret < 0) { + PMD_DRV_LOG(ERR, "failed to configure fdir."); + goto err; + } + } else + i40e_fdir_teardown(pf); + + ret = i40e_dev_init_vlan(dev); + if (ret < 0) + goto err; + + /* VMDQ setup. + * Needs to move VMDQ setting out of i40e_pf_config_mq_rx() as VMDQ and + * RSS setting have different requirements. + * General PMD driver call sequence are NIC init, configure, + * rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it + * will try to lookup the VSI that specific queue belongs to if VMDQ + * applicable. So, VMDQ setting has to be done before + * rx/tx_queue_setup(). This function is good to place vmdq_setup. + * For RSS setting, it will try to calculate actual configured RX queue + * number, which will be available after rx_queue_setup(). dev_start() + * function is good to place RSS setup. 
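+	 *
+	 * For reference, the application-side sequence this ordering maps
+	 * onto is the standard ethdev flow (sketch only; mempool setup and
+	 * error checks omitted):
+	 *
+	 *	struct rte_eth_conf conf = {
+	 *		.rxmode = { .mq_mode = ETH_MQ_RX_VMDQ_RSS },
+	 *	};
+	 *	rte_eth_dev_configure(port, nb_rxq, nb_txq, &conf);
+	 *	rte_eth_rx_queue_setup(port, q, nb_desc, socket, NULL, mp);
+	 *	rte_eth_dev_start(port);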
+ */ + if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) { + ret = i40e_vmdq_setup(dev); + if (ret) + goto err; + } + return 0; +err: + i40e_fdir_teardown(pf); + return ret; +} + +void +i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + uint16_t msix_vect = vsi->msix_intr; + uint16_t i; + + for (i = 0; i < vsi->nb_qps; i++) { + I40E_WRITE_REG(hw, I40E_QINT_TQCTL(vsi->base_queue + i), 0); + I40E_WRITE_REG(hw, I40E_QINT_RQCTL(vsi->base_queue + i), 0); + rte_wmb(); + } + + if (vsi->type != I40E_VSI_SRIOV) { + I40E_WRITE_REG(hw, I40E_PFINT_LNKLSTN(msix_vect - 1), 0); + I40E_WRITE_REG(hw, I40E_PFINT_ITRN(I40E_ITR_INDEX_DEFAULT, + msix_vect - 1), 0); + } else { + uint32_t reg; + reg = (hw->func_caps.num_msix_vectors_vf - 1) * + vsi->user_param + (msix_vect - 1); + + I40E_WRITE_REG(hw, I40E_VPINT_LNKLSTN(reg), 0); + } + I40E_WRITE_FLUSH(hw); +} + +static inline uint16_t +i40e_calc_itr_interval(int16_t interval) +{ + if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX) + interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT; + + /* Convert to hardware count, as writing each 1 represents 2 us */ + return (interval/2); +} + +void +i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi) +{ + uint32_t val; + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + uint16_t msix_vect = vsi->msix_intr; + int i; + + for (i = 0; i < vsi->nb_qps; i++) + I40E_WRITE_REG(hw, I40E_QINT_TQCTL(vsi->base_queue + i), 0); + + /* Bind all RX queues to allocated MSIX interrupt */ + for (i = 0; i < vsi->nb_qps; i++) { + val = (msix_vect << I40E_QINT_RQCTL_MSIX_INDX_SHIFT) | + I40E_QINT_RQCTL_ITR_INDX_MASK | + ((vsi->base_queue + i + 1) << + I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) | + (0 << I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT) | + I40E_QINT_RQCTL_CAUSE_ENA_MASK; + + if (i == vsi->nb_qps - 1) + val |= I40E_QINT_RQCTL_NEXTQ_INDX_MASK; + I40E_WRITE_REG(hw, I40E_QINT_RQCTL(vsi->base_queue + i), val); + } + + /* Write first RX queue to Link list register as the head element */ + if (vsi->type != I40E_VSI_SRIOV) { + uint16_t interval = + i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL); + + I40E_WRITE_REG(hw, I40E_PFINT_LNKLSTN(msix_vect - 1), + (vsi->base_queue << + I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT) | + (0x0 << I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)); + + I40E_WRITE_REG(hw, I40E_PFINT_ITRN(I40E_ITR_INDEX_DEFAULT, + msix_vect - 1), interval); + +#ifndef I40E_GLINT_CTL +#define I40E_GLINT_CTL 0x0003F800 +#define I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK 0x4 +#endif + /* Disable auto-mask on enabling of all none-zero interrupt */ + I40E_WRITE_REG(hw, I40E_GLINT_CTL, + I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK); + } else { + uint32_t reg; + + /* num_msix_vectors_vf needs to minus irq0 */ + reg = (hw->func_caps.num_msix_vectors_vf - 1) * + vsi->user_param + (msix_vect - 1); + + I40E_WRITE_REG(hw, I40E_VPINT_LNKLSTN(reg), (vsi->base_queue << + I40E_VPINT_LNKLSTN_FIRSTQ_INDX_SHIFT) | + (0x0 << I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)); + } + + I40E_WRITE_FLUSH(hw); +} + +static void +i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + uint16_t interval = i40e_calc_itr_interval(\ + RTE_LIBRTE_I40E_ITR_INTERVAL); + + I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(vsi->msix_intr - 1), + I40E_PFINT_DYN_CTLN_INTENA_MASK | + I40E_PFINT_DYN_CTLN_CLEARPBA_MASK | + (0 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) | + (interval << I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT)); +} + +static void +i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + + I40E_WRITE_REG(hw, 
		       I40E_PFINT_DYN_CTLN(vsi->msix_intr - 1), 0);
+}
+
+static inline uint8_t
+i40e_parse_link_speed(uint16_t eth_link_speed)
+{
+	uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
+
+	switch (eth_link_speed) {
+	case ETH_LINK_SPEED_40G:
+		link_speed = I40E_LINK_SPEED_40GB;
+		break;
+	case ETH_LINK_SPEED_20G:
+		link_speed = I40E_LINK_SPEED_20GB;
+		break;
+	case ETH_LINK_SPEED_10G:
+		link_speed = I40E_LINK_SPEED_10GB;
+		break;
+	case ETH_LINK_SPEED_1000:
+		link_speed = I40E_LINK_SPEED_1GB;
+		break;
+	case ETH_LINK_SPEED_100:
+		link_speed = I40E_LINK_SPEED_100MB;
+		break;
+	}
+
+	return link_speed;
+}
+
+static int
+i40e_phy_conf_link(struct i40e_hw *hw, uint8_t abilities, uint8_t force_speed)
+{
+	enum i40e_status_code status;
+	struct i40e_aq_get_phy_abilities_resp phy_ab;
+	struct i40e_aq_set_phy_config phy_conf;
+	const uint8_t mask = I40E_AQ_PHY_FLAG_PAUSE_TX |
+			I40E_AQ_PHY_FLAG_PAUSE_RX |
+			I40E_AQ_PHY_FLAG_LOW_POWER;
+	const uint8_t advt = I40E_LINK_SPEED_40GB |
+			I40E_LINK_SPEED_10GB |
+			I40E_LINK_SPEED_1GB |
+			I40E_LINK_SPEED_100MB;
+	int ret = -ENOTSUP;
+
+	status = i40e_aq_get_phy_capabilities(hw, false, false, &phy_ab,
+					      NULL);
+	if (status)
+		return ret;
+
+	memset(&phy_conf, 0, sizeof(phy_conf));
+
+	/* bits 0-2 use the values from get_phy_abilities_resp */
+	abilities &= ~mask;
+	abilities |= phy_ab.abilities & mask;
+
+	/* update abilities and speed */
+	if (abilities & I40E_AQ_PHY_AN_ENABLED)
+		phy_conf.link_speed = advt;
+	else
+		phy_conf.link_speed = force_speed;
+
+	phy_conf.abilities = abilities;
+
+	/* use get_phy_abilities_resp value for the rest */
+	phy_conf.phy_type = phy_ab.phy_type;
+	phy_conf.eee_capability = phy_ab.eee_capability;
+	phy_conf.eeer = phy_ab.eeer_val;
+	phy_conf.low_power_ctrl = phy_ab.d3_lpan;
+
+	PMD_DRV_LOG(DEBUG, "\tCurrent: abilities %x, link_speed %x",
+		    phy_ab.abilities, phy_ab.link_speed);
+	PMD_DRV_LOG(DEBUG, "\tConfig: abilities %x, link_speed %x",
+		    phy_conf.abilities, phy_conf.link_speed);
+
+	status = i40e_aq_set_phy_config(hw, &phy_conf, NULL);
+	if (status)
+		return ret;
+
+	return I40E_SUCCESS;
+}
+
+static int
+i40e_apply_link_speed(struct rte_eth_dev *dev)
+{
+	uint8_t speed;
+	uint8_t abilities = 0;
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	speed = i40e_parse_link_speed(conf->link_speed);
+	abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK;
+	if (conf->link_speed == ETH_LINK_SPEED_AUTONEG)
+		abilities |= I40E_AQ_PHY_AN_ENABLED;
+	else
+		abilities |= I40E_AQ_PHY_LINK_ENABLED;
+
+	return i40e_phy_conf_link(hw, abilities, speed);
+}
+
+static int
+i40e_dev_start(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct i40e_vsi *main_vsi = pf->main_vsi;
+	int ret, i;
+
+	if ((dev->data->dev_conf.link_duplex != ETH_LINK_AUTONEG_DUPLEX) &&
+	    (dev->data->dev_conf.link_duplex != ETH_LINK_FULL_DUPLEX)) {
+		PMD_INIT_LOG(ERR, "Invalid link_duplex (%hu) for port %hhu",
+			     dev->data->dev_conf.link_duplex,
+			     dev->data->port_id);
+		return -EINVAL;
+	}
+
+	/* Initialize VSI */
+	ret = i40e_dev_rxtx_init(pf);
+	if (ret != I40E_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to init rx/tx queues");
+		goto err_up;
+	}
+
+	/* Map queues with MSIX interrupt */
+	i40e_vsi_queues_bind_intr(main_vsi);
+	i40e_vsi_enable_queues_intr(main_vsi);
+
+	/* Map VMDQ VSI queues with MSIX interrupt */
+	for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
		i40e_vsi_queues_bind_intr(pf->vmdq[i].vsi);
+		i40e_vsi_enable_queues_intr(pf->vmdq[i].vsi);
+	}
+
+	/* enable FDIR MSIX interrupt */
+	if (pf->fdir.fdir_vsi) {
+		i40e_vsi_queues_bind_intr(pf->fdir.fdir_vsi);
+		i40e_vsi_enable_queues_intr(pf->fdir.fdir_vsi);
+	}
+
+	/* Enable all queues which have been configured */
+	ret = i40e_dev_switch_queues(pf, TRUE);
+	if (ret != I40E_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to enable VSI");
+		goto err_up;
+	}
+
+	/* Enable receiving broadcast packets */
+	ret = i40e_aq_set_vsi_broadcast(hw, main_vsi->seid, true, NULL);
+	if (ret != I40E_SUCCESS)
+		PMD_DRV_LOG(INFO, "fail to set vsi broadcast");
+
+	for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+		ret = i40e_aq_set_vsi_broadcast(hw, pf->vmdq[i].vsi->seid,
+						true, NULL);
+		if (ret != I40E_SUCCESS)
+			PMD_DRV_LOG(INFO, "fail to set vsi broadcast");
+	}
+
+	/* Apply link configuration */
+	ret = i40e_apply_link_speed(dev);
+	if (I40E_SUCCESS != ret) {
+		PMD_DRV_LOG(ERR, "Fail to apply link setting");
+		goto err_up;
+	}
+
+	return I40E_SUCCESS;
+
+err_up:
+	i40e_dev_switch_queues(pf, FALSE);
+	i40e_dev_clear_queues(dev);
+
+	return ret;
+}
+
+static void
+i40e_dev_stop(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_vsi *main_vsi = pf->main_vsi;
+	int i;
+
+	/* Disable all queues */
+	i40e_dev_switch_queues(pf, FALSE);
+
+	/* un-map queues with interrupt registers */
+	i40e_vsi_disable_queues_intr(main_vsi);
+	i40e_vsi_queues_unbind_intr(main_vsi);
+
+	for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) {
+		i40e_vsi_disable_queues_intr(pf->vmdq[i].vsi);
+		i40e_vsi_queues_unbind_intr(pf->vmdq[i].vsi);
+	}
+
+	if (pf->fdir.fdir_vsi) {
+		i40e_vsi_disable_queues_intr(pf->fdir.fdir_vsi);
+		i40e_vsi_queues_unbind_intr(pf->fdir.fdir_vsi);
+	}
+	/* Clear all queues and release memory */
+	i40e_dev_clear_queues(dev);
+
+	/* Set link down */
+	i40e_dev_set_link_down(dev);
+
+}
+
+static void
+i40e_dev_close(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t reg;
+
+	PMD_INIT_FUNC_TRACE();
+
+	i40e_dev_stop(dev);
+
+	/* Disable interrupt */
+	i40e_pf_disable_irq0(hw);
+	rte_intr_disable(&(dev->pci_dev->intr_handle));
+
+	/* shutdown and destroy the HMC */
+	i40e_shutdown_lan_hmc(hw);
+
+	/* release all the existing VSIs and VEBs */
+	i40e_fdir_teardown(pf);
+	i40e_vsi_release(pf->main_vsi);
+
+	/* shutdown the adminq */
+	i40e_aq_queue_shutdown(hw, true);
+	i40e_shutdown_adminq(hw);
+
+	i40e_res_pool_destroy(&pf->qp_pool);
+	i40e_res_pool_destroy(&pf->msix_pool);
+
+	/* force a PF reset to clean anything leftover */
+	reg = I40E_READ_REG(hw, I40E_PFGEN_CTRL);
+	I40E_WRITE_REG(hw, I40E_PFGEN_CTRL,
+		       (reg | I40E_PFGEN_CTRL_PFSWR_MASK));
+	I40E_WRITE_FLUSH(hw);
+}
+
+static void
+i40e_dev_promiscuous_enable(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	int status;
+
+	status = i40e_aq_set_vsi_unicast_promiscuous(hw, vsi->seid,
+						     true, NULL);
+	if (status != I40E_SUCCESS)
+		PMD_DRV_LOG(ERR, "Failed to enable unicast promiscuous");
+
+	status = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid,
+						       TRUE, NULL);
+	if (status != I40E_SUCCESS)
+		PMD_DRV_LOG(ERR, "Failed to enable multicast promiscuous");
+
+}
+
+static void
+i40e_dev_promiscuous_disable(struct rte_eth_dev *dev)
+{
+	struct
i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vsi *vsi = pf->main_vsi; + int status; + + status = i40e_aq_set_vsi_unicast_promiscuous(hw, vsi->seid, + false, NULL); + if (status != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to disable unicast promiscuous"); + + status = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid, + false, NULL); + if (status != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to disable multicast promiscuous"); +} + +static void +i40e_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vsi *vsi = pf->main_vsi; + int ret; + + ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid, TRUE, NULL); + if (ret != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to enable multicast promiscuous"); +} + +static void +i40e_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vsi *vsi = pf->main_vsi; + int ret; + + if (dev->data->promiscuous == 1) + return; /* must remain in all_multicast mode */ + + ret = i40e_aq_set_vsi_multicast_promiscuous(hw, + vsi->seid, FALSE, NULL); + if (ret != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to disable multicast promiscuous"); +} + +/* + * Set device link up. + */ +static int +i40e_dev_set_link_up(struct rte_eth_dev *dev) +{ + /* re-apply link speed setting */ + return i40e_apply_link_speed(dev); +} + +/* + * Set device link down. + */ +static int +i40e_dev_set_link_down(__rte_unused struct rte_eth_dev *dev) +{ + uint8_t speed = I40E_LINK_SPEED_UNKNOWN; + uint8_t abilities = I40E_AQ_PHY_ENABLE_ATOMIC_LINK; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + return i40e_phy_conf_link(hw, abilities, speed); +} + +int +i40e_dev_link_update(struct rte_eth_dev *dev, + int wait_to_complete) +{ +#define CHECK_INTERVAL 100 /* 100ms */ +#define MAX_REPEAT_TIME 10 /* 1s (10 * 100ms) in total */ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_link_status link_status; + struct rte_eth_link link, old; + int status; + unsigned rep_cnt = MAX_REPEAT_TIME; + + memset(&link, 0, sizeof(link)); + memset(&old, 0, sizeof(old)); + memset(&link_status, 0, sizeof(link_status)); + rte_i40e_dev_atomic_read_link_status(dev, &old); + + do { + /* Get link status information from hardware */ + status = i40e_aq_get_link_info(hw, false, &link_status, NULL); + if (status != I40E_SUCCESS) { + link.link_speed = ETH_LINK_SPEED_100; + link.link_duplex = ETH_LINK_FULL_DUPLEX; + PMD_DRV_LOG(ERR, "Failed to get link info"); + goto out; + } + + link.link_status = link_status.link_info & I40E_AQ_LINK_UP; + if (!wait_to_complete) + break; + + rte_delay_ms(CHECK_INTERVAL); + } while (!link.link_status && rep_cnt--); + + if (!link.link_status) + goto out; + + /* i40e uses full duplex only */ + link.link_duplex = ETH_LINK_FULL_DUPLEX; + + /* Parse the link status */ + switch (link_status.link_speed) { + case I40E_LINK_SPEED_100MB: + link.link_speed = ETH_LINK_SPEED_100; + break; + case I40E_LINK_SPEED_1GB: + link.link_speed = ETH_LINK_SPEED_1000; + break; + case I40E_LINK_SPEED_10GB: + link.link_speed = ETH_LINK_SPEED_10G; + break; + case I40E_LINK_SPEED_20GB: + link.link_speed = ETH_LINK_SPEED_20G; + break; + case I40E_LINK_SPEED_40GB: + 
link.link_speed = ETH_LINK_SPEED_40G; + break; + default: + link.link_speed = ETH_LINK_SPEED_100; + break; + } + +out: + rte_i40e_dev_atomic_write_link_status(dev, &link); + if (link.link_status == old.link_status) + return -1; + + return 0; +} + +/* Get all the statistics of a VSI */ +void +i40e_update_vsi_stats(struct i40e_vsi *vsi) +{ + struct i40e_eth_stats *oes = &vsi->eth_stats_offset; + struct i40e_eth_stats *nes = &vsi->eth_stats; + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + int idx = rte_le_to_cpu_16(vsi->info.stat_counter_idx); + + i40e_stat_update_48(hw, I40E_GLV_GORCH(idx), I40E_GLV_GORCL(idx), + vsi->offset_loaded, &oes->rx_bytes, + &nes->rx_bytes); + i40e_stat_update_48(hw, I40E_GLV_UPRCH(idx), I40E_GLV_UPRCL(idx), + vsi->offset_loaded, &oes->rx_unicast, + &nes->rx_unicast); + i40e_stat_update_48(hw, I40E_GLV_MPRCH(idx), I40E_GLV_MPRCL(idx), + vsi->offset_loaded, &oes->rx_multicast, + &nes->rx_multicast); + i40e_stat_update_48(hw, I40E_GLV_BPRCH(idx), I40E_GLV_BPRCL(idx), + vsi->offset_loaded, &oes->rx_broadcast, + &nes->rx_broadcast); + i40e_stat_update_32(hw, I40E_GLV_RDPC(idx), vsi->offset_loaded, + &oes->rx_discards, &nes->rx_discards); + /* GLV_REPC not supported */ + /* GLV_RMPC not supported */ + i40e_stat_update_32(hw, I40E_GLV_RUPP(idx), vsi->offset_loaded, + &oes->rx_unknown_protocol, + &nes->rx_unknown_protocol); + i40e_stat_update_48(hw, I40E_GLV_GOTCH(idx), I40E_GLV_GOTCL(idx), + vsi->offset_loaded, &oes->tx_bytes, + &nes->tx_bytes); + i40e_stat_update_48(hw, I40E_GLV_UPTCH(idx), I40E_GLV_UPTCL(idx), + vsi->offset_loaded, &oes->tx_unicast, + &nes->tx_unicast); + i40e_stat_update_48(hw, I40E_GLV_MPTCH(idx), I40E_GLV_MPTCL(idx), + vsi->offset_loaded, &oes->tx_multicast, + &nes->tx_multicast); + i40e_stat_update_48(hw, I40E_GLV_BPTCH(idx), I40E_GLV_BPTCL(idx), + vsi->offset_loaded, &oes->tx_broadcast, + &nes->tx_broadcast); + /* GLV_TDPC not supported */ + i40e_stat_update_32(hw, I40E_GLV_TEPC(idx), vsi->offset_loaded, + &oes->tx_errors, &nes->tx_errors); + vsi->offset_loaded = true; + + PMD_DRV_LOG(DEBUG, "***************** VSI[%u] stats start *******************", + vsi->vsi_id); + PMD_DRV_LOG(DEBUG, "rx_bytes: %lu", nes->rx_bytes); + PMD_DRV_LOG(DEBUG, "rx_unicast: %lu", nes->rx_unicast); + PMD_DRV_LOG(DEBUG, "rx_multicast: %lu", nes->rx_multicast); + PMD_DRV_LOG(DEBUG, "rx_broadcast: %lu", nes->rx_broadcast); + PMD_DRV_LOG(DEBUG, "rx_discards: %lu", nes->rx_discards); + PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %lu", + nes->rx_unknown_protocol); + PMD_DRV_LOG(DEBUG, "tx_bytes: %lu", nes->tx_bytes); + PMD_DRV_LOG(DEBUG, "tx_unicast: %lu", nes->tx_unicast); + PMD_DRV_LOG(DEBUG, "tx_multicast: %lu", nes->tx_multicast); + PMD_DRV_LOG(DEBUG, "tx_broadcast: %lu", nes->tx_broadcast); + PMD_DRV_LOG(DEBUG, "tx_discards: %lu", nes->tx_discards); + PMD_DRV_LOG(DEBUG, "tx_errors: %lu", nes->tx_errors); + PMD_DRV_LOG(DEBUG, "***************** VSI[%u] stats end *******************", + vsi->vsi_id); +} + +/* Get all statistics of a port */ +static void +i40e_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + uint32_t i; + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_hw_port_stats *ns = &pf->stats; /* new stats */ + struct i40e_hw_port_stats *os = &pf->stats_offset; /* old stats */ + + /* Get statistics of struct i40e_eth_stats */ + i40e_stat_update_48(hw, I40E_GLPRT_GORCH(hw->port), + I40E_GLPRT_GORCL(hw->port), + pf->offset_loaded, 
&os->eth.rx_bytes, + &ns->eth.rx_bytes); + i40e_stat_update_48(hw, I40E_GLPRT_UPRCH(hw->port), + I40E_GLPRT_UPRCL(hw->port), + pf->offset_loaded, &os->eth.rx_unicast, + &ns->eth.rx_unicast); + i40e_stat_update_48(hw, I40E_GLPRT_MPRCH(hw->port), + I40E_GLPRT_MPRCL(hw->port), + pf->offset_loaded, &os->eth.rx_multicast, + &ns->eth.rx_multicast); + i40e_stat_update_48(hw, I40E_GLPRT_BPRCH(hw->port), + I40E_GLPRT_BPRCL(hw->port), + pf->offset_loaded, &os->eth.rx_broadcast, + &ns->eth.rx_broadcast); + i40e_stat_update_32(hw, I40E_GLPRT_RDPC(hw->port), + pf->offset_loaded, &os->eth.rx_discards, + &ns->eth.rx_discards); + /* GLPRT_REPC not supported */ + /* GLPRT_RMPC not supported */ + i40e_stat_update_32(hw, I40E_GLPRT_RUPP(hw->port), + pf->offset_loaded, + &os->eth.rx_unknown_protocol, + &ns->eth.rx_unknown_protocol); + i40e_stat_update_48(hw, I40E_GLPRT_GOTCH(hw->port), + I40E_GLPRT_GOTCL(hw->port), + pf->offset_loaded, &os->eth.tx_bytes, + &ns->eth.tx_bytes); + i40e_stat_update_48(hw, I40E_GLPRT_UPTCH(hw->port), + I40E_GLPRT_UPTCL(hw->port), + pf->offset_loaded, &os->eth.tx_unicast, + &ns->eth.tx_unicast); + i40e_stat_update_48(hw, I40E_GLPRT_MPTCH(hw->port), + I40E_GLPRT_MPTCL(hw->port), + pf->offset_loaded, &os->eth.tx_multicast, + &ns->eth.tx_multicast); + i40e_stat_update_48(hw, I40E_GLPRT_BPTCH(hw->port), + I40E_GLPRT_BPTCL(hw->port), + pf->offset_loaded, &os->eth.tx_broadcast, + &ns->eth.tx_broadcast); + i40e_stat_update_32(hw, I40E_GLPRT_TDPC(hw->port), + pf->offset_loaded, &os->eth.tx_discards, + &ns->eth.tx_discards); + /* GLPRT_TEPC not supported */ + + /* additional port specific stats */ + i40e_stat_update_32(hw, I40E_GLPRT_TDOLD(hw->port), + pf->offset_loaded, &os->tx_dropped_link_down, + &ns->tx_dropped_link_down); + i40e_stat_update_32(hw, I40E_GLPRT_CRCERRS(hw->port), + pf->offset_loaded, &os->crc_errors, + &ns->crc_errors); + i40e_stat_update_32(hw, I40E_GLPRT_ILLERRC(hw->port), + pf->offset_loaded, &os->illegal_bytes, + &ns->illegal_bytes); + /* GLPRT_ERRBC not supported */ + i40e_stat_update_32(hw, I40E_GLPRT_MLFC(hw->port), + pf->offset_loaded, &os->mac_local_faults, + &ns->mac_local_faults); + i40e_stat_update_32(hw, I40E_GLPRT_MRFC(hw->port), + pf->offset_loaded, &os->mac_remote_faults, + &ns->mac_remote_faults); + i40e_stat_update_32(hw, I40E_GLPRT_RLEC(hw->port), + pf->offset_loaded, &os->rx_length_errors, + &ns->rx_length_errors); + i40e_stat_update_32(hw, I40E_GLPRT_LXONRXC(hw->port), + pf->offset_loaded, &os->link_xon_rx, + &ns->link_xon_rx); + i40e_stat_update_32(hw, I40E_GLPRT_LXOFFRXC(hw->port), + pf->offset_loaded, &os->link_xoff_rx, + &ns->link_xoff_rx); + for (i = 0; i < 8; i++) { + i40e_stat_update_32(hw, I40E_GLPRT_PXONRXC(hw->port, i), + pf->offset_loaded, + &os->priority_xon_rx[i], + &ns->priority_xon_rx[i]); + i40e_stat_update_32(hw, I40E_GLPRT_PXOFFRXC(hw->port, i), + pf->offset_loaded, + &os->priority_xoff_rx[i], + &ns->priority_xoff_rx[i]); + } + i40e_stat_update_32(hw, I40E_GLPRT_LXONTXC(hw->port), + pf->offset_loaded, &os->link_xon_tx, + &ns->link_xon_tx); + i40e_stat_update_32(hw, I40E_GLPRT_LXOFFTXC(hw->port), + pf->offset_loaded, &os->link_xoff_tx, + &ns->link_xoff_tx); + for (i = 0; i < 8; i++) { + i40e_stat_update_32(hw, I40E_GLPRT_PXONTXC(hw->port, i), + pf->offset_loaded, + &os->priority_xon_tx[i], + &ns->priority_xon_tx[i]); + i40e_stat_update_32(hw, I40E_GLPRT_PXOFFTXC(hw->port, i), + pf->offset_loaded, + &os->priority_xoff_tx[i], + &ns->priority_xoff_tx[i]); + i40e_stat_update_32(hw, I40E_GLPRT_RXON2OFFCNT(hw->port, i), + pf->offset_loaded, 
+ &os->priority_xon_2_xoff[i], + &ns->priority_xon_2_xoff[i]); + } + i40e_stat_update_48(hw, I40E_GLPRT_PRC64H(hw->port), + I40E_GLPRT_PRC64L(hw->port), + pf->offset_loaded, &os->rx_size_64, + &ns->rx_size_64); + i40e_stat_update_48(hw, I40E_GLPRT_PRC127H(hw->port), + I40E_GLPRT_PRC127L(hw->port), + pf->offset_loaded, &os->rx_size_127, + &ns->rx_size_127); + i40e_stat_update_48(hw, I40E_GLPRT_PRC255H(hw->port), + I40E_GLPRT_PRC255L(hw->port), + pf->offset_loaded, &os->rx_size_255, + &ns->rx_size_255); + i40e_stat_update_48(hw, I40E_GLPRT_PRC511H(hw->port), + I40E_GLPRT_PRC511L(hw->port), + pf->offset_loaded, &os->rx_size_511, + &ns->rx_size_511); + i40e_stat_update_48(hw, I40E_GLPRT_PRC1023H(hw->port), + I40E_GLPRT_PRC1023L(hw->port), + pf->offset_loaded, &os->rx_size_1023, + &ns->rx_size_1023); + i40e_stat_update_48(hw, I40E_GLPRT_PRC1522H(hw->port), + I40E_GLPRT_PRC1522L(hw->port), + pf->offset_loaded, &os->rx_size_1522, + &ns->rx_size_1522); + i40e_stat_update_48(hw, I40E_GLPRT_PRC9522H(hw->port), + I40E_GLPRT_PRC9522L(hw->port), + pf->offset_loaded, &os->rx_size_big, + &ns->rx_size_big); + i40e_stat_update_32(hw, I40E_GLPRT_RUC(hw->port), + pf->offset_loaded, &os->rx_undersize, + &ns->rx_undersize); + i40e_stat_update_32(hw, I40E_GLPRT_RFC(hw->port), + pf->offset_loaded, &os->rx_fragments, + &ns->rx_fragments); + i40e_stat_update_32(hw, I40E_GLPRT_ROC(hw->port), + pf->offset_loaded, &os->rx_oversize, + &ns->rx_oversize); + i40e_stat_update_32(hw, I40E_GLPRT_RJC(hw->port), + pf->offset_loaded, &os->rx_jabber, + &ns->rx_jabber); + i40e_stat_update_48(hw, I40E_GLPRT_PTC64H(hw->port), + I40E_GLPRT_PTC64L(hw->port), + pf->offset_loaded, &os->tx_size_64, + &ns->tx_size_64); + i40e_stat_update_48(hw, I40E_GLPRT_PTC127H(hw->port), + I40E_GLPRT_PTC127L(hw->port), + pf->offset_loaded, &os->tx_size_127, + &ns->tx_size_127); + i40e_stat_update_48(hw, I40E_GLPRT_PTC255H(hw->port), + I40E_GLPRT_PTC255L(hw->port), + pf->offset_loaded, &os->tx_size_255, + &ns->tx_size_255); + i40e_stat_update_48(hw, I40E_GLPRT_PTC511H(hw->port), + I40E_GLPRT_PTC511L(hw->port), + pf->offset_loaded, &os->tx_size_511, + &ns->tx_size_511); + i40e_stat_update_48(hw, I40E_GLPRT_PTC1023H(hw->port), + I40E_GLPRT_PTC1023L(hw->port), + pf->offset_loaded, &os->tx_size_1023, + &ns->tx_size_1023); + i40e_stat_update_48(hw, I40E_GLPRT_PTC1522H(hw->port), + I40E_GLPRT_PTC1522L(hw->port), + pf->offset_loaded, &os->tx_size_1522, + &ns->tx_size_1522); + i40e_stat_update_48(hw, I40E_GLPRT_PTC9522H(hw->port), + I40E_GLPRT_PTC9522L(hw->port), + pf->offset_loaded, &os->tx_size_big, + &ns->tx_size_big); + i40e_stat_update_32(hw, I40E_GLQF_PCNT(pf->fdir.match_counter_index), + pf->offset_loaded, + &os->fd_sb_match, &ns->fd_sb_match); + /* GLPRT_MSPDC not supported */ + /* GLPRT_XEC not supported */ + + pf->offset_loaded = true; + + if (pf->main_vsi) + i40e_update_vsi_stats(pf->main_vsi); + + stats->ipackets = ns->eth.rx_unicast + ns->eth.rx_multicast + + ns->eth.rx_broadcast; + stats->opackets = ns->eth.tx_unicast + ns->eth.tx_multicast + + ns->eth.tx_broadcast; + stats->ibytes = ns->eth.rx_bytes; + stats->obytes = ns->eth.tx_bytes; + stats->oerrors = ns->eth.tx_errors; + stats->imcasts = ns->eth.rx_multicast; + stats->fdirmatch = ns->fd_sb_match; + + /* Rx Errors */ + stats->ibadcrc = ns->crc_errors; + stats->ibadlen = ns->rx_length_errors + ns->rx_undersize + + ns->rx_oversize + ns->rx_fragments + ns->rx_jabber; + stats->imissed = ns->eth.rx_discards; + stats->ierrors = stats->ibadcrc + stats->ibadlen + stats->imissed; + + 
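+	/*
+	 * Note on the _48 helpers used throughout this function: each such
+	 * counter is a 48-bit value split across a high/low register pair,
+	 * and it wraps rather than latching at the top. A sketch of the
+	 * compensation done by i40e_stat_update_48() (declared earlier in
+	 * this file; names abbreviated, illustration only):
+	 *
+	 *	new = ((u64)(rd(hi) & 0xFFFF) << 32) | rd(lo);
+	 *	if (!offset_loaded)
+	 *		*offset = new;
+	 *	*stat = (new >= *offset) ? new - *offset
+	 *				 : new + ((u64)1 << 48) - *offset;
+	 */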
PMD_DRV_LOG(DEBUG, "***************** PF stats start *******************"); + PMD_DRV_LOG(DEBUG, "rx_bytes: %lu", ns->eth.rx_bytes); + PMD_DRV_LOG(DEBUG, "rx_unicast: %lu", ns->eth.rx_unicast); + PMD_DRV_LOG(DEBUG, "rx_multicast: %lu", ns->eth.rx_multicast); + PMD_DRV_LOG(DEBUG, "rx_broadcast: %lu", ns->eth.rx_broadcast); + PMD_DRV_LOG(DEBUG, "rx_discards: %lu", ns->eth.rx_discards); + PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %lu", + ns->eth.rx_unknown_protocol); + PMD_DRV_LOG(DEBUG, "tx_bytes: %lu", ns->eth.tx_bytes); + PMD_DRV_LOG(DEBUG, "tx_unicast: %lu", ns->eth.tx_unicast); + PMD_DRV_LOG(DEBUG, "tx_multicast: %lu", ns->eth.tx_multicast); + PMD_DRV_LOG(DEBUG, "tx_broadcast: %lu", ns->eth.tx_broadcast); + PMD_DRV_LOG(DEBUG, "tx_discards: %lu", ns->eth.tx_discards); + PMD_DRV_LOG(DEBUG, "tx_errors: %lu", ns->eth.tx_errors); + + PMD_DRV_LOG(DEBUG, "tx_dropped_link_down: %lu", + ns->tx_dropped_link_down); + PMD_DRV_LOG(DEBUG, "crc_errors: %lu", ns->crc_errors); + PMD_DRV_LOG(DEBUG, "illegal_bytes: %lu", + ns->illegal_bytes); + PMD_DRV_LOG(DEBUG, "error_bytes: %lu", ns->error_bytes); + PMD_DRV_LOG(DEBUG, "mac_local_faults: %lu", + ns->mac_local_faults); + PMD_DRV_LOG(DEBUG, "mac_remote_faults: %lu", + ns->mac_remote_faults); + PMD_DRV_LOG(DEBUG, "rx_length_errors: %lu", + ns->rx_length_errors); + PMD_DRV_LOG(DEBUG, "link_xon_rx: %lu", ns->link_xon_rx); + PMD_DRV_LOG(DEBUG, "link_xoff_rx: %lu", ns->link_xoff_rx); + for (i = 0; i < 8; i++) { + PMD_DRV_LOG(DEBUG, "priority_xon_rx[%d]: %lu", + i, ns->priority_xon_rx[i]); + PMD_DRV_LOG(DEBUG, "priority_xoff_rx[%d]: %lu", + i, ns->priority_xoff_rx[i]); + } + PMD_DRV_LOG(DEBUG, "link_xon_tx: %lu", ns->link_xon_tx); + PMD_DRV_LOG(DEBUG, "link_xoff_tx: %lu", ns->link_xoff_tx); + for (i = 0; i < 8; i++) { + PMD_DRV_LOG(DEBUG, "priority_xon_tx[%d]: %lu", + i, ns->priority_xon_tx[i]); + PMD_DRV_LOG(DEBUG, "priority_xoff_tx[%d]: %lu", + i, ns->priority_xoff_tx[i]); + PMD_DRV_LOG(DEBUG, "priority_xon_2_xoff[%d]: %lu", + i, ns->priority_xon_2_xoff[i]); + } + PMD_DRV_LOG(DEBUG, "rx_size_64: %lu", ns->rx_size_64); + PMD_DRV_LOG(DEBUG, "rx_size_127: %lu", ns->rx_size_127); + PMD_DRV_LOG(DEBUG, "rx_size_255: %lu", ns->rx_size_255); + PMD_DRV_LOG(DEBUG, "rx_size_511: %lu", ns->rx_size_511); + PMD_DRV_LOG(DEBUG, "rx_size_1023: %lu", ns->rx_size_1023); + PMD_DRV_LOG(DEBUG, "rx_size_1522: %lu", ns->rx_size_1522); + PMD_DRV_LOG(DEBUG, "rx_size_big: %lu", ns->rx_size_big); + PMD_DRV_LOG(DEBUG, "rx_undersize: %lu", ns->rx_undersize); + PMD_DRV_LOG(DEBUG, "rx_fragments: %lu", ns->rx_fragments); + PMD_DRV_LOG(DEBUG, "rx_oversize: %lu", ns->rx_oversize); + PMD_DRV_LOG(DEBUG, "rx_jabber: %lu", ns->rx_jabber); + PMD_DRV_LOG(DEBUG, "tx_size_64: %lu", ns->tx_size_64); + PMD_DRV_LOG(DEBUG, "tx_size_127: %lu", ns->tx_size_127); + PMD_DRV_LOG(DEBUG, "tx_size_255: %lu", ns->tx_size_255); + PMD_DRV_LOG(DEBUG, "tx_size_511: %lu", ns->tx_size_511); + PMD_DRV_LOG(DEBUG, "tx_size_1023: %lu", ns->tx_size_1023); + PMD_DRV_LOG(DEBUG, "tx_size_1522: %lu", ns->tx_size_1522); + PMD_DRV_LOG(DEBUG, "tx_size_big: %lu", ns->tx_size_big); + PMD_DRV_LOG(DEBUG, "mac_short_packet_dropped: %lu", + ns->mac_short_packet_dropped); + PMD_DRV_LOG(DEBUG, "checksum_error: %lu", + ns->checksum_error); + PMD_DRV_LOG(DEBUG, "fdir_match: %lu", ns->fd_sb_match); + PMD_DRV_LOG(DEBUG, "***************** PF stats end ********************"); +} + +/* Reset the statistics */ +static void +i40e_dev_stats_reset(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + + /* 
It results in reloading the start point of each counter */
+	pf->offset_loaded = false;
+}
+
+static int
+i40e_dev_queue_stats_mapping_set(__rte_unused struct rte_eth_dev *dev,
+				 __rte_unused uint16_t queue_id,
+				 __rte_unused uint8_t stat_idx,
+				 __rte_unused uint8_t is_rx)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOSYS;
+}
+
+static void
+i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+
+	dev_info->max_rx_queues = vsi->nb_qps;
+	dev_info->max_tx_queues = vsi->nb_qps;
+	dev_info->min_rx_bufsize = I40E_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
+	dev_info->max_mac_addrs = vsi->max_macaddrs;
+	dev_info->max_vfs = dev->pci_dev->max_vfs;
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->reta_size = pf->hash_lut_size;
+	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = I40E_DEFAULT_RX_PTHRESH,
+			.hthresh = I40E_DEFAULT_RX_HTHRESH,
+			.wthresh = I40E_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = I40E_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = I40E_DEFAULT_TX_PTHRESH,
+			.hthresh = I40E_DEFAULT_TX_HTHRESH,
+			.wthresh = I40E_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+			     ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+
+	if (pf->flags & I40E_FLAG_VMDQ) {
+		dev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi;
+		dev_info->vmdq_queue_base = dev_info->max_rx_queues;
+		dev_info->vmdq_queue_num = pf->vmdq_nb_qps *
+						pf->max_nb_vmdq_vsi;
+		dev_info->vmdq_pool_base = I40E_VMDQ_POOL_BASE;
+		dev_info->max_rx_queues += dev_info->vmdq_queue_num;
+		dev_info->max_tx_queues += dev_info->vmdq_queue_num;
+	}
+}
+
+static int
+i40e_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	PMD_INIT_FUNC_TRACE();
+
+	if (on)
+		return i40e_vsi_add_vlan(vsi, vlan_id);
+	else
+		return i40e_vsi_delete_vlan(vsi, vlan_id);
+}
+
+static void
+i40e_vlan_tpid_set(__rte_unused struct rte_eth_dev *dev,
+		   __rte_unused uint16_t tpid)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static void
+i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+
+	if (mask & ETH_VLAN_STRIP_MASK) {
+		/* Enable or disable VLAN stripping */
+		if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+			i40e_vsi_config_vlan_stripping(vsi, TRUE);
+		else
+			i40e_vsi_config_vlan_stripping(vsi, FALSE);
+	}
+
+	if (mask & ETH_VLAN_EXTEND_MASK) {
+		if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+			i40e_vsi_config_double_vlan(vsi, TRUE);
+		else
+			i40e_vsi_config_double_vlan(vsi, FALSE);
+	}
+}
+
+static void
+i40e_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
+			  __rte_unused uint16_t queue,
+			  __rte_unused int on)
+{
+	PMD_INIT_FUNC_TRACE();
+}
+
+static int
+static int
+i40e_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_vsi *vsi = pf->main_vsi;
+	struct rte_eth_dev_data *data = I40E_VSI_TO_DEV_DATA(vsi);
+	struct i40e_vsi_vlan_pvid_info info;
+
+	memset(&info, 0, sizeof(info));
+	info.on = on;
+	if (info.on)
+		info.config.pvid = pvid;
+	else {
+		info.config.reject.tagged =
+			data->dev_conf.txmode.hw_vlan_reject_tagged;
+		info.config.reject.untagged =
+			data->dev_conf.txmode.hw_vlan_reject_untagged;
+	}
+
+	return i40e_vsi_vlan_pvid_set(vsi, &info);
+}
+
+static int
+i40e_dev_led_on(struct rte_eth_dev *dev)
+{
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t mode = i40e_led_get(hw);
+
+	if (mode == 0)
+		i40e_led_set(hw, 0xf, true); /* 0xf means LED always on */
+
+	return 0;
+}
+
+static int
+i40e_dev_led_off(struct rte_eth_dev *dev)
+{
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t mode = i40e_led_get(hw);
+
+	if (mode != 0)
+		i40e_led_set(hw, 0, false);
+
+	return 0;
+}
+
+static int
+i40e_flow_ctrl_set(__rte_unused struct rte_eth_dev *dev,
+		   __rte_unused struct rte_eth_fc_conf *fc_conf)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOSYS;
+}
+
+static int
+i40e_priority_flow_ctrl_set(__rte_unused struct rte_eth_dev *dev,
+			    __rte_unused struct rte_eth_pfc_conf *pfc_conf)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return -ENOSYS;
+}
+
+/* Add a MAC address, and update filters */
+static void
+i40e_macaddr_add(struct rte_eth_dev *dev,
+		 struct ether_addr *mac_addr,
+		 __rte_unused uint32_t index,
+		 uint32_t pool)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_mac_filter_info mac_filter;
+	struct i40e_vsi *vsi;
+	int ret;
+
+	/* If VMDQ not enabled or configured, return */
+	if (pool != 0 && (!(pf->flags & I40E_FLAG_VMDQ) || !pf->nb_cfg_vmdq_vsi)) {
+		PMD_DRV_LOG(ERR, "VMDQ not %s, can't set mac to pool %u",
+			    pf->flags & I40E_FLAG_VMDQ ? "configured" : "enabled",
+			    pool);
+		return;
+	}
+
+	if (pool > pf->nb_cfg_vmdq_vsi) {
+		PMD_DRV_LOG(ERR, "Pool number %u invalid. Max pool is %u",
+			    pool, pf->nb_cfg_vmdq_vsi);
+		return;
+	}
+
+	(void)rte_memcpy(&mac_filter.mac_addr, mac_addr, ETHER_ADDR_LEN);
+	mac_filter.filter_type = RTE_MACVLAN_PERFECT_MATCH;
+
+	if (pool == 0)
+		vsi = pf->main_vsi;
+	else
+		vsi = pf->vmdq[pool - 1].vsi;
+
+	ret = i40e_vsi_add_mac(vsi, &mac_filter);
+	if (ret != I40E_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter");
+		return;
+	}
+}
+
+/* Remove a MAC address, and update filters */
+static void
+i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_vsi *vsi;
+	struct rte_eth_dev_data *data = dev->data;
+	struct ether_addr *macaddr;
+	int ret;
+	uint32_t i;
+	uint64_t pool_sel;
+
+	macaddr = &(data->mac_addrs[index]);
+
+	pool_sel = dev->data->mac_pool_sel[index];
+
+	for (i = 0; i < sizeof(pool_sel) * CHAR_BIT; i++) {
+		if (pool_sel & (1ULL << i)) {
+			if (i == 0)
+				vsi = pf->main_vsi;
+			else {
+				/* No VMDQ pool enabled or configured */
+				if (!(pf->flags & I40E_FLAG_VMDQ) ||
+				    (i > pf->nb_cfg_vmdq_vsi)) {
+					PMD_DRV_LOG(ERR, "No VMDQ pool enabled"
+						    "/configured");
+					return;
+				}
+				vsi = pf->vmdq[i - 1].vsi;
+			}
+			ret = i40e_vsi_delete_mac(vsi, macaddr);
+
+			if (ret) {
+				PMD_DRV_LOG(ERR, "Failed to remove MACVLAN filter");
+				return;
+			}
+		}
+	}
+}
+
+/* Set perfect match or hash match of MAC and VLAN for a VF */
+static int
+i40e_vf_mac_filter_set(struct i40e_pf *pf,
+		       struct rte_eth_mac_filter *filter,
+		       bool add)
+{
+	struct i40e_hw *hw;
+	struct i40e_mac_filter_info mac_filter;
+	struct ether_addr old_mac;
+	struct ether_addr *new_mac;
+	struct i40e_pf_vf *vf = NULL;
+	uint16_t vf_id;
+	int ret;
+
+	if (pf == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid PF argument.");
+		return -EINVAL;
+	}
+	hw = I40E_PF_TO_HW(pf);
+
+	if (filter == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid MAC filter argument.");
+		return -EINVAL;
+	}
+
+	new_mac = &filter->mac_addr;
+
+	if (is_zero_ether_addr(new_mac)) {
+		PMD_DRV_LOG(ERR, "Invalid Ethernet address.");
+		return -EINVAL;
+	}
+
+	vf_id = filter->dst_id;
+
+	if (vf_id > pf->vf_num - 1 || !pf->vfs) {
+		PMD_DRV_LOG(ERR, "Invalid argument.");
+		return -EINVAL;
+	}
+	vf = &pf->vfs[vf_id];
+
+	if (add && is_same_ether_addr(new_mac, &(pf->dev_addr))) {
+		PMD_DRV_LOG(INFO, "Ignore adding permanent MAC address.");
+		return -EINVAL;
+	}
+
+	if (add) {
+		(void)rte_memcpy(&old_mac, hw->mac.addr, ETHER_ADDR_LEN);
+		(void)rte_memcpy(hw->mac.addr, new_mac->addr_bytes,
+				ETHER_ADDR_LEN);
+		(void)rte_memcpy(&mac_filter.mac_addr, &filter->mac_addr,
+				 ETHER_ADDR_LEN);
+
+		mac_filter.filter_type = filter->filter_type;
+		ret = i40e_vsi_add_mac(vf->vsi, &mac_filter);
+		if (ret != I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Failed to add MAC filter.");
+			return -1;
+		}
+		ether_addr_copy(new_mac, &pf->dev_addr);
+	} else {
+		(void)rte_memcpy(hw->mac.addr, hw->mac.perm_addr,
+				ETHER_ADDR_LEN);
+		ret = i40e_vsi_delete_mac(vf->vsi, &filter->mac_addr);
+		if (ret != I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Failed to delete MAC filter.");
+			return -1;
+		}
+
+		/* Clear device address as it has been removed */
+		if (is_same_ether_addr(&(pf->dev_addr), new_mac))
+			memset(&pf->dev_addr, 0, sizeof(struct ether_addr));
+	}
+
+	return 0;
+}
+
+/* MAC filter handle */
+static int
+i40e_mac_filter_handle(struct rte_eth_dev *dev, enum rte_filter_op filter_op,
+		       void *arg)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct rte_eth_mac_filter *filter;
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	int ret = I40E_NOT_SUPPORTED;
+
+ filter = (struct rte_eth_mac_filter *)(arg); + + switch (filter_op) { + case RTE_ETH_FILTER_NOP: + ret = I40E_SUCCESS; + break; + case RTE_ETH_FILTER_ADD: + i40e_pf_disable_irq0(hw); + if (filter->is_vf) + ret = i40e_vf_mac_filter_set(pf, filter, 1); + i40e_pf_enable_irq0(hw); + break; + case RTE_ETH_FILTER_DELETE: + i40e_pf_disable_irq0(hw); + if (filter->is_vf) + ret = i40e_vf_mac_filter_set(pf, filter, 0); + i40e_pf_enable_irq0(hw); + break; + default: + PMD_DRV_LOG(ERR, "unknown operation %u", filter_op); + ret = I40E_ERR_PARAM; + break; + } + + return ret; +} + +static int +i40e_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t lut, l; + uint16_t i, j, lut_size = pf->hash_lut_size; + uint16_t idx, shift; + uint8_t mask; + + if (reta_size != lut_size || + reta_size > ETH_RSS_RETA_SIZE_512) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)\n", reta_size, lut_size); + return -EINVAL; + } + + for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + I40E_4_BIT_MASK); + if (!mask) + continue; + if (mask == I40E_4_BIT_MASK) + l = 0; + else + l = I40E_READ_REG(hw, I40E_PFQF_HLUT(i >> 2)); + for (j = 0, lut = 0; j < I40E_4_BIT_WIDTH; j++) { + if (mask & (0x1 << j)) + lut |= reta_conf[idx].reta[shift + j] << + (CHAR_BIT * j); + else + lut |= l & (I40E_8_BIT_MASK << (CHAR_BIT * j)); + } + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut); + } + + return 0; +} + +static int +i40e_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t lut; + uint16_t i, j, lut_size = pf->hash_lut_size; + uint16_t idx, shift; + uint8_t mask; + + if (reta_size != lut_size || + reta_size > ETH_RSS_RETA_SIZE_512) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number hardware can supported " + "(%d)\n", reta_size, lut_size); + return -EINVAL; + } + + for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) { + idx = i / RTE_RETA_GROUP_SIZE; + shift = i % RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + I40E_4_BIT_MASK); + if (!mask) + continue; + + lut = I40E_READ_REG(hw, I40E_PFQF_HLUT(i >> 2)); + for (j = 0; j < I40E_4_BIT_WIDTH; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = ((lut >> + (CHAR_BIT * j)) & I40E_8_BIT_MASK); + } + } + + return 0; +} + +/** + * i40e_allocate_dma_mem_d - specific memory alloc for shared code (base driver) + * @hw: pointer to the HW structure + * @mem: pointer to mem struct to fill out + * @size: size of memory requested + * @alignment: what to align the allocation to + **/ +enum i40e_status_code +i40e_allocate_dma_mem_d(__attribute__((unused)) struct i40e_hw *hw, + struct i40e_dma_mem *mem, + u64 size, + u32 alignment) +{ + static uint64_t id = 0; + const struct rte_memzone *mz = NULL; + char z_name[RTE_MEMZONE_NAMESIZE]; + + if (!mem) + return I40E_ERR_PARAM; + + id++; + snprintf(z_name, sizeof(z_name), "i40e_dma_%"PRIu64, id); +#ifdef RTE_LIBRTE_XEN_DOM0 + mz = 
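+/*
+ * Worked example for the RETA helpers above (illustrative numbers): each
+ * 32-bit I40E_PFQF_HLUT register holds four 8-bit LUT entries, so entry i
+ * lives in register HLUT(i >> 2). For i = 8 with a fully-set mask and
+ * reta[8..11] = {5, 6, 7, 8}, the value written to HLUT(2) is
+ * 5 | (6 << 8) | (7 << 16) | (8 << 24) = 0x08070605. With a partial mask
+ * the register is read back first, so unselected entries are preserved.
+ */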
rte_memzone_reserve_bounded(z_name, size, 0, 0, alignment, + RTE_PGSIZE_2M); +#else + mz = rte_memzone_reserve_aligned(z_name, size, 0, 0, alignment); +#endif + if (!mz) + return I40E_ERR_NO_MEMORY; + + mem->id = id; + mem->size = size; + mem->va = mz->addr; +#ifdef RTE_LIBRTE_XEN_DOM0 + mem->pa = rte_mem_phy2mch(mz->memseg_id, mz->phys_addr); +#else + mem->pa = mz->phys_addr; +#endif + + return I40E_SUCCESS; +} + +/** + * i40e_free_dma_mem_d - specific memory free for shared code (base driver) + * @hw: pointer to the HW structure + * @mem: ptr to mem struct to free + **/ +enum i40e_status_code +i40e_free_dma_mem_d(__attribute__((unused)) struct i40e_hw *hw, + struct i40e_dma_mem *mem) +{ + if (!mem || !mem->va) + return I40E_ERR_PARAM; + + mem->va = NULL; + mem->pa = (u64)0; + + return I40E_SUCCESS; +} + +/** + * i40e_allocate_virt_mem_d - specific memory alloc for shared code (base driver) + * @hw: pointer to the HW structure + * @mem: pointer to mem struct to fill out + * @size: size of memory requested + **/ +enum i40e_status_code +i40e_allocate_virt_mem_d(__attribute__((unused)) struct i40e_hw *hw, + struct i40e_virt_mem *mem, + u32 size) +{ + if (!mem) + return I40E_ERR_PARAM; + + mem->size = size; + mem->va = rte_zmalloc("i40e", size, 0); + + if (mem->va) + return I40E_SUCCESS; + else + return I40E_ERR_NO_MEMORY; +} + +/** + * i40e_free_virt_mem_d - specific memory free for shared code (base driver) + * @hw: pointer to the HW structure + * @mem: pointer to mem struct to free + **/ +enum i40e_status_code +i40e_free_virt_mem_d(__attribute__((unused)) struct i40e_hw *hw, + struct i40e_virt_mem *mem) +{ + if (!mem) + return I40E_ERR_PARAM; + + rte_free(mem->va); + mem->va = NULL; + + return I40E_SUCCESS; +} + +void +i40e_init_spinlock_d(struct i40e_spinlock *sp) +{ + rte_spinlock_init(&sp->spinlock); +} + +void +i40e_acquire_spinlock_d(struct i40e_spinlock *sp) +{ + rte_spinlock_lock(&sp->spinlock); +} + +void +i40e_release_spinlock_d(struct i40e_spinlock *sp) +{ + rte_spinlock_unlock(&sp->spinlock); +} + +void +i40e_destroy_spinlock_d(__attribute__((unused)) struct i40e_spinlock *sp) +{ + return; +} + +/** + * Get the hardware capabilities, which will be parsed + * and saved into struct i40e_hw. 
+ */ +static int +i40e_get_cap(struct i40e_hw *hw) +{ + struct i40e_aqc_list_capabilities_element_resp *buf; + uint16_t len, size = 0; + int ret; + + /* Calculate a huge enough buff for saving response data temporarily */ + len = sizeof(struct i40e_aqc_list_capabilities_element_resp) * + I40E_MAX_CAP_ELE_NUM; + buf = rte_zmalloc("i40e", len, 0); + if (!buf) { + PMD_DRV_LOG(ERR, "Failed to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + /* Get, parse the capabilities and save it to hw */ + ret = i40e_aq_discover_capabilities(hw, buf, len, &size, + i40e_aqc_opc_list_func_capabilities, NULL); + if (ret != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to discover capabilities"); + + /* Free the temporary buffer after being used */ + rte_free(buf); + + return ret; +} + +static int +i40e_pf_parameter_init(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + uint16_t sum_queues = 0, sum_vsis, left_queues; + + /* First check if FW support SRIOV */ + if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) { + PMD_INIT_LOG(ERR, "HW configuration doesn't support SRIOV"); + return -EINVAL; + } + + pf->flags = I40E_FLAG_HEADER_SPLIT_DISABLED; + pf->max_num_vsi = RTE_MIN(hw->func_caps.num_vsis, I40E_MAX_NUM_VSIS); + PMD_INIT_LOG(INFO, "Max supported VSIs:%u", pf->max_num_vsi); + /* Allocate queues for pf */ + if (hw->func_caps.rss) { + pf->flags |= I40E_FLAG_RSS; + pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp, + (uint32_t)(1 << hw->func_caps.rss_table_entry_width)); + pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps); + } else + pf->lan_nb_qps = 1; + sum_queues = pf->lan_nb_qps; + /* Default VSI is not counted in */ + sum_vsis = 0; + PMD_INIT_LOG(INFO, "PF queue pairs:%u", pf->lan_nb_qps); + + if (hw->func_caps.sr_iov_1_1 && dev->pci_dev->max_vfs) { + pf->flags |= I40E_FLAG_SRIOV; + pf->vf_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF; + if (dev->pci_dev->max_vfs > hw->func_caps.num_vfs) { + PMD_INIT_LOG(ERR, "Config VF number %u, " + "max supported %u.", + dev->pci_dev->max_vfs, + hw->func_caps.num_vfs); + return -EINVAL; + } + if (pf->vf_nb_qps > I40E_MAX_QP_NUM_PER_VF) { + PMD_INIT_LOG(ERR, "FVL VF queue %u, " + "max support %u queues.", + pf->vf_nb_qps, I40E_MAX_QP_NUM_PER_VF); + return -EINVAL; + } + pf->vf_num = dev->pci_dev->max_vfs; + sum_queues += pf->vf_nb_qps * pf->vf_num; + sum_vsis += pf->vf_num; + PMD_INIT_LOG(INFO, "Max VF num:%u each has queue pairs:%u", + pf->vf_num, pf->vf_nb_qps); + } else + pf->vf_num = 0; + + if (hw->func_caps.vmdq) { + pf->flags |= I40E_FLAG_VMDQ; + pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM; + pf->max_nb_vmdq_vsi = 1; + /* + * If VMDQ available, assume a single VSI can be created. Will adjust + * later. + */ + sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi; + sum_vsis += pf->max_nb_vmdq_vsi; + } else { + pf->vmdq_nb_qps = 0; + pf->max_nb_vmdq_vsi = 0; + } + pf->nb_cfg_vmdq_vsi = 0; + + if (hw->func_caps.fd) { + pf->flags |= I40E_FLAG_FDIR; + pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR; + /** + * Each flow director consumes one VSI and one queue, + * but can't calculate out predictably here. 
+ */ + } + + if (sum_vsis > pf->max_num_vsi || + sum_queues > hw->func_caps.num_rx_qp) { + PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied"); + PMD_INIT_LOG(ERR, "Max VSIs: %u, asked:%u", + pf->max_num_vsi, sum_vsis); + PMD_INIT_LOG(ERR, "Total queue pairs:%u, asked:%u", + hw->func_caps.num_rx_qp, sum_queues); + return -EINVAL; + } + + /* Adjust VMDQ setting to support as many VMs as possible */ + if (pf->flags & I40E_FLAG_VMDQ) { + left_queues = hw->func_caps.num_rx_qp - sum_queues; + + pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps, + pf->max_num_vsi - sum_vsis); + + /* Limit the max VMDQ number that rte_ether that can support */ + pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi, + ETH_64_POOLS - 1); + + PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u", + pf->max_nb_vmdq_vsi); + PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps); + } + + /* Each VSI occupy 1 MSIX interrupt at least, plus IRQ0 for misc intr + * cause */ + if (sum_vsis > hw->func_caps.num_msix_vectors - 1) { + PMD_INIT_LOG(ERR, "Too many VSIs(%u), MSIX intr(%u) not enough", + sum_vsis, hw->func_caps.num_msix_vectors); + return -EINVAL; + } + return I40E_SUCCESS; +} + +static int +i40e_pf_get_switch_config(struct i40e_pf *pf) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + struct i40e_aqc_get_switch_config_resp *switch_config; + struct i40e_aqc_switch_config_element_resp *element; + uint16_t start_seid = 0, num_reported; + int ret; + + switch_config = (struct i40e_aqc_get_switch_config_resp *)\ + rte_zmalloc("i40e", I40E_AQ_LARGE_BUF, 0); + if (!switch_config) { + PMD_DRV_LOG(ERR, "Failed to allocated memory"); + return -ENOMEM; + } + + /* Get the switch configurations */ + ret = i40e_aq_get_switch_config(hw, switch_config, + I40E_AQ_LARGE_BUF, &start_seid, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to get switch configurations"); + goto fail; + } + num_reported = rte_le_to_cpu_16(switch_config->header.num_reported); + if (num_reported != 1) { /* The number should be 1 */ + PMD_DRV_LOG(ERR, "Wrong number of switch config reported"); + goto fail; + } + + /* Parse the switch configuration elements */ + element = &(switch_config->element[0]); + if (element->element_type == I40E_SWITCH_ELEMENT_TYPE_VSI) { + pf->mac_seid = rte_le_to_cpu_16(element->uplink_seid); + pf->main_vsi_seid = rte_le_to_cpu_16(element->seid); + } else + PMD_DRV_LOG(INFO, "Unknown element type"); + +fail: + rte_free(switch_config); + + return ret; +} + +static int +i40e_res_pool_init (struct i40e_res_pool_info *pool, uint32_t base, + uint32_t num) +{ + struct pool_entry *entry; + + if (pool == NULL || num == 0) + return -EINVAL; + + entry = rte_zmalloc("i40e", sizeof(*entry), 0); + if (entry == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for resource pool"); + return -ENOMEM; + } + + /* queue heap initialize */ + pool->num_free = num; + pool->num_alloc = 0; + pool->base = base; + LIST_INIT(&pool->alloc_list); + LIST_INIT(&pool->free_list); + + /* Initialize element */ + entry->base = 0; + entry->len = num; + + LIST_INSERT_HEAD(&pool->free_list, entry, next); + return 0; +} + +static void +i40e_res_pool_destroy(struct i40e_res_pool_info *pool) +{ + struct pool_entry *entry; + + if (pool == NULL) + return; + + LIST_FOREACH(entry, &pool->alloc_list, next) { + LIST_REMOVE(entry, next); + rte_free(entry); + } + + LIST_FOREACH(entry, &pool->free_list, next) { + LIST_REMOVE(entry, next); + rte_free(entry); + } + + pool->num_free = 0; + pool->num_alloc = 0; + pool->base = 0; + LIST_INIT(&pool->alloc_list); + 
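+	/*
+	 * Worked example for the accounting in i40e_pf_parameter_init()
+	 * above, using hypothetical capability values: with num_rx_qp = 128,
+	 * an RSS LAN VSI of 64 queue pairs, 4 VFs at 4 queue pairs each and
+	 * VMDQ pools of 8 queue pairs, sum_queues = 64 + 16 + 8 = 88,
+	 * leaving 40 queue pairs from which up to 5 extra VMDQ VSIs can
+	 * later be carved out by the adjustment step.
+	 */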
LIST_INIT(&pool->free_list); +} + +static int +i40e_res_pool_free(struct i40e_res_pool_info *pool, + uint32_t base) +{ + struct pool_entry *entry, *next, *prev, *valid_entry = NULL; + uint32_t pool_offset; + int insert; + + if (pool == NULL) { + PMD_DRV_LOG(ERR, "Invalid parameter"); + return -EINVAL; + } + + pool_offset = base - pool->base; + /* Lookup in alloc list */ + LIST_FOREACH(entry, &pool->alloc_list, next) { + if (entry->base == pool_offset) { + valid_entry = entry; + LIST_REMOVE(entry, next); + break; + } + } + + /* Not find, return */ + if (valid_entry == NULL) { + PMD_DRV_LOG(ERR, "Failed to find entry"); + return -EINVAL; + } + + /** + * Found it, move it to free list and try to merge. + * In order to make merge easier, always sort it by qbase. + * Find adjacent prev and last entries. + */ + prev = next = NULL; + LIST_FOREACH(entry, &pool->free_list, next) { + if (entry->base > valid_entry->base) { + next = entry; + break; + } + prev = entry; + } + + insert = 0; + /* Try to merge with next one*/ + if (next != NULL) { + /* Merge with next one */ + if (valid_entry->base + valid_entry->len == next->base) { + next->base = valid_entry->base; + next->len += valid_entry->len; + rte_free(valid_entry); + valid_entry = next; + insert = 1; + } + } + + if (prev != NULL) { + /* Merge with previous one */ + if (prev->base + prev->len == valid_entry->base) { + prev->len += valid_entry->len; + /* If it merge with next one, remove next node */ + if (insert == 1) { + LIST_REMOVE(valid_entry, next); + rte_free(valid_entry); + } else { + rte_free(valid_entry); + insert = 1; + } + } + } + + /* Not find any entry to merge, insert */ + if (insert == 0) { + if (prev != NULL) + LIST_INSERT_AFTER(prev, valid_entry, next); + else if (next != NULL) + LIST_INSERT_BEFORE(next, valid_entry, next); + else /* It's empty list, insert to head */ + LIST_INSERT_HEAD(&pool->free_list, valid_entry, next); + } + + pool->num_free += valid_entry->len; + pool->num_alloc -= valid_entry->len; + + return 0; +} + +static int +i40e_res_pool_alloc(struct i40e_res_pool_info *pool, + uint16_t num) +{ + struct pool_entry *entry, *valid_entry; + + if (pool == NULL || num == 0) { + PMD_DRV_LOG(ERR, "Invalid parameter"); + return -EINVAL; + } + + if (pool->num_free < num) { + PMD_DRV_LOG(ERR, "No resource. ask:%u, available:%u", + num, pool->num_free); + return -ENOMEM; + } + + valid_entry = NULL; + /* Lookup in free list and find most fit one */ + LIST_FOREACH(entry, &pool->free_list, next) { + if (entry->len >= num) { + /* Find best one */ + if (entry->len == num) { + valid_entry = entry; + break; + } + if (valid_entry == NULL || valid_entry->len > entry->len) + valid_entry = entry; + } + } + + /* Not find one to satisfy the request, return */ + if (valid_entry == NULL) { + PMD_DRV_LOG(ERR, "No valid entry found"); + return -ENOMEM; + } + /** + * The entry have equal queue number as requested, + * remove it from alloc_list. + */ + if (valid_entry->len == num) { + LIST_REMOVE(valid_entry, next); + } else { + /** + * The entry have more numbers than requested, + * create a new entry for alloc_list and minus its + * queue base and number in free_list. 
+ */ + entry = rte_zmalloc("res_pool", sizeof(*entry), 0); + if (entry == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for " + "resource pool"); + return -ENOMEM; + } + entry->base = valid_entry->base; + entry->len = num; + valid_entry->base += num; + valid_entry->len -= num; + valid_entry = entry; + } + + /* Insert it into alloc list, not sorted */ + LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next); + + pool->num_free -= valid_entry->len; + pool->num_alloc += valid_entry->len; + + return (valid_entry->base + pool->base); +} + +/** + * bitmap_is_subset - Check whether src2 is subset of src1 + **/ +static inline int +bitmap_is_subset(uint8_t src1, uint8_t src2) +{ + return !((src1 ^ src2) & src2); +} + +static int +validate_tcmap_parameter(struct i40e_vsi *vsi, uint8_t enabled_tcmap) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + + /* If DCB is not supported, only default TC is supported */ + if (!hw->func_caps.dcb && enabled_tcmap != I40E_DEFAULT_TCMAP) { + PMD_DRV_LOG(ERR, "DCB is not enabled, only TC0 is supported"); + return -EINVAL; + } + + if (!bitmap_is_subset(hw->func_caps.enabled_tcmap, enabled_tcmap)) { + PMD_DRV_LOG(ERR, "Enabled TC map 0x%x not applicable to " + "HW support 0x%x", hw->func_caps.enabled_tcmap, + enabled_tcmap); + return -EINVAL; + } + return I40E_SUCCESS; +} + +int +i40e_vsi_vlan_pvid_set(struct i40e_vsi *vsi, + struct i40e_vsi_vlan_pvid_info *info) +{ + struct i40e_hw *hw; + struct i40e_vsi_context ctxt; + uint8_t vlan_flags = 0; + int ret; + + if (vsi == NULL || info == NULL) { + PMD_DRV_LOG(ERR, "invalid parameters"); + return I40E_ERR_PARAM; + } + + if (info->on) { + vsi->info.pvid = info->config.pvid; + /** + * If insert pvid is enabled, only tagged pkts are + * allowed to be sent out. + */ + vlan_flags |= I40E_AQ_VSI_PVLAN_INSERT_PVID | + I40E_AQ_VSI_PVLAN_MODE_TAGGED; + } else { + vsi->info.pvid = 0; + if (info->config.reject.tagged == 0) + vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_TAGGED; + + if (info->config.reject.untagged == 0) + vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_UNTAGGED; + } + vsi->info.port_vlan_flags &= ~(I40E_AQ_VSI_PVLAN_INSERT_PVID | + I40E_AQ_VSI_PVLAN_MODE_MASK); + vsi->info.port_vlan_flags |= vlan_flags; + vsi->info.valid_sections = + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); + memset(&ctxt, 0, sizeof(ctxt)); + (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info)); + ctxt.seid = vsi->seid; + + hw = I40E_VSI_TO_HW(vsi); + ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL); + if (ret != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to update VSI params"); + + return ret; +} + +static int +i40e_vsi_update_tc_bandwidth(struct i40e_vsi *vsi, uint8_t enabled_tcmap) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + int i, ret; + struct i40e_aqc_configure_vsi_tc_bw_data tc_bw_data; + + ret = validate_tcmap_parameter(vsi, enabled_tcmap); + if (ret != I40E_SUCCESS) + return ret; + + if (!vsi->seid) { + PMD_DRV_LOG(ERR, "seid not valid"); + return -EINVAL; + } + + memset(&tc_bw_data, 0, sizeof(tc_bw_data)); + tc_bw_data.tc_valid_bits = enabled_tcmap; + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + tc_bw_data.tc_bw_credits[i] = + (enabled_tcmap & (1 << i)) ? 
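+		/*
+		 * Example for bitmap_is_subset() above: with a
+		 * hardware-enabled TC map src1 = 0x3 (TC0, TC1), a requested
+		 * map src2 = 0x5 (TC0, TC2) is rejected, since
+		 * !((0x3 ^ 0x5) & 0x5) = !(0x6 & 0x5) = !(0x4) = 0. Each TC
+		 * enabled in a validated map then receives one BW credit in
+		 * tc_bw_credits[], i.e. an equal share.
+		 */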
1 : 0; + + ret = i40e_aq_config_vsi_tc_bw(hw, vsi->seid, &tc_bw_data, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to configure TC BW"); + return ret; + } + + (void)rte_memcpy(vsi->info.qs_handle, tc_bw_data.qs_handles, + sizeof(vsi->info.qs_handle)); + return I40E_SUCCESS; +} + +static int +i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi, + struct i40e_aqc_vsi_properties_data *info, + uint8_t enabled_tcmap) +{ + int ret, total_tc = 0, i; + uint16_t qpnum_per_tc, bsf, qp_idx; + + ret = validate_tcmap_parameter(vsi, enabled_tcmap); + if (ret != I40E_SUCCESS) + return ret; + + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) + if (enabled_tcmap & (1 << i)) + total_tc++; + vsi->enabled_tc = enabled_tcmap; + + /* Number of queues per enabled TC */ + qpnum_per_tc = i40e_align_floor(vsi->nb_qps / total_tc); + qpnum_per_tc = RTE_MIN(qpnum_per_tc, I40E_MAX_Q_PER_TC); + bsf = rte_bsf32(qpnum_per_tc); + + /* Adjust the queue number to actual queues that can be applied */ + vsi->nb_qps = qpnum_per_tc * total_tc; + + /** + * Configure TC and queue mapping parameters, for enabled TC, + * allocate qpnum_per_tc queues to this traffic. For disabled TC, + * default queue will serve it. + */ + qp_idx = 0; + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + if (vsi->enabled_tc & (1 << i)) { + info->tc_mapping[i] = rte_cpu_to_le_16((qp_idx << + I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) | + (bsf << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT)); + qp_idx += qpnum_per_tc; + } else + info->tc_mapping[i] = 0; + } + + /* Associate queue number with VSI */ + if (vsi->type == I40E_VSI_SRIOV) { + info->mapping_flags |= + rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_NONCONTIG); + for (i = 0; i < vsi->nb_qps; i++) + info->queue_mapping[i] = + rte_cpu_to_le_16(vsi->base_queue + i); + } else { + info->mapping_flags |= + rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_CONTIG); + info->queue_mapping[0] = rte_cpu_to_le_16(vsi->base_queue); + } + info->valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_QUEUE_MAP_VALID); + + return I40E_SUCCESS; +} + +static int +i40e_veb_release(struct i40e_veb *veb) +{ + struct i40e_vsi *vsi; + struct i40e_hw *hw; + + if (veb == NULL || veb->associate_vsi == NULL) + return -EINVAL; + + if (!TAILQ_EMPTY(&veb->head)) { + PMD_DRV_LOG(ERR, "VEB still has VSI attached, can't remove"); + return -EACCES; + } + + vsi = veb->associate_vsi; + hw = I40E_VSI_TO_HW(vsi); + + vsi->uplink_seid = veb->uplink_seid; + i40e_aq_delete_element(hw, veb->seid, NULL); + rte_free(veb); + vsi->veb = NULL; + return I40E_SUCCESS; +} + +/* Setup a veb */ +static struct i40e_veb * +i40e_veb_setup(struct i40e_pf *pf, struct i40e_vsi *vsi) +{ + struct i40e_veb *veb; + int ret; + struct i40e_hw *hw; + + if (NULL == pf || vsi == NULL) { + PMD_DRV_LOG(ERR, "veb setup failed, " + "associated VSI shouldn't null"); + return NULL; + } + hw = I40E_PF_TO_HW(pf); + + veb = rte_zmalloc("i40e_veb", sizeof(struct i40e_veb), 0); + if (!veb) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for veb"); + goto fail; + } + + veb->associate_vsi = vsi; + TAILQ_INIT(&veb->head); + veb->uplink_seid = vsi->uplink_seid; + + ret = i40e_aq_add_veb(hw, veb->uplink_seid, vsi->seid, + I40E_DEFAULT_TCMAP, false, false, &veb->seid, NULL); + + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Add veb failed, aq_err: %d", + hw->aq.asq_last_status); + goto fail; + } + + /* get statistics index */ + ret = i40e_aq_get_veb_parameters(hw, veb->seid, NULL, NULL, + &veb->stats_idx, NULL, NULL, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Get veb statics index failed, 
aq_err: %d", + hw->aq.asq_last_status); + goto fail; + } + + /* Get VEB bandwidth, to be implemented */ + /* Now associated vsi binding to the VEB, set uplink to this VEB */ + vsi->uplink_seid = veb->seid; + + return veb; +fail: + rte_free(veb); + return NULL; +} + +int +i40e_vsi_release(struct i40e_vsi *vsi) +{ + struct i40e_pf *pf; + struct i40e_hw *hw; + struct i40e_vsi_list *vsi_list; + int ret; + struct i40e_mac_filter *f; + + if (!vsi) + return I40E_SUCCESS; + + pf = I40E_VSI_TO_PF(vsi); + hw = I40E_VSI_TO_HW(vsi); + + /* VSI has child to attach, release child first */ + if (vsi->veb) { + TAILQ_FOREACH(vsi_list, &vsi->veb->head, list) { + if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS) + return -1; + TAILQ_REMOVE(&vsi->veb->head, vsi_list, list); + } + i40e_veb_release(vsi->veb); + } + + /* Remove all macvlan filters of the VSI */ + i40e_vsi_remove_all_macvlan_filter(vsi); + TAILQ_FOREACH(f, &vsi->mac_list, next) + rte_free(f); + + if (vsi->type != I40E_VSI_MAIN) { + /* Remove vsi from parent's sibling list */ + if (vsi->parent_vsi == NULL || vsi->parent_vsi->veb == NULL) { + PMD_DRV_LOG(ERR, "VSI's parent VSI is NULL"); + return I40E_ERR_PARAM; + } + TAILQ_REMOVE(&vsi->parent_vsi->veb->head, + &vsi->sib_vsi_list, list); + + /* Remove all switch element of the VSI */ + ret = i40e_aq_delete_element(hw, vsi->seid, NULL); + if (ret != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to delete element"); + } + i40e_res_pool_free(&pf->qp_pool, vsi->base_queue); + + if (vsi->type != I40E_VSI_SRIOV) + i40e_res_pool_free(&pf->msix_pool, vsi->msix_intr); + rte_free(vsi); + + return I40E_SUCCESS; +} + +static int +i40e_update_default_filter_setting(struct i40e_vsi *vsi) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + struct i40e_aqc_remove_macvlan_element_data def_filter; + struct i40e_mac_filter_info filter; + int ret; + + if (vsi->type != I40E_VSI_MAIN) + return I40E_ERR_CONFIG; + memset(&def_filter, 0, sizeof(def_filter)); + (void)rte_memcpy(def_filter.mac_addr, hw->mac.perm_addr, + ETH_ADDR_LEN); + def_filter.vlan_tag = 0; + def_filter.flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH | + I40E_AQC_MACVLAN_DEL_IGNORE_VLAN; + ret = i40e_aq_remove_macvlan(hw, vsi->seid, &def_filter, 1, NULL); + if (ret != I40E_SUCCESS) { + struct i40e_mac_filter *f; + struct ether_addr *mac; + + PMD_DRV_LOG(WARNING, "Cannot remove the default " + "macvlan filter"); + /* It needs to add the permanent mac into mac list */ + f = rte_zmalloc("macv_filter", sizeof(*f), 0); + if (f == NULL) { + PMD_DRV_LOG(ERR, "failed to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + mac = &f->mac_info.mac_addr; + (void)rte_memcpy(&mac->addr_bytes, hw->mac.perm_addr, + ETH_ADDR_LEN); + f->mac_info.filter_type = RTE_MACVLAN_PERFECT_MATCH; + TAILQ_INSERT_TAIL(&vsi->mac_list, f, next); + vsi->mac_num++; + + return ret; + } + (void)rte_memcpy(&filter.mac_addr, + (struct ether_addr *)(hw->mac.perm_addr), ETH_ADDR_LEN); + filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; + return i40e_vsi_add_mac(vsi, &filter); +} + +static int +i40e_vsi_dump_bw_config(struct i40e_vsi *vsi) +{ + struct i40e_aqc_query_vsi_bw_config_resp bw_config; + struct i40e_aqc_query_vsi_ets_sla_config_resp ets_sla_config; + struct i40e_hw *hw = &vsi->adapter->hw; + i40e_status ret; + int i; + + memset(&bw_config, 0, sizeof(bw_config)); + ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "VSI failed to get bandwidth configuration %u", + hw->aq.asq_last_status); + return ret; + } + + memset(&ets_sla_config, 
0, sizeof(ets_sla_config)); + ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid, + &ets_sla_config, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "VSI failed to get TC bandwdith " + "configuration %u", hw->aq.asq_last_status); + return ret; + } + + /* Not store the info yet, just print out */ + PMD_DRV_LOG(INFO, "VSI bw limit:%u", bw_config.port_bw_limit); + PMD_DRV_LOG(INFO, "VSI max_bw:%u", bw_config.max_bw); + for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { + PMD_DRV_LOG(INFO, "\tVSI TC%u:share credits %u", i, + ets_sla_config.share_credits[i]); + PMD_DRV_LOG(INFO, "\tVSI TC%u:credits %u", i, + rte_le_to_cpu_16(ets_sla_config.credits[i])); + PMD_DRV_LOG(INFO, "\tVSI TC%u: max credits: %u", i, + rte_le_to_cpu_16(ets_sla_config.credits[i / 4]) >> + (i * 4)); + } + + return 0; +} + +/* Setup a VSI */ +struct i40e_vsi * +i40e_vsi_setup(struct i40e_pf *pf, + enum i40e_vsi_type type, + struct i40e_vsi *uplink_vsi, + uint16_t user_param) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + struct i40e_vsi *vsi; + struct i40e_mac_filter_info filter; + int ret; + struct i40e_vsi_context ctxt; + struct ether_addr broadcast = + {.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}}; + + if (type != I40E_VSI_MAIN && uplink_vsi == NULL) { + PMD_DRV_LOG(ERR, "VSI setup failed, " + "VSI link shouldn't be NULL"); + return NULL; + } + + if (type == I40E_VSI_MAIN && uplink_vsi != NULL) { + PMD_DRV_LOG(ERR, "VSI setup failed, MAIN VSI " + "uplink VSI should be NULL"); + return NULL; + } + + /* If uplink vsi didn't setup VEB, create one first */ + if (type != I40E_VSI_MAIN && uplink_vsi->veb == NULL) { + uplink_vsi->veb = i40e_veb_setup(pf, uplink_vsi); + + if (NULL == uplink_vsi->veb) { + PMD_DRV_LOG(ERR, "VEB setup failed"); + return NULL; + } + } + + vsi = rte_zmalloc("i40e_vsi", sizeof(struct i40e_vsi), 0); + if (!vsi) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for vsi"); + return NULL; + } + TAILQ_INIT(&vsi->mac_list); + vsi->type = type; + vsi->adapter = I40E_PF_TO_ADAPTER(pf); + vsi->max_macaddrs = I40E_NUM_MACADDR_MAX; + vsi->parent_vsi = uplink_vsi; + vsi->user_param = user_param; + /* Allocate queues */ + switch (vsi->type) { + case I40E_VSI_MAIN : + vsi->nb_qps = pf->lan_nb_qps; + break; + case I40E_VSI_SRIOV : + vsi->nb_qps = pf->vf_nb_qps; + break; + case I40E_VSI_VMDQ2: + vsi->nb_qps = pf->vmdq_nb_qps; + break; + case I40E_VSI_FDIR: + vsi->nb_qps = pf->fdir_nb_qps; + break; + default: + goto fail_mem; + } + /* + * The filter status descriptor is reported in rx queue 0, + * while the tx queue for fdir filter programming has no + * such constraints, can be non-zero queues. + * To simplify it, choose FDIR vsi use queue 0 pair. 
+ * To make sure it will use queue 0 pair, queue allocation + * need be done before this function is called + */ + if (type != I40E_VSI_FDIR) { + ret = i40e_res_pool_alloc(&pf->qp_pool, vsi->nb_qps); + if (ret < 0) { + PMD_DRV_LOG(ERR, "VSI %d allocate queue failed %d", + vsi->seid, ret); + goto fail_mem; + } + vsi->base_queue = ret; + } else + vsi->base_queue = I40E_FDIR_QUEUE_ID; + + /* VF has MSIX interrupt in VF range, don't allocate here */ + if (type != I40E_VSI_SRIOV) { + ret = i40e_res_pool_alloc(&pf->msix_pool, 1); + if (ret < 0) { + PMD_DRV_LOG(ERR, "VSI %d get heap failed %d", vsi->seid, ret); + goto fail_queue_alloc; + } + vsi->msix_intr = ret; + } else + vsi->msix_intr = 0; + /* Add VSI */ + if (type == I40E_VSI_MAIN) { + /* For main VSI, no need to add since it's default one */ + vsi->uplink_seid = pf->mac_seid; + vsi->seid = pf->main_vsi_seid; + /* Bind queues with specific MSIX interrupt */ + /** + * Needs 2 interrupt at least, one for misc cause which will + * enabled from OS side, Another for queues binding the + * interrupt from device side only. + */ + + /* Get default VSI parameters from hardware */ + memset(&ctxt, 0, sizeof(ctxt)); + ctxt.seid = vsi->seid; + ctxt.pf_num = hw->pf_id; + ctxt.uplink_seid = vsi->uplink_seid; + ctxt.vf_num = 0; + ret = i40e_aq_get_vsi_params(hw, &ctxt, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to get VSI params"); + goto fail_msix_alloc; + } + (void)rte_memcpy(&vsi->info, &ctxt.info, + sizeof(struct i40e_aqc_vsi_properties_data)); + vsi->vsi_id = ctxt.vsi_number; + vsi->info.valid_sections = 0; + + /* Configure tc, enabled TC0 only */ + if (i40e_vsi_update_tc_bandwidth(vsi, I40E_DEFAULT_TCMAP) != + I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to update TC bandwidth"); + goto fail_msix_alloc; + } + + /* TC, queue mapping */ + memset(&ctxt, 0, sizeof(ctxt)); + vsi->info.valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); + vsi->info.port_vlan_flags = I40E_AQ_VSI_PVLAN_MODE_ALL | + I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH; + (void)rte_memcpy(&ctxt.info, &vsi->info, + sizeof(struct i40e_aqc_vsi_properties_data)); + ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, + I40E_DEFAULT_TCMAP); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to configure " + "TC queue mapping"); + goto fail_msix_alloc; + } + ctxt.seid = vsi->seid; + ctxt.pf_num = hw->pf_id; + ctxt.uplink_seid = vsi->uplink_seid; + ctxt.vf_num = 0; + + /* Update VSI parameters */ + ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to update VSI params"); + goto fail_msix_alloc; + } + + (void)rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping, + sizeof(vsi->info.tc_mapping)); + (void)rte_memcpy(&vsi->info.queue_mapping, + &ctxt.info.queue_mapping, + sizeof(vsi->info.queue_mapping)); + vsi->info.mapping_flags = ctxt.info.mapping_flags; + vsi->info.valid_sections = 0; + + (void)rte_memcpy(pf->dev_addr.addr_bytes, hw->mac.perm_addr, + ETH_ADDR_LEN); + + /** + * Updating default filter settings are necessary to prevent + * reception of tagged packets. + * Some old firmware configurations load a default macvlan + * filter which accepts both tagged and untagged packets. + * The updating is to use a normal filter instead if needed. + * For NVM 4.2.2 or after, the updating is not needed anymore. + * The firmware with correct configurations load the default + * macvlan filter which is expected and cannot be removed. 
+ */ + i40e_update_default_filter_setting(vsi); + } else if (type == I40E_VSI_SRIOV) { + memset(&ctxt, 0, sizeof(ctxt)); + /** + * For other VSI, the uplink_seid equals to uplink VSI's + * uplink_seid since they share same VEB + */ + vsi->uplink_seid = uplink_vsi->uplink_seid; + ctxt.pf_num = hw->pf_id; + ctxt.vf_num = hw->func_caps.vf_base_id + user_param; + ctxt.uplink_seid = vsi->uplink_seid; + ctxt.connection_type = 0x1; + ctxt.flags = I40E_AQ_VSI_TYPE_VF; + + /** + * Do not configure switch ID to enable VEB switch by + * I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB. Because in Fortville, + * if the source mac address of packet sent from VF is not + * listed in the VEB's mac table, the VEB will switch the + * packet back to the VF. Need to enable it when HW issue + * is fixed. + */ + + /* Configure port/vlan */ + ctxt.info.valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); + ctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL; + ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, + I40E_DEFAULT_TCMAP); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to configure " + "TC queue mapping"); + goto fail_msix_alloc; + } + ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP; + ctxt.info.valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID); + /** + * Since VSI is not created yet, only configure parameter, + * will add vsi below. + */ + } else if (type == I40E_VSI_VMDQ2) { + memset(&ctxt, 0, sizeof(ctxt)); + /* + * For other VSI, the uplink_seid equals to uplink VSI's + * uplink_seid since they share same VEB + */ + vsi->uplink_seid = uplink_vsi->uplink_seid; + ctxt.pf_num = hw->pf_id; + ctxt.vf_num = 0; + ctxt.uplink_seid = vsi->uplink_seid; + ctxt.connection_type = 0x1; + ctxt.flags = I40E_AQ_VSI_TYPE_VMDQ2; + + ctxt.info.valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SWITCH_VALID); + /* user_param carries flag to enable loop back */ + if (user_param) { + ctxt.info.switch_id = + rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB); + ctxt.info.switch_id |= + rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB); + } + + /* Configure port/vlan */ + ctxt.info.valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); + ctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL; + ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, + I40E_DEFAULT_TCMAP); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to configure " + "TC queue mapping"); + goto fail_msix_alloc; + } + ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP; + ctxt.info.valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID); + } else if (type == I40E_VSI_FDIR) { + memset(&ctxt, 0, sizeof(ctxt)); + vsi->uplink_seid = uplink_vsi->uplink_seid; + ctxt.pf_num = hw->pf_id; + ctxt.vf_num = 0; + ctxt.uplink_seid = vsi->uplink_seid; + ctxt.connection_type = 0x1; /* regular data port */ + ctxt.flags = I40E_AQ_VSI_TYPE_PF; + ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, + I40E_DEFAULT_TCMAP); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to configure " + "TC queue mapping."); + goto fail_msix_alloc; + } + ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP; + ctxt.info.valid_sections |= + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID); + } else { + PMD_DRV_LOG(ERR, "VSI: Not support other type VSI yet"); + goto fail_msix_alloc; + } + + if (vsi->type != I40E_VSI_MAIN) { + ret = i40e_aq_add_vsi(hw, &ctxt, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "add vsi failed, aq_err=%d", + hw->aq.asq_last_status); + goto fail_msix_alloc; + } + memcpy(&vsi->info, &ctxt.info, 
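+		/*
+		 * Illustrative call sites for i40e_vsi_setup(): the main VSI
+		 * is created in i40e_pf_setup() as
+		 * i40e_vsi_setup(pf, I40E_VSI_MAIN, NULL, 0), while
+		 * i40e_vmdq_setup() below creates each pool with
+		 * i40e_vsi_setup(pf, I40E_VSI_VMDQ2, pf->main_vsi,
+		 *		  vmdq_conf->enable_loop_back).
+		 */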
sizeof(ctxt.info)); + vsi->info.valid_sections = 0; + vsi->seid = ctxt.seid; + vsi->vsi_id = ctxt.vsi_number; + vsi->sib_vsi_list.vsi = vsi; + TAILQ_INSERT_TAIL(&uplink_vsi->veb->head, + &vsi->sib_vsi_list, list); + } + + /* MAC/VLAN configuration */ + (void)rte_memcpy(&filter.mac_addr, &broadcast, ETHER_ADDR_LEN); + filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; + + ret = i40e_vsi_add_mac(vsi, &filter); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter"); + goto fail_msix_alloc; + } + + /* Get VSI BW information */ + i40e_vsi_dump_bw_config(vsi); + return vsi; +fail_msix_alloc: + i40e_res_pool_free(&pf->msix_pool,vsi->msix_intr); +fail_queue_alloc: + i40e_res_pool_free(&pf->qp_pool,vsi->base_queue); +fail_mem: + rte_free(vsi); + return NULL; +} + +/* Configure vlan stripping on or off */ +int +i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + struct i40e_vsi_context ctxt; + uint8_t vlan_flags; + int ret = I40E_SUCCESS; + + /* Check if it has been already on or off */ + if (vsi->info.valid_sections & + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID)) { + if (on) { + if ((vsi->info.port_vlan_flags & + I40E_AQ_VSI_PVLAN_EMOD_MASK) == 0) + return 0; /* already on */ + } else { + if ((vsi->info.port_vlan_flags & + I40E_AQ_VSI_PVLAN_EMOD_MASK) == + I40E_AQ_VSI_PVLAN_EMOD_MASK) + return 0; /* already off */ + } + } + + if (on) + vlan_flags = I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH; + else + vlan_flags = I40E_AQ_VSI_PVLAN_EMOD_NOTHING; + vsi->info.valid_sections = + rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); + vsi->info.port_vlan_flags &= ~(I40E_AQ_VSI_PVLAN_EMOD_MASK); + vsi->info.port_vlan_flags |= vlan_flags; + ctxt.seid = vsi->seid; + (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info)); + ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL); + if (ret) + PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping", + on ? 
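+	/*
+	 * Going by the field names (an assumption, not stated in this
+	 * patch): the EMOD field selects how received VLAN tags are
+	 * handled; EMOD_STR_BOTH strips the tag and reports it in the RX
+	 * descriptor, while EMOD_NOTHING leaves the packet untouched, so
+	 * toggling between the two implements the ethdev hw_vlan_strip
+	 * option.
+	 */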
"enable" : "disable"); + + return ret; +} + +static int +i40e_dev_init_vlan(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + int ret; + + /* Apply vlan offload setting */ + i40e_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK); + + /* Apply double-vlan setting, not implemented yet */ + + /* Apply pvid setting */ + ret = i40e_vlan_pvid_set(dev, data->dev_conf.txmode.pvid, + data->dev_conf.txmode.hw_vlan_insert_pvid); + if (ret) + PMD_DRV_LOG(INFO, "Failed to update VSI params"); + + return ret; +} + +static int +i40e_vsi_config_double_vlan(struct i40e_vsi *vsi, int on) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + + return i40e_aq_set_port_parameters(hw, vsi->seid, 0, 1, on, NULL); +} + +static int +i40e_update_flow_control(struct i40e_hw *hw) +{ +#define I40E_LINK_PAUSE_RXTX (I40E_AQ_LINK_PAUSE_RX | I40E_AQ_LINK_PAUSE_TX) + struct i40e_link_status link_status; + uint32_t rxfc = 0, txfc = 0, reg; + uint8_t an_info; + int ret; + + memset(&link_status, 0, sizeof(link_status)); + ret = i40e_aq_get_link_info(hw, FALSE, &link_status, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to get link status information"); + goto write_reg; /* Disable flow control */ + } + + an_info = hw->phy.link_info.an_info; + if (!(an_info & I40E_AQ_AN_COMPLETED)) { + PMD_DRV_LOG(INFO, "Link auto negotiation not completed"); + ret = I40E_ERR_NOT_READY; + goto write_reg; /* Disable flow control */ + } + /** + * If link auto negotiation is enabled, flow control needs to + * be configured according to it + */ + switch (an_info & I40E_LINK_PAUSE_RXTX) { + case I40E_LINK_PAUSE_RXTX: + rxfc = 1; + txfc = 1; + hw->fc.current_mode = I40E_FC_FULL; + break; + case I40E_AQ_LINK_PAUSE_RX: + rxfc = 1; + hw->fc.current_mode = I40E_FC_RX_PAUSE; + break; + case I40E_AQ_LINK_PAUSE_TX: + txfc = 1; + hw->fc.current_mode = I40E_FC_TX_PAUSE; + break; + default: + hw->fc.current_mode = I40E_FC_NONE; + break; + } + +write_reg: + I40E_WRITE_REG(hw, I40E_PRTDCB_FCCFG, + txfc << I40E_PRTDCB_FCCFG_TFCE_SHIFT); + reg = I40E_READ_REG(hw, I40E_PRTDCB_MFLCN); + reg &= ~I40E_PRTDCB_MFLCN_RFCE_MASK; + reg |= rxfc << I40E_PRTDCB_MFLCN_RFCE_SHIFT; + I40E_WRITE_REG(hw, I40E_PRTDCB_MFLCN, reg); + + return ret; +} + +/* PF setup */ +static int +i40e_pf_setup(struct i40e_pf *pf) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + struct i40e_filter_control_settings settings; + struct i40e_vsi *vsi; + int ret; + + /* Clear all stats counters */ + pf->offset_loaded = FALSE; + memset(&pf->stats, 0, sizeof(struct i40e_hw_port_stats)); + memset(&pf->stats_offset, 0, sizeof(struct i40e_hw_port_stats)); + + ret = i40e_pf_get_switch_config(pf); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Could not get switch config, err %d", ret); + return ret; + } + if (pf->flags & I40E_FLAG_FDIR) { + /* make queue allocated first, let FDIR use queue pair 0*/ + ret = i40e_res_pool_alloc(&pf->qp_pool, I40E_DEFAULT_QP_NUM_FDIR); + if (ret != I40E_FDIR_QUEUE_ID) { + PMD_DRV_LOG(ERR, "queue allocation fails for FDIR :" + " ret =%d", ret); + pf->flags &= ~I40E_FLAG_FDIR; + } + } + /* main VSI setup */ + vsi = i40e_vsi_setup(pf, I40E_VSI_MAIN, NULL, 0); + if (!vsi) { + PMD_DRV_LOG(ERR, "Setup of main vsi failed"); + return I40E_ERR_NOT_READY; + } + pf->main_vsi = vsi; + + /* Configure filter control */ + memset(&settings, 0, sizeof(settings)); + if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128) + settings.hash_lut_size = I40E_HASH_LUT_SIZE_128; + else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512) + settings.hash_lut_size = 
I40E_HASH_LUT_SIZE_512; + else { + PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported\n", + hw->func_caps.rss_table_size); + return I40E_ERR_PARAM; + } + PMD_DRV_LOG(INFO, "Hardware capability of hash lookup table " + "size: %u\n", hw->func_caps.rss_table_size); + pf->hash_lut_size = hw->func_caps.rss_table_size; + + /* Enable ethtype and macvlan filters */ + settings.enable_ethtype = TRUE; + settings.enable_macvlan = TRUE; + ret = i40e_set_filter_control(hw, &settings); + if (ret) + PMD_INIT_LOG(WARNING, "setup_pf_filter_control failed: %d", + ret); + + /* Update flow control according to the auto negotiation */ + i40e_update_flow_control(hw); + + return I40E_SUCCESS; +} + +int +i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on) +{ + uint32_t reg; + uint16_t j; + + /** + * Set or clear TX Queue Disable flags, + * which is required by hardware. + */ + i40e_pre_tx_queue_cfg(hw, q_idx, on); + rte_delay_us(I40E_PRE_TX_Q_CFG_WAIT_US); + + /* Wait until the request is finished */ + for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { + rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); + reg = I40E_READ_REG(hw, I40E_QTX_ENA(q_idx)); + if (!(((reg >> I40E_QTX_ENA_QENA_REQ_SHIFT) & 0x1) ^ + ((reg >> I40E_QTX_ENA_QENA_STAT_SHIFT) + & 0x1))) { + break; + } + } + if (on) { + if (reg & I40E_QTX_ENA_QENA_STAT_MASK) + return I40E_SUCCESS; /* already on, skip next steps */ + + I40E_WRITE_REG(hw, I40E_QTX_HEAD(q_idx), 0); + reg |= I40E_QTX_ENA_QENA_REQ_MASK; + } else { + if (!(reg & I40E_QTX_ENA_QENA_STAT_MASK)) + return I40E_SUCCESS; /* already off, skip next steps */ + reg &= ~I40E_QTX_ENA_QENA_REQ_MASK; + } + /* Write the register */ + I40E_WRITE_REG(hw, I40E_QTX_ENA(q_idx), reg); + /* Check the result */ + for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { + rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); + reg = I40E_READ_REG(hw, I40E_QTX_ENA(q_idx)); + if (on) { + if ((reg & I40E_QTX_ENA_QENA_REQ_MASK) && + (reg & I40E_QTX_ENA_QENA_STAT_MASK)) + break; + } else { + if (!(reg & I40E_QTX_ENA_QENA_REQ_MASK) && + !(reg & I40E_QTX_ENA_QENA_STAT_MASK)) + break; + } + } + /* Check if it is timeout */ + if (j >= I40E_CHK_Q_ENA_COUNT) { + PMD_DRV_LOG(ERR, "Failed to %s tx queue[%u]", + (on ? 
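+		/*
+		 * Handshake used by i40e_switch_tx_queue() here (and
+		 * mirrored for RX below): software drives QENA_REQ and
+		 * hardware acknowledges through QENA_STAT, so the helper
+		 * first polls until REQ == STAT (no operation pending),
+		 * then flips REQ and polls again until STAT matches,
+		 * declaring a timeout after I40E_CHK_Q_ENA_COUNT attempts.
+		 */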
"enable" : "disable"), q_idx); + return I40E_ERR_TIMEOUT; + } + + return I40E_SUCCESS; +} + +/* Swith on or off the tx queues */ +static int +i40e_dev_switch_tx_queues(struct i40e_pf *pf, bool on) +{ + struct rte_eth_dev_data *dev_data = pf->dev_data; + struct i40e_tx_queue *txq; + struct rte_eth_dev *dev = pf->adapter->eth_dev; + uint16_t i; + int ret; + + for (i = 0; i < dev_data->nb_tx_queues; i++) { + txq = dev_data->tx_queues[i]; + /* Don't operate the queue if not configured or + * if starting only per queue */ + if (!txq || !txq->q_set || (on && txq->tx_deferred_start)) + continue; + if (on) + ret = i40e_dev_tx_queue_start(dev, i); + else + ret = i40e_dev_tx_queue_stop(dev, i); + if ( ret != I40E_SUCCESS) + return ret; + } + + return I40E_SUCCESS; +} + +int +i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on) +{ + uint32_t reg; + uint16_t j; + + /* Wait until the request is finished */ + for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { + rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); + reg = I40E_READ_REG(hw, I40E_QRX_ENA(q_idx)); + if (!((reg >> I40E_QRX_ENA_QENA_REQ_SHIFT) & 0x1) ^ + ((reg >> I40E_QRX_ENA_QENA_STAT_SHIFT) & 0x1)) + break; + } + + if (on) { + if (reg & I40E_QRX_ENA_QENA_STAT_MASK) + return I40E_SUCCESS; /* Already on, skip next steps */ + reg |= I40E_QRX_ENA_QENA_REQ_MASK; + } else { + if (!(reg & I40E_QRX_ENA_QENA_STAT_MASK)) + return I40E_SUCCESS; /* Already off, skip next steps */ + reg &= ~I40E_QRX_ENA_QENA_REQ_MASK; + } + + /* Write the register */ + I40E_WRITE_REG(hw, I40E_QRX_ENA(q_idx), reg); + /* Check the result */ + for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { + rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); + reg = I40E_READ_REG(hw, I40E_QRX_ENA(q_idx)); + if (on) { + if ((reg & I40E_QRX_ENA_QENA_REQ_MASK) && + (reg & I40E_QRX_ENA_QENA_STAT_MASK)) + break; + } else { + if (!(reg & I40E_QRX_ENA_QENA_REQ_MASK) && + !(reg & I40E_QRX_ENA_QENA_STAT_MASK)) + break; + } + } + + /* Check if it is timeout */ + if (j >= I40E_CHK_Q_ENA_COUNT) { + PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]", + (on ? 
"enable" : "disable"), q_idx); + return I40E_ERR_TIMEOUT; + } + + return I40E_SUCCESS; +} +/* Switch on or off the rx queues */ +static int +i40e_dev_switch_rx_queues(struct i40e_pf *pf, bool on) +{ + struct rte_eth_dev_data *dev_data = pf->dev_data; + struct i40e_rx_queue *rxq; + struct rte_eth_dev *dev = pf->adapter->eth_dev; + uint16_t i; + int ret; + + for (i = 0; i < dev_data->nb_rx_queues; i++) { + rxq = dev_data->rx_queues[i]; + /* Don't operate the queue if not configured or + * if starting only per queue */ + if (!rxq || !rxq->q_set || (on && rxq->rx_deferred_start)) + continue; + if (on) + ret = i40e_dev_rx_queue_start(dev, i); + else + ret = i40e_dev_rx_queue_stop(dev, i); + if (ret != I40E_SUCCESS) + return ret; + } + + return I40E_SUCCESS; +} + +/* Switch on or off all the rx/tx queues */ +int +i40e_dev_switch_queues(struct i40e_pf *pf, bool on) +{ + int ret; + + if (on) { + /* enable rx queues before enabling tx queues */ + ret = i40e_dev_switch_rx_queues(pf, on); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to switch rx queues"); + return ret; + } + ret = i40e_dev_switch_tx_queues(pf, on); + } else { + /* Stop tx queues before stopping rx queues */ + ret = i40e_dev_switch_tx_queues(pf, on); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to switch tx queues"); + return ret; + } + ret = i40e_dev_switch_rx_queues(pf, on); + } + + return ret; +} + +/* Initialize VSI for TX */ +static int +i40e_dev_tx_init(struct i40e_pf *pf) +{ + struct rte_eth_dev_data *data = pf->dev_data; + uint16_t i; + uint32_t ret = I40E_SUCCESS; + struct i40e_tx_queue *txq; + + for (i = 0; i < data->nb_tx_queues; i++) { + txq = data->tx_queues[i]; + if (!txq || !txq->q_set) + continue; + ret = i40e_tx_queue_init(txq); + if (ret != I40E_SUCCESS) + break; + } + + return ret; +} + +/* Initialize VSI for RX */ +static int +i40e_dev_rx_init(struct i40e_pf *pf) +{ + struct rte_eth_dev_data *data = pf->dev_data; + int ret = I40E_SUCCESS; + uint16_t i; + struct i40e_rx_queue *rxq; + + i40e_pf_config_mq_rx(pf); + for (i = 0; i < data->nb_rx_queues; i++) { + rxq = data->rx_queues[i]; + if (!rxq || !rxq->q_set) + continue; + + ret = i40e_rx_queue_init(rxq); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to do RX queue " + "initialization"); + break; + } + } + + return ret; +} + +static int +i40e_dev_rxtx_init(struct i40e_pf *pf) +{ + int err; + + err = i40e_dev_tx_init(pf); + if (err) { + PMD_DRV_LOG(ERR, "Failed to do TX initialization"); + return err; + } + err = i40e_dev_rx_init(pf); + if (err) { + PMD_DRV_LOG(ERR, "Failed to do RX initialization"); + return err; + } + + return err; +} + +static int +i40e_vmdq_setup(struct rte_eth_dev *dev) +{ + struct rte_eth_conf *conf = &dev->data->dev_conf; + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + int i, err, conf_vsis, j, loop; + struct i40e_vsi *vsi; + struct i40e_vmdq_info *vmdq_info; + struct rte_eth_vmdq_rx_conf *vmdq_conf; + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + + /* + * Disable interrupt to avoid message from VF. Furthermore, it will + * avoid race condition in VSI creation/destroy. 
+ */ + i40e_pf_disable_irq0(hw); + + if ((pf->flags & I40E_FLAG_VMDQ) == 0) { + PMD_INIT_LOG(ERR, "FW doesn't support VMDQ"); + return -ENOTSUP; + } + + conf_vsis = conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools; + if (conf_vsis > pf->max_nb_vmdq_vsi) { + PMD_INIT_LOG(ERR, "VMDQ config: %u, max support:%u", + conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools, + pf->max_nb_vmdq_vsi); + return -ENOTSUP; + } + + if (pf->vmdq != NULL) { + PMD_INIT_LOG(INFO, "VMDQ already configured"); + return 0; + } + + pf->vmdq = rte_zmalloc("vmdq_info_struct", + sizeof(*vmdq_info) * conf_vsis, 0); + + if (pf->vmdq == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate memory"); + return -ENOMEM; + } + + vmdq_conf = &conf->rx_adv_conf.vmdq_rx_conf; + + /* Create VMDQ VSI */ + for (i = 0; i < conf_vsis; i++) { + vsi = i40e_vsi_setup(pf, I40E_VSI_VMDQ2, pf->main_vsi, + vmdq_conf->enable_loop_back); + if (vsi == NULL) { + PMD_INIT_LOG(ERR, "Failed to create VMDQ VSI"); + err = -1; + goto err_vsi_setup; + } + vmdq_info = &pf->vmdq[i]; + vmdq_info->pf = pf; + vmdq_info->vsi = vsi; + } + pf->nb_cfg_vmdq_vsi = conf_vsis; + + /* Configure Vlan */ + loop = sizeof(vmdq_conf->pool_map[0].pools) * CHAR_BIT; + for (i = 0; i < vmdq_conf->nb_pool_maps; i++) { + for (j = 0; j < loop && j < pf->nb_cfg_vmdq_vsi; j++) { + if (vmdq_conf->pool_map[i].pools & (1UL << j)) { + PMD_INIT_LOG(INFO, "Add vlan %u to vmdq pool %u", + vmdq_conf->pool_map[i].vlan_id, j); + + err = i40e_vsi_add_vlan(pf->vmdq[j].vsi, + vmdq_conf->pool_map[i].vlan_id); + if (err) { + PMD_INIT_LOG(ERR, "Failed to add vlan"); + err = -1; + goto err_vsi_setup; + } + } + } + } + + i40e_pf_enable_irq0(hw); + + return 0; + +err_vsi_setup: + for (i = 0; i < conf_vsis; i++) + if (pf->vmdq[i].vsi == NULL) + break; + else + i40e_vsi_release(pf->vmdq[i].vsi); + + rte_free(pf->vmdq); + pf->vmdq = NULL; + i40e_pf_enable_irq0(hw); + return err; +} + +static void +i40e_stat_update_32(struct i40e_hw *hw, + uint32_t reg, + bool offset_loaded, + uint64_t *offset, + uint64_t *stat) +{ + uint64_t new_data; + + new_data = (uint64_t)I40E_READ_REG(hw, reg); + if (!offset_loaded) + *offset = new_data; + + if (new_data >= *offset) + *stat = (uint64_t)(new_data - *offset); + else + *stat = (uint64_t)((new_data + + ((uint64_t)1 << I40E_32_BIT_WIDTH)) - *offset); +} + +static void +i40e_stat_update_48(struct i40e_hw *hw, + uint32_t hireg, + uint32_t loreg, + bool offset_loaded, + uint64_t *offset, + uint64_t *stat) +{ + uint64_t new_data; + + new_data = (uint64_t)I40E_READ_REG(hw, loreg); + new_data |= ((uint64_t)(I40E_READ_REG(hw, hireg) & + I40E_16_BIT_MASK)) << I40E_32_BIT_WIDTH; + + if (!offset_loaded) + *offset = new_data; + + if (new_data >= *offset) + *stat = new_data - *offset; + else + *stat = (uint64_t)((new_data + + ((uint64_t)1 << I40E_48_BIT_WIDTH)) - *offset); + + *stat &= I40E_48_BIT_MASK; +} + +/* Disable IRQ0 */ +void +i40e_pf_disable_irq0(struct i40e_hw *hw) +{ + /* Disable all interrupt types */ + I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0); + I40E_WRITE_FLUSH(hw); +} + +/* Enable IRQ0 */ +void +i40e_pf_enable_irq0(struct i40e_hw *hw) +{ + I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, + I40E_PFINT_DYN_CTL0_INTENA_MASK | + I40E_PFINT_DYN_CTL0_CLEARPBA_MASK | + I40E_PFINT_DYN_CTL0_ITR_INDX_MASK); + I40E_WRITE_FLUSH(hw); +} + +static void +i40e_pf_config_irq0(struct i40e_hw *hw) +{ + /* read pending request and disable first */ + i40e_pf_disable_irq0(hw); + I40E_WRITE_REG(hw, I40E_PFINT_ICR0_ENA, I40E_PFINT_ICR0_ENA_MASK); + I40E_WRITE_REG(hw, I40E_PFINT_STAT_CTL0, + 
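+	/*
+	 * Worked example for i40e_stat_update_48() above (illustrative
+	 * numbers): with a saved offset of 0xFFFF_FFFF_0000 and a new raw
+	 * reading of 0x0000_0001_0000, the counter has wrapped at 48 bits,
+	 * so the delta is computed as
+	 * (0x0000_0001_0000 + 2^48 - 0xFFFF_FFFF_0000) & I40E_48_BIT_MASK
+	 * = 0x2_0000 increments since the offset was loaded.
+	 */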
I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_MASK); + + /* Link no queues with irq0 */ + I40E_WRITE_REG(hw, I40E_PFINT_LNKLST0, + I40E_PFINT_LNKLST0_FIRSTQ_INDX_MASK); +} + +static void +i40e_dev_handle_vfr_event(struct rte_eth_dev *dev) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + int i; + uint16_t abs_vf_id; + uint32_t index, offset, val; + + if (!pf->vfs) + return; + /** + * Try to find which VF triggered a reset. Use the absolute VF id to + * access it, since the reg is a global register. + */ + for (i = 0; i < pf->vf_num; i++) { + abs_vf_id = hw->func_caps.vf_base_id + i; + index = abs_vf_id / I40E_UINT32_BIT_SIZE; + offset = abs_vf_id % I40E_UINT32_BIT_SIZE; + val = I40E_READ_REG(hw, I40E_GLGEN_VFLRSTAT(index)); + /* VFR event occurred */ + if (val & (0x1 << offset)) { + int ret; + + /* Clear the event first */ + I40E_WRITE_REG(hw, I40E_GLGEN_VFLRSTAT(index), + (0x1 << offset)); + PMD_DRV_LOG(INFO, "VF %u reset occurred", abs_vf_id); + /** + * Only notify that a VF reset event occurred; + * don't trigger another SW reset + */ + ret = i40e_pf_host_vf_reset(&pf->vfs[i], 0); + if (ret != I40E_SUCCESS) + PMD_DRV_LOG(ERR, "Failed to do VF reset"); + } + } +} + +static void +i40e_dev_handle_aq_msg(struct rte_eth_dev *dev) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_arq_event_info info; + uint16_t pending, opcode; + int ret; + + info.buf_len = I40E_AQ_BUF_SZ; + info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0); + if (!info.msg_buf) { + PMD_DRV_LOG(ERR, "Failed to allocate mem"); + return; + } + + pending = 1; + while (pending) { + ret = i40e_clean_arq_element(hw, &info, &pending); + + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ, " + "aq_err: %u", hw->aq.asq_last_status); + break; + } + opcode = rte_le_to_cpu_16(info.desc.opcode); + + switch (opcode) { + case i40e_aqc_opc_send_msg_to_pf: + /* Refer to i40e_aq_send_msg_to_pf() for argument layout */ + i40e_pf_host_handle_vf_msg(dev, + rte_le_to_cpu_16(info.desc.retval), + rte_le_to_cpu_32(info.desc.cookie_high), + rte_le_to_cpu_32(info.desc.cookie_low), + info.msg_buf, + info.msg_len); + break; + default: + PMD_DRV_LOG(ERR, "Request %u is not supported yet", + opcode); + break; + } + } + rte_free(info.msg_buf); +} + +/* + * The interrupt handler is registered as an alarm callback, so that the LSC + * interrupt is handled after a fixed delay, giving the NIC time to settle + * into a stable state. Currently it waits 1 sec in i40e for the link up + * interrupt; no wait is needed for link down.
+ */ +static void +i40e_dev_interrupt_delayed_handler(void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t icr0; + + /* read interrupt causes again */ + icr0 = I40E_READ_REG(hw, I40E_PFINT_ICR0); + +#ifdef RTE_LIBRTE_I40E_DEBUG_DRIVER + if (icr0 & I40E_PFINT_ICR0_ECC_ERR_MASK) + PMD_DRV_LOG(ERR, "ICR0: unrecoverable ECC error\n"); + if (icr0 & I40E_PFINT_ICR0_MAL_DETECT_MASK) + PMD_DRV_LOG(ERR, "ICR0: malicious programming detected\n"); + if (icr0 & I40E_PFINT_ICR0_GRST_MASK) + PMD_DRV_LOG(INFO, "ICR0: global reset requested\n"); + if (icr0 & I40E_PFINT_ICR0_PCI_EXCEPTION_MASK) + PMD_DRV_LOG(INFO, "ICR0: PCI exception activated\n"); + if (icr0 & I40E_PFINT_ICR0_STORM_DETECT_MASK) + PMD_DRV_LOG(INFO, "ICR0: a change in the storm control " + "state\n"); + if (icr0 & I40E_PFINT_ICR0_HMC_ERR_MASK) + PMD_DRV_LOG(ERR, "ICR0: HMC error\n"); + if (icr0 & I40E_PFINT_ICR0_PE_CRITERR_MASK) + PMD_DRV_LOG(ERR, "ICR0: protocol engine critical error\n"); +#endif /* RTE_LIBRTE_I40E_DEBUG_DRIVER */ + + if (icr0 & I40E_PFINT_ICR0_VFLR_MASK) { + PMD_DRV_LOG(INFO, "INT: VF reset detected\n"); + i40e_dev_handle_vfr_event(dev); + } + if (icr0 & I40E_PFINT_ICR0_ADMINQ_MASK) { + PMD_DRV_LOG(INFO, "INT: ADMINQ event\n"); + i40e_dev_handle_aq_msg(dev); + } + + /* handle the link up interrupt in an alarm callback */ + i40e_dev_link_update(dev, 0); + _rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC); + + i40e_pf_enable_irq0(hw); + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +/** + * Interrupt handler triggered by the NIC for handling a + * specific interrupt. + * + * @param handle + * Pointer to interrupt handle. + * @param param + * The address of parameter (struct rte_eth_dev *) registered before.
+ * + * @return + * void + */ +static void +i40e_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle, + void *param) +{ + struct rte_eth_dev *dev = (struct rte_eth_dev *)param; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t icr0; + + /* Disable interrupt */ + i40e_pf_disable_irq0(hw); + + /* read out interrupt causes */ + icr0 = I40E_READ_REG(hw, I40E_PFINT_ICR0); + + /* No interrupt event indicated */ + if (!(icr0 & I40E_PFINT_ICR0_INTEVENT_MASK)) { + PMD_DRV_LOG(INFO, "No interrupt event"); + goto done; + } +#ifdef RTE_LIBRTE_I40E_DEBUG_DRIVER + if (icr0 & I40E_PFINT_ICR0_ECC_ERR_MASK) + PMD_DRV_LOG(ERR, "ICR0: unrecoverable ECC error"); + if (icr0 & I40E_PFINT_ICR0_MAL_DETECT_MASK) + PMD_DRV_LOG(ERR, "ICR0: malicious programming detected"); + if (icr0 & I40E_PFINT_ICR0_GRST_MASK) + PMD_DRV_LOG(INFO, "ICR0: global reset requested"); + if (icr0 & I40E_PFINT_ICR0_PCI_EXCEPTION_MASK) + PMD_DRV_LOG(INFO, "ICR0: PCI exception activated"); + if (icr0 & I40E_PFINT_ICR0_STORM_DETECT_MASK) + PMD_DRV_LOG(INFO, "ICR0: a change in the storm control state"); + if (icr0 & I40E_PFINT_ICR0_HMC_ERR_MASK) + PMD_DRV_LOG(ERR, "ICR0: HMC error"); + if (icr0 & I40E_PFINT_ICR0_PE_CRITERR_MASK) + PMD_DRV_LOG(ERR, "ICR0: protocol engine critical error"); +#endif /* RTE_LIBRTE_I40E_DEBUG_DRIVER */ + + if (icr0 & I40E_PFINT_ICR0_VFLR_MASK) { + PMD_DRV_LOG(INFO, "ICR0: VF reset detected"); + i40e_dev_handle_vfr_event(dev); + } + if (icr0 & I40E_PFINT_ICR0_ADMINQ_MASK) { + PMD_DRV_LOG(INFO, "ICR0: adminq event"); + i40e_dev_handle_aq_msg(dev); + } + + /* Link Status Change interrupt */ + if (icr0 & I40E_PFINT_ICR0_LINK_STAT_CHANGE_MASK) { +#define I40E_US_PER_SECOND 1000000 + struct rte_eth_link link; + + PMD_DRV_LOG(INFO, "ICR0: link status changed\n"); + memset(&link, 0, sizeof(link)); + rte_i40e_dev_atomic_read_link_status(dev, &link); + i40e_dev_link_update(dev, 0); + + /* + * For link up interrupt, it needs to wait 1 second to let the + * hardware be a stable state. Otherwise several consecutive + * interrupts can be observed. + * For link down interrupt, no need to wait. + */ + if (!link.link_status && rte_eal_alarm_set(I40E_US_PER_SECOND, + i40e_dev_interrupt_delayed_handler, (void *)dev) >= 0) + return; + else + _rte_eth_dev_callback_process(dev, + RTE_ETH_EVENT_INTR_LSC); + } + +done: + /* Enable interrupt */ + i40e_pf_enable_irq0(hw); + rte_intr_enable(&(dev->pci_dev->intr_handle)); +} + +static int +i40e_add_macvlan_filters(struct i40e_vsi *vsi, + struct i40e_macvlan_filter *filter, + int total) +{ + int ele_num, ele_buff_size; + int num, actual_num, i; + uint16_t flags; + int ret = I40E_SUCCESS; + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + struct i40e_aqc_add_macvlan_element_data *req_list; + + if (filter == NULL || total == 0) + return I40E_ERR_PARAM; + ele_num = hw->aq.asq_buf_size / sizeof(*req_list); + ele_buff_size = hw->aq.asq_buf_size; + + req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0); + if (req_list == NULL) { + PMD_DRV_LOG(ERR, "Fail to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + num = 0; + do { + actual_num = (num + ele_num > total) ? 
(total - num) : ele_num; + memset(req_list, 0, ele_buff_size); + + for (i = 0; i < actual_num; i++) { + (void)rte_memcpy(req_list[i].mac_addr, + &filter[num + i].macaddr, ETH_ADDR_LEN); + req_list[i].vlan_tag = + rte_cpu_to_le_16(filter[num + i].vlan_id); + + switch (filter[num + i].filter_type) { + case RTE_MAC_PERFECT_MATCH: + flags = I40E_AQC_MACVLAN_ADD_PERFECT_MATCH | + I40E_AQC_MACVLAN_ADD_IGNORE_VLAN; + break; + case RTE_MACVLAN_PERFECT_MATCH: + flags = I40E_AQC_MACVLAN_ADD_PERFECT_MATCH; + break; + case RTE_MAC_HASH_MATCH: + flags = I40E_AQC_MACVLAN_ADD_HASH_MATCH | + I40E_AQC_MACVLAN_ADD_IGNORE_VLAN; + break; + case RTE_MACVLAN_HASH_MATCH: + flags = I40E_AQC_MACVLAN_ADD_HASH_MATCH; + break; + default: + PMD_DRV_LOG(ERR, "Invalid MAC match type\n"); + ret = I40E_ERR_PARAM; + goto DONE; + } + + req_list[i].queue_number = 0; + + req_list[i].flags = rte_cpu_to_le_16(flags); + } + + ret = i40e_aq_add_macvlan(hw, vsi->seid, req_list, + actual_num, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to add macvlan filter"); + goto DONE; + } + num += actual_num; + } while (num < total); + +DONE: + rte_free(req_list); + return ret; +} + +static int +i40e_remove_macvlan_filters(struct i40e_vsi *vsi, + struct i40e_macvlan_filter *filter, + int total) +{ + int ele_num, ele_buff_size; + int num, actual_num, i; + uint16_t flags; + int ret = I40E_SUCCESS; + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + struct i40e_aqc_remove_macvlan_element_data *req_list; + + if (filter == NULL || total == 0) + return I40E_ERR_PARAM; + + ele_num = hw->aq.asq_buf_size / sizeof(*req_list); + ele_buff_size = hw->aq.asq_buf_size; + + req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0); + if (req_list == NULL) { + PMD_DRV_LOG(ERR, "Fail to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + num = 0; + do { + actual_num = (num + ele_num > total) ? 
(total - num) : ele_num; + memset(req_list, 0, ele_buff_size); + + for (i = 0; i < actual_num; i++) { + (void)rte_memcpy(req_list[i].mac_addr, + &filter[num + i].macaddr, ETH_ADDR_LEN); + req_list[i].vlan_tag = + rte_cpu_to_le_16(filter[num + i].vlan_id); + + switch (filter[num + i].filter_type) { + case RTE_MAC_PERFECT_MATCH: + flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH | + I40E_AQC_MACVLAN_DEL_IGNORE_VLAN; + break; + case RTE_MACVLAN_PERFECT_MATCH: + flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH; + break; + case RTE_MAC_HASH_MATCH: + flags = I40E_AQC_MACVLAN_DEL_HASH_MATCH | + I40E_AQC_MACVLAN_DEL_IGNORE_VLAN; + break; + case RTE_MACVLAN_HASH_MATCH: + flags = I40E_AQC_MACVLAN_DEL_HASH_MATCH; + break; + default: + PMD_DRV_LOG(ERR, "Invalid MAC filter type\n"); + ret = I40E_ERR_PARAM; + goto DONE; + } + req_list[i].flags = rte_cpu_to_le_16(flags); + } + + ret = i40e_aq_remove_macvlan(hw, vsi->seid, req_list, + actual_num, NULL); + if (ret != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to remove macvlan filter"); + goto DONE; + } + num += actual_num; + } while (num < total); + +DONE: + rte_free(req_list); + return ret; +} + +/* Find out specific MAC filter */ +static struct i40e_mac_filter * +i40e_find_mac_filter(struct i40e_vsi *vsi, + struct ether_addr *macaddr) +{ + struct i40e_mac_filter *f; + + TAILQ_FOREACH(f, &vsi->mac_list, next) { + if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr)) + return f; + } + + return NULL; +} + +static bool +i40e_find_vlan_filter(struct i40e_vsi *vsi, + uint16_t vlan_id) +{ + uint32_t vid_idx, vid_bit; + + if (vlan_id > ETH_VLAN_ID_MAX) + return 0; + + vid_idx = I40E_VFTA_IDX(vlan_id); + vid_bit = I40E_VFTA_BIT(vlan_id); + + if (vsi->vfta[vid_idx] & vid_bit) + return 1; + else + return 0; +} + +static void +i40e_set_vlan_filter(struct i40e_vsi *vsi, + uint16_t vlan_id, bool on) +{ + uint32_t vid_idx, vid_bit; + + if (vlan_id > ETH_VLAN_ID_MAX) + return; + + vid_idx = I40E_VFTA_IDX(vlan_id); + vid_bit = I40E_VFTA_BIT(vlan_id); + + if (on) + vsi->vfta[vid_idx] |= vid_bit; + else + vsi->vfta[vid_idx] &= ~vid_bit; +} + +/** + * Find all vlan options for specific mac addr, + * return with actual vlan found. + */ +static inline int +i40e_find_all_vlan_for_mac(struct i40e_vsi *vsi, + struct i40e_macvlan_filter *mv_f, + int num, struct ether_addr *addr) +{ + int i; + uint32_t j, k; + + /** + * Not to use i40e_find_vlan_filter to decrease the loop time, + * although the code looks complex. 
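+ * Each vfta[] word covers 32 vlan ids: a set bit k in word j corresponds + * to vlan_id = j * I40E_UINT32_BIT_SIZE + k, e.g. bit 5 of vfta[3] stands + * for vlan 101. The scan below recovers every such id for the given mac.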
+ */ + if (num < vsi->vlan_num) + return I40E_ERR_PARAM; + + i = 0; + for (j = 0; j < I40E_VFTA_SIZE; j++) { + if (vsi->vfta[j]) { + for (k = 0; k < I40E_UINT32_BIT_SIZE; k++) { + if (vsi->vfta[j] & (1 << k)) { + if (i > num - 1) { + PMD_DRV_LOG(ERR, "vlan number " + "not match"); + return I40E_ERR_PARAM; + } + (void)rte_memcpy(&mv_f[i].macaddr, + addr, ETH_ADDR_LEN); + mv_f[i].vlan_id = + j * I40E_UINT32_BIT_SIZE + k; + i++; + } + } + } + } + return I40E_SUCCESS; +} + +static inline int +i40e_find_all_mac_for_vlan(struct i40e_vsi *vsi, + struct i40e_macvlan_filter *mv_f, + int num, + uint16_t vlan) +{ + int i = 0; + struct i40e_mac_filter *f; + + if (num < vsi->mac_num) + return I40E_ERR_PARAM; + + TAILQ_FOREACH(f, &vsi->mac_list, next) { + if (i > num - 1) { + PMD_DRV_LOG(ERR, "buffer number not match"); + return I40E_ERR_PARAM; + } + (void)rte_memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr, + ETH_ADDR_LEN); + mv_f[i].vlan_id = vlan; + mv_f[i].filter_type = f->mac_info.filter_type; + i++; + } + + return I40E_SUCCESS; +} + +static int +i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi) +{ + int i, num; + struct i40e_mac_filter *f; + struct i40e_macvlan_filter *mv_f; + int ret = I40E_SUCCESS; + + if (vsi == NULL || vsi->mac_num == 0) + return I40E_ERR_PARAM; + + /* Case that no vlan is set */ + if (vsi->vlan_num == 0) + num = vsi->mac_num; + else + num = vsi->mac_num * vsi->vlan_num; + + mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0); + if (mv_f == NULL) { + PMD_DRV_LOG(ERR, "failed to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + i = 0; + if (vsi->vlan_num == 0) { + TAILQ_FOREACH(f, &vsi->mac_list, next) { + (void)rte_memcpy(&mv_f[i].macaddr, + &f->mac_info.mac_addr, ETH_ADDR_LEN); + mv_f[i].vlan_id = 0; + i++; + } + } else { + TAILQ_FOREACH(f, &vsi->mac_list, next) { + ret = i40e_find_all_vlan_for_mac(vsi,&mv_f[i], + vsi->vlan_num, &f->mac_info.mac_addr); + if (ret != I40E_SUCCESS) + goto DONE; + i += vsi->vlan_num; + } + } + + ret = i40e_remove_macvlan_filters(vsi, mv_f, num); +DONE: + rte_free(mv_f); + + return ret; +} + +int +i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan) +{ + struct i40e_macvlan_filter *mv_f; + int mac_num; + int ret = I40E_SUCCESS; + + if (!vsi || vlan > ETHER_MAX_VLAN_ID) + return I40E_ERR_PARAM; + + /* If it's already set, just return */ + if (i40e_find_vlan_filter(vsi,vlan)) + return I40E_SUCCESS; + + mac_num = vsi->mac_num; + + if (mac_num == 0) { + PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr"); + return I40E_ERR_PARAM; + } + + mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0); + + if (mv_f == NULL) { + PMD_DRV_LOG(ERR, "failed to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan); + + if (ret != I40E_SUCCESS) + goto DONE; + + ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num); + + if (ret != I40E_SUCCESS) + goto DONE; + + i40e_set_vlan_filter(vsi, vlan, 1); + + vsi->vlan_num++; + ret = I40E_SUCCESS; +DONE: + rte_free(mv_f); + return ret; +} + +int +i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan) +{ + struct i40e_macvlan_filter *mv_f; + int mac_num; + int ret = I40E_SUCCESS; + + /** + * Vlan 0 is the generic filter for untagged packets + * and can't be removed. + */ + if (!vsi || vlan == 0 || vlan > ETHER_MAX_VLAN_ID) + return I40E_ERR_PARAM; + + /* If can't find it, just return */ + if (!i40e_find_vlan_filter(vsi, vlan)) + return I40E_ERR_PARAM; + + mac_num = vsi->mac_num; + + if (mac_num == 0) { + PMD_DRV_LOG(ERR, "Error! 
VSI doesn't have a mac addr"); + return I40E_ERR_PARAM; + } + + mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0); + + if (mv_f == NULL) { + PMD_DRV_LOG(ERR, "failed to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan); + + if (ret != I40E_SUCCESS) + goto DONE; + + ret = i40e_remove_macvlan_filters(vsi, mv_f, mac_num); + + if (ret != I40E_SUCCESS) + goto DONE; + + /* This is last vlan to remove, replace all mac filter with vlan 0 */ + if (vsi->vlan_num == 1) { + ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, 0); + if (ret != I40E_SUCCESS) + goto DONE; + + ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num); + if (ret != I40E_SUCCESS) + goto DONE; + } + + i40e_set_vlan_filter(vsi, vlan, 0); + + vsi->vlan_num--; + ret = I40E_SUCCESS; +DONE: + rte_free(mv_f); + return ret; +} + +int +i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter) +{ + struct i40e_mac_filter *f; + struct i40e_macvlan_filter *mv_f; + int i, vlan_num = 0; + int ret = I40E_SUCCESS; + + /* If it's add and we've config it, return */ + f = i40e_find_mac_filter(vsi, &mac_filter->mac_addr); + if (f != NULL) + return I40E_SUCCESS; + if ((mac_filter->filter_type == RTE_MACVLAN_PERFECT_MATCH) || + (mac_filter->filter_type == RTE_MACVLAN_HASH_MATCH)) { + + /** + * If vlan_num is 0, that's the first time to add mac, + * set mask for vlan_id 0. + */ + if (vsi->vlan_num == 0) { + i40e_set_vlan_filter(vsi, 0, 1); + vsi->vlan_num = 1; + } + vlan_num = vsi->vlan_num; + } else if ((mac_filter->filter_type == RTE_MAC_PERFECT_MATCH) || + (mac_filter->filter_type == RTE_MAC_HASH_MATCH)) + vlan_num = 1; + + mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0); + if (mv_f == NULL) { + PMD_DRV_LOG(ERR, "failed to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + for (i = 0; i < vlan_num; i++) { + mv_f[i].filter_type = mac_filter->filter_type; + (void)rte_memcpy(&mv_f[i].macaddr, &mac_filter->mac_addr, + ETH_ADDR_LEN); + } + + if (mac_filter->filter_type == RTE_MACVLAN_PERFECT_MATCH || + mac_filter->filter_type == RTE_MACVLAN_HASH_MATCH) { + ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num, + &mac_filter->mac_addr); + if (ret != I40E_SUCCESS) + goto DONE; + } + + ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num); + if (ret != I40E_SUCCESS) + goto DONE; + + /* Add the mac addr into mac list */ + f = rte_zmalloc("macv_filter", sizeof(*f), 0); + if (f == NULL) { + PMD_DRV_LOG(ERR, "failed to allocate memory"); + ret = I40E_ERR_NO_MEMORY; + goto DONE; + } + (void)rte_memcpy(&f->mac_info.mac_addr, &mac_filter->mac_addr, + ETH_ADDR_LEN); + f->mac_info.filter_type = mac_filter->filter_type; + TAILQ_INSERT_TAIL(&vsi->mac_list, f, next); + vsi->mac_num++; + + ret = I40E_SUCCESS; +DONE: + rte_free(mv_f); + + return ret; +} + +int +i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct ether_addr *addr) +{ + struct i40e_mac_filter *f; + struct i40e_macvlan_filter *mv_f; + int i, vlan_num; + enum rte_mac_filter_type filter_type; + int ret = I40E_SUCCESS; + + /* Can't find it, return an error */ + f = i40e_find_mac_filter(vsi, addr); + if (f == NULL) + return I40E_ERR_PARAM; + + vlan_num = vsi->vlan_num; + filter_type = f->mac_info.filter_type; + if (filter_type == RTE_MACVLAN_PERFECT_MATCH || + filter_type == RTE_MACVLAN_HASH_MATCH) { + if (vlan_num == 0) { + PMD_DRV_LOG(ERR, "VLAN number shouldn't be 0\n"); + return I40E_ERR_PARAM; + } + } else if (filter_type == RTE_MAC_PERFECT_MATCH || + filter_type == RTE_MAC_HASH_MATCH) + 
vlan_num = 1; + + mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0); + if (mv_f == NULL) { + PMD_DRV_LOG(ERR, "failed to allocate memory"); + return I40E_ERR_NO_MEMORY; + } + + for (i = 0; i < vlan_num; i++) { + mv_f[i].filter_type = filter_type; + (void)rte_memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr, + ETH_ADDR_LEN); + } + if (filter_type == RTE_MACVLAN_PERFECT_MATCH || + filter_type == RTE_MACVLAN_HASH_MATCH) { + ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num, addr); + if (ret != I40E_SUCCESS) + goto DONE; + } + + ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num); + if (ret != I40E_SUCCESS) + goto DONE; + + /* Remove the mac addr into mac list */ + TAILQ_REMOVE(&vsi->mac_list, f, next); + rte_free(f); + vsi->mac_num--; + + ret = I40E_SUCCESS; +DONE: + rte_free(mv_f); + return ret; +} + +/* Configure hash enable flags for RSS */ +uint64_t +i40e_config_hena(uint64_t flags) +{ + uint64_t hena = 0; + + if (!flags) + return hena; + + if (flags & ETH_RSS_FRAG_IPV4) + hena |= 1ULL << I40E_FILTER_PCTYPE_FRAG_IPV4; + if (flags & ETH_RSS_NONFRAG_IPV4_TCP) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_TCP; + if (flags & ETH_RSS_NONFRAG_IPV4_UDP) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_UDP; + if (flags & ETH_RSS_NONFRAG_IPV4_SCTP) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_SCTP; + if (flags & ETH_RSS_NONFRAG_IPV4_OTHER) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_OTHER; + if (flags & ETH_RSS_FRAG_IPV6) + hena |= 1ULL << I40E_FILTER_PCTYPE_FRAG_IPV6; + if (flags & ETH_RSS_NONFRAG_IPV6_TCP) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_TCP; + if (flags & ETH_RSS_NONFRAG_IPV6_UDP) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_UDP; + if (flags & ETH_RSS_NONFRAG_IPV6_SCTP) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_SCTP; + if (flags & ETH_RSS_NONFRAG_IPV6_OTHER) + hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_OTHER; + if (flags & ETH_RSS_L2_PAYLOAD) + hena |= 1ULL << I40E_FILTER_PCTYPE_L2_PAYLOAD; + + return hena; +} + +/* Parse the hash enable flags */ +uint64_t +i40e_parse_hena(uint64_t flags) +{ + uint64_t rss_hf = 0; + + if (!flags) + return rss_hf; + if (flags & (1ULL << I40E_FILTER_PCTYPE_FRAG_IPV4)) + rss_hf |= ETH_RSS_FRAG_IPV4; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_TCP)) + rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_UDP)) + rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_SCTP)) + rss_hf |= ETH_RSS_NONFRAG_IPV4_SCTP; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_OTHER)) + rss_hf |= ETH_RSS_NONFRAG_IPV4_OTHER; + if (flags & (1ULL << I40E_FILTER_PCTYPE_FRAG_IPV6)) + rss_hf |= ETH_RSS_FRAG_IPV6; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_TCP)) + rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_UDP)) + rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_SCTP)) + rss_hf |= ETH_RSS_NONFRAG_IPV6_SCTP; + if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_OTHER)) + rss_hf |= ETH_RSS_NONFRAG_IPV6_OTHER; + if (flags & (1ULL << I40E_FILTER_PCTYPE_L2_PAYLOAD)) + rss_hf |= ETH_RSS_L2_PAYLOAD; + + return rss_hf; +} + +/* Disable RSS */ +static void +i40e_pf_disable_rss(struct i40e_pf *pf) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + uint64_t hena; + + hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; + hena &= ~I40E_RSS_HENA_ALL; + I40E_WRITE_REG(hw, I40E_PFQF_HENA(0), (uint32_t)hena); + I40E_WRITE_REG(hw, 
I40E_PFQF_HENA(1), (uint32_t)(hena >> 32)); + I40E_WRITE_FLUSH(hw); +} + +static int +i40e_hw_rss_hash_set(struct i40e_hw *hw, struct rte_eth_rss_conf *rss_conf) +{ + uint32_t *hash_key; + uint8_t hash_key_len; + uint64_t rss_hf; + uint16_t i; + uint64_t hena; + + hash_key = (uint32_t *)(rss_conf->rss_key); + hash_key_len = rss_conf->rss_key_len; + if (hash_key != NULL && hash_key_len >= + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) { + /* Fill in RSS hash key */ + for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) + I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i), hash_key[i]); + } + + rss_hf = rss_conf->rss_hf; + hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; + hena &= ~I40E_RSS_HENA_ALL; + hena |= i40e_config_hena(rss_hf); + I40E_WRITE_REG(hw, I40E_PFQF_HENA(0), (uint32_t)hena); + I40E_WRITE_REG(hw, I40E_PFQF_HENA(1), (uint32_t)(hena >> 32)); + I40E_WRITE_FLUSH(hw); + + return 0; +} + +static int +i40e_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint64_t rss_hf = rss_conf->rss_hf & I40E_RSS_OFFLOAD_ALL; + uint64_t hena; + + hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; + if (!(hena & I40E_RSS_HENA_ALL)) { /* RSS disabled */ + if (rss_hf != 0) /* Enable RSS */ + return -EINVAL; + return 0; /* Nothing to do */ + } + /* RSS enabled */ + if (rss_hf == 0) /* Disable RSS */ + return -EINVAL; + + return i40e_hw_rss_hash_set(hw, rss_conf); +} + +static int +i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *hash_key = (uint32_t *)(rss_conf->rss_key); + uint64_t hena; + uint16_t i; + + if (hash_key != NULL) { + for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) + hash_key[i] = I40E_READ_REG(hw, I40E_PFQF_HKEY(i)); + rss_conf->rss_key_len = i * sizeof(uint32_t); + } + hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; + rss_conf->rss_hf = i40e_parse_hena(hena); + + return 0; +} + +static int +i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag) +{ + switch (filter_type) { + case RTE_TUNNEL_FILTER_IMAC_IVLAN: + *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN; + break; + case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID: + *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID; + break; + case RTE_TUNNEL_FILTER_IMAC_TENID: + *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID; + break; + case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC: + *flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC; + break; + case ETH_TUNNEL_FILTER_IMAC: + *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC; + break; + default: + PMD_DRV_LOG(ERR, "invalid tunnel filter type"); + return -EINVAL; + } + + return 0; +} + +static int +i40e_dev_tunnel_filter_set(struct i40e_pf *pf, + struct rte_eth_tunnel_filter_conf *tunnel_filter, + uint8_t add) +{ + uint16_t ip_type; + uint8_t tun_type = 0; + int val, ret = 0; + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + struct i40e_vsi *vsi = pf->main_vsi; + struct i40e_aqc_add_remove_cloud_filters_element_data *cld_filter; + struct i40e_aqc_add_remove_cloud_filters_element_data *pfilter; + + cld_filter = rte_zmalloc("tunnel_filter", + sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data), + 0); + + if (NULL == cld_filter) { + PMD_DRV_LOG(ERR, "Failed to alloc memory."); + return 
-EINVAL; + } + pfilter = cld_filter; + + (void)rte_memcpy(&pfilter->outer_mac, tunnel_filter->outer_mac, + sizeof(struct ether_addr)); + (void)rte_memcpy(&pfilter->inner_mac, tunnel_filter->inner_mac, + sizeof(struct ether_addr)); + + pfilter->inner_vlan = tunnel_filter->inner_vlan; + if (tunnel_filter->ip_type == RTE_TUNNEL_IPTYPE_IPV4) { + ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4; + (void)rte_memcpy(&pfilter->ipaddr.v4.data, + &tunnel_filter->ip_addr, + sizeof(pfilter->ipaddr.v4.data)); + } else { + ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6; + (void)rte_memcpy(&pfilter->ipaddr.v6.data, + &tunnel_filter->ip_addr, + sizeof(pfilter->ipaddr.v6.data)); + } + + /* check tunneled type */ + switch (tunnel_filter->tunnel_type) { + case RTE_TUNNEL_TYPE_VXLAN: + tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_XVLAN; + break; + case RTE_TUNNEL_TYPE_NVGRE: + tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC; + break; + default: + /* Other tunnel types is not supported. */ + PMD_DRV_LOG(ERR, "tunnel type is not supported."); + rte_free(cld_filter); + return -EINVAL; + } + + val = i40e_dev_get_filter_type(tunnel_filter->filter_type, + &pfilter->flags); + if (val < 0) { + rte_free(cld_filter); + return -EINVAL; + } + + pfilter->flags |= I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type | + (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT); + pfilter->tenant_id = tunnel_filter->tenant_id; + pfilter->queue_number = tunnel_filter->queue_id; + + if (add) + ret = i40e_aq_add_cloud_filters(hw, vsi->seid, cld_filter, 1); + else + ret = i40e_aq_remove_cloud_filters(hw, vsi->seid, + cld_filter, 1); + + rte_free(cld_filter); + return ret; +} + +static int +i40e_get_vxlan_port_idx(struct i40e_pf *pf, uint16_t port) +{ + uint8_t i; + + for (i = 0; i < I40E_MAX_PF_UDP_OFFLOAD_PORTS; i++) { + if (pf->vxlan_ports[i] == port) + return i; + } + + return -1; +} + +static int +i40e_add_vxlan_port(struct i40e_pf *pf, uint16_t port) +{ + int idx, ret; + uint8_t filter_idx; + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + + idx = i40e_get_vxlan_port_idx(pf, port); + + /* Check if port already exists */ + if (idx >= 0) { + PMD_DRV_LOG(ERR, "Port %d already offloaded", port); + return -EINVAL; + } + + /* Now check if there is space to add the new port */ + idx = i40e_get_vxlan_port_idx(pf, 0); + if (idx < 0) { + PMD_DRV_LOG(ERR, "Maximum number of UDP ports reached," + "not adding port %d", port); + return -ENOSPC; + } + + ret = i40e_aq_add_udp_tunnel(hw, port, I40E_AQC_TUNNEL_TYPE_VXLAN, + &filter_idx, NULL); + if (ret < 0) { + PMD_DRV_LOG(ERR, "Failed to add VXLAN UDP port %d", port); + return -1; + } + + PMD_DRV_LOG(INFO, "Added port %d with AQ command with index %d", + port, filter_idx); + + /* New port: add it and mark its index in the bitmap */ + pf->vxlan_ports[idx] = port; + pf->vxlan_bitmap |= (1 << idx); + + if (!(pf->flags & I40E_FLAG_VXLAN)) + pf->flags |= I40E_FLAG_VXLAN; + + return 0; +} + +static int +i40e_del_vxlan_port(struct i40e_pf *pf, uint16_t port) +{ + int idx; + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + + if (!(pf->flags & I40E_FLAG_VXLAN)) { + PMD_DRV_LOG(ERR, "VXLAN UDP port was not configured."); + return -EINVAL; + } + + idx = i40e_get_vxlan_port_idx(pf, port); + + if (idx < 0) { + PMD_DRV_LOG(ERR, "Port %d doesn't exist", port); + return -EINVAL; + } + + if (i40e_aq_del_udp_tunnel(hw, idx, NULL) < 0) { + PMD_DRV_LOG(ERR, "Failed to delete VXLAN UDP port %d", port); + return -1; + } + + PMD_DRV_LOG(INFO, "Deleted port %d with AQ command with index %d", + port, idx); + + pf->vxlan_ports[idx] = 0; + pf->vxlan_bitmap &= ~(1 << 
idx); + + if (!pf->vxlan_bitmap) + pf->flags &= ~I40E_FLAG_VXLAN; + + return 0; +} + +/* Add UDP tunneling port */ +static int +i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev, + struct rte_eth_udp_tunnel *udp_tunnel) +{ + int ret = 0; + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + + if (udp_tunnel == NULL) + return -EINVAL; + + switch (udp_tunnel->prot_type) { + case RTE_TUNNEL_TYPE_VXLAN: + ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port); + break; + + case RTE_TUNNEL_TYPE_GENEVE: + case RTE_TUNNEL_TYPE_TEREDO: + PMD_DRV_LOG(ERR, "Tunnel type is not supported now."); + ret = -1; + break; + + default: + PMD_DRV_LOG(ERR, "Invalid tunnel type"); + ret = -1; + break; + } + + return ret; +} + +/* Remove UDP tunneling port */ +static int +i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev, + struct rte_eth_udp_tunnel *udp_tunnel) +{ + int ret = 0; + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + + if (udp_tunnel == NULL) + return -EINVAL; + + switch (udp_tunnel->prot_type) { + case RTE_TUNNEL_TYPE_VXLAN: + ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port); + break; + case RTE_TUNNEL_TYPE_GENEVE: + case RTE_TUNNEL_TYPE_TEREDO: + PMD_DRV_LOG(ERR, "Tunnel type is not supported now."); + ret = -1; + break; + default: + PMD_DRV_LOG(ERR, "Invalid tunnel type"); + ret = -1; + break; + } + + return ret; +} + +/* Calculate the maximum number of contiguous PF queues that are configured */ +static int +i40e_pf_calc_configured_queues_num(struct i40e_pf *pf) +{ + struct rte_eth_dev_data *data = pf->dev_data; + int i, num; + struct i40e_rx_queue *rxq; + + num = 0; + for (i = 0; i < pf->lan_nb_qps; i++) { + rxq = data->rx_queues[i]; + if (rxq && rxq->q_set) + num++; + else + break; + } + + return num; +} + +/* Configure RSS */ +static int +i40e_pf_config_rss(struct i40e_pf *pf) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + struct rte_eth_rss_conf rss_conf; + uint32_t i, lut = 0; + uint16_t j, num; + + /* + * If both VMDQ and RSS are enabled, not all of the PF queues are + * configured. It's necessary to calculate the actual PF queues that + * are configured.
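+ * For example, with VMDQ on and queues 0-5 of the LAN VSI set up but + * queue 6 left unconfigured, only the first 6 queues are usable, and the + * count is then rounded down to a power of two (4) by i40e_align_floor(). + * The lookup table below is filled round-robin over those queues, four + * 8-bit entries packed per 32-bit I40E_PFQF_HLUT register.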
+ */ + if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) { + num = i40e_pf_calc_configured_queues_num(pf); + num = i40e_align_floor(num); + } else + num = i40e_align_floor(pf->dev_data->nb_rx_queues); + + PMD_INIT_LOG(INFO, "Max of contiguous %u PF queues are configured", + num); + + if (num == 0) { + PMD_INIT_LOG(ERR, "No PF queues are configured to enable RSS"); + return -ENOTSUP; + } + + for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) { + if (j == num) + j = 0; + lut = (lut << 8) | (j & ((0x1 << + hw->func_caps.rss_table_entry_width) - 1)); + if ((i & 3) == 3) + I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut); + } + + rss_conf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf; + if ((rss_conf.rss_hf & I40E_RSS_OFFLOAD_ALL) == 0) { + i40e_pf_disable_rss(pf); + return 0; + } + if (rss_conf.rss_key == NULL || rss_conf.rss_key_len < + (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) { + /* Random default keys */ + static uint32_t rss_key_default[] = {0x6b793944, + 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8, + 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605, + 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581}; + + rss_conf.rss_key = (uint8_t *)rss_key_default; + rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * + sizeof(uint32_t); + } + + return i40e_hw_rss_hash_set(hw, &rss_conf); +} + +static int +i40e_tunnel_filter_param_check(struct i40e_pf *pf, + struct rte_eth_tunnel_filter_conf *filter) +{ + if (pf == NULL || filter == NULL) { + PMD_DRV_LOG(ERR, "Invalid parameter"); + return -EINVAL; + } + + if (filter->queue_id >= pf->dev_data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "Invalid queue ID"); + return -EINVAL; + } + + if (filter->inner_vlan > ETHER_MAX_VLAN_ID) { + PMD_DRV_LOG(ERR, "Invalid inner VLAN ID"); + return -EINVAL; + } + + if ((filter->filter_type & ETH_TUNNEL_FILTER_OMAC) && + (is_zero_ether_addr(filter->outer_mac))) { + PMD_DRV_LOG(ERR, "Cannot add NULL outer MAC address"); + return -EINVAL; + } + + if ((filter->filter_type & ETH_TUNNEL_FILTER_IMAC) && + (is_zero_ether_addr(filter->inner_mac))) { + PMD_DRV_LOG(ERR, "Cannot add NULL inner MAC address"); + return -EINVAL; + } + + return 0; +} + +static int +i40e_tunnel_filter_handle(struct rte_eth_dev *dev, enum rte_filter_op filter_op, + void *arg) +{ + struct rte_eth_tunnel_filter_conf *filter; + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + int ret = I40E_SUCCESS; + + filter = (struct rte_eth_tunnel_filter_conf *)(arg); + + if (i40e_tunnel_filter_param_check(pf, filter) < 0) + return I40E_ERR_PARAM; + + switch (filter_op) { + case RTE_ETH_FILTER_NOP: + if (!(pf->flags & I40E_FLAG_VXLAN)) + ret = I40E_NOT_SUPPORTED; + case RTE_ETH_FILTER_ADD: + ret = i40e_dev_tunnel_filter_set(pf, filter, 1); + break; + case RTE_ETH_FILTER_DELETE: + ret = i40e_dev_tunnel_filter_set(pf, filter, 0); + break; + default: + PMD_DRV_LOG(ERR, "unknown operation %u", filter_op); + ret = I40E_ERR_PARAM; + break; + } + + return ret; +} + +static int +i40e_pf_config_mq_rx(struct i40e_pf *pf) +{ + int ret = 0; + enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode; + + if (mq_mode & ETH_MQ_RX_DCB_FLAG) { + PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet"); + return -ENOTSUP; + } + + /* RSS setup */ + if (mq_mode & ETH_MQ_RX_RSS_FLAG) + ret = i40e_pf_config_rss(pf); + else + i40e_pf_disable_rss(pf); + + return ret; +} + +/* Get the symmetric hash enable configurations per port */ +static void +i40e_get_symmetric_hash_enable_per_port(struct i40e_hw *hw, uint8_t *enable) +{ + uint32_t 
reg = I40E_READ_REG(hw, I40E_PRTQF_CTL_0); + + *enable = reg & I40E_PRTQF_CTL_0_HSYM_ENA_MASK ? 1 : 0; +} + +/* Set the symmetric hash enable configurations per port */ +static void +i40e_set_symmetric_hash_enable_per_port(struct i40e_hw *hw, uint8_t enable) +{ + uint32_t reg = I40E_READ_REG(hw, I40E_PRTQF_CTL_0); + + if (enable > 0) { + if (reg & I40E_PRTQF_CTL_0_HSYM_ENA_MASK) { + PMD_DRV_LOG(INFO, "Symmetric hash has already " + "been enabled"); + return; + } + reg |= I40E_PRTQF_CTL_0_HSYM_ENA_MASK; + } else { + if (!(reg & I40E_PRTQF_CTL_0_HSYM_ENA_MASK)) { + PMD_DRV_LOG(INFO, "Symmetric hash has already " + "been disabled"); + return; + } + reg &= ~I40E_PRTQF_CTL_0_HSYM_ENA_MASK; + } + I40E_WRITE_REG(hw, I40E_PRTQF_CTL_0, reg); + I40E_WRITE_FLUSH(hw); +} + +/* + * Get global configurations of hash function type and symmetric hash enable + * per flow type (pctype). Note that global configuration means it affects all + * the ports on the same NIC. + */ +static int +i40e_get_hash_filter_global_config(struct i40e_hw *hw, + struct rte_eth_hash_global_conf *g_cfg) +{ + uint32_t reg, mask = I40E_FLOW_TYPES; + uint16_t i; + enum i40e_filter_pctype pctype; + + memset(g_cfg, 0, sizeof(*g_cfg)); + reg = I40E_READ_REG(hw, I40E_GLQF_CTL); + if (reg & I40E_GLQF_CTL_HTOEP_MASK) + g_cfg->hash_func = RTE_ETH_HASH_FUNCTION_TOEPLITZ; + else + g_cfg->hash_func = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR; + PMD_DRV_LOG(DEBUG, "Hash function is %s", + (reg & I40E_GLQF_CTL_HTOEP_MASK) ? "Toeplitz" : "Simple XOR"); + + for (i = 0; mask && i < RTE_ETH_FLOW_MAX; i++) { + if (!(mask & (1UL << i))) + continue; + mask &= ~(1UL << i); + /* A set bit indicates the corresponding flow type is supported */ + g_cfg->valid_bit_mask[0] |= (1UL << i); + pctype = i40e_flowtype_to_pctype(i); + reg = I40E_READ_REG(hw, I40E_GLQF_HSYM(pctype)); + if (reg & I40E_GLQF_HSYM_SYMH_ENA_MASK) + g_cfg->sym_hash_enable_mask[0] |= (1UL << i); + } + + return 0; +} + +static int +i40e_hash_global_config_check(struct rte_eth_hash_global_conf *g_cfg) +{ + uint32_t i; + uint32_t mask0, i40e_mask = I40E_FLOW_TYPES; + + if (g_cfg->hash_func != RTE_ETH_HASH_FUNCTION_TOEPLITZ && + g_cfg->hash_func != RTE_ETH_HASH_FUNCTION_SIMPLE_XOR && + g_cfg->hash_func != RTE_ETH_HASH_FUNCTION_DEFAULT) { + PMD_DRV_LOG(ERR, "Unsupported hash function type %d", + g_cfg->hash_func); + return -EINVAL; + } + + /* + * As i40e supports fewer than 32 flow types, only the first 32 bits + * need to be checked. + */ + mask0 = g_cfg->valid_bit_mask[0]; + for (i = 0; i < RTE_SYM_HASH_MASK_ARRAY_SIZE; i++) { + if (i == 0) { + /* Check if any unsupported flow type is configured */ + if ((mask0 | i40e_mask) ^ i40e_mask) + goto mask_err; + } else { + if (g_cfg->valid_bit_mask[i]) + goto mask_err; + } + } + + return 0; + +mask_err: + PMD_DRV_LOG(ERR, "i40e unsupported flow type bit(s) configured"); + + return -EINVAL; +} + +/* + * Set global configurations of hash function type and symmetric hash enable + * per flow type (pctype). Note that any modification of the global + * configuration will affect all the ports on the same NIC.
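+ * + * A caller typically reaches this through the ethdev filter API; roughly + * (sketch only, valid_bit_mask/sym_hash_enable_mask elided): + * struct rte_eth_hash_filter_info info = { + * .info_type = RTE_ETH_HASH_FILTER_GLOBAL_CONFIG, + * }; + * info.info.global_conf.hash_func = RTE_ETH_HASH_FUNCTION_TOEPLITZ; + * rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_HASH, + * RTE_ETH_FILTER_SET, &info); + * which lands here via i40e_hash_filter_ctrl().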
+ */ +static int +i40e_set_hash_filter_global_config(struct i40e_hw *hw, + struct rte_eth_hash_global_conf *g_cfg) +{ + int ret; + uint16_t i; + uint32_t reg; + uint32_t mask0 = g_cfg->valid_bit_mask[0]; + enum i40e_filter_pctype pctype; + + /* Check the input parameters */ + ret = i40e_hash_global_config_check(g_cfg); + if (ret < 0) + return ret; + + for (i = 0; mask0 && i < UINT32_BIT; i++) { + if (!(mask0 & (1UL << i))) + continue; + mask0 &= ~(1UL << i); + pctype = i40e_flowtype_to_pctype(i); + reg = (g_cfg->sym_hash_enable_mask[0] & (1UL << i)) ? + I40E_GLQF_HSYM_SYMH_ENA_MASK : 0; + I40E_WRITE_REG(hw, I40E_GLQF_HSYM(pctype), reg); + } + + reg = I40E_READ_REG(hw, I40E_GLQF_CTL); + if (g_cfg->hash_func == RTE_ETH_HASH_FUNCTION_TOEPLITZ) { + /* Toeplitz */ + if (reg & I40E_GLQF_CTL_HTOEP_MASK) { + PMD_DRV_LOG(DEBUG, "Hash function already set to " + "Toeplitz"); + goto out; + } + reg |= I40E_GLQF_CTL_HTOEP_MASK; + } else if (g_cfg->hash_func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { + /* Simple XOR */ + if (!(reg & I40E_GLQF_CTL_HTOEP_MASK)) { + PMD_DRV_LOG(DEBUG, "Hash function already set to " + "Simple XOR"); + goto out; + } + reg &= ~I40E_GLQF_CTL_HTOEP_MASK; + } else + /* Use the default, and keep it as it is */ + goto out; + + I40E_WRITE_REG(hw, I40E_GLQF_CTL, reg); + +out: + I40E_WRITE_FLUSH(hw); + + return 0; +} + +static int +i40e_hash_filter_get(struct i40e_hw *hw, struct rte_eth_hash_filter_info *info) +{ + int ret = 0; + + if (!hw || !info) { + PMD_DRV_LOG(ERR, "Invalid pointer"); + return -EFAULT; + } + + switch (info->info_type) { + case RTE_ETH_HASH_FILTER_SYM_HASH_ENA_PER_PORT: + i40e_get_symmetric_hash_enable_per_port(hw, + &(info->info.enable)); + break; + case RTE_ETH_HASH_FILTER_GLOBAL_CONFIG: + ret = i40e_get_hash_filter_global_config(hw, + &(info->info.global_conf)); + break; + default: + PMD_DRV_LOG(ERR, "Hash filter info type (%d) not supported", + info->info_type); + ret = -EINVAL; + break; + } + + return ret; +} + +static int +i40e_hash_filter_set(struct i40e_hw *hw, struct rte_eth_hash_filter_info *info) +{ + int ret = 0; + + if (!hw || !info) { + PMD_DRV_LOG(ERR, "Invalid pointer"); + return -EFAULT; + } + + switch (info->info_type) { + case RTE_ETH_HASH_FILTER_SYM_HASH_ENA_PER_PORT: + i40e_set_symmetric_hash_enable_per_port(hw, info->info.enable); + break; + case RTE_ETH_HASH_FILTER_GLOBAL_CONFIG: + ret = i40e_set_hash_filter_global_config(hw, + &(info->info.global_conf)); + break; + default: + PMD_DRV_LOG(ERR, "Hash filter info type (%d) not supported", + info->info_type); + ret = -EINVAL; + break; + } + + return ret; +} + +/* Operations for hash function */ +static int +i40e_hash_filter_ctrl(struct rte_eth_dev *dev, + enum rte_filter_op filter_op, + void *arg) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + int ret = 0; + + switch (filter_op) { + case RTE_ETH_FILTER_NOP: + break; + case RTE_ETH_FILTER_GET: + ret = i40e_hash_filter_get(hw, + (struct rte_eth_hash_filter_info *)arg); + break; + case RTE_ETH_FILTER_SET: + ret = i40e_hash_filter_set(hw, + (struct rte_eth_hash_filter_info *)arg); + break; + default: + PMD_DRV_LOG(WARNING, "Filter operation (%d) not supported", + filter_op); + ret = -ENOTSUP; + break; + } + + return ret; +} + +/* + * Configure an ethertype filter, which can direct packets by filtering + * on mac address and ether_type, or on ether_type only + */ +static int +i40e_ethertype_filter_set(struct i40e_pf *pf, + struct rte_eth_ethertype_filter *filter, + bool add) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); +
struct i40e_control_filter_stats stats; + uint16_t flags = 0; + int ret; + + if (filter->queue >= pf->dev_data->nb_rx_queues) { + PMD_DRV_LOG(ERR, "Invalid queue ID"); + return -EINVAL; + } + if (filter->ether_type == ETHER_TYPE_IPv4 || + filter->ether_type == ETHER_TYPE_IPv6) { + PMD_DRV_LOG(ERR, "unsupported ether_type(0x%04x) in" + " control packet filter.", filter->ether_type); + return -EINVAL; + } + if (filter->ether_type == ETHER_TYPE_VLAN) + PMD_DRV_LOG(WARNING, "filter vlan ether_type in first tag is" + " not supported."); + + if (!(filter->flags & RTE_ETHTYPE_FLAGS_MAC)) + flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC; + if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) + flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP; + flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE; + + memset(&stats, 0, sizeof(stats)); + ret = i40e_aq_add_rem_control_packet_filter(hw, + filter->mac_addr.addr_bytes, + filter->ether_type, flags, + pf->main_vsi->seid, + filter->queue, add, &stats, NULL); + + PMD_DRV_LOG(INFO, "add/rem control packet filter, return %d," + " mac_etype_used = %u, etype_used = %u," + " mac_etype_free = %u, etype_free = %u\n", + ret, stats.mac_etype_used, stats.etype_used, + stats.mac_etype_free, stats.etype_free); + if (ret < 0) + return -ENOSYS; + return 0; +} + +/* + * Handle operations for ethertype filter. + */ +static int +i40e_ethertype_filter_handle(struct rte_eth_dev *dev, + enum rte_filter_op filter_op, + void *arg) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + int ret = 0; + + if (filter_op == RTE_ETH_FILTER_NOP) + return ret; + + if (arg == NULL) { + PMD_DRV_LOG(ERR, "arg shouldn't be NULL for operation %u", + filter_op); + return -EINVAL; + } + + switch (filter_op) { + case RTE_ETH_FILTER_ADD: + ret = i40e_ethertype_filter_set(pf, + (struct rte_eth_ethertype_filter *)arg, + TRUE); + break; + case RTE_ETH_FILTER_DELETE: + ret = i40e_ethertype_filter_set(pf, + (struct rte_eth_ethertype_filter *)arg, + FALSE); + break; + default: + PMD_DRV_LOG(ERR, "unsupported operation %u\n", filter_op); + ret = -ENOSYS; + break; + } + return ret; +} + +static int +i40e_dev_filter_ctrl(struct rte_eth_dev *dev, + enum rte_filter_type filter_type, + enum rte_filter_op filter_op, + void *arg) +{ + int ret = 0; + + if (dev == NULL) + return -EINVAL; + + switch (filter_type) { + case RTE_ETH_FILTER_HASH: + ret = i40e_hash_filter_ctrl(dev, filter_op, arg); + break; + case RTE_ETH_FILTER_MACVLAN: + ret = i40e_mac_filter_handle(dev, filter_op, arg); + break; + case RTE_ETH_FILTER_ETHERTYPE: + ret = i40e_ethertype_filter_handle(dev, filter_op, arg); + break; + case RTE_ETH_FILTER_TUNNEL: + ret = i40e_tunnel_filter_handle(dev, filter_op, arg); + break; + case RTE_ETH_FILTER_FDIR: + ret = i40e_fdir_ctrl_func(dev, filter_op, arg); + break; + default: + PMD_DRV_LOG(WARNING, "Filter type (%d) not supported", + filter_type); + ret = -EINVAL; + break; + } + + return ret; +} + +/* + * As some registers wouldn't be reset unless a global hardware reset, + * hardware initialization is needed to put those registers into an + * expected initial state. 
+ */ +static void +i40e_hw_init(struct i40e_hw *hw) +{ + /* clear the PF Queue Filter control register */ + I40E_WRITE_REG(hw, I40E_PFQF_CTL_0, 0); + + /* Disable symmetric hash per port */ + i40e_set_symmetric_hash_enable_per_port(hw, 0); +} + +enum i40e_filter_pctype +i40e_flowtype_to_pctype(uint16_t flow_type) +{ + static const enum i40e_filter_pctype pctype_table[] = { + [RTE_ETH_FLOW_FRAG_IPV4] = I40E_FILTER_PCTYPE_FRAG_IPV4, + [RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = + I40E_FILTER_PCTYPE_NONF_IPV4_UDP, + [RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = + I40E_FILTER_PCTYPE_NONF_IPV4_TCP, + [RTE_ETH_FLOW_NONFRAG_IPV4_SCTP] = + I40E_FILTER_PCTYPE_NONF_IPV4_SCTP, + [RTE_ETH_FLOW_NONFRAG_IPV4_OTHER] = + I40E_FILTER_PCTYPE_NONF_IPV4_OTHER, + [RTE_ETH_FLOW_FRAG_IPV6] = I40E_FILTER_PCTYPE_FRAG_IPV6, + [RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = + I40E_FILTER_PCTYPE_NONF_IPV6_UDP, + [RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = + I40E_FILTER_PCTYPE_NONF_IPV6_TCP, + [RTE_ETH_FLOW_NONFRAG_IPV6_SCTP] = + I40E_FILTER_PCTYPE_NONF_IPV6_SCTP, + [RTE_ETH_FLOW_NONFRAG_IPV6_OTHER] = + I40E_FILTER_PCTYPE_NONF_IPV6_OTHER, + [RTE_ETH_FLOW_L2_PAYLOAD] = I40E_FILTER_PCTYPE_L2_PAYLOAD, + }; + + return pctype_table[flow_type]; +} + +uint16_t +i40e_pctype_to_flowtype(enum i40e_filter_pctype pctype) +{ + static const uint16_t flowtype_table[] = { + [I40E_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_FLOW_FRAG_IPV4, + [I40E_FILTER_PCTYPE_NONF_IPV4_UDP] = + RTE_ETH_FLOW_NONFRAG_IPV4_UDP, + [I40E_FILTER_PCTYPE_NONF_IPV4_TCP] = + RTE_ETH_FLOW_NONFRAG_IPV4_TCP, + [I40E_FILTER_PCTYPE_NONF_IPV4_SCTP] = + RTE_ETH_FLOW_NONFRAG_IPV4_SCTP, + [I40E_FILTER_PCTYPE_NONF_IPV4_OTHER] = + RTE_ETH_FLOW_NONFRAG_IPV4_OTHER, + [I40E_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_FLOW_FRAG_IPV6, + [I40E_FILTER_PCTYPE_NONF_IPV6_UDP] = + RTE_ETH_FLOW_NONFRAG_IPV6_UDP, + [I40E_FILTER_PCTYPE_NONF_IPV6_TCP] = + RTE_ETH_FLOW_NONFRAG_IPV6_TCP, + [I40E_FILTER_PCTYPE_NONF_IPV6_SCTP] = + RTE_ETH_FLOW_NONFRAG_IPV6_SCTP, + [I40E_FILTER_PCTYPE_NONF_IPV6_OTHER] = + RTE_ETH_FLOW_NONFRAG_IPV6_OTHER, + [I40E_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_FLOW_L2_PAYLOAD, + }; + + return flowtype_table[pctype]; +} + +static int +i40e_debug_read_register(struct i40e_hw *hw, uint32_t addr, uint64_t *val) +{ + struct i40e_aq_desc desc; + enum i40e_status_code status; + + i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_debug_read_reg); + desc.params.internal.param1 = rte_cpu_to_le_32(addr); + status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); + if (status < 0) + return status; + + *val = ((uint64_t)(rte_le_to_cpu_32(desc.params.internal.param2)) << + (CHAR_BIT * sizeof(uint32_t))) + + rte_le_to_cpu_32(desc.params.internal.param3); + + return status; +} + +/* + * On X710, performance number is far from the expectation on recent firmware + * versions; on XL710, performance number is also far from the expectation on + * recent firmware versions, if promiscuous mode is disabled, or promiscuous + * mode is enabled and port MAC address is equal to the packet destination MAC + * address. The fix for this issue may not be integrated in the following + * firmware version. So the workaround in software driver is needed. It needs + * to modify the initial values of 3 internal only registers for both X710 and + * XL710. Note that the values for X710 or XL710 could be different, and the + * workaround can be removed when it is fixed in firmware in the future. 
+ */ + +/* For both X710 and XL710 */ +#define I40E_GL_SWR_PRI_JOIN_MAP_0_VALUE 0x10000200 +#define I40E_GL_SWR_PRI_JOIN_MAP_0 0x26CE00 + +#define I40E_GL_SWR_PRI_JOIN_MAP_2_VALUE 0x011f0200 +#define I40E_GL_SWR_PRI_JOIN_MAP_2 0x26CE08 + +/* For X710 */ +#define I40E_GL_SWR_PM_UP_THR_EF_VALUE 0x03030303 +/* For XL710 */ +#define I40E_GL_SWR_PM_UP_THR_SF_VALUE 0x06060606 +#define I40E_GL_SWR_PM_UP_THR 0x269FBC + +static void +i40e_configure_registers(struct i40e_hw *hw) +{ + static struct { + uint32_t addr; + uint64_t val; + } reg_table[] = { + {I40E_GL_SWR_PRI_JOIN_MAP_0, I40E_GL_SWR_PRI_JOIN_MAP_0_VALUE}, + {I40E_GL_SWR_PRI_JOIN_MAP_2, I40E_GL_SWR_PRI_JOIN_MAP_2_VALUE}, + {I40E_GL_SWR_PM_UP_THR, 0}, /* Compute value dynamically */ + }; + uint64_t reg; + uint32_t i; + int ret; + + for (i = 0; i < RTE_DIM(reg_table); i++) { + if (reg_table[i].addr == I40E_GL_SWR_PM_UP_THR) { + if (i40e_is_40G_device(hw->device_id)) /* For XL710 */ + reg_table[i].val = + I40E_GL_SWR_PM_UP_THR_SF_VALUE; + else /* For X710 */ + reg_table[i].val = + I40E_GL_SWR_PM_UP_THR_EF_VALUE; + } + + ret = i40e_debug_read_register(hw, reg_table[i].addr, ®); + if (ret < 0) { + PMD_DRV_LOG(ERR, "Failed to read from 0x%"PRIx32, + reg_table[i].addr); + break; + } + PMD_DRV_LOG(DEBUG, "Read from 0x%"PRIx32": 0x%"PRIx64, + reg_table[i].addr, reg); + if (reg == reg_table[i].val) + continue; + + ret = i40e_aq_debug_write_register(hw, reg_table[i].addr, + reg_table[i].val, NULL); + if (ret < 0) { + PMD_DRV_LOG(ERR, "Failed to write 0x%"PRIx64" to the " + "address of 0x%"PRIx32, reg_table[i].val, + reg_table[i].addr); + break; + } + PMD_DRV_LOG(DEBUG, "Write 0x%"PRIx64" to the address of " + "0x%"PRIx32, reg_table[i].val, reg_table[i].addr); + } +} diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h new file mode 100644 index 0000000..b9bed5a --- /dev/null +++ b/drivers/net/i40e/i40e_ethdev.h @@ -0,0 +1,567 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _I40E_ETHDEV_H_ +#define _I40E_ETHDEV_H_ + +#include + +#define I40E_AQ_LEN 32 +#define I40E_AQ_BUF_SZ 4096 +/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */ +#define I40E_MAX_Q_PER_TC 64 +#define I40E_NUM_DESC_DEFAULT 512 +#define I40E_NUM_DESC_ALIGN 32 +#define I40E_BUF_SIZE_MIN 1024 +#define I40E_FRAME_SIZE_MAX 9728 +#define I40E_QUEUE_BASE_ADDR_UNIT 128 +/* number of VSIs and queue default setting */ +#define I40E_MAX_QP_NUM_PER_VF 16 +#define I40E_DEFAULT_QP_NUM_FDIR 1 +#define I40E_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) +#define I40E_VFTA_SIZE (4096 / I40E_UINT32_BIT_SIZE) +/* + * vlan_id is a 12 bit number. + * The VFTA array is actually a 4096 bit array, 128 of 32bit elements. + * 2^5 = 32. The val of lower 5 bits specifies the bit in the 32bit element. + * The higher 7 bit val specifies VFTA array index. + */ +#define I40E_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F)) +#define I40E_VFTA_IDX(vlan_id) ((vlan_id) >> 5) + +/* Default TC traffic in case DCB is not enabled */ +#define I40E_DEFAULT_TCMAP 0x1 +#define I40E_FDIR_QUEUE_ID 0 + +/* Always assign pool 0 to main VSI, VMDQ will start from 1 */ +#define I40E_VMDQ_POOL_BASE 1 + +#define I40E_DEFAULT_RX_FREE_THRESH 32 +#define I40E_DEFAULT_RX_PTHRESH 8 +#define I40E_DEFAULT_RX_HTHRESH 8 +#define I40E_DEFAULT_RX_WTHRESH 0 + +#define I40E_DEFAULT_TX_FREE_THRESH 32 +#define I40E_DEFAULT_TX_PTHRESH 32 +#define I40E_DEFAULT_TX_HTHRESH 0 +#define I40E_DEFAULT_TX_WTHRESH 0 +#define I40E_DEFAULT_TX_RSBIT_THRESH 32 + +/* Bit shift and mask */ +#define I40E_4_BIT_WIDTH (CHAR_BIT / 2) +#define I40E_4_BIT_MASK RTE_LEN2MASK(I40E_4_BIT_WIDTH, uint8_t) +#define I40E_8_BIT_WIDTH CHAR_BIT +#define I40E_8_BIT_MASK UINT8_MAX +#define I40E_16_BIT_WIDTH (CHAR_BIT * 2) +#define I40E_16_BIT_MASK UINT16_MAX +#define I40E_32_BIT_WIDTH (CHAR_BIT * 4) +#define I40E_32_BIT_MASK UINT32_MAX +#define I40E_48_BIT_WIDTH (CHAR_BIT * 6) +#define I40E_48_BIT_MASK RTE_LEN2MASK(I40E_48_BIT_WIDTH, uint64_t) + +/* index flex payload per layer */ +enum i40e_flxpld_layer_idx { + I40E_FLXPLD_L2_IDX = 0, + I40E_FLXPLD_L3_IDX = 1, + I40E_FLXPLD_L4_IDX = 2, + I40E_MAX_FLXPLD_LAYER = 3, +}; +#define I40E_MAX_FLXPLD_FIED 3 /* max number of flex payload fields */ +#define I40E_FDIR_BITMASK_NUM_WORD 2 /* max number of bitmask words */ +#define I40E_FDIR_MAX_FLEXWORD_NUM 8 /* max number of flexpayload words */ +#define I40E_FDIR_MAX_FLEX_LEN 16 /* len in bytes of flex payload */ + +/* i40e flags */ +#define I40E_FLAG_RSS (1ULL << 0) +#define I40E_FLAG_DCB (1ULL << 1) +#define I40E_FLAG_VMDQ (1ULL << 2) +#define I40E_FLAG_SRIOV (1ULL << 3) +#define I40E_FLAG_HEADER_SPLIT_DISABLED (1ULL << 4) +#define I40E_FLAG_HEADER_SPLIT_ENABLED (1ULL << 5) +#define I40E_FLAG_FDIR (1ULL << 6) +#define I40E_FLAG_VXLAN (1ULL << 7) +#define I40E_FLAG_ALL (I40E_FLAG_RSS | \ + I40E_FLAG_DCB | \ + I40E_FLAG_VMDQ | \ + I40E_FLAG_SRIOV | \ + I40E_FLAG_HEADER_SPLIT_DISABLED | \ + I40E_FLAG_HEADER_SPLIT_ENABLED | \ + I40E_FLAG_FDIR | 
\ + I40E_FLAG_VXLAN) + +#define I40E_RSS_OFFLOAD_ALL ( \ + ETH_RSS_FRAG_IPV4 | \ + ETH_RSS_NONFRAG_IPV4_TCP | \ + ETH_RSS_NONFRAG_IPV4_UDP | \ + ETH_RSS_NONFRAG_IPV4_SCTP | \ + ETH_RSS_NONFRAG_IPV4_OTHER | \ + ETH_RSS_FRAG_IPV6 | \ + ETH_RSS_NONFRAG_IPV6_TCP | \ + ETH_RSS_NONFRAG_IPV6_UDP | \ + ETH_RSS_NONFRAG_IPV6_SCTP | \ + ETH_RSS_NONFRAG_IPV6_OTHER | \ + ETH_RSS_L2_PAYLOAD) + +/* All bits of RSS hash enable */ +#define I40E_RSS_HENA_ALL ( \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_UDP) | \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_TCP) | \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_SCTP) | \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) | \ + (1ULL << I40E_FILTER_PCTYPE_FRAG_IPV4) | \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_UDP) | \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_TCP) | \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_SCTP) | \ + (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_OTHER) | \ + (1ULL << I40E_FILTER_PCTYPE_FRAG_IPV6) | \ + (1ULL << I40E_FILTER_PCTYPE_FCOE_OX) | \ + (1ULL << I40E_FILTER_PCTYPE_FCOE_RX) | \ + (1ULL << I40E_FILTER_PCTYPE_FCOE_OTHER) | \ + (1ULL << I40E_FILTER_PCTYPE_L2_PAYLOAD)) + +struct i40e_adapter; + +/** + * MAC filter structure + */ +struct i40e_mac_filter_info { + enum rte_mac_filter_type filter_type; + struct ether_addr mac_addr; +}; + +TAILQ_HEAD(i40e_mac_filter_list, i40e_mac_filter); + +/* MAC filter list structure */ +struct i40e_mac_filter { + TAILQ_ENTRY(i40e_mac_filter) next; + struct i40e_mac_filter_info mac_info; +}; + +TAILQ_HEAD(i40e_vsi_list_head, i40e_vsi_list); + +struct i40e_vsi; + +/* VSI list structure */ +struct i40e_vsi_list { + TAILQ_ENTRY(i40e_vsi_list) list; + struct i40e_vsi *vsi; +}; + +struct i40e_rx_queue; +struct i40e_tx_queue; + +/* Structure that defines a VEB */ +struct i40e_veb { + struct i40e_vsi_list_head head; + struct i40e_vsi *associate_vsi; /* The associated VSI that owns the VEB */ + uint16_t seid; /* The seid of the VEB itself */ + uint16_t uplink_seid; /* The uplink seid of this VEB */ + uint16_t stats_idx; + struct i40e_eth_stats stats; +}; + +/* i40e MACVLAN filter structure */ +struct i40e_macvlan_filter { + struct ether_addr macaddr; + enum rte_mac_filter_type filter_type; + uint16_t vlan_id; +}; + +/* + * Structure that defines a VSI, associated with an adapter. + */ +struct i40e_vsi { + struct i40e_adapter *adapter; /* Backreference to associated adapter */ + struct i40e_aqc_vsi_properties_data info; /* VSI properties */ + + struct i40e_eth_stats eth_stats_offset; + struct i40e_eth_stats eth_stats; + /* + * When the driver is loaded, only a default main VSI exists. When a + * new VSI needs to be added, the HW needs to know how the VSIs are + * organized. Besides that, a VSI is only an element and can't switch + * packets itself, so a VEB component must be added to perform the + * switching. Therefore, a new VSI needs to specify its uplink VSI + * (parent VSI) before it is created. The uplink VSI checks whether it + * already has a VEB to switch packets; if not, it tries to create one. + * Then the uplink VSI moves the new VSI into its sib_vsi_list to + * manage all the downlink VSIs. + * sib_vsi_list: the VSI list that shares the same uplink VSI. + * parent_vsi : the uplink VSI. It's NULL for the main VSI. + * veb : the VEB associated with the VSI.
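+ * For example, in i40e_vmdq_setup() each VMDQ VSI is created with the + * main VSI passed as its uplink; the main VSI then creates a VEB on first + * use and links every new VMDQ VSI into its sib_vsi_list.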
+struct pool_entry { + LIST_ENTRY(pool_entry) next; + uint16_t base; + uint16_t len; +}; + +LIST_HEAD(res_list, pool_entry); + +struct i40e_res_pool_info { + uint32_t base; /* Resource start index */ + uint32_t num_alloc; /* Allocated resource number */ + uint32_t num_free; /* Total available resource number */ + struct res_list alloc_list; /* Allocated resource list */ + struct res_list free_list; /* Available resource list */ +}; + +enum I40E_VF_STATE { + I40E_VF_INACTIVE = 0, + I40E_VF_INRESET, + I40E_VF_ININIT, + I40E_VF_ACTIVE, +}; + +/* + * Structure to store private data for the PF host. + */ +struct i40e_pf_vf { + struct i40e_pf *pf; + struct i40e_vsi *vsi; + enum I40E_VF_STATE state; /* VF state */ + uint16_t vf_idx; /* VF index in pf->vfs */ + uint16_t lan_nb_qps; /* Number of queue pairs actually allocated */ + uint16_t reset_cnt; /* Total number of VF resets */ +}; + +/* + * Structure to store private data for a VMDQ instance + */ +struct i40e_vmdq_info { + struct i40e_pf *pf; + struct i40e_vsi *vsi; +}; + +/* + * Structure to store flex pit for the flow director. + */ +struct i40e_fdir_flex_pit { + uint8_t src_offset; /* offset in words from the beginning of payload */ + uint8_t size; /* size in words */ + uint8_t dst_offset; /* offset in words of flexible payload */ +}; + +struct i40e_fdir_flex_mask { + uint8_t word_mask; /**< Bit i enables word i of flexible payload */ + struct { + uint8_t offset; + uint16_t mask; + } bitmask[I40E_FDIR_BITMASK_NUM_WORD]; +}; + +#define I40E_FILTER_PCTYPE_MAX 64 +/* + * A structure used to define the fields of FDIR related info. + */ +struct i40e_fdir_info { + struct i40e_vsi *fdir_vsi; /* pointer to fdir VSI structure */ + uint16_t match_counter_index; /* Statistic counter index used for fdir*/ + struct i40e_tx_queue *txq; + struct i40e_rx_queue *rxq; + void *prg_pkt; /* memory for fdir program packet */ + uint64_t dma_addr; /* physical address of packet memory */ + /* + * Rules for how the byte stream is extracted as flexible payload; + * for each payload layer, the setting can have up to three elements. + */ + struct i40e_fdir_flex_pit flex_set[I40E_MAX_FLXPLD_LAYER * I40E_MAX_FLXPLD_FIED]; + struct i40e_fdir_flex_mask flex_mask[I40E_FILTER_PCTYPE_MAX]; +};
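To make the word-based units of i40e_fdir_flex_pit concrete, here is a standalone sketch of the extraction an entry describes (performed by the hardware in reality; extract_flex() is invented purely for illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Copied shape of i40e_fdir_flex_pit; all offsets are 16-bit words. */
    struct flex_pit {
        uint8_t src_offset; /* offset in words from start of payload */
        uint8_t size;       /* size in words */
        uint8_t dst_offset; /* offset in words within flexible payload */
    };

    static void extract_flex(const struct flex_pit *pit,
                             const uint8_t *payload, uint8_t flex[16])
    {
        /* copy `size` words from payload word `src_offset`
         * to flexible-payload word `dst_offset` */
        memcpy(flex + 2 * pit->dst_offset,
               payload + 2 * pit->src_offset,
               2 * pit->size);
    }

    int main(void)
    {
        const uint8_t payload[32] = { [4] = 0xde, [5] = 0xad,
                                      [6] = 0xbe, [7] = 0xef };
        uint8_t flex[16] = {0};
        /* take 2 words starting at payload word 2, place at flex word 0 */
        struct flex_pit pit = { .src_offset = 2, .size = 2, .dst_offset = 0 };

        extract_flex(&pit, payload, flex);
        printf("%02x%02x%02x%02x\n", flex[0], flex[1], flex[2], flex[3]);
        return 0;
    }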
+/* + * Structure to store private data specific to a PF instance. + */ +struct i40e_pf { + struct i40e_adapter *adapter; /* The adapter this PF is associated with */ + struct i40e_vsi *main_vsi; /* pointer to main VSI structure */ + uint16_t mac_seid; /* The seid of the MAC of this PF */ + uint16_t main_vsi_seid; /* The seid of the main VSI */ + uint16_t max_num_vsi; + struct i40e_res_pool_info qp_pool; /* Queue pair pool */ + struct i40e_res_pool_info msix_pool; /* MSIX interrupt pool */ + + struct i40e_hw_port_stats stats_offset; + struct i40e_hw_port_stats stats; + bool offset_loaded; + + struct rte_eth_dev_data *dev_data; /* Pointer to the device data */ + struct ether_addr dev_addr; /* PF device mac address */ + uint64_t flags; /* PF feature flags */ + /* All kinds of queue pair setting for different VSIs */ + struct i40e_pf_vf *vfs; + uint16_t vf_num; + /* Each of the queue pair counts below should be a power of 2, + since that is a precondition once the TC configuration is applied */ + uint16_t lan_nb_qps; /* The number of queue pairs of LAN */ + uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */ + uint16_t vf_nb_qps; /* The number of queue pairs of VF */ + uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */ + uint16_t hash_lut_size; /* The size of hash lookup table */ + /* store VXLAN UDP ports */ + uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS]; + uint16_t vxlan_bitmap; /* VXLAN bit mask */ + + /* VMDQ related info */ + uint16_t max_nb_vmdq_vsi; /* Max number of VMDQ VSIs supported */ + uint16_t nb_cfg_vmdq_vsi; /* number of VMDQ VSIs configured */ + struct i40e_vmdq_info *vmdq; + + struct i40e_fdir_info fdir; /* flow director info */ +}; + +enum pending_msg { + PFMSG_LINK_CHANGE = 0x1, + PFMSG_RESET_IMPENDING = 0x2, + PFMSG_DRIVER_CLOSE = 0x4, +}; + +struct i40e_vsi_vlan_pvid_info { + uint16_t on; /* Enable or disable pvid */ + union { + uint16_t pvid; /* Valid when 'on' is set; the pvid to apply */ + struct { + /* Valid when 'on' is cleared. 'tagged' will reject tagged packets, + * while 'untagged' will reject untagged packets. + */ + uint8_t tagged; + uint8_t untagged; + } reject; + } config; +}; + +struct i40e_vf_rx_queues { + uint64_t rx_dma_addr; + uint32_t rx_ring_len; + uint32_t buff_size; +}; + +struct i40e_vf_tx_queues { + uint64_t tx_dma_addr; + uint32_t tx_ring_len; +}; + +/* + * Structure to store private data specific to a VF instance. + */ +struct i40e_vf { + struct i40e_adapter *adapter; /* The adapter this VF is associated with */ + struct rte_eth_dev_data *dev_data; /* Pointer to the device data */ + uint16_t num_queue_pairs; + uint16_t max_pkt_len; /* Maximum packet length */ + bool promisc_unicast_enabled; + bool promisc_multicast_enabled; + + uint32_t version_major; /* Major version number */ + uint32_t version_minor; /* Minor version number */ + uint16_t promisc_flags; /* Promiscuous setting */ + uint32_t vlan[I40E_VFTA_SIZE]; /* VLAN bit map */ + + /* Event from pf */ + bool dev_closed; + bool link_up; + bool vf_reset; + volatile uint32_t pend_cmd; /* pending command not finished yet */ + u16 pend_msg; /* flags indicating events from pf not yet handled */ + + /* VSI info */ + struct i40e_virtchnl_vf_resource *vf_res; /* All VSIs */ + struct i40e_virtchnl_vsi_resource *vsi_res; /* LAN VSI */ + struct i40e_vsi vsi; +}; + +/* + * Structure to store private data for each PF/VF instance.
+ */ +struct i40e_adapter { + /* Common for both PF and VF */ + struct i40e_hw hw; + struct rte_eth_dev *eth_dev; + + /* Specific for PF or VF */ + union { + struct i40e_pf pf; + struct i40e_vf vf; + }; +}; + +int i40e_dev_switch_queues(struct i40e_pf *pf, bool on); +int i40e_vsi_release(struct i40e_vsi *vsi); +struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf, + enum i40e_vsi_type type, + struct i40e_vsi *uplink_vsi, + uint16_t user_param); +int i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on); +int i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on); +int i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan); +int i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan); +int i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *filter); +int i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct ether_addr *addr); +void i40e_update_vsi_stats(struct i40e_vsi *vsi); +void i40e_pf_disable_irq0(struct i40e_hw *hw); +void i40e_pf_enable_irq0(struct i40e_hw *hw); +int i40e_dev_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete); +void i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi); +void i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi); +int i40e_vsi_vlan_pvid_set(struct i40e_vsi *vsi, + struct i40e_vsi_vlan_pvid_info *info); +int i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on); +uint64_t i40e_config_hena(uint64_t flags); +uint64_t i40e_parse_hena(uint64_t flags); +enum i40e_status_code i40e_fdir_setup_tx_resources(struct i40e_pf *pf); +enum i40e_status_code i40e_fdir_setup_rx_resources(struct i40e_pf *pf); +int i40e_fdir_setup(struct i40e_pf *pf); +const struct rte_memzone *i40e_memzone_reserve(const char *name, + uint32_t len, + int socket_id); +int i40e_fdir_configure(struct rte_eth_dev *dev); +void i40e_fdir_teardown(struct i40e_pf *pf); +enum i40e_filter_pctype i40e_flowtype_to_pctype(uint16_t flow_type); +uint16_t i40e_pctype_to_flowtype(enum i40e_filter_pctype pctype); +int i40e_fdir_ctrl_func(struct rte_eth_dev *dev, + enum rte_filter_op filter_op, + void *arg); + +/* I40E_DEV_PRIVATE_TO */ +#define I40E_DEV_PRIVATE_TO_PF(adapter) \ + (&((struct i40e_adapter *)adapter)->pf) +#define I40E_DEV_PRIVATE_TO_HW(adapter) \ + (&((struct i40e_adapter *)adapter)->hw) +#define I40E_DEV_PRIVATE_TO_ADAPTER(adapter) \ + ((struct i40e_adapter *)adapter) + +/* I40EVF_DEV_PRIVATE_TO */ +#define I40EVF_DEV_PRIVATE_TO_VF(adapter) \ + (&((struct i40e_adapter *)adapter)->vf) + +static inline struct i40e_vsi * +i40e_get_vsi_from_adapter(struct i40e_adapter *adapter) +{ + struct i40e_hw *hw; + + if (!adapter) + return NULL; + + hw = I40E_DEV_PRIVATE_TO_HW(adapter); + if (hw->mac.type == I40E_MAC_VF) { + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(adapter); + return &vf->vsi; + } else { + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(adapter); + return pf->main_vsi; + } +} +#define I40E_DEV_PRIVATE_TO_MAIN_VSI(adapter) \ + i40e_get_vsi_from_adapter((struct i40e_adapter *)adapter) + +/* I40E_VSI_TO */ +#define I40E_VSI_TO_HW(vsi) \ + (&(((struct i40e_vsi *)vsi)->adapter->hw)) +#define I40E_VSI_TO_PF(vsi) \ + (&(((struct i40e_vsi *)vsi)->adapter->pf)) +#define I40E_VSI_TO_DEV_DATA(vsi) \ + (((struct i40e_vsi *)vsi)->adapter->pf.dev_data) +#define I40E_VSI_TO_ETH_DEV(vsi) \ + (((struct i40e_vsi *)vsi)->adapter->eth_dev) + +/* I40E_PF_TO */ +#define I40E_PF_TO_HW(pf) \ + (&(((struct i40e_pf *)pf)->adapter->hw)) +#define I40E_PF_TO_ADAPTER(pf) \ + ((struct i40e_adapter *)pf->adapter) + +/* I40E_VF_TO */ +#define I40E_VF_TO_HW(vf) \ + 
(&(((struct i40e_vf *)vf)->adapter->hw)) + +static inline void +i40e_init_adminq_parameter(struct i40e_hw *hw) +{ + hw->aq.num_arq_entries = I40E_AQ_LEN; + hw->aq.num_asq_entries = I40E_AQ_LEN; + hw->aq.arq_buf_size = I40E_AQ_BUF_SZ; + hw->aq.asq_buf_size = I40E_AQ_BUF_SZ; +} + +#define I40E_VALID_FLOW(flow_type) \ + ((flow_type) == RTE_ETH_FLOW_FRAG_IPV4 || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_TCP || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_UDP || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_SCTP || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_OTHER || \ + (flow_type) == RTE_ETH_FLOW_FRAG_IPV6 || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_TCP || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_UDP || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_SCTP || \ + (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_OTHER || \ + (flow_type) == RTE_ETH_FLOW_L2_PAYLOAD) + +#define I40E_VALID_PCTYPE(pctype) \ + ((pctype) == I40E_FILTER_PCTYPE_FRAG_IPV4 || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_TCP || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_UDP || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_SCTP || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER || \ + (pctype) == I40E_FILTER_PCTYPE_FRAG_IPV6 || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_UDP || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_TCP || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_SCTP || \ + (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_OTHER || \ + (pctype) == I40E_FILTER_PCTYPE_L2_PAYLOAD) + +#endif /* _I40E_ETHDEV_H_ */ diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c new file mode 100644 index 0000000..9f92a2f --- /dev/null +++ b/drivers/net/i40e/i40e_ethdev_vf.c @@ -0,0 +1,1893 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "i40e_logs.h" +#include "base/i40e_prototype.h" +#include "base/i40e_adminq_cmd.h" +#include "base/i40e_type.h" + +#include "i40e_rxtx.h" +#include "i40e_ethdev.h" +#include "i40e_pf.h" +#define I40EVF_VSI_DEFAULT_MSIX_INTR 1 + +/* busy wait delay in msec */ +#define I40EVF_BUSY_WAIT_DELAY 10 +#define I40EVF_BUSY_WAIT_COUNT 50 +#define MAX_RESET_WAIT_CNT 20 + +struct i40evf_arq_msg_info { + enum i40e_virtchnl_ops ops; + enum i40e_status_code result; + uint16_t buf_len; + uint16_t msg_len; + uint8_t *msg; +}; + +struct vf_cmd_info { + enum i40e_virtchnl_ops ops; + uint8_t *in_args; + uint32_t in_args_size; + uint8_t *out_buffer; + /* Input & output type. pass in buffer size and pass out + * actual return result + */ + uint32_t out_size; +}; + +enum i40evf_aq_result { + I40EVF_MSG_ERR = -1, /* Meet error when accessing admin queue */ + I40EVF_MSG_NON, /* Read nothing from admin queue */ + I40EVF_MSG_SYS, /* Read system msg from admin queue */ + I40EVF_MSG_CMD, /* Read async command result */ +}; + +/* A share buffer to store the command result from PF driver */ +static uint8_t cmd_result_buffer[I40E_AQ_BUF_SZ]; + +static int i40evf_dev_configure(struct rte_eth_dev *dev); +static int i40evf_dev_start(struct rte_eth_dev *dev); +static void i40evf_dev_stop(struct rte_eth_dev *dev); +static void i40evf_dev_info_get(struct rte_eth_dev *dev, + struct rte_eth_dev_info *dev_info); +static int i40evf_dev_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete); +static void i40evf_dev_stats_get(struct rte_eth_dev *dev, + struct rte_eth_stats *stats); +static int i40evf_vlan_filter_set(struct rte_eth_dev *dev, + uint16_t vlan_id, int on); +static void i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask); +static int i40evf_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, + int on); +static void i40evf_dev_close(struct rte_eth_dev *dev); +static void i40evf_dev_promiscuous_enable(struct rte_eth_dev *dev); +static void i40evf_dev_promiscuous_disable(struct rte_eth_dev *dev); +static void i40evf_dev_allmulticast_enable(struct rte_eth_dev *dev); +static void i40evf_dev_allmulticast_disable(struct rte_eth_dev *dev); +static int i40evf_get_link_status(struct rte_eth_dev *dev, + struct rte_eth_link *link); +static int i40evf_init_vlan(struct rte_eth_dev *dev); +static int i40evf_dev_rx_queue_start(struct rte_eth_dev *dev, + uint16_t rx_queue_id); +static int i40evf_dev_rx_queue_stop(struct rte_eth_dev *dev, + uint16_t rx_queue_id); +static int i40evf_dev_tx_queue_start(struct rte_eth_dev *dev, + uint16_t tx_queue_id); +static int i40evf_dev_tx_queue_stop(struct rte_eth_dev *dev, + uint16_t tx_queue_id); +static int i40evf_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +static int i40evf_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +static int i40evf_config_rss(struct i40e_vf *vf); +static int i40evf_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +static int i40evf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); + +/* Default hash key buffer for RSS */ +static uint32_t 
rss_key_default[I40E_VFQF_HKEY_MAX_INDEX + 1]; + +static const struct eth_dev_ops i40evf_eth_dev_ops = { + .dev_configure = i40evf_dev_configure, + .dev_start = i40evf_dev_start, + .dev_stop = i40evf_dev_stop, + .promiscuous_enable = i40evf_dev_promiscuous_enable, + .promiscuous_disable = i40evf_dev_promiscuous_disable, + .allmulticast_enable = i40evf_dev_allmulticast_enable, + .allmulticast_disable = i40evf_dev_allmulticast_disable, + .link_update = i40evf_dev_link_update, + .stats_get = i40evf_dev_stats_get, + .dev_close = i40evf_dev_close, + .dev_infos_get = i40evf_dev_info_get, + .vlan_filter_set = i40evf_vlan_filter_set, + .vlan_offload_set = i40evf_vlan_offload_set, + .vlan_pvid_set = i40evf_vlan_pvid_set, + .rx_queue_start = i40evf_dev_rx_queue_start, + .rx_queue_stop = i40evf_dev_rx_queue_stop, + .tx_queue_start = i40evf_dev_tx_queue_start, + .tx_queue_stop = i40evf_dev_tx_queue_stop, + .rx_queue_setup = i40e_dev_rx_queue_setup, + .rx_queue_release = i40e_dev_rx_queue_release, + .tx_queue_setup = i40e_dev_tx_queue_setup, + .tx_queue_release = i40e_dev_tx_queue_release, + .reta_update = i40evf_dev_rss_reta_update, + .reta_query = i40evf_dev_rss_reta_query, + .rss_hash_update = i40evf_dev_rss_hash_update, + .rss_hash_conf_get = i40evf_dev_rss_hash_conf_get, +}; + +static int +i40evf_set_mac_type(struct i40e_hw *hw) +{ + int status = I40E_ERR_DEVICE_NOT_SUPPORTED; + + if (hw->vendor_id == I40E_INTEL_VENDOR_ID) { + switch (hw->device_id) { + case I40E_DEV_ID_VF: + case I40E_DEV_ID_VF_HV: + hw->mac.type = I40E_MAC_VF; + status = I40E_SUCCESS; + break; + default: + ; + } + } + + return status; +} + +/* + * Parse admin queue message. + * + * return value: + * < 0: meet error + * 0: read sys msg + * > 0: read cmd result + */ +static enum i40evf_aq_result +i40evf_parse_pfmsg(struct i40e_vf *vf, + struct i40e_arq_event_info *event, + struct i40evf_arq_msg_info *data) +{ + enum i40e_virtchnl_ops opcode = (enum i40e_virtchnl_ops)\ + rte_le_to_cpu_32(event->desc.cookie_high); + enum i40e_status_code retval = (enum i40e_status_code)\ + rte_le_to_cpu_32(event->desc.cookie_low); + enum i40evf_aq_result ret = I40EVF_MSG_CMD; + + /* pf sys event */ + if (opcode == I40E_VIRTCHNL_OP_EVENT) { + struct i40e_virtchnl_pf_event *vpe = + (struct i40e_virtchnl_pf_event *)event->msg_buf; + + /* Initialize ret to sys event */ + ret = I40EVF_MSG_SYS; + switch (vpe->event) { + case I40E_VIRTCHNL_EVENT_LINK_CHANGE: + vf->link_up = + vpe->event_data.link_event.link_status; + vf->pend_msg |= PFMSG_LINK_CHANGE; + PMD_DRV_LOG(INFO, "Link status update:%s", + vf->link_up ? 
"up" : "down"); + break; + case I40E_VIRTCHNL_EVENT_RESET_IMPENDING: + vf->vf_reset = true; + vf->pend_msg |= PFMSG_RESET_IMPENDING; + PMD_DRV_LOG(INFO, "vf is reseting"); + break; + case I40E_VIRTCHNL_EVENT_PF_DRIVER_CLOSE: + vf->dev_closed = true; + vf->pend_msg |= PFMSG_DRIVER_CLOSE; + PMD_DRV_LOG(INFO, "PF driver closed"); + break; + default: + PMD_DRV_LOG(ERR, "%s: Unknown event %d from pf", + __func__, vpe->event); + } + } else { + /* async reply msg on command issued by vf previously */ + ret = I40EVF_MSG_CMD; + /* Actual data length read from PF */ + data->msg_len = event->msg_len; + } + /* fill the ops and result to notify VF */ + data->result = retval; + data->ops = opcode; + + return ret; +} + +/* + * Read data in admin queue to get msg from pf driver + */ +static enum i40evf_aq_result +i40evf_read_pfmsg(struct rte_eth_dev *dev, struct i40evf_arq_msg_info *data) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_arq_event_info event; + int ret; + enum i40evf_aq_result result = I40EVF_MSG_NON; + + event.buf_len = data->buf_len; + event.msg_buf = data->msg; + ret = i40e_clean_arq_element(hw, &event, NULL); + /* Can't read any msg from adminQ */ + if (ret) { + if (ret == I40E_ERR_ADMIN_QUEUE_NO_WORK) + result = I40EVF_MSG_NON; + else + result = I40EVF_MSG_ERR; + return result; + } + + /* Parse the event */ + result = i40evf_parse_pfmsg(vf, &event, data); + + return result; +} + +/* + * Polling read until command result return from pf driver or meet error. + */ +static int +i40evf_wait_cmd_done(struct rte_eth_dev *dev, + struct i40evf_arq_msg_info *data) +{ + int i = 0; + enum i40evf_aq_result ret; + +#define MAX_TRY_TIMES 10 +#define ASQ_DELAY_MS 50 + do { + /* Delay some time first */ + rte_delay_ms(ASQ_DELAY_MS); + ret = i40evf_read_pfmsg(dev, data); + if (ret == I40EVF_MSG_CMD) + return 0; + else if (ret == I40EVF_MSG_ERR) + return -1; + + /* If don't read msg or read sys event, continue */ + } while(i++ < MAX_TRY_TIMES); + + return -1; +} + +/** + * clear current command. Only call in case execute + * _atomic_set_cmd successfully. + */ +static inline void +_clear_cmd(struct i40e_vf *vf) +{ + rte_wmb(); + vf->pend_cmd = I40E_VIRTCHNL_OP_UNKNOWN; +} + +/* + * Check there is pending cmd in execution. If none, set new command. 
+/* + * Check whether there is a pending cmd in execution; if none, set the new + * command. + */ +static inline int +_atomic_set_cmd(struct i40e_vf *vf, enum i40e_virtchnl_ops ops) +{ + int ret = rte_atomic32_cmpset(&vf->pend_cmd, + I40E_VIRTCHNL_OP_UNKNOWN, ops); + + if (!ret) + PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd); + + return !ret; +} + +static int +i40evf_execute_vf_cmd(struct rte_eth_dev *dev, struct vf_cmd_info *args) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int err = -1; + struct i40evf_arq_msg_info info; + + if (_atomic_set_cmd(vf, args->ops)) + return -1; + + info.msg = args->out_buffer; + info.buf_len = args->out_size; + info.ops = I40E_VIRTCHNL_OP_UNKNOWN; + info.result = I40E_SUCCESS; + + err = i40e_aq_send_msg_to_pf(hw, args->ops, I40E_SUCCESS, + args->in_args, args->in_args_size, NULL); + if (err) { + PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops); + return err; + } + + err = i40evf_wait_cmd_done(dev, &info); + /* a message was read and it is the expected one */ + if (!err && args->ops == info.ops) + _clear_cmd(vf); + else if (err) + PMD_DRV_LOG(ERR, "Failed to read message from AdminQ"); + else if (args->ops != info.ops) + PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u", + args->ops, info.ops); + + return (err | info.result); +} + +/* + * Check the API version, waiting synchronously until the version is read + * from the admin queue or an error occurs. + */ +static int +i40evf_check_api_version(struct rte_eth_dev *dev) +{ + struct i40e_virtchnl_version_info version, *pver; + int err; + struct vf_cmd_info args; + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + + version.major = I40E_VIRTCHNL_VERSION_MAJOR; + version.minor = I40E_VIRTCHNL_VERSION_MINOR; + + args.ops = I40E_VIRTCHNL_OP_VERSION; + args.in_args = (uint8_t *)&version; + args.in_args_size = sizeof(version); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + + err = i40evf_execute_vf_cmd(dev, &args); + if (err) { + PMD_INIT_LOG(ERR, "fail to execute command OP_VERSION"); + return err; + } + + pver = (struct i40e_virtchnl_version_info *)args.out_buffer; + vf->version_major = pver->major; + vf->version_minor = pver->minor; + if (vf->version_major == I40E_DPDK_VERSION_MAJOR) + PMD_DRV_LOG(INFO, "Peer is DPDK PF host"); + else if ((vf->version_major == I40E_VIRTCHNL_VERSION_MAJOR) && + (vf->version_minor == I40E_VIRTCHNL_VERSION_MINOR)) + PMD_DRV_LOG(INFO, "Peer is Linux PF host"); + else { + PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)", + vf->version_major, vf->version_minor, + I40E_VIRTCHNL_VERSION_MAJOR, + I40E_VIRTCHNL_VERSION_MINOR); + return -1; + } + + return 0; +} + +static int +i40evf_get_vf_resource(struct rte_eth_dev *dev) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int err; + struct vf_cmd_info args; + uint32_t len; + + args.ops = I40E_VIRTCHNL_OP_GET_VF_RESOURCES; + args.in_args = NULL; + args.in_args_size = 0; + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + + err = i40evf_execute_vf_cmd(dev, &args); + + if (err) { + PMD_DRV_LOG(ERR, "fail to execute command OP_GET_VF_RESOURCE"); + return err; + } + + len = sizeof(struct i40e_virtchnl_vf_resource) + + I40E_MAX_VF_VSI * sizeof(struct i40e_virtchnl_vsi_resource); + + (void)rte_memcpy(vf->vf_res, args.out_buffer, + RTE_MIN(args.out_size, len)); + i40e_vf_parse_hw_config(hw, vf->vf_res); + + return 0; +} + +static int +i40evf_config_promisc(struct rte_eth_dev *dev,
+ bool enable_unicast, + bool enable_multicast) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int err; + struct vf_cmd_info args; + struct i40e_virtchnl_promisc_info promisc; + + promisc.flags = 0; + promisc.vsi_id = vf->vsi_res->vsi_id; + + if (enable_unicast) + promisc.flags |= I40E_FLAG_VF_UNICAST_PROMISC; + + if (enable_multicast) + promisc.flags |= I40E_FLAG_VF_MULTICAST_PROMISC; + + args.ops = I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE; + args.in_args = (uint8_t *)&promisc; + args.in_args_size = sizeof(promisc); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + + err = i40evf_execute_vf_cmd(dev, &args); + + if (err) + PMD_DRV_LOG(ERR, "fail to execute command " + "CONFIG_PROMISCUOUS_MODE"); + return err; +} + +/* Configure vlan and double vlan offload. Use flag to specify which part to configure */ +static int +i40evf_config_vlan_offload(struct rte_eth_dev *dev, + bool enable_vlan_strip) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int err; + struct vf_cmd_info args; + struct i40e_virtchnl_vlan_offload_info offload; + + offload.vsi_id = vf->vsi_res->vsi_id; + offload.enable_vlan_strip = enable_vlan_strip; + + args.ops = (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD; + args.in_args = (uint8_t *)&offload; + args.in_args_size = sizeof(offload); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command CFG_VLAN_OFFLOAD"); + + return err; +} + +static int +i40evf_config_vlan_pvid(struct rte_eth_dev *dev, + struct i40e_vsi_vlan_pvid_info *info) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int err; + struct vf_cmd_info args; + struct i40e_virtchnl_pvid_info tpid_info; + + if (dev == NULL || info == NULL) { + PMD_DRV_LOG(ERR, "invalid parameters"); + return I40E_ERR_PARAM; + } + + memset(&tpid_info, 0, sizeof(tpid_info)); + tpid_info.vsi_id = vf->vsi_res->vsi_id; + (void)rte_memcpy(&tpid_info.info, info, sizeof(*info)); + + args.ops = (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_CFG_VLAN_PVID; + args.in_args = (uint8_t *)&tpid_info; + args.in_args_size = sizeof(tpid_info); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command CFG_VLAN_PVID"); + + return err; +} + +static void +i40evf_fill_virtchnl_vsi_txq_info(struct i40e_virtchnl_txq_info *txq_info, + uint16_t vsi_id, + uint16_t queue_id, + uint16_t nb_txq, + struct i40e_tx_queue *txq) +{ + txq_info->vsi_id = vsi_id; + txq_info->queue_id = queue_id; + if (queue_id < nb_txq) { + txq_info->ring_len = txq->nb_tx_desc; + txq_info->dma_ring_addr = txq->tx_ring_phys_addr; + } +} + +static void +i40evf_fill_virtchnl_vsi_rxq_info(struct i40e_virtchnl_rxq_info *rxq_info, + uint16_t vsi_id, + uint16_t queue_id, + uint16_t nb_rxq, + uint32_t max_pkt_size, + struct i40e_rx_queue *rxq) +{ + rxq_info->vsi_id = vsi_id; + rxq_info->queue_id = queue_id; + rxq_info->max_pkt_size = max_pkt_size; + if (queue_id < nb_rxq) { + rxq_info->ring_len = rxq->nb_rx_desc; + rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr; + rxq_info->databuffer_size = + (rte_pktmbuf_data_room_size(rxq->mp) - + RTE_PKTMBUF_HEADROOM); + } +} + +/* It configures VSI queues to co-work with Linux PF host */ +static int +i40evf_configure_vsi_queues(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = 
I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_rx_queue **rxq = + (struct i40e_rx_queue **)dev->data->rx_queues; + struct i40e_tx_queue **txq = + (struct i40e_tx_queue **)dev->data->tx_queues; + struct i40e_virtchnl_vsi_queue_config_info *vc_vqci; + struct i40e_virtchnl_queue_pair_info *vc_qpi; + struct vf_cmd_info args; + uint16_t i, nb_qp = vf->num_queue_pairs; + const uint32_t size = + I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqci, nb_qp); + uint8_t buff[size]; + int ret; + + memset(buff, 0, sizeof(buff)); + vc_vqci = (struct i40e_virtchnl_vsi_queue_config_info *)buff; + vc_vqci->vsi_id = vf->vsi_res->vsi_id; + vc_vqci->num_queue_pairs = nb_qp; + + for (i = 0, vc_qpi = vc_vqci->qpair; i < nb_qp; i++, vc_qpi++) { + i40evf_fill_virtchnl_vsi_txq_info(&vc_qpi->txq, + vc_vqci->vsi_id, i, dev->data->nb_tx_queues, txq[i]); + i40evf_fill_virtchnl_vsi_rxq_info(&vc_qpi->rxq, + vc_vqci->vsi_id, i, dev->data->nb_rx_queues, + vf->max_pkt_len, rxq[i]); + } + memset(&args, 0, sizeof(args)); + args.ops = I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES; + args.in_args = (uint8_t *)vc_vqci; + args.in_args_size = size; + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + ret = i40evf_execute_vf_cmd(dev, &args); + if (ret) + PMD_DRV_LOG(ERR, "Failed to execute command of " + "I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES\n"); + + return ret; +} + +/* It configures VSI queues to co-work with DPDK PF host */ +static int +i40evf_configure_vsi_queues_ext(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_rx_queue **rxq = + (struct i40e_rx_queue **)dev->data->rx_queues; + struct i40e_tx_queue **txq = + (struct i40e_tx_queue **)dev->data->tx_queues; + struct i40e_virtchnl_vsi_queue_config_ext_info *vc_vqcei; + struct i40e_virtchnl_queue_pair_ext_info *vc_qpei; + struct vf_cmd_info args; + uint16_t i, nb_qp = vf->num_queue_pairs; + const uint32_t size = + I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqcei, nb_qp); + uint8_t buff[size]; + int ret; + + memset(buff, 0, sizeof(buff)); + vc_vqcei = (struct i40e_virtchnl_vsi_queue_config_ext_info *)buff; + vc_vqcei->vsi_id = vf->vsi_res->vsi_id; + vc_vqcei->num_queue_pairs = nb_qp; + vc_qpei = vc_vqcei->qpair; + for (i = 0; i < nb_qp; i++, vc_qpei++) { + i40evf_fill_virtchnl_vsi_txq_info(&vc_qpei->txq, + vc_vqcei->vsi_id, i, dev->data->nb_tx_queues, txq[i]); + i40evf_fill_virtchnl_vsi_rxq_info(&vc_qpei->rxq, + vc_vqcei->vsi_id, i, dev->data->nb_rx_queues, + vf->max_pkt_len, rxq[i]); + if (i < dev->data->nb_rx_queues) + /* + * It adds extra info for configuring VSI queues, which + * is needed to enable the configurable crc stripping + * in VF. 
+ */ + vc_qpei->rxq_ext.crcstrip = + dev->data->dev_conf.rxmode.hw_strip_crc; + } + memset(&args, 0, sizeof(args)); + args.ops = + (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT; + args.in_args = (uint8_t *)vc_vqcei; + args.in_args_size = size; + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + ret = i40evf_execute_vf_cmd(dev, &args); + if (ret) + PMD_DRV_LOG(ERR, "Failed to execute command of " + "I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT\n"); + + return ret; +} + +static int +i40evf_configure_queues(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + + if (vf->version_major == I40E_DPDK_VERSION_MAJOR) + /* To support DPDK PF host */ + return i40evf_configure_vsi_queues_ext(dev); + else + /* To support Linux PF host */ + return i40evf_configure_vsi_queues(dev); +} + +static int +i40evf_config_irq_map(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct vf_cmd_info args; + uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_irq_map_info) + \ + sizeof(struct i40e_virtchnl_vector_map)]; + struct i40e_virtchnl_irq_map_info *map_info; + int i, err; + map_info = (struct i40e_virtchnl_irq_map_info *)cmd_buffer; + map_info->num_vectors = 1; + map_info->vecmap[0].rxitr_idx = RTE_LIBRTE_I40E_ITR_INTERVAL / 2; + map_info->vecmap[0].txitr_idx = RTE_LIBRTE_I40E_ITR_INTERVAL / 2; + map_info->vecmap[0].vsi_id = vf->vsi_res->vsi_id; + /* Always use the default dynamic MSIX interrupt */ + map_info->vecmap[0].vector_id = I40EVF_VSI_DEFAULT_MSIX_INTR; + /* Don't map any tx queue */ + map_info->vecmap[0].txq_map = 0; + map_info->vecmap[0].rxq_map = 0; + for (i = 0; i < dev->data->nb_rx_queues; i++) + map_info->vecmap[0].rxq_map |= 1 << i; + + args.ops = I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP; + args.in_args = (u8 *)cmd_buffer; + args.in_args_size = sizeof(cmd_buffer); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP"); + + return err; +} + +static int +i40evf_switch_queue(struct rte_eth_dev *dev, bool isrx, uint16_t qid, + bool on) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_virtchnl_queue_select queue_select; + int err; + struct vf_cmd_info args; + memset(&queue_select, 0, sizeof(queue_select)); + queue_select.vsi_id = vf->vsi_res->vsi_id; + + if (isrx) + queue_select.rx_queues |= 1 << qid; + else + queue_select.tx_queues |= 1 << qid; + + if (on) + args.ops = I40E_VIRTCHNL_OP_ENABLE_QUEUES; + else + args.ops = I40E_VIRTCHNL_OP_DISABLE_QUEUES; + args.in_args = (u8 *)&queue_select; + args.in_args_size = sizeof(queue_select); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to switch %s %u %s", + isrx ? "RX" : "TX", qid, on ? "on" : "off"); + + return err; +}
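Several commands in this file (the IRQ map above, and the MAC/VLAN filter lists below) size their stack buffers as sizeof(header) + n * sizeof(element), because a virtchnl message is a fixed header followed by a variable-length array of entries. A standalone sketch of that pattern with an invented stand-in struct:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Stand-in for i40e_virtchnl_vlan_filter_list: fixed header plus a
     * trailing array of elements carried in-line on the wire. */
    struct vlan_filter_list {
        uint16_t vsi_id;
        uint16_t num_elements;
        uint16_t vlan_id[];     /* flexible trailing array */
    };

    int main(void)
    {
        /* room for the header plus exactly one element */
        uint8_t cmd_buffer[sizeof(struct vlan_filter_list) + sizeof(uint16_t)];
        struct vlan_filter_list *list = (struct vlan_filter_list *)cmd_buffer;

        memset(cmd_buffer, 0, sizeof(cmd_buffer));
        list->vsi_id = 3;
        list->num_elements = 1;
        list->vlan_id[0] = 100; /* the single in-line VLAN entry */

        printf("msg size: %zu bytes\n", sizeof(cmd_buffer));
        return 0;
    }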
"on" : "off"); + + return err; +} + +static int +i40evf_start_queues(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *dev_data = dev->data; + int i; + struct i40e_rx_queue *rxq; + struct i40e_tx_queue *txq; + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq = dev_data->rx_queues[i]; + if (rxq->rx_deferred_start) + continue; + if (i40evf_dev_rx_queue_start(dev, i) != 0) { + PMD_DRV_LOG(ERR, "Fail to start queue %u", i); + return -1; + } + } + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = dev_data->tx_queues[i]; + if (txq->tx_deferred_start) + continue; + if (i40evf_dev_tx_queue_start(dev, i) != 0) { + PMD_DRV_LOG(ERR, "Fail to start queue %u", i); + return -1; + } + } + + return 0; +} + +static int +i40evf_stop_queues(struct rte_eth_dev *dev) +{ + int i; + + /* Stop TX queues first */ + for (i = 0; i < dev->data->nb_tx_queues; i++) { + if (i40evf_dev_tx_queue_stop(dev, i) != 0) { + PMD_DRV_LOG(ERR, "Fail to start queue %u", i); + return -1; + } + } + + /* Then stop RX queues */ + for (i = 0; i < dev->data->nb_rx_queues; i++) { + if (i40evf_dev_rx_queue_stop(dev, i) != 0) { + PMD_DRV_LOG(ERR, "Fail to start queue %u", i); + return -1; + } + } + + return 0; +} + +static int +i40evf_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr) +{ + struct i40e_virtchnl_ether_addr_list *list; + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_ether_addr_list) + \ + sizeof(struct i40e_virtchnl_ether_addr)]; + int err; + struct vf_cmd_info args; + + if (i40e_validate_mac_addr(addr->addr_bytes) != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Invalid mac:%x:%x:%x:%x:%x:%x", + addr->addr_bytes[0], addr->addr_bytes[1], + addr->addr_bytes[2], addr->addr_bytes[3], + addr->addr_bytes[4], addr->addr_bytes[5]); + return -1; + } + + list = (struct i40e_virtchnl_ether_addr_list *)cmd_buffer; + list->vsi_id = vf->vsi_res->vsi_id; + list->num_elements = 1; + (void)rte_memcpy(list->list[0].addr, addr->addr_bytes, + sizeof(addr->addr_bytes)); + + args.ops = I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS; + args.in_args = cmd_buffer; + args.in_args_size = sizeof(cmd_buffer); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command " + "OP_ADD_ETHER_ADDRESS"); + + return err; +} + +static int +i40evf_del_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr) +{ + struct i40e_virtchnl_ether_addr_list *list; + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_ether_addr_list) + \ + sizeof(struct i40e_virtchnl_ether_addr)]; + int err; + struct vf_cmd_info args; + + if (i40e_validate_mac_addr(addr->addr_bytes) != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Invalid mac:%x-%x-%x-%x-%x-%x", + addr->addr_bytes[0], addr->addr_bytes[1], + addr->addr_bytes[2], addr->addr_bytes[3], + addr->addr_bytes[4], addr->addr_bytes[5]); + return -1; + } + + list = (struct i40e_virtchnl_ether_addr_list *)cmd_buffer; + list->vsi_id = vf->vsi_res->vsi_id; + list->num_elements = 1; + (void)rte_memcpy(list->list[0].addr, addr->addr_bytes, + sizeof(addr->addr_bytes)); + + args.ops = I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS; + args.in_args = cmd_buffer; + args.in_args_size = sizeof(cmd_buffer); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command " + "OP_DEL_ETHER_ADDRESS"); + + 
return err; +} + +static int +i40evf_get_statics(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_virtchnl_queue_select q_stats; + struct i40e_eth_stats *pstats; + int err; + struct vf_cmd_info args; + + memset(&q_stats, 0, sizeof(q_stats)); + q_stats.vsi_id = vf->vsi_res->vsi_id; + args.ops = I40E_VIRTCHNL_OP_GET_STATS; + args.in_args = (u8 *)&q_stats; + args.in_args_size = sizeof(q_stats); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + + err = i40evf_execute_vf_cmd(dev, &args); + if (err) { + PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS"); + return err; + } + pstats = (struct i40e_eth_stats *)args.out_buffer; + stats->ipackets = pstats->rx_unicast + pstats->rx_multicast + + pstats->rx_broadcast; + stats->opackets = pstats->tx_broadcast + pstats->tx_multicast + + pstats->tx_unicast; + stats->ierrors = pstats->rx_discards; + stats->oerrors = pstats->tx_errors + pstats->tx_discards; + stats->ibytes = pstats->rx_bytes; + stats->obytes = pstats->tx_bytes; + + return 0; +} + +static int +i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t vlanid) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_virtchnl_vlan_filter_list *vlan_list; + uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_vlan_filter_list) + + sizeof(uint16_t)]; + int err; + struct vf_cmd_info args; + + vlan_list = (struct i40e_virtchnl_vlan_filter_list *)cmd_buffer; + vlan_list->vsi_id = vf->vsi_res->vsi_id; + vlan_list->num_elements = 1; + vlan_list->vlan_id[0] = vlanid; + + args.ops = I40E_VIRTCHNL_OP_ADD_VLAN; + args.in_args = (u8 *)&cmd_buffer; + args.in_args_size = sizeof(cmd_buffer); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command OP_ADD_VLAN"); + + return err; +} + +static int +i40evf_del_vlan(struct rte_eth_dev *dev, uint16_t vlanid) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_virtchnl_vlan_filter_list *vlan_list; + uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_vlan_filter_list) + + sizeof(uint16_t)]; + int err; + struct vf_cmd_info args; + + vlan_list = (struct i40e_virtchnl_vlan_filter_list *)cmd_buffer; + vlan_list->vsi_id = vf->vsi_res->vsi_id; + vlan_list->num_elements = 1; + vlan_list->vlan_id[0] = vlanid; + + args.ops = I40E_VIRTCHNL_OP_DEL_VLAN; + args.in_args = (u8 *)&cmd_buffer; + args.in_args_size = sizeof(cmd_buffer); + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + err = i40evf_execute_vf_cmd(dev, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command OP_DEL_VLAN"); + + return err; +} + +static int +i40evf_get_link_status(struct rte_eth_dev *dev, struct rte_eth_link *link) +{ + int err; + struct vf_cmd_info args; + struct rte_eth_link *new_link; + + args.ops = (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_GET_LINK_STAT; + args.in_args = NULL; + args.in_args_size = 0; + args.out_buffer = cmd_result_buffer; + args.out_size = I40E_AQ_BUF_SZ; + err = i40evf_execute_vf_cmd(dev, &args); + if (err) { + PMD_DRV_LOG(ERR, "fail to execute command OP_GET_LINK_STAT"); + return err; + } + + new_link = (struct rte_eth_link *)args.out_buffer; + (void)rte_memcpy(link, new_link, sizeof(*link)); + + return 0; +} + +static const struct rte_pci_id pci_id_i40evf_map[] = { +#define RTE_PCI_DEV_ID_DECL_I40EVF(vend, dev) {RTE_PCI_DEVICE(vend, dev)}, +#include 
"rte_pci_dev_ids.h" +{ .vendor_id = 0, /* sentinel */ }, +}; + +static inline int +i40evf_dev_atomic_write_link_status(struct rte_eth_dev *dev, + struct rte_eth_link *link) +{ + struct rte_eth_link *dst = &(dev->data->dev_link); + struct rte_eth_link *src = link; + + if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst, + *(uint64_t *)src) == 0) + return -1; + + return 0; +} + +static int +i40evf_reset_vf(struct i40e_hw *hw) +{ + int i, reset; + + if (i40e_vf_reset(hw) != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "Reset VF NIC failed"); + return -1; + } + /** + * After issuing vf reset command to pf, pf won't necessarily + * reset vf, it depends on what state it exactly is. If it's not + * initialized yet, it won't have vf reset since it's in a certain + * state. If not, it will try to reset. Even vf is reset, pf will + * set I40E_VFGEN_RSTAT to COMPLETE first, then wait 10ms and set + * it to ACTIVE. In this duration, vf may not catch the moment that + * COMPLETE is set. So, for vf, we'll try to wait a long time. + */ + rte_delay_ms(200); + + for (i = 0; i < MAX_RESET_WAIT_CNT; i++) { + reset = rd32(hw, I40E_VFGEN_RSTAT) & + I40E_VFGEN_RSTAT_VFR_STATE_MASK; + reset = reset >> I40E_VFGEN_RSTAT_VFR_STATE_SHIFT; + if (I40E_VFR_COMPLETED == reset || I40E_VFR_VFACTIVE == reset) + break; + else + rte_delay_ms(50); + } + + if (i >= MAX_RESET_WAIT_CNT) { + PMD_INIT_LOG(ERR, "Reset VF NIC failed"); + return -1; + } + + return 0; +} + +static int +i40evf_init_vf(struct rte_eth_dev *dev) +{ + int i, err, bufsz; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + + vf->adapter = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); + vf->dev_data = dev->data; + err = i40evf_set_mac_type(hw); + if (err) { + PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err); + goto err; + } + + i40e_init_adminq_parameter(hw); + err = i40e_init_adminq(hw); + if (err) { + PMD_INIT_LOG(ERR, "init_adminq failed: %d", err); + goto err; + } + + + /* Reset VF and wait until it's complete */ + if (i40evf_reset_vf(hw)) { + PMD_INIT_LOG(ERR, "reset NIC failed"); + goto err_aq; + } + + /* VF reset, shutdown admin queue and initialize again */ + if (i40e_shutdown_adminq(hw) != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "i40e_shutdown_adminq failed"); + return -1; + } + + i40e_init_adminq_parameter(hw); + if (i40e_init_adminq(hw) != I40E_SUCCESS) { + PMD_INIT_LOG(ERR, "init_adminq failed"); + return -1; + } + if (i40evf_check_api_version(dev) != 0) { + PMD_INIT_LOG(ERR, "check_api version failed"); + goto err_aq; + } + bufsz = sizeof(struct i40e_virtchnl_vf_resource) + + (I40E_MAX_VF_VSI * sizeof(struct i40e_virtchnl_vsi_resource)); + vf->vf_res = rte_zmalloc("vf_res", bufsz, 0); + if (!vf->vf_res) { + PMD_INIT_LOG(ERR, "unable to allocate vf_res memory"); + goto err_aq; + } + + if (i40evf_get_vf_resource(dev) != 0) { + PMD_INIT_LOG(ERR, "i40evf_get_vf_config failed"); + goto err_alloc; + } + + /* got VF config message back from PF, now we can parse it */ + for (i = 0; i < vf->vf_res->num_vsis; i++) { + if (vf->vf_res->vsi_res[i].vsi_type == I40E_VSI_SRIOV) + vf->vsi_res = &vf->vf_res->vsi_res[i]; + } + + if (!vf->vsi_res) { + PMD_INIT_LOG(ERR, "no LAN VSI found"); + goto err_alloc; + } + + vf->vsi.vsi_id = vf->vsi_res->vsi_id; + vf->vsi.type = vf->vsi_res->vsi_type; + vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs; + + /* check mac addr, if it's not valid, genrate one */ + if (I40E_SUCCESS != i40e_validate_mac_addr(\ + 
vf->vsi_res->default_mac_addr)) + eth_random_addr(vf->vsi_res->default_mac_addr); + + ether_addr_copy((struct ether_addr *)vf->vsi_res->default_mac_addr, + (struct ether_addr *)hw->mac.addr); + + return 0; + +err_alloc: + rte_free(vf->vf_res); +err_aq: + i40e_shutdown_adminq(hw); /* ignore error */ +err: + return -1; +} + +static int +i40evf_dev_init(struct rte_eth_dev *eth_dev) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(\ + eth_dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + /* assign ops func pointer */ + eth_dev->dev_ops = &i40evf_eth_dev_ops; + eth_dev->rx_pkt_burst = &i40e_recv_pkts; + eth_dev->tx_pkt_burst = &i40e_xmit_pkts; + + /* + * For secondary processes, we don't initialise any further as primary + * has already done this work. + */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY){ + if (eth_dev->data->scattered_rx) + eth_dev->rx_pkt_burst = i40e_recv_scattered_pkts; + return 0; + } + + hw->vendor_id = eth_dev->pci_dev->id.vendor_id; + hw->device_id = eth_dev->pci_dev->id.device_id; + hw->subsystem_vendor_id = eth_dev->pci_dev->id.subsystem_vendor_id; + hw->subsystem_device_id = eth_dev->pci_dev->id.subsystem_device_id; + hw->bus.device = eth_dev->pci_dev->addr.devid; + hw->bus.func = eth_dev->pci_dev->addr.function; + hw->hw_addr = (void *)eth_dev->pci_dev->mem_resource[0].addr; + + if(i40evf_init_vf(eth_dev) != 0) { + PMD_INIT_LOG(ERR, "Init vf failed"); + return -1; + } + + /* copy mac addr */ + eth_dev->data->mac_addrs = rte_zmalloc("i40evf_mac", + ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) { + PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to " + "store MAC addresses", ETHER_ADDR_LEN); + return -ENOMEM; + } + ether_addr_copy((struct ether_addr *)hw->mac.addr, + (struct ether_addr *)eth_dev->data->mac_addrs); + + return 0; +} + +/* + * virtual function driver struct + */ +static struct eth_driver rte_i40evf_pmd = { + { + .name = "rte_i40evf_pmd", + .id_table = pci_id_i40evf_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + }, + .eth_dev_init = i40evf_dev_init, + .dev_private_size = sizeof(struct i40e_vf), +}; + +/* + * VF Driver initialization routine. + * Invoked one at EAL init time. + * Register itself as the [Virtual Poll Mode] Driver of PCI Fortville devices. 
+ */ +static int +rte_i40evf_pmd_init(const char *name __rte_unused, + const char *params __rte_unused) +{ + PMD_INIT_FUNC_TRACE(); + + rte_eth_driver_register(&rte_i40evf_pmd); + + return 0; +} + +static struct rte_driver rte_i40evf_driver = { + .type = PMD_PDEV, + .init = rte_i40evf_pmd_init, +}; + +PMD_REGISTER_DRIVER(rte_i40evf_driver); + +static int +i40evf_dev_configure(struct rte_eth_dev *dev) +{ + return i40evf_init_vlan(dev); +} + +static int +i40evf_init_vlan(struct rte_eth_dev *dev) +{ + struct rte_eth_dev_data *data = dev->data; + int ret; + + /* Apply vlan offload setting */ + i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK); + + /* Apply pvid setting */ + ret = i40evf_vlan_pvid_set(dev, data->dev_conf.txmode.pvid, + data->dev_conf.txmode.hw_vlan_insert_pvid); + return ret; +} + +static void +i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + bool enable_vlan_strip = 0; + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + + /* Linux pf host doesn't support vlan offload yet */ + if (vf->version_major == I40E_DPDK_VERSION_MAJOR) { + /* Vlan stripping setting */ + if (mask & ETH_VLAN_STRIP_MASK) { + /* Enable or disable VLAN stripping */ + if (dev_conf->rxmode.hw_vlan_strip) + enable_vlan_strip = 1; + else + enable_vlan_strip = 0; + + i40evf_config_vlan_offload(dev, enable_vlan_strip); + } + } +} + +static int +i40evf_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on) +{ + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + struct i40e_vsi_vlan_pvid_info info; + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + + memset(&info, 0, sizeof(info)); + info.on = on; + + /* Linux pf host don't support vlan offload yet */ + if (vf->version_major == I40E_DPDK_VERSION_MAJOR) { + if (info.on) + info.config.pvid = pvid; + else { + info.config.reject.tagged = + dev_conf->txmode.hw_vlan_reject_tagged; + info.config.reject.untagged = + dev_conf->txmode.hw_vlan_reject_untagged; + } + return i40evf_config_vlan_pvid(dev, &info); + } + + return 0; +} + +static int +i40evf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct i40e_rx_queue *rxq; + int err = 0; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + + err = i40e_alloc_rx_queue_mbufs(rxq); + if (err) { + PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf"); + return err; + } + + rte_wmb(); + + /* Init the RX tail register. 
*/ + I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); + I40EVF_WRITE_FLUSH(hw); + + /* Ready to switch the queue on */ + err = i40evf_switch_queue(dev, TRUE, rx_queue_id, TRUE); + + if (err) + PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on", + rx_queue_id); + } + + return err; +} + +static int +i40evf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ + struct i40e_rx_queue *rxq; + int err; + + if (rx_queue_id < dev->data->nb_rx_queues) { + rxq = dev->data->rx_queues[rx_queue_id]; + + err = i40evf_switch_queue(dev, TRUE, rx_queue_id, FALSE); + + if (err) { + PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", + rx_queue_id); + return err; + } + + i40e_rx_queue_release_mbufs(rxq); + i40e_reset_rx_queue(rxq); + } + + return 0; +} + +static int +i40evf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + int err = 0; + + PMD_INIT_FUNC_TRACE(); + + if (tx_queue_id < dev->data->nb_tx_queues) { + + /* Ready to switch the queue on */ + err = i40evf_switch_queue(dev, FALSE, tx_queue_id, TRUE); + + if (err) + PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", + tx_queue_id); + } + + return err; +} + +static int +i40evf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) +{ + struct i40e_tx_queue *txq; + int err; + + if (tx_queue_id < dev->data->nb_tx_queues) { + txq = dev->data->tx_queues[tx_queue_id]; + + err = i40evf_switch_queue(dev, FALSE, tx_queue_id, FALSE); + + if (err) { + PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", + tx_queue_id); + return err; + } + + i40e_tx_queue_release_mbufs(txq); + i40e_reset_tx_queue(txq); + } + + return 0; +} + +static int +i40evf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + int ret; + + if (on) + ret = i40evf_add_vlan(dev, vlan_id); + else + ret = i40evf_del_vlan(dev, vlan_id); + + return ret; +} + +static int +i40evf_rx_init(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + uint16_t i; + struct i40e_rx_queue **rxq = + (struct i40e_rx_queue **)dev->data->rx_queues; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + i40evf_config_rss(vf); + for (i = 0; i < dev->data->nb_rx_queues; i++) { + rxq[i]->qrx_tail = hw->hw_addr + I40E_QRX_TAIL1(i); + I40E_PCI_REG_WRITE(rxq[i]->qrx_tail, rxq[i]->nb_rx_desc - 1); + } + + /* Flush the operations to write the registers */ + I40EVF_WRITE_FLUSH(hw); + + return 0; +} + +static void +i40evf_tx_init(struct rte_eth_dev *dev) +{ + uint16_t i; + struct i40e_tx_queue **txq = + (struct i40e_tx_queue **)dev->data->tx_queues; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + for (i = 0; i < dev->data->nb_tx_queues; i++) + txq[i]->qtx_tail = hw->hw_addr + I40E_QTX_TAIL1(i); +} + +static inline void +i40evf_enable_queues_intr(struct i40e_hw *hw) +{ + I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTLN1(I40EVF_VSI_DEFAULT_MSIX_INTR - 1), + I40E_VFINT_DYN_CTLN1_INTENA_MASK | + I40E_VFINT_DYN_CTLN_CLEARPBA_MASK); +} + +static inline void +i40evf_disable_queues_intr(struct i40e_hw *hw) +{ + I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTLN1(I40EVF_VSI_DEFAULT_MSIX_INTR - 1), + 0); +}
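For reference, applications reach the i40evf_vlan_filter_set() handler above through the generic ethdev call. A usage sketch (assumes an EAL-initialized application with port 0 bound to this PMD):

    #include <rte_ethdev.h>

    /* Adds VLAN 100 to port 0's filter, then removes it again; each
     * call lands in i40evf_vlan_filter_set(), i.e. OP_ADD_VLAN or
     * OP_DEL_VLAN towards the PF host. */
    static int toggle_vlan_100(void)
    {
        int ret = rte_eth_dev_vlan_filter(0, 100, 1);   /* on */
        if (ret != 0)
            return ret;
        return rte_eth_dev_vlan_filter(0, 100, 0);      /* off */
    }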
+static int +i40evf_dev_start(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct ether_addr mac_addr; + + PMD_INIT_FUNC_TRACE(); + + vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; + if (dev->data->dev_conf.rxmode.jumbo_frame == 1) { + if (vf->max_pkt_len <= ETHER_MAX_LEN || + vf->max_pkt_len > I40E_FRAME_SIZE_MAX) { + PMD_DRV_LOG(ERR, "maximum packet length must " + "be larger than %u and smaller than %u, " + "as jumbo frame is enabled", + (uint32_t)ETHER_MAX_LEN, + (uint32_t)I40E_FRAME_SIZE_MAX); + return I40E_ERR_CONFIG; + } + } else { + if (vf->max_pkt_len < ETHER_MIN_LEN || + vf->max_pkt_len > ETHER_MAX_LEN) { + PMD_DRV_LOG(ERR, "maximum packet length must be " + "larger than %u and smaller than %u, " + "as jumbo frame is disabled", + (uint32_t)ETHER_MIN_LEN, + (uint32_t)ETHER_MAX_LEN); + return I40E_ERR_CONFIG; + } + } + + vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, + dev->data->nb_tx_queues); + + if (i40evf_rx_init(dev) != 0) { + PMD_DRV_LOG(ERR, "failed to do RX init"); + return -1; + } + + i40evf_tx_init(dev); + + if (i40evf_configure_queues(dev) != 0) { + PMD_DRV_LOG(ERR, "configure queues failed"); + goto err_queue; + } + if (i40evf_config_irq_map(dev)) { + PMD_DRV_LOG(ERR, "config_irq_map failed"); + goto err_queue; + } + + /* Set mac addr */ + (void)rte_memcpy(mac_addr.addr_bytes, hw->mac.addr, + sizeof(mac_addr.addr_bytes)); + if (i40evf_add_mac_addr(dev, &mac_addr)) { + PMD_DRV_LOG(ERR, "Failed to add mac addr"); + goto err_queue; + } + + if (i40evf_start_queues(dev) != 0) { + PMD_DRV_LOG(ERR, "enable queues failed"); + goto err_mac; + } + + i40evf_enable_queues_intr(hw); + return 0; + +err_mac: + i40evf_del_mac_addr(dev, &mac_addr); +err_queue: + return -1; +} + +static void +i40evf_dev_stop(struct rte_eth_dev *dev) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + + PMD_INIT_FUNC_TRACE(); + + i40evf_disable_queues_intr(hw); + i40evf_stop_queues(dev); +} + +static int +i40evf_dev_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) +{ + struct rte_eth_link new_link; + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + /* + * The DPDK pf host provides an interface to acquire the link status, + * while the Linux driver does not + */ + if (vf->version_major == I40E_DPDK_VERSION_MAJOR) + i40evf_get_link_status(dev, &new_link); + else { + /* Always assume it's up, for Linux driver PF host */ + new_link.link_duplex = ETH_LINK_AUTONEG_DUPLEX; + new_link.link_speed = ETH_LINK_SPEED_10000; + new_link.link_status = 1; + } + i40evf_dev_atomic_write_link_status(dev, &new_link); + + return 0; +} + +static void +i40evf_dev_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int ret; + + /* If already enabled, just return */ + if (vf->promisc_unicast_enabled) + return; + + ret = i40evf_config_promisc(dev, 1, vf->promisc_multicast_enabled); + if (ret == 0) + vf->promisc_unicast_enabled = TRUE; +} + +static void +i40evf_dev_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int ret; + + /* If already disabled, just return */ + if (!vf->promisc_unicast_enabled) + return; + + ret = i40evf_config_promisc(dev, 0, vf->promisc_multicast_enabled); + if (ret == 0) + vf->promisc_unicast_enabled = FALSE; +} + +static void +i40evf_dev_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + int ret; + + /* If already enabled, just return */ + if (vf->promisc_multicast_enabled) + return; + + ret = i40evf_config_promisc(dev, vf->promisc_unicast_enabled, 1); + if (ret == 0) + vf->promisc_multicast_enabled = TRUE; +} + +static void +i40evf_dev_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct i40e_vf *vf =
I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	int ret;
+
+	/* If disabled, just return */
+	if (!vf->promisc_multicast_enabled)
+		return;
+
+	ret = i40evf_config_promisc(dev, vf->promisc_unicast_enabled, 0);
+	if (ret == 0)
+		vf->promisc_multicast_enabled = FALSE;
+}
+
+static void
+i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
+	memset(dev_info, 0, sizeof(*dev_info));
+	dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = I40E_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
+	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = I40E_DEFAULT_RX_PTHRESH,
+			.hthresh = I40E_DEFAULT_RX_HTHRESH,
+			.wthresh = I40E_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = I40E_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = I40E_DEFAULT_TX_PTHRESH,
+			.hthresh = I40E_DEFAULT_TX_HTHRESH,
+			.wthresh = I40E_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH,
+		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
+			     ETH_TXQ_FLAGS_NOOFFLOADS,
+	};
+}
+
+static void
+i40evf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	if (i40evf_get_statics(dev, stats))
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+}
+
+static void
+i40evf_dev_close(struct rte_eth_dev *dev)
+{
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+	i40evf_dev_stop(dev);
+	i40evf_reset_vf(hw);
+	i40e_shutdown_adminq(hw);
+}
+
+static int
+i40evf_dev_rss_reta_update(struct rte_eth_dev *dev,
+			   struct rte_eth_rss_reta_entry64 *reta_conf,
+			   uint16_t reta_size)
+{
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t lut, l;
+	uint16_t i, j;
+	uint16_t idx, shift;
+	uint8_t mask;
+
+	if (reta_size != ETH_RSS_RETA_SIZE_64) {
+		PMD_DRV_LOG(ERR, "The size of the hash lookup table "
+			    "configured (%d) doesn't match the number the "
+			    "hardware can support (%d)\n",
+			    reta_size, ETH_RSS_RETA_SIZE_64);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i % RTE_RETA_GROUP_SIZE;
+		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
+				 I40E_4_BIT_MASK);
+		if (!mask)
+			continue;
+		if (mask == I40E_4_BIT_MASK)
+			l = 0;
+		else
+			l = I40E_READ_REG(hw, I40E_VFQF_HLUT(i >> 2));
+
+		for (j = 0, lut = 0; j < I40E_4_BIT_WIDTH; j++) {
+			if (mask & (0x1 << j))
+				lut |= reta_conf[idx].reta[shift + j] <<
+				       (CHAR_BIT * j);
+			else
+				lut |= l & (I40E_8_BIT_MASK << (CHAR_BIT * j));
+		}
+		I40E_WRITE_REG(hw, I40E_VFQF_HLUT(i >> 2), lut);
+	}
+
+	return 0;
+}
+
+static int
+i40evf_dev_rss_reta_query(struct rte_eth_dev *dev,
+			  struct rte_eth_rss_reta_entry64 *reta_conf,
+			  uint16_t reta_size)
+{
+	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t lut;
+	uint16_t i, j;
+	uint16_t idx, shift;
+	uint8_t mask;
+
+	if (reta_size != ETH_RSS_RETA_SIZE_64) {
+		PMD_DRV_LOG(ERR, "The size of the hash lookup table "
+			    "configured (%d) doesn't match the number the "
+			    "hardware can support (%d)\n",
+			    reta_size, ETH_RSS_RETA_SIZE_64);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) {
+		idx = i / RTE_RETA_GROUP_SIZE;
+		shift = i
% RTE_RETA_GROUP_SIZE; + mask = (uint8_t)((reta_conf[idx].mask >> shift) & + I40E_4_BIT_MASK); + if (!mask) + continue; + + lut = I40E_READ_REG(hw, I40E_VFQF_HLUT(i >> 2)); + for (j = 0; j < I40E_4_BIT_WIDTH; j++) { + if (mask & (0x1 << j)) + reta_conf[idx].reta[shift + j] = + ((lut >> (CHAR_BIT * j)) & + I40E_8_BIT_MASK); + } + } + + return 0; +} + +static int +i40evf_hw_rss_hash_set(struct i40e_hw *hw, struct rte_eth_rss_conf *rss_conf) +{ + uint32_t *hash_key; + uint8_t hash_key_len; + uint64_t rss_hf, hena; + + hash_key = (uint32_t *)(rss_conf->rss_key); + hash_key_len = rss_conf->rss_key_len; + if (hash_key != NULL && hash_key_len >= + (I40E_VFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) { + uint16_t i; + + for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) + I40E_WRITE_REG(hw, I40E_VFQF_HKEY(i), hash_key[i]); + } + + rss_hf = rss_conf->rss_hf; + hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; + hena &= ~I40E_RSS_HENA_ALL; + hena |= i40e_config_hena(rss_hf); + I40E_WRITE_REG(hw, I40E_VFQF_HENA(0), (uint32_t)hena); + I40E_WRITE_REG(hw, I40E_VFQF_HENA(1), (uint32_t)(hena >> 32)); + I40EVF_WRITE_FLUSH(hw); + + return 0; +} + +static void +i40evf_disable_rss(struct i40e_vf *vf) +{ + struct i40e_hw *hw = I40E_VF_TO_HW(vf); + uint64_t hena; + + hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; + hena &= ~I40E_RSS_HENA_ALL; + I40E_WRITE_REG(hw, I40E_VFQF_HENA(0), (uint32_t)hena); + I40E_WRITE_REG(hw, I40E_VFQF_HENA(1), (uint32_t)(hena >> 32)); + I40EVF_WRITE_FLUSH(hw); +} + +static int +i40evf_config_rss(struct i40e_vf *vf) +{ + struct i40e_hw *hw = I40E_VF_TO_HW(vf); + struct rte_eth_rss_conf rss_conf; + uint32_t i, j, lut = 0, nb_q = (I40E_VFQF_HLUT_MAX_INDEX + 1) * 4; + + if (vf->dev_data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) { + i40evf_disable_rss(vf); + PMD_DRV_LOG(DEBUG, "RSS not configured\n"); + return 0; + } + + /* Fill out the look up table */ + for (i = 0, j = 0; i < nb_q; i++, j++) { + if (j >= vf->num_queue_pairs) + j = 0; + lut = (lut << 8) | j; + if ((i & 3) == 3) + I40E_WRITE_REG(hw, I40E_VFQF_HLUT(i >> 2), lut); + } + + rss_conf = vf->dev_data->dev_conf.rx_adv_conf.rss_conf; + if ((rss_conf.rss_hf & I40E_RSS_OFFLOAD_ALL) == 0) { + i40evf_disable_rss(vf); + PMD_DRV_LOG(DEBUG, "No hash flag is set\n"); + return 0; + } + + if (rss_conf.rss_key == NULL || rss_conf.rss_key_len < nb_q) { + /* Calculate the default hash key */ + for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) + rss_key_default[i] = (uint32_t)rte_rand(); + rss_conf.rss_key = (uint8_t *)rss_key_default; + rss_conf.rss_key_len = nb_q; + } + + return i40evf_hw_rss_hash_set(hw, &rss_conf); +} + +static int +i40evf_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint64_t rss_hf = rss_conf->rss_hf & I40E_RSS_OFFLOAD_ALL; + uint64_t hena; + + hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; + if (!(hena & I40E_RSS_HENA_ALL)) { /* RSS disabled */ + if (rss_hf != 0) /* Enable RSS */ + return -EINVAL; + return 0; + } + + /* RSS enabled */ + if (rss_hf == 0) /* Disable RSS */ + return -EINVAL; + + return i40evf_hw_rss_hash_set(hw, rss_conf); +} + +static int +i40evf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct i40e_hw *hw = 
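+/*
+ * Register-layout sketch assumed by the RSS code above: the 64-bit RSS
+ * enable set ("hena") is split across the two 32-bit VFQF_HENA registers,
+ * and each 32-bit VFQF_HLUT register packs four one-byte redirection
+ * entries, hence the (i >> 2) indexing and the CHAR_BIT * j shifts. For
+ * example, RETA entries {1, 0, 3, 2} for i = 0..3 are written as the
+ * single word 0x02030001.
+ */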
I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t *hash_key = (uint32_t *)(rss_conf->rss_key); + uint64_t hena; + uint16_t i; + + if (hash_key) { + for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) + hash_key[i] = I40E_READ_REG(hw, I40E_VFQF_HKEY(i)); + rss_conf->rss_key_len = i * sizeof(uint32_t); + } + hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); + hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; + rss_conf->rss_hf = i40e_parse_hena(hena); + + return 0; +} diff --git a/drivers/net/i40e/i40e_fdir.c b/drivers/net/i40e/i40e_fdir.c new file mode 100644 index 0000000..e688b4f --- /dev/null +++ b/drivers/net/i40e/i40e_fdir.c @@ -0,0 +1,1361 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "i40e_logs.h" +#include "base/i40e_type.h" +#include "i40e_ethdev.h" +#include "i40e_rxtx.h" + +#define I40E_FDIR_MZ_NAME "FDIR_MEMZONE" +#ifndef IPV6_ADDR_LEN +#define IPV6_ADDR_LEN 16 +#endif + +#define I40E_FDIR_PKT_LEN 512 +#define I40E_FDIR_IP_DEFAULT_LEN 420 +#define I40E_FDIR_IP_DEFAULT_TTL 0x40 +#define I40E_FDIR_IP_DEFAULT_VERSION_IHL 0x45 +#define I40E_FDIR_TCP_DEFAULT_DATAOFF 0x50 +#define I40E_FDIR_IPv6_DEFAULT_VTC_FLOW 0x60300000 +#define I40E_FDIR_IPv6_DEFAULT_HOP_LIMITS 0xFF +#define I40E_FDIR_IPv6_PAYLOAD_LEN 380 +#define I40E_FDIR_UDP_DEFAULT_LEN 400 + +/* Wait count and interval for fdir filter programming */ +#define I40E_FDIR_WAIT_COUNT 10 +#define I40E_FDIR_WAIT_INTERVAL_US 1000 + +/* Wait count and interval for fdir filter flush */ +#define I40E_FDIR_FLUSH_RETRY 50 +#define I40E_FDIR_FLUSH_INTERVAL_MS 5 + +#define I40E_COUNTER_PF 2 +/* Statistic counter index for one pf */ +#define I40E_COUNTER_INDEX_FDIR(pf_id) (0 + (pf_id) * I40E_COUNTER_PF) +#define I40E_MAX_FLX_SOURCE_OFF 480 +#define I40E_FLX_OFFSET_IN_FIELD_VECTOR 50 + +#define NONUSE_FLX_PIT_DEST_OFF 63 +#define NONUSE_FLX_PIT_FSIZE 1 +#define MK_FLX_PIT(src_offset, fsize, dst_offset) ( \ + (((src_offset) << I40E_PRTQF_FLX_PIT_SOURCE_OFF_SHIFT) & \ + I40E_PRTQF_FLX_PIT_SOURCE_OFF_MASK) | \ + (((fsize) << I40E_PRTQF_FLX_PIT_FSIZE_SHIFT) & \ + I40E_PRTQF_FLX_PIT_FSIZE_MASK) | \ + ((((dst_offset) + I40E_FLX_OFFSET_IN_FIELD_VECTOR) << \ + I40E_PRTQF_FLX_PIT_DEST_OFF_SHIFT) & \ + I40E_PRTQF_FLX_PIT_DEST_OFF_MASK)) + +#define I40E_FDIR_FLOWS ( \ + (1 << RTE_ETH_FLOW_FRAG_IPV4) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV4_UDP) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV4_TCP) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV4_SCTP) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) | \ + (1 << RTE_ETH_FLOW_FRAG_IPV6) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV6_UDP) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV6_TCP) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV6_SCTP) | \ + (1 << RTE_ETH_FLOW_NONFRAG_IPV6_OTHER)) + +#define I40E_FLEX_WORD_MASK(off) (0x80 >> (off)) + +static int i40e_fdir_rx_queue_init(struct i40e_rx_queue *rxq); +static int i40e_check_fdir_flex_conf( + const struct rte_eth_fdir_flex_conf *conf); +static void i40e_set_flx_pld_cfg(struct i40e_pf *pf, + const struct rte_eth_flex_payload_cfg *cfg); +static void i40e_set_flex_mask_on_pctype(struct i40e_pf *pf, + enum i40e_filter_pctype pctype, + const struct rte_eth_fdir_flex_mask *mask_cfg); +static int i40e_fdir_construct_pkt(struct i40e_pf *pf, + const struct rte_eth_fdir_input *fdir_input, + unsigned char *raw_pkt); +static int i40e_add_del_fdir_filter(struct rte_eth_dev *dev, + const struct rte_eth_fdir_filter *filter, + bool add); +static int i40e_fdir_filter_programming(struct i40e_pf *pf, + enum i40e_filter_pctype pctype, + const struct rte_eth_fdir_filter *filter, + bool add); +static int i40e_fdir_flush(struct rte_eth_dev *dev); +static void i40e_fdir_info_get(struct rte_eth_dev *dev, + struct rte_eth_fdir_info *fdir); +static void i40e_fdir_stats_get(struct rte_eth_dev *dev, + struct rte_eth_fdir_stats *stat); + +static int +i40e_fdir_rx_queue_init(struct i40e_rx_queue *rxq) +{ + struct i40e_hw *hw = I40E_VSI_TO_HW(rxq->vsi); + struct i40e_hmc_obj_rxq rx_ctx; + int err = I40E_SUCCESS; + + memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq)); + /* Init the RX queue in hardware */ + rx_ctx.dbuff = I40E_RXBUF_SZ_1024 >> 
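+/*
+ * A worked example of the MK_FLX_PIT() packing defined above: each
+ * FLX_PIT register folds the source offset, field size and destination
+ * offset into one word, with I40E_FLX_OFFSET_IN_FIELD_VECTOR (50) added
+ * to the destination. MK_FLX_PIT(0, 8, 0), i.e. 8 words taken from the
+ * start of the payload, appears to match the default 0x0000C900 written
+ * by i40e_init_flx_pld() below (an inference from the constants, not a
+ * statement from the original patch).
+ */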
I40E_RXQ_CTX_DBUFF_SHIFT;
+	rx_ctx.hbuff = 0;
+	rx_ctx.base = rxq->rx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->nb_rx_desc;
+#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
+	rx_ctx.dsize = 1;
+#endif
+	rx_ctx.dtype = i40e_header_split_none;
+	rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_NONE;
+	rx_ctx.rxmax = ETHER_MAX_LEN;
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	rx_ctx.lrxqthresh = 2;
+	rx_ctx.crcstrip = 0;
+	rx_ctx.l2tsel = 1;
+	rx_ctx.showiv = 1;
+	rx_ctx.prefena = 1;
+
+	err = i40e_clear_lan_rx_queue_context(hw, rxq->reg_idx);
+	if (err != I40E_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to clear FDIR RX queue context.");
+		return err;
+	}
+	err = i40e_set_lan_rx_queue_context(hw, rxq->reg_idx, &rx_ctx);
+	if (err != I40E_SUCCESS) {
+		PMD_DRV_LOG(ERR, "Failed to set FDIR RX queue context.");
+		return err;
+	}
+	rxq->qrx_tail = hw->hw_addr +
+		I40E_QRX_TAIL(rxq->vsi->base_queue);
+
+	rte_wmb();
+	/* Init the RX tail register. */
+	I40E_PCI_REG_WRITE(rxq->qrx_tail, 0);
+	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+	return err;
+}
+
+/*
+ * i40e_fdir_setup - reserve and initialize the Flow Director resources
+ * @pf: board private structure
+ */
+int
+i40e_fdir_setup(struct i40e_pf *pf)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_vsi *vsi;
+	int err = I40E_SUCCESS;
+	char z_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz = NULL;
+	struct rte_eth_dev *eth_dev = pf->adapter->eth_dev;
+
+	if ((pf->flags & I40E_FLAG_FDIR) == 0) {
+		PMD_INIT_LOG(ERR, "HW doesn't support FDIR");
+		return I40E_NOT_SUPPORTED;
+	}
+
+	PMD_DRV_LOG(INFO, "FDIR HW Capabilities: num_filters_guaranteed = %u,"
+		    " num_filters_best_effort = %u.",
+		    hw->func_caps.fd_filters_guaranteed,
+		    hw->func_caps.fd_filters_best_effort);
+
+	vsi = pf->fdir.fdir_vsi;
+	if (vsi) {
+		PMD_DRV_LOG(INFO, "FDIR initialization has been done.");
+		return I40E_SUCCESS;
+	}
+	/* make new FDIR VSI */
+	vsi = i40e_vsi_setup(pf, I40E_VSI_FDIR, pf->main_vsi, 0);
+	if (!vsi) {
+		PMD_DRV_LOG(ERR, "Couldn't create FDIR VSI.");
+		return I40E_ERR_NO_AVAILABLE_VSI;
+	}
+	pf->fdir.fdir_vsi = vsi;
+
+	/* FDIR TX queue setup */
+	err = i40e_fdir_setup_tx_resources(pf);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to setup FDIR TX resources.");
+		goto fail_setup_tx;
+	}
+
+	/* FDIR RX queue setup */
+	err = i40e_fdir_setup_rx_resources(pf);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to setup FDIR RX resources.");
+		goto fail_setup_rx;
+	}
+
+	err = i40e_tx_queue_init(pf->fdir.txq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to do FDIR TX initialization.");
+		goto fail_mem;
+	}
+
+	/* needs to be switched on before dev start */
+	err = i40e_switch_tx_queue(hw, vsi->base_queue, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to do fdir TX switch on.");
+		goto fail_mem;
+	}
+
+	/* Init the rx queue in hardware */
+	err = i40e_fdir_rx_queue_init(pf->fdir.rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to do FDIR RX initialization.");
+		goto fail_mem;
+	}
+
+	/* switch on rx queue */
+	err = i40e_switch_rx_queue(hw, vsi->base_queue, TRUE);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to do FDIR RX switch on.");
+		goto fail_mem;
+	}
+
+	/* reserve memory for the fdir programming packet */
+	snprintf(z_name, sizeof(z_name), "%s_%s_%d",
+		 eth_dev->driver->pci_drv.name,
+		 I40E_FDIR_MZ_NAME,
+		 eth_dev->data->port_id);
+	mz = i40e_memzone_reserve(z_name, I40E_FDIR_PKT_LEN, SOCKET_ID_ANY);
+	if (!mz) {
+		PMD_DRV_LOG(ERR, "Cannot init memzone for "
+			    "flow director program packet.");
+		err =
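+/*
+ * Bring-up order note (a summary of i40e_fdir_setup() above): a dedicated
+ * FDIR VSI is created first, then TX and RX resources; the TX queue is
+ * initialized and switched on before dev start, the RX queue is
+ * initialized and switched on, and finally one memzone is reserved for
+ * the programming packet. Each failure label unwinds only the stages
+ * already completed.
+ */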
I40E_ERR_NO_MEMORY;
+		goto fail_mem;
+	}
+	pf->fdir.prg_pkt = mz->addr;
+#ifdef RTE_LIBRTE_XEN_DOM0
+	pf->fdir.dma_addr = rte_mem_phy2mch(mz->memseg_id, mz->phys_addr);
+#else
+	pf->fdir.dma_addr = (uint64_t)mz->phys_addr;
+#endif
+	pf->fdir.match_counter_index = I40E_COUNTER_INDEX_FDIR(hw->pf_id);
+	PMD_DRV_LOG(INFO, "FDIR setup successfully, with programming queue %u.",
+		    vsi->base_queue);
+	return I40E_SUCCESS;
+
+fail_mem:
+	i40e_dev_rx_queue_release(pf->fdir.rxq);
+	pf->fdir.rxq = NULL;
+fail_setup_rx:
+	i40e_dev_tx_queue_release(pf->fdir.txq);
+	pf->fdir.txq = NULL;
+fail_setup_tx:
+	i40e_vsi_release(vsi);
+	pf->fdir.fdir_vsi = NULL;
+	return err;
+}
+
+/*
+ * i40e_fdir_teardown - release the Flow Director resources
+ * @pf: board private structure
+ */
+void
+i40e_fdir_teardown(struct i40e_pf *pf)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	struct i40e_vsi *vsi;
+
+	vsi = pf->fdir.fdir_vsi;
+	if (!vsi)
+		return;
+	i40e_switch_tx_queue(hw, vsi->base_queue, FALSE);
+	i40e_switch_rx_queue(hw, vsi->base_queue, FALSE);
+	i40e_dev_rx_queue_release(pf->fdir.rxq);
+	pf->fdir.rxq = NULL;
+	i40e_dev_tx_queue_release(pf->fdir.txq);
+	pf->fdir.txq = NULL;
+	i40e_vsi_release(vsi);
+	pf->fdir.fdir_vsi = NULL;
+}
+
+/* check whether the flow director table is empty */
+static inline int
+i40e_fdir_empty(struct i40e_hw *hw)
+{
+	uint32_t guarant_cnt, best_cnt;
+
+	guarant_cnt = (uint32_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) &
+				  I40E_PFQF_FDSTAT_GUARANT_CNT_MASK) >>
+				 I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT);
+	best_cnt = (uint32_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) &
+			       I40E_PFQF_FDSTAT_BEST_CNT_MASK) >>
+			      I40E_PFQF_FDSTAT_BEST_CNT_SHIFT);
+	if (best_cnt + guarant_cnt > 0)
+		return -1;
+
+	return 0;
+}
+
+/*
+ * Initialize the configuration of the byte stream extracted as flexible
+ * payload, and the mask settings
+ */
+static inline void
+i40e_init_flx_pld(struct i40e_pf *pf)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	uint8_t pctype;
+	int i, index;
+
+	/*
+	 * Define the byte stream extracted as flexible payload in the
+	 * field vector. By default, select 8 words from the beginning
+	 * of payload as flexible payload.
+	 */
+	for (i = I40E_FLXPLD_L2_IDX; i < I40E_MAX_FLXPLD_LAYER; i++) {
+		index = i * I40E_MAX_FLXPLD_FIED;
+		pf->fdir.flex_set[index].src_offset = 0;
+		pf->fdir.flex_set[index].size = I40E_FDIR_MAX_FLEXWORD_NUM;
+		pf->fdir.flex_set[index].dst_offset = 0;
+		I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(index), 0x0000C900);
+		I40E_WRITE_REG(hw,
+			I40E_PRTQF_FLX_PIT(index + 1), 0x0000FC29);/*non-used*/
+		I40E_WRITE_REG(hw,
+			I40E_PRTQF_FLX_PIT(index + 2), 0x0000FC2A);/*non-used*/
+	}
+
+	/* initialize the masks */
+	for (pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP;
+	     pctype <= I40E_FILTER_PCTYPE_FRAG_IPV6; pctype++) {
+		pf->fdir.flex_mask[pctype].word_mask = 0;
+		I40E_WRITE_REG(hw, I40E_PRTQF_FD_FLXINSET(pctype), 0);
+		for (i = 0; i < I40E_FDIR_BITMASK_NUM_WORD; i++) {
+			pf->fdir.flex_mask[pctype].bitmask[i].offset = 0;
+			pf->fdir.flex_mask[pctype].bitmask[i].mask = 0;
+			I40E_WRITE_REG(hw, I40E_PRTQF_FD_MSK(pctype, i), 0);
+		}
+	}
+}
+
+#define I40E_WORD(hi, lo) (uint16_t)((((hi) << 8) & 0xFF00) | ((lo) & 0xFF))
+
+#define I40E_VALIDATE_FLEX_PIT(flex_pit1, flex_pit2) do {	\
+	if ((flex_pit2).src_offset <				\
+	    (flex_pit1).src_offset + (flex_pit1).size) {	\
+		PMD_DRV_LOG(ERR, "src_offset should not"	\
+			    " be less than the previous offset"	\
+			    " + the previous FSIZE.");		\
+		return -EINVAL;					\
+	}							\
+} while (0)
+
+/*
+ * i40e_srcoff_to_flx_pit - transform the src_offset into flex_pit structure,
+ * and the flex_pit will be sorted by its src_offset value
+ */
+static inline uint16_t
+i40e_srcoff_to_flx_pit(const uint16_t *src_offset,
+		       struct i40e_fdir_flex_pit *flex_pit)
+{
+	uint16_t src_tmp, size, num = 0;
+	uint16_t i, k, j = 0;
+
+	while (j < I40E_FDIR_MAX_FLEX_LEN) {
+		size = 1;
+		for (; j < I40E_FDIR_MAX_FLEX_LEN - 1; j++) {
+			if (src_offset[j + 1] == src_offset[j] + 1)
+				size++;
+			else
+				break;
+		}
+		src_tmp = src_offset[j] + 1 - size;
+		/* the flex_pit needs to be sorted by src_offset */
+		for (i = 0; i < num; i++) {
+			if (src_tmp < flex_pit[i].src_offset)
+				break;
+		}
+		/* if insert required, move backward */
+		for (k = num; k > i; k--)
+			flex_pit[k] = flex_pit[k - 1];
+		/* insert */
+		flex_pit[i].dst_offset = j + 1 - size;
+		flex_pit[i].src_offset = src_tmp;
+		flex_pit[i].size = size;
+		j++;
+		num++;
+	}
+	return num;
+}
+
+/* i40e_check_fdir_flex_payload - check flex payload configuration arguments */
+static inline int
+i40e_check_fdir_flex_payload(const struct rte_eth_flex_payload_cfg *flex_cfg)
+{
+	struct i40e_fdir_flex_pit flex_pit[I40E_FDIR_MAX_FLEX_LEN];
+	uint16_t num, i;
+
+	for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i++) {
+		if (flex_cfg->src_offset[i] >= I40E_MAX_FLX_SOURCE_OFF) {
+			PMD_DRV_LOG(ERR, "exceeds maximal payload limit.");
+			return -EINVAL;
+		}
+	}
+
+	memset(flex_pit, 0, sizeof(flex_pit));
+	num = i40e_srcoff_to_flx_pit(flex_cfg->src_offset, flex_pit);
+	if (num > I40E_MAX_FLXPLD_FIED) {
+		PMD_DRV_LOG(ERR, "exceeds maximal number of flex fields.");
+		return -EINVAL;
+	}
+	for (i = 0; i < num; i++) {
+		if (flex_pit[i].size & 0x01 || flex_pit[i].dst_offset & 0x01 ||
+		    flex_pit[i].src_offset & 0x01) {
+			PMD_DRV_LOG(ERR, "flexpayload should be measured"
+				    " in words");
+			return -EINVAL;
+		}
+		if (i != num - 1)
+			I40E_VALIDATE_FLEX_PIT(flex_pit[i], flex_pit[i + 1]);
+	}
+	return 0;
+}
+
+/*
+ * i40e_check_fdir_flex_conf - check if the flex payload and mask configuration
+ * arguments are valid
+ */
+static int
+i40e_check_fdir_flex_conf(const struct rte_eth_fdir_flex_conf *conf)
+{
+	const struct rte_eth_flex_payload_cfg *flex_cfg;
+	const struct rte_eth_fdir_flex_mask *flex_mask;
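+/*
+ * Worked example for i40e_srcoff_to_flx_pit() above, with a hypothetical
+ * src_offset array beginning {4, 5, 6, 10, ...}: the contiguous run
+ * yields one entry {src_offset = 4, size = 3, dst_offset = 0}, the lone
+ * offset yields {src_offset = 10, size = 1, dst_offset = 3}, and entries
+ * are kept sorted by src_offset as they are inserted.
+ */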
uint16_t mask_tmp; + uint8_t nb_bitmask; + uint16_t i, j; + int ret = 0; + + if (conf == NULL) { + PMD_DRV_LOG(INFO, "NULL pointer."); + return -EINVAL; + } + /* check flexible payload setting configuration */ + if (conf->nb_payloads > RTE_ETH_L4_PAYLOAD) { + PMD_DRV_LOG(ERR, "invalid number of payload setting."); + return -EINVAL; + } + for (i = 0; i < conf->nb_payloads; i++) { + flex_cfg = &conf->flex_set[i]; + if (flex_cfg->type > RTE_ETH_L4_PAYLOAD) { + PMD_DRV_LOG(ERR, "invalid payload type."); + return -EINVAL; + } + ret = i40e_check_fdir_flex_payload(flex_cfg); + if (ret < 0) { + PMD_DRV_LOG(ERR, "invalid flex payload arguments."); + return -EINVAL; + } + } + + /* check flex mask setting configuration */ + if (conf->nb_flexmasks >= RTE_ETH_FLOW_MAX) { + PMD_DRV_LOG(ERR, "invalid number of flex masks."); + return -EINVAL; + } + for (i = 0; i < conf->nb_flexmasks; i++) { + flex_mask = &conf->flex_mask[i]; + if (!I40E_VALID_FLOW(flex_mask->flow_type)) { + PMD_DRV_LOG(WARNING, "invalid flow type."); + return -EINVAL; + } + nb_bitmask = 0; + for (j = 0; j < I40E_FDIR_MAX_FLEX_LEN; j += sizeof(uint16_t)) { + mask_tmp = I40E_WORD(flex_mask->mask[j], + flex_mask->mask[j + 1]); + if (mask_tmp != 0x0 && mask_tmp != UINT16_MAX) { + nb_bitmask++; + if (nb_bitmask > I40E_FDIR_BITMASK_NUM_WORD) { + PMD_DRV_LOG(ERR, " exceed maximal" + " number of bitmasks."); + return -EINVAL; + } + } + } + } + return 0; +} + +/* + * i40e_set_flx_pld_cfg -configure the rule how bytes stream is extracted as flexible payload + * @pf: board private structure + * @cfg: the rule how bytes stream is extracted as flexible payload + */ +static void +i40e_set_flx_pld_cfg(struct i40e_pf *pf, + const struct rte_eth_flex_payload_cfg *cfg) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + struct i40e_fdir_flex_pit flex_pit[I40E_MAX_FLXPLD_FIED]; + uint32_t flx_pit; + uint16_t num, min_next_off; /* in words */ + uint8_t field_idx = 0; + uint8_t layer_idx = 0; + uint16_t i; + + if (cfg->type == RTE_ETH_L2_PAYLOAD) + layer_idx = I40E_FLXPLD_L2_IDX; + else if (cfg->type == RTE_ETH_L3_PAYLOAD) + layer_idx = I40E_FLXPLD_L3_IDX; + else if (cfg->type == RTE_ETH_L4_PAYLOAD) + layer_idx = I40E_FLXPLD_L4_IDX; + + memset(flex_pit, 0, sizeof(flex_pit)); + num = i40e_srcoff_to_flx_pit(cfg->src_offset, flex_pit); + + for (i = 0; i < num; i++) { + field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + i; + /* record the info in fdir structure */ + pf->fdir.flex_set[field_idx].src_offset = + flex_pit[i].src_offset / sizeof(uint16_t); + pf->fdir.flex_set[field_idx].size = + flex_pit[i].size / sizeof(uint16_t); + pf->fdir.flex_set[field_idx].dst_offset = + flex_pit[i].dst_offset / sizeof(uint16_t); + flx_pit = MK_FLX_PIT(pf->fdir.flex_set[field_idx].src_offset, + pf->fdir.flex_set[field_idx].size, + pf->fdir.flex_set[field_idx].dst_offset); + + I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(field_idx), flx_pit); + } + min_next_off = pf->fdir.flex_set[field_idx].src_offset + + pf->fdir.flex_set[field_idx].size; + + for (; i < I40E_MAX_FLXPLD_FIED; i++) { + /* set the non-used register obeying register's constrain */ + flx_pit = MK_FLX_PIT(min_next_off, NONUSE_FLX_PIT_FSIZE, + NONUSE_FLX_PIT_DEST_OFF); + I40E_WRITE_REG(hw, + I40E_PRTQF_FLX_PIT(layer_idx * I40E_MAX_FLXPLD_FIED + i), + flx_pit); + min_next_off++; + } +} + +/* + * i40e_set_flex_mask_on_pctype - configure the mask on flexible payload + * @pf: board private structure + * @pctype: packet classify type + * @flex_masks: mask for flexible payload + */ +static void +i40e_set_flex_mask_on_pctype(struct i40e_pf 
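+/*
+ * Layout note (a sketch based on i40e_set_flx_pld_cfg() above): each
+ * payload layer (L2/L3/L4) owns I40E_MAX_FLXPLD_FIED consecutive FLX_PIT
+ * registers, addressed as layer_idx * I40E_MAX_FLXPLD_FIED + i; registers
+ * a layer leaves unused are parked with NONUSE_FLX_PIT_FSIZE at strictly
+ * increasing source offsets, respecting the hardware constraint on
+ * monotonic offsets.
+ */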
*pf, + enum i40e_filter_pctype pctype, + const struct rte_eth_fdir_flex_mask *mask_cfg) +{ + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + struct i40e_fdir_flex_mask *flex_mask; + uint32_t flxinset, fd_mask; + uint16_t mask_tmp; + uint8_t i, nb_bitmask = 0; + + flex_mask = &pf->fdir.flex_mask[pctype]; + memset(flex_mask, 0, sizeof(struct i40e_fdir_flex_mask)); + for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i += sizeof(uint16_t)) { + mask_tmp = I40E_WORD(mask_cfg->mask[i], mask_cfg->mask[i + 1]); + if (mask_tmp != 0x0) { + flex_mask->word_mask |= + I40E_FLEX_WORD_MASK(i / sizeof(uint16_t)); + if (mask_tmp != UINT16_MAX) { + /* set bit mask */ + flex_mask->bitmask[nb_bitmask].mask = ~mask_tmp; + flex_mask->bitmask[nb_bitmask].offset = + i / sizeof(uint16_t); + nb_bitmask++; + } + } + } + /* write mask to hw */ + flxinset = (flex_mask->word_mask << + I40E_PRTQF_FD_FLXINSET_INSET_SHIFT) & + I40E_PRTQF_FD_FLXINSET_INSET_MASK; + I40E_WRITE_REG(hw, I40E_PRTQF_FD_FLXINSET(pctype), flxinset); + + for (i = 0; i < nb_bitmask; i++) { + fd_mask = (flex_mask->bitmask[i].mask << + I40E_PRTQF_FD_MSK_MASK_SHIFT) & + I40E_PRTQF_FD_MSK_MASK_MASK; + fd_mask |= ((flex_mask->bitmask[i].offset + + I40E_FLX_OFFSET_IN_FIELD_VECTOR) << + I40E_PRTQF_FD_MSK_OFFSET_SHIFT) & + I40E_PRTQF_FD_MSK_OFFSET_MASK; + I40E_WRITE_REG(hw, I40E_PRTQF_FD_MSK(pctype, i), fd_mask); + } +} + +/* + * Configure flow director related setting + */ +int +i40e_fdir_configure(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct rte_eth_fdir_flex_conf *conf; + enum i40e_filter_pctype pctype; + uint32_t val; + uint8_t i; + int ret = 0; + + /* + * configuration need to be done before + * flow director filters are added + * If filters exist, flush them. 
+ */ + if (i40e_fdir_empty(hw) < 0) { + ret = i40e_fdir_flush(dev); + if (ret) { + PMD_DRV_LOG(ERR, "failed to flush fdir table."); + return ret; + } + } + + /* enable FDIR filter */ + val = I40E_READ_REG(hw, I40E_PFQF_CTL_0); + val |= I40E_PFQF_CTL_0_FD_ENA_MASK; + I40E_WRITE_REG(hw, I40E_PFQF_CTL_0, val); + + i40e_init_flx_pld(pf); /* set flex config to default value */ + + conf = &dev->data->dev_conf.fdir_conf.flex_conf; + ret = i40e_check_fdir_flex_conf(conf); + if (ret < 0) { + PMD_DRV_LOG(ERR, " invalid configuration arguments."); + return -EINVAL; + } + /* configure flex payload */ + for (i = 0; i < conf->nb_payloads; i++) + i40e_set_flx_pld_cfg(pf, &conf->flex_set[i]); + /* configure flex mask*/ + for (i = 0; i < conf->nb_flexmasks; i++) { + pctype = i40e_flowtype_to_pctype(conf->flex_mask[i].flow_type); + i40e_set_flex_mask_on_pctype(pf, pctype, &conf->flex_mask[i]); + } + + return ret; +} + +static inline void +i40e_fdir_fill_eth_ip_head(const struct rte_eth_fdir_input *fdir_input, + unsigned char *raw_pkt) +{ + struct ether_hdr *ether = (struct ether_hdr *)raw_pkt; + struct ipv4_hdr *ip; + struct ipv6_hdr *ip6; + static const uint8_t next_proto[] = { + [RTE_ETH_FLOW_FRAG_IPV4] = IPPROTO_IP, + [RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = IPPROTO_TCP, + [RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = IPPROTO_UDP, + [RTE_ETH_FLOW_NONFRAG_IPV4_SCTP] = IPPROTO_SCTP, + [RTE_ETH_FLOW_NONFRAG_IPV4_OTHER] = IPPROTO_IP, + [RTE_ETH_FLOW_FRAG_IPV6] = IPPROTO_NONE, + [RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = IPPROTO_TCP, + [RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = IPPROTO_UDP, + [RTE_ETH_FLOW_NONFRAG_IPV6_SCTP] = IPPROTO_SCTP, + [RTE_ETH_FLOW_NONFRAG_IPV6_OTHER] = IPPROTO_NONE, + }; + + switch (fdir_input->flow_type) { + case RTE_ETH_FLOW_NONFRAG_IPV4_TCP: + case RTE_ETH_FLOW_NONFRAG_IPV4_UDP: + case RTE_ETH_FLOW_NONFRAG_IPV4_SCTP: + case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER: + case RTE_ETH_FLOW_FRAG_IPV4: + ip = (struct ipv4_hdr *)(raw_pkt + sizeof(struct ether_hdr)); + + ether->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4); + ip->version_ihl = I40E_FDIR_IP_DEFAULT_VERSION_IHL; + /* set len to by default */ + ip->total_length = rte_cpu_to_be_16(I40E_FDIR_IP_DEFAULT_LEN); + ip->time_to_live = I40E_FDIR_IP_DEFAULT_TTL; + /* + * The source and destination fields in the transmitted packet + * need to be presented in a reversed order with respect + * to the expected received packets. + */ + ip->src_addr = fdir_input->flow.ip4_flow.dst_ip; + ip->dst_addr = fdir_input->flow.ip4_flow.src_ip; + ip->next_proto_id = next_proto[fdir_input->flow_type]; + break; + case RTE_ETH_FLOW_NONFRAG_IPV6_TCP: + case RTE_ETH_FLOW_NONFRAG_IPV6_UDP: + case RTE_ETH_FLOW_NONFRAG_IPV6_SCTP: + case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER: + case RTE_ETH_FLOW_FRAG_IPV6: + ip6 = (struct ipv6_hdr *)(raw_pkt + sizeof(struct ether_hdr)); + + ether->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6); + ip6->vtc_flow = + rte_cpu_to_be_32(I40E_FDIR_IPv6_DEFAULT_VTC_FLOW); + ip6->payload_len = + rte_cpu_to_be_16(I40E_FDIR_IPv6_PAYLOAD_LEN); + ip6->hop_limits = I40E_FDIR_IPv6_DEFAULT_HOP_LIMITS; + + /* + * The source and destination fields in the transmitted packet + * need to be presented in a reversed order with respect + * to the expected received packets. 
+ */ + rte_memcpy(&(ip6->src_addr), + &(fdir_input->flow.ipv6_flow.dst_ip), + IPV6_ADDR_LEN); + rte_memcpy(&(ip6->dst_addr), + &(fdir_input->flow.ipv6_flow.src_ip), + IPV6_ADDR_LEN); + ip6->proto = next_proto[fdir_input->flow_type]; + break; + default: + PMD_DRV_LOG(ERR, "unknown flow type %u.", + fdir_input->flow_type); + break; + } +} + + +/* + * i40e_fdir_construct_pkt - construct packet based on fields in input + * @pf: board private structure + * @fdir_input: input set of the flow director entry + * @raw_pkt: a packet to be constructed + */ +static int +i40e_fdir_construct_pkt(struct i40e_pf *pf, + const struct rte_eth_fdir_input *fdir_input, + unsigned char *raw_pkt) +{ + unsigned char *payload, *ptr; + struct udp_hdr *udp; + struct tcp_hdr *tcp; + struct sctp_hdr *sctp; + uint8_t size, dst = 0; + uint8_t i, pit_idx, set_idx = I40E_FLXPLD_L4_IDX; /* use l4 by default*/ + + /* fill the ethernet and IP head */ + i40e_fdir_fill_eth_ip_head(fdir_input, raw_pkt); + + /* fill the L4 head */ + switch (fdir_input->flow_type) { + case RTE_ETH_FLOW_NONFRAG_IPV4_UDP: + udp = (struct udp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr)); + payload = (unsigned char *)udp + sizeof(struct udp_hdr); + /* + * The source and destination fields in the transmitted packet + * need to be presented in a reversed order with respect + * to the expected received packets. + */ + udp->src_port = fdir_input->flow.udp4_flow.dst_port; + udp->dst_port = fdir_input->flow.udp4_flow.src_port; + udp->dgram_len = rte_cpu_to_be_16(I40E_FDIR_UDP_DEFAULT_LEN); + break; + + case RTE_ETH_FLOW_NONFRAG_IPV4_TCP: + tcp = (struct tcp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr)); + payload = (unsigned char *)tcp + sizeof(struct tcp_hdr); + /* + * The source and destination fields in the transmitted packet + * need to be presented in a reversed order with respect + * to the expected received packets. + */ + tcp->src_port = fdir_input->flow.tcp4_flow.dst_port; + tcp->dst_port = fdir_input->flow.tcp4_flow.src_port; + tcp->data_off = I40E_FDIR_TCP_DEFAULT_DATAOFF; + break; + + case RTE_ETH_FLOW_NONFRAG_IPV4_SCTP: + sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr)); + payload = (unsigned char *)sctp + sizeof(struct sctp_hdr); + sctp->tag = fdir_input->flow.sctp4_flow.verify_tag; + break; + + case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER: + case RTE_ETH_FLOW_FRAG_IPV4: + payload = raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv4_hdr); + set_idx = I40E_FLXPLD_L3_IDX; + break; + + case RTE_ETH_FLOW_NONFRAG_IPV6_UDP: + udp = (struct udp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv6_hdr)); + payload = (unsigned char *)udp + sizeof(struct udp_hdr); + /* + * The source and destination fields in the transmitted packet + * need to be presented in a reversed order with respect + * to the expected received packets. + */ + udp->src_port = fdir_input->flow.udp6_flow.dst_port; + udp->dst_port = fdir_input->flow.udp6_flow.src_port; + udp->dgram_len = rte_cpu_to_be_16(I40E_FDIR_IPv6_PAYLOAD_LEN); + break; + + case RTE_ETH_FLOW_NONFRAG_IPV6_TCP: + tcp = (struct tcp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv6_hdr)); + payload = (unsigned char *)tcp + sizeof(struct tcp_hdr); + /* + * The source and destination fields in the transmitted packet + * need to be presented in a reversed order with respect + * to the expected received packets. 
+ */ + tcp->data_off = I40E_FDIR_TCP_DEFAULT_DATAOFF; + tcp->src_port = fdir_input->flow.udp6_flow.dst_port; + tcp->dst_port = fdir_input->flow.udp6_flow.src_port; + break; + + case RTE_ETH_FLOW_NONFRAG_IPV6_SCTP: + sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv6_hdr)); + payload = (unsigned char *)sctp + sizeof(struct sctp_hdr); + sctp->tag = fdir_input->flow.sctp6_flow.verify_tag; + break; + + case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER: + case RTE_ETH_FLOW_FRAG_IPV6: + payload = raw_pkt + sizeof(struct ether_hdr) + + sizeof(struct ipv6_hdr); + set_idx = I40E_FLXPLD_L3_IDX; + break; + default: + PMD_DRV_LOG(ERR, "unknown flow type %u.", fdir_input->flow_type); + return -EINVAL; + } + + /* fill the flexbytes to payload */ + for (i = 0; i < I40E_MAX_FLXPLD_FIED; i++) { + pit_idx = set_idx * I40E_MAX_FLXPLD_FIED + i; + size = pf->fdir.flex_set[pit_idx].size; + if (size == 0) + continue; + dst = pf->fdir.flex_set[pit_idx].dst_offset * sizeof(uint16_t); + ptr = payload + + pf->fdir.flex_set[pit_idx].src_offset * sizeof(uint16_t); + (void)rte_memcpy(ptr, + &fdir_input->flow_ext.flexbytes[dst], + size * sizeof(uint16_t)); + } + + return 0; +} + +/* Construct the tx flags */ +static inline uint64_t +i40e_build_ctob(uint32_t td_cmd, + uint32_t td_offset, + unsigned int size, + uint32_t td_tag) +{ + return rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DATA | + ((uint64_t)td_cmd << I40E_TXD_QW1_CMD_SHIFT) | + ((uint64_t)td_offset << I40E_TXD_QW1_OFFSET_SHIFT) | + ((uint64_t)size << I40E_TXD_QW1_TX_BUF_SZ_SHIFT) | + ((uint64_t)td_tag << I40E_TXD_QW1_L2TAG1_SHIFT)); +} + +/* + * check the programming status descriptor in rx queue. + * done after Programming Flow Director is programmed on + * tx queue + */ +static inline int +i40e_check_fdir_programming_status(struct i40e_rx_queue *rxq) +{ + volatile union i40e_rx_desc *rxdp; + uint64_t qword1; + uint32_t rx_status; + uint32_t len, id; + uint32_t error; + int ret = 0; + + rxdp = &rxq->rx_ring[rxq->rx_tail]; + qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); + rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) + >> I40E_RXD_QW1_STATUS_SHIFT; + + if (rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT)) { + len = qword1 >> I40E_RX_PROG_STATUS_DESC_LENGTH_SHIFT; + id = (qword1 & I40E_RX_PROG_STATUS_DESC_QW1_PROGID_MASK) >> + I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT; + + if (len == I40E_RX_PROG_STATUS_DESC_LENGTH && + id == I40E_RX_PROG_STATUS_DESC_FD_FILTER_STATUS) { + error = (qword1 & + I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK) >> + I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT; + if (error == (0x1 << + I40E_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT)) { + PMD_DRV_LOG(ERR, "Failed to add FDIR filter" + " (FD_ID %u): programming status" + " reported.", + rxdp->wb.qword0.hi_dword.fd_id); + ret = -1; + } else if (error == (0x1 << + I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT)) { + PMD_DRV_LOG(ERR, "Failed to delete FDIR filter" + " (FD_ID %u): programming status" + " reported.", + rxdp->wb.qword0.hi_dword.fd_id); + ret = -1; + } else + PMD_DRV_LOG(ERR, "invalid programming status" + " reported, error = %u.", error); + } else + PMD_DRV_LOG(ERR, "unknown programming status" + " reported, len = %d, id = %u.", len, id); + rxdp->wb.qword1.status_error_len = 0; + rxq->rx_tail++; + if (unlikely(rxq->rx_tail == rxq->nb_rx_desc)) + rxq->rx_tail = 0; + } + return ret; +} + +/* + * i40e_add_del_fdir_filter - add or remove a flow director filter. 
+ * @pf: board private structure
+ * @filter: fdir filter entry
+ * @add: 0 - delete, 1 - add
+ */
+static int
+i40e_add_del_fdir_filter(struct rte_eth_dev *dev,
+			 const struct rte_eth_fdir_filter *filter,
+			 bool add)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	unsigned char *pkt = (unsigned char *)pf->fdir.prg_pkt;
+	enum i40e_filter_pctype pctype;
+	int ret = 0;
+
+	if (dev->data->dev_conf.fdir_conf.mode != RTE_FDIR_MODE_PERFECT) {
+		PMD_DRV_LOG(ERR, "FDIR is not enabled, please"
+			    " check the mode in fdir_conf.");
+		return -ENOTSUP;
+	}
+
+	if (!I40E_VALID_FLOW(filter->input.flow_type)) {
+		PMD_DRV_LOG(ERR, "invalid flow_type input.");
+		return -EINVAL;
+	}
+	if (filter->action.rx_queue >= pf->dev_data->nb_rx_queues) {
+		PMD_DRV_LOG(ERR, "Invalid queue ID");
+		return -EINVAL;
+	}
+
+	memset(pkt, 0, I40E_FDIR_PKT_LEN);
+
+	ret = i40e_fdir_construct_pkt(pf, &filter->input, pkt);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "constructing packet for fdir failed.");
+		return ret;
+	}
+	pctype = i40e_flowtype_to_pctype(filter->input.flow_type);
+	ret = i40e_fdir_filter_programming(pf, pctype, filter, add);
+	if (ret < 0) {
+		PMD_DRV_LOG(ERR, "fdir programming failed for PCTYPE(%u).",
+			    pctype);
+		return ret;
+	}
+	return ret;
+}
+
+/*
+ * i40e_fdir_filter_programming - Program a flow director filter rule.
+ * This is done with a Flow Director Programming Descriptor, followed by a
+ * packet structure that contains the filter fields that need to match.
+ * @pf: board private structure
+ * @pctype: pctype
+ * @filter: fdir filter entry
+ * @add: 0 - delete, 1 - add
+ */
+static int
+i40e_fdir_filter_programming(struct i40e_pf *pf,
+			     enum i40e_filter_pctype pctype,
+			     const struct rte_eth_fdir_filter *filter,
+			     bool add)
+{
+	struct i40e_tx_queue *txq = pf->fdir.txq;
+	struct i40e_rx_queue *rxq = pf->fdir.rxq;
+	const struct rte_eth_fdir_action *fdir_action = &filter->action;
+	volatile struct i40e_tx_desc *txdp;
+	volatile struct i40e_filter_program_desc *fdirdp;
+	uint32_t td_cmd;
+	uint16_t i;
+	uint8_t dest;
+
+	PMD_DRV_LOG(INFO, "filling filter programming descriptor.");
+	fdirdp = (volatile struct i40e_filter_program_desc *)
+			(&(txq->tx_ring[txq->tx_tail]));
+
+	fdirdp->qindex_flex_ptype_vsi =
+			rte_cpu_to_le_32((fdir_action->rx_queue <<
+					  I40E_TXD_FLTR_QW0_QINDEX_SHIFT) &
+					  I40E_TXD_FLTR_QW0_QINDEX_MASK);
+
+	fdirdp->qindex_flex_ptype_vsi |=
+			rte_cpu_to_le_32((fdir_action->flex_off <<
+					  I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT) &
+					  I40E_TXD_FLTR_QW0_FLEXOFF_MASK);
+
+	fdirdp->qindex_flex_ptype_vsi |=
+			rte_cpu_to_le_32((pctype <<
+					  I40E_TXD_FLTR_QW0_PCTYPE_SHIFT) &
+					  I40E_TXD_FLTR_QW0_PCTYPE_MASK);
+
+	/* Use LAN VSI Id by default */
+	fdirdp->qindex_flex_ptype_vsi |=
+			rte_cpu_to_le_32((pf->main_vsi->vsi_id <<
+					  I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT) &
+					  I40E_TXD_FLTR_QW0_DEST_VSI_MASK);
+
+	fdirdp->dtype_cmd_cntindex =
+			rte_cpu_to_le_32(I40E_TX_DESC_DTYPE_FILTER_PROG);
+
+	if (add)
+		fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32(
+				I40E_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE <<
+				I40E_TXD_FLTR_QW1_PCMD_SHIFT);
+	else
+		fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32(
+				I40E_FILTER_PROGRAM_DESC_PCMD_REMOVE <<
+				I40E_TXD_FLTR_QW1_PCMD_SHIFT);
+
+	if (fdir_action->behavior == RTE_ETH_FDIR_REJECT)
+		dest = I40E_FILTER_PROGRAM_DESC_DEST_DROP_PACKET;
+	else
+		dest = I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX;
+	fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32((dest <<
+				I40E_TXD_FLTR_QW1_DEST_SHIFT) &
+				I40E_TXD_FLTR_QW1_DEST_MASK);
+
+	fdirdp->dtype_cmd_cntindex |=
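+/*
+ * Programming-flow summary (drawn from the surrounding code): each filter
+ * add/delete consumes two TX descriptors, the filter programming
+ * descriptor assembled here plus a dummy data descriptor pointing at the
+ * pre-built packet in pf->fdir.prg_pkt; the tail is then bumped by 2 and
+ * the DD bit is polled for completion.
+ */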
rte_cpu_to_le_32((fdir_action->report_status<< + I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT) & + I40E_TXD_FLTR_QW1_FD_STATUS_MASK); + + fdirdp->dtype_cmd_cntindex |= + rte_cpu_to_le_32(I40E_TXD_FLTR_QW1_CNT_ENA_MASK); + fdirdp->dtype_cmd_cntindex |= + rte_cpu_to_le_32((pf->fdir.match_counter_index << + I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) & + I40E_TXD_FLTR_QW1_CNTINDEX_MASK); + + fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id); + + PMD_DRV_LOG(INFO, "filling transmit descriptor."); + txdp = &(txq->tx_ring[txq->tx_tail + 1]); + txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr); + td_cmd = I40E_TX_DESC_CMD_EOP | + I40E_TX_DESC_CMD_RS | + I40E_TX_DESC_CMD_DUMMY; + + txdp->cmd_type_offset_bsz = + i40e_build_ctob(td_cmd, 0, I40E_FDIR_PKT_LEN, 0); + + txq->tx_tail += 2; /* set 2 descriptors above, fdirdp and txdp */ + if (txq->tx_tail >= txq->nb_tx_desc) + txq->tx_tail = 0; + /* Update the tx tail register */ + rte_wmb(); + I40E_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail); + + for (i = 0; i < I40E_FDIR_WAIT_COUNT; i++) { + rte_delay_us(I40E_FDIR_WAIT_INTERVAL_US); + if (txdp->cmd_type_offset_bsz & + rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) + break; + } + if (i >= I40E_FDIR_WAIT_COUNT) { + PMD_DRV_LOG(ERR, "Failed to program FDIR filter:" + " time out to get DD on tx queue."); + return -ETIMEDOUT; + } + /* totally delay 10 ms to check programming status*/ + rte_delay_us((I40E_FDIR_WAIT_COUNT - i) * I40E_FDIR_WAIT_INTERVAL_US); + if (i40e_check_fdir_programming_status(rxq) < 0) { + PMD_DRV_LOG(ERR, "Failed to program FDIR filter:" + " programming status reported."); + return -ENOSYS; + } + + return 0; +} + +/* + * i40e_fdir_flush - clear all filters of Flow Director table + * @pf: board private structure + */ +static int +i40e_fdir_flush(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + uint32_t reg; + uint16_t guarant_cnt, best_cnt; + uint16_t i; + + I40E_WRITE_REG(hw, I40E_PFQF_CTL_1, I40E_PFQF_CTL_1_CLEARFDTABLE_MASK); + I40E_WRITE_FLUSH(hw); + + for (i = 0; i < I40E_FDIR_FLUSH_RETRY; i++) { + rte_delay_ms(I40E_FDIR_FLUSH_INTERVAL_MS); + reg = I40E_READ_REG(hw, I40E_PFQF_CTL_1); + if (!(reg & I40E_PFQF_CTL_1_CLEARFDTABLE_MASK)) + break; + } + if (i >= I40E_FDIR_FLUSH_RETRY) { + PMD_DRV_LOG(ERR, "FD table did not flush, may need more time."); + return -ETIMEDOUT; + } + guarant_cnt = (uint16_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) & + I40E_PFQF_FDSTAT_GUARANT_CNT_MASK) >> + I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT); + best_cnt = (uint16_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) & + I40E_PFQF_FDSTAT_BEST_CNT_MASK) >> + I40E_PFQF_FDSTAT_BEST_CNT_SHIFT); + if (guarant_cnt != 0 || best_cnt != 0) { + PMD_DRV_LOG(ERR, "Failed to flush FD table."); + return -ENOSYS; + } else + PMD_DRV_LOG(INFO, "FD table Flush success."); + return 0; +} + +static inline void +i40e_fdir_info_get_flex_set(struct i40e_pf *pf, + struct rte_eth_flex_payload_cfg *flex_set, + uint16_t *num) +{ + struct i40e_fdir_flex_pit *flex_pit; + struct rte_eth_flex_payload_cfg *ptr = flex_set; + uint16_t src, dst, size, j, k; + uint8_t i, layer_idx; + + for (layer_idx = I40E_FLXPLD_L2_IDX; + layer_idx <= I40E_FLXPLD_L4_IDX; + layer_idx++) { + if (layer_idx == I40E_FLXPLD_L2_IDX) + ptr->type = RTE_ETH_L2_PAYLOAD; + else if (layer_idx == I40E_FLXPLD_L3_IDX) + ptr->type = RTE_ETH_L3_PAYLOAD; + else if (layer_idx == I40E_FLXPLD_L4_IDX) + ptr->type = RTE_ETH_L4_PAYLOAD; + + for (i = 0; i < I40E_MAX_FLXPLD_FIED; i++) { + flex_pit = 
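+/*
+ * Usage sketch (assumed application code, not part of this patch): the
+ * flush, info and stats paths in this file are reached through the
+ * generic filter API, dispatched by i40e_fdir_ctrl_func() below:
+ *
+ *	struct rte_eth_fdir_info info;
+ *	rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
+ *				RTE_ETH_FILTER_FLUSH, NULL);
+ *	rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
+ *				RTE_ETH_FILTER_INFO, &info);
+ */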
&pf->fdir.flex_set[layer_idx * + I40E_MAX_FLXPLD_FIED + i]; + if (flex_pit->size == 0) + continue; + src = flex_pit->src_offset * sizeof(uint16_t); + dst = flex_pit->dst_offset * sizeof(uint16_t); + size = flex_pit->size * sizeof(uint16_t); + for (j = src, k = dst; j < src + size; j++, k++) + ptr->src_offset[k] = j; + } + (*num)++; + ptr++; + } +} + +static inline void +i40e_fdir_info_get_flex_mask(struct i40e_pf *pf, + struct rte_eth_fdir_flex_mask *flex_mask, + uint16_t *num) +{ + struct i40e_fdir_flex_mask *mask; + struct rte_eth_fdir_flex_mask *ptr = flex_mask; + uint16_t flow_type; + uint8_t i, j; + uint16_t off_bytes, mask_tmp; + + for (i = I40E_FILTER_PCTYPE_NONF_IPV4_UDP; + i <= I40E_FILTER_PCTYPE_FRAG_IPV6; + i++) { + mask = &pf->fdir.flex_mask[i]; + if (!I40E_VALID_PCTYPE((enum i40e_filter_pctype)i)) + continue; + flow_type = i40e_pctype_to_flowtype((enum i40e_filter_pctype)i); + for (j = 0; j < I40E_FDIR_MAX_FLEXWORD_NUM; j++) { + if (mask->word_mask & I40E_FLEX_WORD_MASK(j)) { + ptr->mask[j * sizeof(uint16_t)] = UINT8_MAX; + ptr->mask[j * sizeof(uint16_t) + 1] = UINT8_MAX; + } else { + ptr->mask[j * sizeof(uint16_t)] = 0x0; + ptr->mask[j * sizeof(uint16_t) + 1] = 0x0; + } + } + for (j = 0; j < I40E_FDIR_BITMASK_NUM_WORD; j++) { + off_bytes = mask->bitmask[j].offset * sizeof(uint16_t); + mask_tmp = ~mask->bitmask[j].mask; + ptr->mask[off_bytes] &= I40E_HI_BYTE(mask_tmp); + ptr->mask[off_bytes + 1] &= I40E_LO_BYTE(mask_tmp); + } + ptr->flow_type = flow_type; + ptr++; + (*num)++; + } +} + +/* + * i40e_fdir_info_get - get information of Flow Director + * @pf: ethernet device to get info from + * @fdir: a pointer to a structure of type *rte_eth_fdir_info* to be filled with + * the flow director information. + */ +static void +i40e_fdir_info_get(struct rte_eth_dev *dev, struct rte_eth_fdir_info *fdir) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + uint16_t num_flex_set = 0; + uint16_t num_flex_mask = 0; + + if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_PERFECT) + fdir->mode = RTE_FDIR_MODE_PERFECT; + else + fdir->mode = RTE_FDIR_MODE_NONE; + + fdir->guarant_spc = + (uint32_t)hw->func_caps.fd_filters_guaranteed; + fdir->best_spc = + (uint32_t)hw->func_caps.fd_filters_best_effort; + fdir->max_flexpayload = I40E_FDIR_MAX_FLEX_LEN; + fdir->flow_types_mask[0] = I40E_FDIR_FLOWS; + fdir->flex_payload_unit = sizeof(uint16_t); + fdir->flex_bitmask_unit = sizeof(uint16_t); + fdir->max_flex_payload_segment_num = I40E_MAX_FLXPLD_FIED; + fdir->flex_payload_limit = I40E_MAX_FLX_SOURCE_OFF; + fdir->max_flex_bitmask_num = I40E_FDIR_BITMASK_NUM_WORD; + + i40e_fdir_info_get_flex_set(pf, + fdir->flex_conf.flex_set, + &num_flex_set); + i40e_fdir_info_get_flex_mask(pf, + fdir->flex_conf.flex_mask, + &num_flex_mask); + + fdir->flex_conf.nb_payloads = num_flex_set; + fdir->flex_conf.nb_flexmasks = num_flex_mask; +} + +/* + * i40e_fdir_stat_get - get statistics of Flow Director + * @pf: ethernet device to get info from + * @stat: a pointer to a structure of type *rte_eth_fdir_stats* to be filled with + * the flow director statistics. 
+ */ +static void +i40e_fdir_stats_get(struct rte_eth_dev *dev, struct rte_eth_fdir_stats *stat) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + uint32_t fdstat; + + fdstat = I40E_READ_REG(hw, I40E_PFQF_FDSTAT); + stat->guarant_cnt = + (uint32_t)((fdstat & I40E_PFQF_FDSTAT_GUARANT_CNT_MASK) >> + I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT); + stat->best_cnt = + (uint32_t)((fdstat & I40E_PFQF_FDSTAT_BEST_CNT_MASK) >> + I40E_PFQF_FDSTAT_BEST_CNT_SHIFT); +} + +/* + * i40e_fdir_ctrl_func - deal with all operations on flow director. + * @pf: board private structure + * @filter_op:operation will be taken. + * @arg: a pointer to specific structure corresponding to the filter_op + */ +int +i40e_fdir_ctrl_func(struct rte_eth_dev *dev, + enum rte_filter_op filter_op, + void *arg) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + int ret = 0; + + if ((pf->flags & I40E_FLAG_FDIR) == 0) + return -ENOTSUP; + + if (filter_op == RTE_ETH_FILTER_NOP) + return 0; + + if (arg == NULL && filter_op != RTE_ETH_FILTER_FLUSH) + return -EINVAL; + + switch (filter_op) { + case RTE_ETH_FILTER_ADD: + ret = i40e_add_del_fdir_filter(dev, + (struct rte_eth_fdir_filter *)arg, + TRUE); + break; + case RTE_ETH_FILTER_DELETE: + ret = i40e_add_del_fdir_filter(dev, + (struct rte_eth_fdir_filter *)arg, + FALSE); + break; + case RTE_ETH_FILTER_FLUSH: + ret = i40e_fdir_flush(dev); + break; + case RTE_ETH_FILTER_INFO: + i40e_fdir_info_get(dev, (struct rte_eth_fdir_info *)arg); + break; + case RTE_ETH_FILTER_STATS: + i40e_fdir_stats_get(dev, (struct rte_eth_fdir_stats *)arg); + break; + default: + PMD_DRV_LOG(ERR, "unknown operation %u.", filter_op); + ret = -EINVAL; + break; + } + return ret; +} diff --git a/drivers/net/i40e/i40e_logs.h b/drivers/net/i40e/i40e_logs.h new file mode 100644 index 0000000..399867f --- /dev/null +++ b/drivers/net/i40e/i40e_logs.h @@ -0,0 +1,78 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _I40E_LOGS_H_ +#define _I40E_LOGS_H_ + +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ + "PMD: %s(): " fmt "\n", __func__, ##args) + +#ifdef RTE_LIBRTE_I40E_DEBUG_INIT +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") +#else +#define PMD_INIT_FUNC_TRACE() do { } while(0) +#endif + +#ifdef RTE_LIBRTE_I40E_DEBUG_RX +#define PMD_RX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_RX_LOG(level, fmt, args...) do { } while(0) +#endif + +#ifdef RTE_LIBRTE_I40E_DEBUG_TX +#define PMD_TX_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_LOG(level, fmt, args...) do { } while(0) +#endif + +#ifdef RTE_LIBRTE_I40E_DEBUG_TX_FREE +#define PMD_TX_FREE_LOG(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) +#else +#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while(0) +#endif + +#ifdef RTE_LIBRTE_I40E_DEBUG_DRIVER +#define PMD_DRV_LOG_RAW(level, fmt, args...) \ + RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args) +#else +#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0) +#endif + +#define PMD_DRV_LOG(level, fmt, args...) \ + PMD_DRV_LOG_RAW(level, fmt "\n", ## args) + +#endif /* _I40E_LOGS_H_ */ diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c new file mode 100644 index 0000000..b89a1e2 --- /dev/null +++ b/drivers/net/i40e/i40e_pf.c @@ -0,0 +1,1063 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2015 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include +#include +#include + +#include "i40e_logs.h" +#include "base/i40e_prototype.h" +#include "base/i40e_adminq_cmd.h" +#include "base/i40e_type.h" +#include "i40e_ethdev.h" +#include "i40e_rxtx.h" +#include "i40e_pf.h" + +#define I40E_CFG_CRCSTRIP_DEFAULT 1 + +static int +i40e_pf_host_switch_queues(struct i40e_pf_vf *vf, + struct i40e_virtchnl_queue_select *qsel, + bool on); + +/** + * Bind PF queues with VSI and VF. + **/ +static int +i40e_pf_vf_queues_mapping(struct i40e_pf_vf *vf) +{ + int i; + struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); + uint16_t vsi_id = vf->vsi->vsi_id; + uint16_t vf_id = vf->vf_idx; + uint16_t nb_qps = vf->vsi->nb_qps; + uint16_t qbase = vf->vsi->base_queue; + uint16_t q1, q2; + uint32_t val; + + /* + * VF should use scatter range queues. So, it needn't + * to set QBASE in this register. + */ + I40E_WRITE_REG(hw, I40E_VSILAN_QBASE(vsi_id), + I40E_VSILAN_QBASE_VSIQTABLE_ENA_MASK); + + /* Set to enable VFLAN_QTABLE[] registers valid */ + I40E_WRITE_REG(hw, I40E_VPLAN_MAPENA(vf_id), + I40E_VPLAN_MAPENA_TXRX_ENA_MASK); + + /* map PF queues to VF */ + for (i = 0; i < nb_qps; i++) { + val = ((qbase + i) & I40E_VPLAN_QTABLE_QINDEX_MASK); + I40E_WRITE_REG(hw, I40E_VPLAN_QTABLE(i, vf_id), val); + } + + /* map PF queues to VSI */ + for (i = 0; i < I40E_MAX_QP_NUM_PER_VF / 2; i++) { + if (2 * i > nb_qps - 1) + q1 = I40E_VSILAN_QTABLE_QINDEX_0_MASK; + else + q1 = qbase + 2 * i; + + if (2 * i + 1 > nb_qps - 1) + q2 = I40E_VSILAN_QTABLE_QINDEX_0_MASK; + else + q2 = qbase + 2 * i + 1; + + val = (q2 << I40E_VSILAN_QTABLE_QINDEX_1_SHIFT) + q1; + I40E_WRITE_REG(hw, I40E_VSILAN_QTABLE(i, vsi_id), val); + } + I40E_WRITE_FLUSH(hw); + + return I40E_SUCCESS; +} + + +/** + * Proceed VF reset operation. + */ +int +i40e_pf_host_vf_reset(struct i40e_pf_vf *vf, bool do_hw_reset) +{ + uint32_t val, i; + struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); + uint16_t vf_id, abs_vf_id, vf_msix_num; + int ret; + struct i40e_virtchnl_queue_select qsel; + + if (vf == NULL) + return -EINVAL; + + vf_id = vf->vf_idx; + abs_vf_id = vf_id + hw->func_caps.vf_base_id; + + /* Notify VF that we are in VFR progress */ + I40E_WRITE_REG(hw, I40E_VFGEN_RSTAT1(vf_id), I40E_PF_VFR_INPROGRESS); + + /* + * If require a SW VF reset, a VFLR interrupt will be generated, + * this function will be called again. To avoid it, + * disable interrupt first. 
+
+/**
+ * Perform a VF reset.
+ */
+int
+i40e_pf_host_vf_reset(struct i40e_pf_vf *vf, bool do_hw_reset)
+{
+	uint32_t val, i;
+	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
+	uint16_t vf_id, abs_vf_id, vf_msix_num;
+	int ret;
+	struct i40e_virtchnl_queue_select qsel;
+
+	if (vf == NULL)
+		return -EINVAL;
+
+	vf_id = vf->vf_idx;
+	abs_vf_id = vf_id + hw->func_caps.vf_base_id;
+
+	/* Notify the VF that a VF reset is in progress */
+	I40E_WRITE_REG(hw, I40E_VFGEN_RSTAT1(vf_id), I40E_PF_VFR_INPROGRESS);
+
+	/*
+	 * If a SW VF reset is required, a VFLR interrupt will be generated
+	 * and this function will be called again. To avoid that, disable
+	 * the interrupt first.
+	 */
+	if (do_hw_reset) {
+		vf->state = I40E_VF_INRESET;
+		val = I40E_READ_REG(hw, I40E_VPGEN_VFRTRIG(vf_id));
+		val |= I40E_VPGEN_VFRTRIG_VFSWR_MASK;
+		I40E_WRITE_REG(hw, I40E_VPGEN_VFRTRIG(vf_id), val);
+		I40E_WRITE_FLUSH(hw);
+	}
+
+#define VFRESET_MAX_WAIT_CNT 100
+	/* Wait until the VF reset is done */
+	for (i = 0; i < VFRESET_MAX_WAIT_CNT; i++) {
+		rte_delay_us(10);
+		val = I40E_READ_REG(hw, I40E_VPGEN_VFRSTAT(vf_id));
+		if (val & I40E_VPGEN_VFRSTAT_VFRD_MASK)
+			break;
+	}
+
+	if (i >= VFRESET_MAX_WAIT_CNT) {
+		PMD_DRV_LOG(ERR, "VF reset timeout");
+		return -ETIMEDOUT;
+	}
+
+	/* If this is not the first reset, do the cleanup job first */
+	if (vf->vsi) {
+		/* Disable queues */
+		memset(&qsel, 0, sizeof(qsel));
+		for (i = 0; i < vf->vsi->nb_qps; i++)
+			qsel.rx_queues |= 1 << i;
+		qsel.tx_queues = qsel.rx_queues;
+		ret = i40e_pf_host_switch_queues(vf, &qsel, false);
+		if (ret != I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Disable VF queues failed");
+			return -EFAULT;
+		}
+
+		/* Disable VF interrupt setting */
+		vf_msix_num = hw->func_caps.num_msix_vectors_vf;
+		for (i = 0; i < vf_msix_num; i++) {
+			if (!i)
+				val = I40E_VFINT_DYN_CTL0(vf_id);
+			else
+				val = I40E_VFINT_DYN_CTLN(((vf_msix_num - 1) *
+							(vf_id)) + (i - 1));
+			I40E_WRITE_REG(hw, val, I40E_VFINT_DYN_CTLN_CLEARPBA_MASK);
+		}
+		I40E_WRITE_FLUSH(hw);
+
+		/* remove VSI */
+		ret = i40e_vsi_release(vf->vsi);
+		if (ret != I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Release VSI failed");
+			return -EFAULT;
+		}
+	}
+
+#define I40E_VF_PCI_ADDR  0xAA
+#define I40E_VF_PEND_MASK 0x20
+	/*
+	 * Check the pending transactions of this VF, using the absolute
+	 * VF id; refer to the datasheet for details.
+	 */
+	I40E_WRITE_REG(hw, I40E_PF_PCI_CIAA, I40E_VF_PCI_ADDR |
+		(abs_vf_id << I40E_PF_PCI_CIAA_VF_NUM_SHIFT));
+	for (i = 0; i < VFRESET_MAX_WAIT_CNT; i++) {
+		rte_delay_us(1);
+		val = I40E_READ_REG(hw, I40E_PF_PCI_CIAD);
+		if ((val & I40E_VF_PEND_MASK) == 0)
+			break;
+	}
+
+	if (i >= VFRESET_MAX_WAIT_CNT) {
+		PMD_DRV_LOG(ERR, "Timed out waiting for VF PCI transactions to end");
+		return -ETIMEDOUT;
+	}
+
+	/* Reset done, set the COMPLETED flag and clear the reset bit */
+	I40E_WRITE_REG(hw, I40E_VFGEN_RSTAT1(vf_id), I40E_PF_VFR_COMPLETED);
+	val = I40E_READ_REG(hw, I40E_VPGEN_VFRTRIG(vf_id));
+	val &= ~I40E_VPGEN_VFRTRIG_VFSWR_MASK;
+	I40E_WRITE_REG(hw, I40E_VPGEN_VFRTRIG(vf_id), val);
+	vf->reset_cnt++;
+	I40E_WRITE_FLUSH(hw);
+
+	/* Allocate resources again */
+	vf->vsi = i40e_vsi_setup(vf->pf, I40E_VSI_SRIOV,
+			vf->pf->main_vsi, vf->vf_idx);
+	if (vf->vsi == NULL) {
+		PMD_DRV_LOG(ERR, "Add VSI failed");
+		return -EFAULT;
+	}
+
+	ret = i40e_pf_vf_queues_mapping(vf);
+	if (ret != I40E_SUCCESS) {
+		PMD_DRV_LOG(ERR, "queue mapping error");
+		i40e_vsi_release(vf->vsi);
+		return -EFAULT;
+	}
+
+	return ret;
+}
+
+static int
+i40e_pf_host_send_msg_to_vf(struct i40e_pf_vf *vf,
+			    uint32_t opcode,
+			    uint32_t retval,
+			    uint8_t *msg,
+			    uint16_t msglen)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
+	uint16_t abs_vf_id = hw->func_caps.vf_base_id + vf->vf_idx;
+	int ret;
+
+	ret = i40e_aq_send_msg_to_vf(hw, abs_vf_id, opcode, retval,
+						msg, msglen, NULL);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to send message to VF, err %u",
+			     hw->aq.asq_last_status);
+	}
+
+	return ret;
+}
+
+static void
+i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf)
+{
+	struct i40e_virtchnl_version_info info;
+
+	info.major = I40E_DPDK_VERSION_MAJOR;
+	info.minor = I40E_DPDK_VERSION_MINOR;
+	i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_VERSION,
+		I40E_SUCCESS, (uint8_t *)&info, sizeof(info));
+}
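Both bounded waits in i40e_pf_host_vf_reset() above share one shape: delay, read a status register, test a bit, give up after N tries. A generic sketch of that pattern (the callback is a stand-in for the register read and bit test; illustrative, not part of the patch):

#include <errno.h>
#include <stdbool.h>

#include <rte_cycles.h>

/* Poll done(ctx) every delay_us microseconds, at most max_tries times.
 * Returns 0 once done() reports completion, -ETIMEDOUT otherwise.
 */
static int
poll_until(bool (*done)(void *ctx), void *ctx,
	   unsigned int delay_us, unsigned int max_tries)
{
	unsigned int i;

	for (i = 0; i < max_tries; i++) {
		rte_delay_us(delay_us);
		if (done(ctx))
			return 0;
	}
	return -ETIMEDOUT;
}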
+static int
+i40e_pf_host_process_cmd_reset_vf(struct i40e_pf_vf *vf)
+{
+	i40e_pf_host_vf_reset(vf, 1);
+
+	/* No feedback will be sent to the VF for a VFLR */
+	return I40E_SUCCESS;
+}
+
+static int
+i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf)
+{
+	struct i40e_virtchnl_vf_resource *vf_res = NULL;
+	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
+	uint32_t len = 0;
+	int ret = I40E_SUCCESS;
+
+	/* only one VSI by default */
+	len = sizeof(struct i40e_virtchnl_vf_resource) +
+				I40E_DEFAULT_VF_VSI_NUM *
+		sizeof(struct i40e_virtchnl_vsi_resource);
+
+	vf_res = rte_zmalloc("i40e_vf_res", len, 0);
+	if (vf_res == NULL) {
+		PMD_DRV_LOG(ERR, "failed to allocate memory");
+		ret = I40E_ERR_NO_MEMORY;
+		vf_res = NULL;
+		len = 0;
+		goto send_msg;
+	}
+
+	vf_res->vf_offload_flags = I40E_VIRTCHNL_VF_OFFLOAD_L2 |
+				I40E_VIRTCHNL_VF_OFFLOAD_VLAN;
+	vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf;
+	vf_res->num_queue_pairs = vf->vsi->nb_qps;
+	vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM;
+
+	/* Change the setting below if the PF host supports more VSIs per VF */
+	vf_res->vsi_res[0].vsi_type = I40E_VSI_SRIOV;
+	/* The VF is assumed to have a single VSI now, so always return 0 */
+	vf_res->vsi_res[0].vsi_id = 0;
+	vf_res->vsi_res[0].num_queue_pairs = vf->vsi->nb_qps;
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_VF_RESOURCES,
+					ret, (uint8_t *)vf_res, len);
+	rte_free(vf_res);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_hmc_config_rxq(struct i40e_hw *hw,
+			    struct i40e_pf_vf *vf,
+			    struct i40e_virtchnl_rxq_info *rxq,
+			    uint8_t crcstrip)
+{
+	int err = I40E_SUCCESS;
+	struct i40e_hmc_obj_rxq rx_ctx;
+	uint16_t abs_queue_id = vf->vsi->base_queue + rxq->queue_id;
+
+	/* Clear the context structure first */
+	memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq));
+	rx_ctx.dbuff = rxq->databuffer_size >> I40E_RXQ_CTX_DBUFF_SHIFT;
+	rx_ctx.hbuff = rxq->hdr_size >> I40E_RXQ_CTX_HBUFF_SHIFT;
+	rx_ctx.base = rxq->dma_ring_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+	rx_ctx.qlen = rxq->ring_len;
+#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
+	rx_ctx.dsize = 1;
+#endif
+
+	if (rxq->splithdr_enabled) {
+		rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_ALL;
+		rx_ctx.dtype = i40e_header_split_enabled;
+	} else {
+		rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_NONE;
+		rx_ctx.dtype = i40e_header_split_none;
+	}
+	rx_ctx.rxmax = rxq->max_pkt_size;
+	rx_ctx.tphrdesc_ena = 1;
+	rx_ctx.tphwdesc_ena = 1;
+	rx_ctx.tphdata_ena = 1;
+	rx_ctx.tphhead_ena = 1;
+	rx_ctx.lrxqthresh = 2;
+	rx_ctx.crcstrip = crcstrip;
+	rx_ctx.l2tsel = 1;
+	rx_ctx.prefena = 1;
+
+	err = i40e_clear_lan_rx_queue_context(hw, abs_queue_id);
+	if (err != I40E_SUCCESS)
+		return err;
+	err = i40e_set_lan_rx_queue_context(hw, abs_queue_id, &rx_ctx);
+
+	return err;
+}
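The reply assembled in i40e_pf_host_process_cmd_get_vf_resource() above is a fixed header followed by a flexible array of per-VSI entries, so its buffer size is the header size plus N array elements. A self-contained sketch of that sizing rule (the two structures here are simplified stand-ins, not the virtchnl definitions):

#include <stddef.h>
#include <stdint.h>

struct vsi_entry {
	uint16_t vsi_id;
	uint16_t num_queue_pairs;
};

struct vf_reply {
	uint16_t num_vsis;
	struct vsi_entry vsi[];	/* flexible array member */
};

/* Buffer size needed for a reply carrying n VSI entries. */
static size_t
vf_reply_size(size_t n)
{
	return sizeof(struct vf_reply) + n * sizeof(struct vsi_entry);
}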
+
+static int
+i40e_pf_host_hmc_config_txq(struct i40e_hw *hw,
+			    struct i40e_pf_vf *vf,
+			    struct i40e_virtchnl_txq_info *txq)
+{
+	int err = I40E_SUCCESS;
+	struct i40e_hmc_obj_txq tx_ctx;
+	uint32_t qtx_ctl;
+	uint16_t abs_queue_id = vf->vsi->base_queue + txq->queue_id;
+
+	/* clear the context structure first */
+	memset(&tx_ctx, 0, sizeof(tx_ctx));
+	tx_ctx.new_context = 1;
+	tx_ctx.base = txq->dma_ring_addr / I40E_QUEUE_BASE_ADDR_UNIT;
+	tx_ctx.qlen = txq->ring_len;
+	tx_ctx.rdylist = rte_le_to_cpu_16(vf->vsi->info.qs_handle[0]);
+	err = i40e_clear_lan_tx_queue_context(hw, abs_queue_id);
+	if (err != I40E_SUCCESS)
+		return err;
+
+	err = i40e_set_lan_tx_queue_context(hw, abs_queue_id, &tx_ctx);
+	if (err != I40E_SUCCESS)
+		return err;
+
+	/*
+	 * Bind the queue to the VF function. Since TX and RX queues
+	 * always appear in pairs, only QTX_CTL needs to be set.
+	 */
+	qtx_ctl = (I40E_QTX_CTL_VF_QUEUE << I40E_QTX_CTL_PFVF_Q_SHIFT) |
+		((hw->pf_id << I40E_QTX_CTL_PF_INDX_SHIFT) &
+			I40E_QTX_CTL_PF_INDX_MASK) |
+		(((vf->vf_idx + hw->func_caps.vf_base_id) <<
+			I40E_QTX_CTL_VFVM_INDX_SHIFT) &
+			I40E_QTX_CTL_VFVM_INDX_MASK);
+	I40E_WRITE_REG(hw, I40E_QTX_CTL(abs_queue_id), qtx_ctl);
+	I40E_WRITE_FLUSH(hw);
+
+	return I40E_SUCCESS;
+}
+
+static int
+i40e_pf_host_process_cmd_config_vsi_queues(struct i40e_pf_vf *vf,
+					   uint8_t *msg,
+					   uint16_t msglen)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
+	struct i40e_vsi *vsi = vf->vsi;
+	struct i40e_virtchnl_vsi_queue_config_info *vc_vqci =
+		(struct i40e_virtchnl_vsi_queue_config_info *)msg;
+	struct i40e_virtchnl_queue_pair_info *vc_qpi;
+	int i, ret = I40E_SUCCESS;
+
+	if (!msg || vc_vqci->num_queue_pairs > vsi->nb_qps ||
+		vc_vqci->num_queue_pairs > I40E_MAX_VSI_QP ||
+		msglen < I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqci,
+					vc_vqci->num_queue_pairs)) {
+		PMD_DRV_LOG(ERR, "vsi_queue_config_info argument wrong");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	vc_qpi = vc_vqci->qpair;
+	for (i = 0; i < vc_vqci->num_queue_pairs; i++) {
+		if (vc_qpi[i].rxq.queue_id > vsi->nb_qps - 1 ||
+			vc_qpi[i].txq.queue_id > vsi->nb_qps - 1) {
+			ret = I40E_ERR_PARAM;
+			goto send_msg;
+		}
+
+		/*
+		 * Apply the VF RX queue setting to the HMC. Only the
+		 * I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT opcode carries
+		 * the extra 'struct i40e_virtchnl_queue_pair_ext_info',
+		 * so the default CRC-strip setting is used here.
+		 */
+		if (i40e_pf_host_hmc_config_rxq(hw, vf, &vc_qpi[i].rxq,
+			I40E_CFG_CRCSTRIP_DEFAULT) != I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Configure RX queue HMC failed");
+			ret = I40E_ERR_PARAM;
+			goto send_msg;
+		}
+
+		/* Apply the VF TX queue setting to the HMC */
+		if (i40e_pf_host_hmc_config_txq(hw, vf,
+			&vc_qpi[i].txq) != I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Configure TX queue HMC failed");
+			ret = I40E_ERR_PARAM;
+			goto send_msg;
+		}
+	}
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES,
+							ret, NULL, 0);
+
+	return ret;
+}
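The guard at the top of the handler above deserves a second look: num_queue_pairs comes from the (untrusted) VF, so the message length is checked against the size that pair count implies before the array is walked. A condensed, self-contained version of that check, with a simplified message layout standing in for the virtchnl structures:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct q_cfg_msg {
	uint16_t vsi_id;
	uint16_t num_queue_pairs;
	uint32_t qpair[];	/* per-pair config records, simplified */
};

/* Accept the message only if the buffer really holds the pair count
 * it claims and that count is within the VSI's limits.
 */
static bool
q_cfg_msg_valid(const struct q_cfg_msg *msg, size_t msglen,
		uint16_t max_pairs)
{
	if (msg == NULL || msg->num_queue_pairs > max_pairs)
		return false;
	return msglen >= sizeof(*msg) +
		msg->num_queue_pairs * sizeof(msg->qpair[0]);
}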
+
+static int
+i40e_pf_host_process_cmd_config_vsi_queues_ext(struct i40e_pf_vf *vf,
+					       uint8_t *msg,
+					       uint16_t msglen)
+{
+	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
+	struct i40e_vsi *vsi = vf->vsi;
+	struct i40e_virtchnl_vsi_queue_config_ext_info *vc_vqcei =
+		(struct i40e_virtchnl_vsi_queue_config_ext_info *)msg;
+	struct i40e_virtchnl_queue_pair_ext_info *vc_qpei;
+	int i, ret = I40E_SUCCESS;
+
+	if (!msg || vc_vqcei->num_queue_pairs > vsi->nb_qps ||
+		vc_vqcei->num_queue_pairs > I40E_MAX_VSI_QP ||
+		msglen < I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqcei,
+					vc_vqcei->num_queue_pairs)) {
+		PMD_DRV_LOG(ERR, "vsi_queue_config_ext_info argument wrong");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	vc_qpei = vc_vqcei->qpair;
+	for (i = 0; i < vc_vqcei->num_queue_pairs; i++) {
+		if (vc_qpei[i].rxq.queue_id > vsi->nb_qps - 1 ||
+			vc_qpei[i].txq.queue_id > vsi->nb_qps - 1) {
+			ret = I40E_ERR_PARAM;
+			goto send_msg;
+		}
+		/*
+		 * Apply the VF RX queue setting to the HMC. This _EXT
+		 * opcode carries the extra information in
+		 * 'struct i40e_virtchnl_queue_pair_ext_info', including
+		 * the per-queue CRC-strip flag used below.
+		 */
+		if (i40e_pf_host_hmc_config_rxq(hw, vf, &vc_qpei[i].rxq,
+			vc_qpei[i].rxq_ext.crcstrip) != I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Configure RX queue HMC failed");
+			ret = I40E_ERR_PARAM;
+			goto send_msg;
+		}
+
+		/* Apply the VF TX queue setting to the HMC */
+		if (i40e_pf_host_hmc_config_txq(hw, vf, &vc_qpei[i].txq) !=
+							I40E_SUCCESS) {
+			PMD_DRV_LOG(ERR, "Configure TX queue HMC failed");
+			ret = I40E_ERR_PARAM;
+			goto send_msg;
+		}
+	}
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT,
+								ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_config_irq_map(struct i40e_pf_vf *vf,
+					uint8_t *msg, uint16_t msglen)
+{
+	int ret = I40E_SUCCESS;
+	struct i40e_virtchnl_irq_map_info *irqmap =
+	    (struct i40e_virtchnl_irq_map_info *)msg;
+
+	if (msg == NULL || msglen < sizeof(struct i40e_virtchnl_irq_map_info)) {
+		PMD_DRV_LOG(ERR, "buffer too short");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	/* Assume the VF has only one vector to bind all queues */
+	if (irqmap->num_vectors != 1) {
+		PMD_DRV_LOG(ERR, "DPDK host only supports 1 vector");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+
+	if (irqmap->vecmap[0].vector_id == 0) {
+		PMD_DRV_LOG(ERR, "DPDK host does not support using IRQ0");
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	/* This MSIX index is within the VF's vector range */
+	vf->vsi->msix_intr = irqmap->vecmap[0].vector_id;
+
+	/*
+	 * The exact TX/RX queue mapping to this vector does not matter
+	 * here: link all VF RX queues together and only do the mapping
+	 * work. The VF can disable/enable the interrupt by itself.
+	 */
+	i40e_vsi_queues_bind_intr(vf->vsi);
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP,
+							ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_switch_queues(struct i40e_pf_vf *vf,
+			   struct i40e_virtchnl_queue_select *qsel,
+			   bool on)
+{
+	int ret = I40E_SUCCESS;
+	int i;
+	struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf);
+	uint16_t baseq = vf->vsi->base_queue;
+
+	if (qsel->rx_queues + qsel->tx_queues == 0)
+		return I40E_ERR_PARAM;
+
+	/* Always enable RX first and disable it last */
+	if (on) {
+		for (i = 0; i < I40E_MAX_QP_NUM_PER_VF; i++)
+			if (qsel->rx_queues & (1 << i)) {
+				ret = i40e_switch_rx_queue(hw, baseq + i, on);
+				if (ret != I40E_SUCCESS)
+					return ret;
+			}
+	}
+
+	/* Enable/Disable TX */
+	for (i = 0; i < I40E_MAX_QP_NUM_PER_VF; i++)
+		if (qsel->tx_queues & (1 << i)) {
+			ret = i40e_switch_tx_queue(hw, baseq + i, on);
+			if (ret != I40E_SUCCESS)
+				return ret;
+		}
+
+	/* Disable RX last when switching queues off */
+	if (!on) {
+		for (i = 0; i < I40E_MAX_QP_NUM_PER_VF; i++)
+			if (qsel->rx_queues & (1 << i)) {
+				ret = i40e_switch_rx_queue(hw, baseq + i, on);
+				if (ret != I40E_SUCCESS)
+					return ret;
+			}
+	}
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_enable_queues(struct i40e_pf_vf *vf,
+				       uint8_t *msg,
+				       uint16_t msglen)
+{
+	int ret = I40E_SUCCESS;
+	struct i40e_virtchnl_queue_select *q_sel =
+		(struct i40e_virtchnl_queue_select *)msg;
+
+	if (msg == NULL || msglen != sizeof(*q_sel)) {
+		ret = I40E_ERR_PARAM;
+		goto send_msg;
+	}
+	ret = i40e_pf_host_switch_queues(vf, q_sel, true);
+
+send_msg:
+	i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_ENABLE_QUEUES,
+							ret, NULL, 0);
+
+	return ret;
+}
+
+static int
+i40e_pf_host_process_cmd_disable_queues(struct i40e_pf_vf *vf,
+					uint8_t *msg,
+					uint16_t msglen)
+{
+	int ret = I40E_SUCCESS;
+	struct i40e_virtchnl_queue_select *q_sel =
+		(struct i40e_virtchnl_queue_select *)msg;
+
+	if (msg == NULL || msglen != 
sizeof(*q_sel)) { + ret = I40E_ERR_PARAM; + goto send_msg; + } + ret = i40e_pf_host_switch_queues(vf, q_sel, false); + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_DISABLE_QUEUES, + ret, NULL, 0); + + return ret; +} + + +static int +i40e_pf_host_process_cmd_add_ether_address(struct i40e_pf_vf *vf, + uint8_t *msg, + uint16_t msglen) +{ + int ret = I40E_SUCCESS; + struct i40e_virtchnl_ether_addr_list *addr_list = + (struct i40e_virtchnl_ether_addr_list *)msg; + struct i40e_mac_filter_info filter; + int i; + struct ether_addr *mac; + + memset(&filter, 0 , sizeof(struct i40e_mac_filter_info)); + + if (msg == NULL || msglen <= sizeof(*addr_list)) { + PMD_DRV_LOG(ERR, "add_ether_address argument too short"); + ret = I40E_ERR_PARAM; + goto send_msg; + } + + for (i = 0; i < addr_list->num_elements; i++) { + mac = (struct ether_addr *)(addr_list->list[i].addr); + (void)rte_memcpy(&filter.mac_addr, mac, ETHER_ADDR_LEN); + filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; + if(!is_valid_assigned_ether_addr(mac) || + i40e_vsi_add_mac(vf->vsi, &filter)) { + ret = I40E_ERR_INVALID_MAC_ADDR; + goto send_msg; + } + } + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS, + ret, NULL, 0); + + return ret; +} + +static int +i40e_pf_host_process_cmd_del_ether_address(struct i40e_pf_vf *vf, + uint8_t *msg, + uint16_t msglen) +{ + int ret = I40E_SUCCESS; + struct i40e_virtchnl_ether_addr_list *addr_list = + (struct i40e_virtchnl_ether_addr_list *)msg; + int i; + struct ether_addr *mac; + + if (msg == NULL || msglen <= sizeof(*addr_list)) { + PMD_DRV_LOG(ERR, "delete_ether_address argument too short"); + ret = I40E_ERR_PARAM; + goto send_msg; + } + + for (i = 0; i < addr_list->num_elements; i++) { + mac = (struct ether_addr *)(addr_list->list[i].addr); + if(!is_valid_assigned_ether_addr(mac) || + i40e_vsi_delete_mac(vf->vsi, mac)) { + ret = I40E_ERR_INVALID_MAC_ADDR; + goto send_msg; + } + } + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS, + ret, NULL, 0); + + return ret; +} + +static int +i40e_pf_host_process_cmd_add_vlan(struct i40e_pf_vf *vf, + uint8_t *msg, uint16_t msglen) +{ + int ret = I40E_SUCCESS; + struct i40e_virtchnl_vlan_filter_list *vlan_filter_list = + (struct i40e_virtchnl_vlan_filter_list *)msg; + int i; + uint16_t *vid; + + if (msg == NULL || msglen <= sizeof(*vlan_filter_list)) { + PMD_DRV_LOG(ERR, "add_vlan argument too short"); + ret = I40E_ERR_PARAM; + goto send_msg; + } + + vid = vlan_filter_list->vlan_id; + + for (i = 0; i < vlan_filter_list->num_elements; i++) { + ret = i40e_vsi_add_vlan(vf->vsi, vid[i]); + if(ret != I40E_SUCCESS) + goto send_msg; + } + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_ADD_VLAN, + ret, NULL, 0); + + return ret; +} + +static int +i40e_pf_host_process_cmd_del_vlan(struct i40e_pf_vf *vf, + uint8_t *msg, + uint16_t msglen) +{ + int ret = I40E_SUCCESS; + struct i40e_virtchnl_vlan_filter_list *vlan_filter_list = + (struct i40e_virtchnl_vlan_filter_list *)msg; + int i; + uint16_t *vid; + + if (msg == NULL || msglen <= sizeof(*vlan_filter_list)) { + PMD_DRV_LOG(ERR, "delete_vlan argument too short"); + ret = I40E_ERR_PARAM; + goto send_msg; + } + + vid = vlan_filter_list->vlan_id; + for (i = 0; i < vlan_filter_list->num_elements; i++) { + ret = i40e_vsi_delete_vlan(vf->vsi, vid[i]); + if(ret != I40E_SUCCESS) + goto send_msg; + } + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_DEL_VLAN, + ret, NULL, 0); + + return ret; +} + +static int 
+i40e_pf_host_process_cmd_config_promisc_mode( + struct i40e_pf_vf *vf, + uint8_t *msg, + uint16_t msglen) +{ + int ret = I40E_SUCCESS; + struct i40e_virtchnl_promisc_info *promisc = + (struct i40e_virtchnl_promisc_info *)msg; + struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); + bool unicast = FALSE, multicast = FALSE; + + if (msg == NULL || msglen != sizeof(*promisc)) { + ret = I40E_ERR_PARAM; + goto send_msg; + } + + if (promisc->flags & I40E_FLAG_VF_UNICAST_PROMISC) + unicast = TRUE; + ret = i40e_aq_set_vsi_unicast_promiscuous(hw, + vf->vsi->seid, unicast, NULL); + if (ret != I40E_SUCCESS) + goto send_msg; + + if (promisc->flags & I40E_FLAG_VF_MULTICAST_PROMISC) + multicast = TRUE; + ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vf->vsi->seid, + multicast, NULL); + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, + I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, ret, NULL, 0); + + return ret; +} + +static int +i40e_pf_host_process_cmd_get_stats(struct i40e_pf_vf *vf) +{ + i40e_update_vsi_stats(vf->vsi); + + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_STATS, + I40E_SUCCESS, (uint8_t *)&vf->vsi->eth_stats, + sizeof(vf->vsi->eth_stats)); + + return I40E_SUCCESS; +} + +static void +i40e_pf_host_process_cmd_get_link_status(struct i40e_pf_vf *vf) +{ + struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vf->pf->main_vsi); + + /* Update link status first to acquire latest link change */ + i40e_dev_link_update(dev, 1); + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_LINK_STAT, + I40E_SUCCESS, (uint8_t *)&dev->data->dev_link, + sizeof(struct rte_eth_link)); +} + +static int +i40e_pf_host_process_cmd_cfg_vlan_offload( + struct i40e_pf_vf *vf, + uint8_t *msg, + uint16_t msglen) +{ + int ret = I40E_SUCCESS; + struct i40e_virtchnl_vlan_offload_info *offload = + (struct i40e_virtchnl_vlan_offload_info *)msg; + + if (msg == NULL || msglen != sizeof(*offload)) { + ret = I40E_ERR_PARAM; + goto send_msg; + } + + ret = i40e_vsi_config_vlan_stripping(vf->vsi, + !!offload->enable_vlan_strip); + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to configure vlan stripping"); + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD, + ret, NULL, 0); + + return ret; +} + +static int +i40e_pf_host_process_cmd_cfg_pvid(struct i40e_pf_vf *vf, + uint8_t *msg, + uint16_t msglen) +{ + int ret = I40E_SUCCESS; + struct i40e_virtchnl_pvid_info *tpid_info = + (struct i40e_virtchnl_pvid_info *)msg; + + if (msg == NULL || msglen != sizeof(*tpid_info)) { + ret = I40E_ERR_PARAM; + goto send_msg; + } + + ret = i40e_vsi_vlan_pvid_set(vf->vsi, &tpid_info->info); + +send_msg: + i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CFG_VLAN_PVID, + ret, NULL, 0); + + return ret; +} + +void +i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev, + uint16_t abs_vf_id, uint32_t opcode, + __rte_unused uint32_t retval, + uint8_t *msg, + uint16_t msglen) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_pf_vf *vf; + /* AdminQ will pass absolute VF id, transfer to internal vf id */ + uint16_t vf_id = abs_vf_id - hw->func_caps.vf_base_id; + + if (!dev || vf_id > pf->vf_num - 1 || !pf->vfs) { + PMD_DRV_LOG(ERR, "invalid argument"); + return; + } + + vf = &pf->vfs[vf_id]; + if (!vf->vsi) { + PMD_DRV_LOG(ERR, "NO VSI associated with VF found"); + i40e_pf_host_send_msg_to_vf(vf, opcode, + I40E_ERR_NO_AVAILABLE_VSI, NULL, 0); + return; + } + + switch (opcode) { + case I40E_VIRTCHNL_OP_VERSION : + PMD_DRV_LOG(INFO, "OP_VERSION 
received"); + i40e_pf_host_process_cmd_version(vf); + break; + case I40E_VIRTCHNL_OP_RESET_VF : + PMD_DRV_LOG(INFO, "OP_RESET_VF received"); + i40e_pf_host_process_cmd_reset_vf(vf); + break; + case I40E_VIRTCHNL_OP_GET_VF_RESOURCES: + PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received"); + i40e_pf_host_process_cmd_get_vf_resource(vf); + break; + case I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES: + PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received"); + i40e_pf_host_process_cmd_config_vsi_queues(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT: + PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES_EXT received"); + i40e_pf_host_process_cmd_config_vsi_queues_ext(vf, msg, + msglen); + break; + case I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP: + PMD_DRV_LOG(INFO, "OP_CONFIG_IRQ_MAP received"); + i40e_pf_host_process_cmd_config_irq_map(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_ENABLE_QUEUES: + PMD_DRV_LOG(INFO, "OP_ENABLE_QUEUES received"); + i40e_pf_host_process_cmd_enable_queues(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_DISABLE_QUEUES: + PMD_DRV_LOG(INFO, "OP_DISABLE_QUEUE received"); + i40e_pf_host_process_cmd_disable_queues(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS: + PMD_DRV_LOG(INFO, "OP_ADD_ETHER_ADDRESS received"); + i40e_pf_host_process_cmd_add_ether_address(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS: + PMD_DRV_LOG(INFO, "OP_DEL_ETHER_ADDRESS received"); + i40e_pf_host_process_cmd_del_ether_address(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_ADD_VLAN: + PMD_DRV_LOG(INFO, "OP_ADD_VLAN received"); + i40e_pf_host_process_cmd_add_vlan(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_DEL_VLAN: + PMD_DRV_LOG(INFO, "OP_DEL_VLAN received"); + i40e_pf_host_process_cmd_del_vlan(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE: + PMD_DRV_LOG(INFO, "OP_CONFIG_PROMISCUOUS_MODE received"); + i40e_pf_host_process_cmd_config_promisc_mode(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_GET_STATS: + PMD_DRV_LOG(INFO, "OP_GET_STATS received"); + i40e_pf_host_process_cmd_get_stats(vf); + break; + case I40E_VIRTCHNL_OP_GET_LINK_STAT: + PMD_DRV_LOG(INFO, "OP_GET_LINK_STAT received"); + i40e_pf_host_process_cmd_get_link_status(vf); + break; + case I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD: + PMD_DRV_LOG(INFO, "OP_CFG_VLAN_OFFLOAD received"); + i40e_pf_host_process_cmd_cfg_vlan_offload(vf, msg, msglen); + break; + case I40E_VIRTCHNL_OP_CFG_VLAN_PVID: + PMD_DRV_LOG(INFO, "OP_CFG_VLAN_PVID received"); + i40e_pf_host_process_cmd_cfg_pvid(vf, msg, msglen); + break; + /* Don't add command supported below, which will + * return an error code. + */ + case I40E_VIRTCHNL_OP_FCOE: + PMD_DRV_LOG(ERR, "OP_FCOE received, not supported"); + default: + PMD_DRV_LOG(ERR, "%u received, not supported", opcode); + i40e_pf_host_send_msg_to_vf(vf, opcode, I40E_ERR_PARAM, + NULL, 0); + break; + } +} + +int +i40e_pf_host_init(struct rte_eth_dev *dev) +{ + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_hw *hw = I40E_PF_TO_HW(pf); + int ret, i; + uint32_t val; + + PMD_INIT_FUNC_TRACE(); + + /** + * return if SRIOV not enabled, VF number not configured or + * no queue assigned. 
+
+int
+i40e_pf_host_init(struct rte_eth_dev *dev)
+{
+	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
+	int ret, i;
+	uint32_t val;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/**
+	 * Return if SRIOV is not enabled, no VF number is configured or
+	 * no queues are assigned.
+	 */
+	if (!hw->func_caps.sr_iov_1_1 || pf->vf_num == 0 ||
+	    pf->vf_nb_qps == 0)
+		return I40E_SUCCESS;
+
+	/* Allocate memory to store the VF structures */
+	pf->vfs = rte_zmalloc("i40e_pf_vf", sizeof(*pf->vfs) * pf->vf_num, 0);
+	if (pf->vfs == NULL)
+		return -ENOMEM;
+
+	/* Disable irq0 for the VFR event */
+	i40e_pf_disable_irq0(hw);
+
+	/* Disable the VF link status interrupt */
+	val = I40E_READ_REG(hw, I40E_PFGEN_PORTMDIO_NUM);
+	val &= ~I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_MASK;
+	I40E_WRITE_REG(hw, I40E_PFGEN_PORTMDIO_NUM, val);
+	I40E_WRITE_FLUSH(hw);
+
+	for (i = 0; i < pf->vf_num; i++) {
+		pf->vfs[i].pf = pf;
+		pf->vfs[i].state = I40E_VF_INACTIVE;
+		pf->vfs[i].vf_idx = i;
+		ret = i40e_pf_host_vf_reset(&pf->vfs[i], 0);
+		if (ret != I40E_SUCCESS)
+			goto fail;
+	}
+
+	/* restore irq0 */
+	i40e_pf_enable_irq0(hw);
+
+	return I40E_SUCCESS;
+
+fail:
+	rte_free(pf->vfs);
+	i40e_pf_enable_irq0(hw);
+
+	return ret;
+}
diff --git a/drivers/net/i40e/i40e_pf.h b/drivers/net/i40e/i40e_pf.h
new file mode 100644
index 0000000..8bf83c2
--- /dev/null
+++ b/drivers/net/i40e/i40e_pf.h
@@ -0,0 +1,127 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _I40E_PF_H_
+#define _I40E_PF_H_
+
+/*
+ * VERSION info to exchange between the VF and the PF host. When the VF
+ * works with the kernel driver, it reads I40E_VIRTCHNL_VERSION_MAJOR/MINOR.
+ * When it works with a DPDK host, it reads the version below instead; the
+ * VF can then tell which host it is talking to and use the proper protocol
+ * to communicate.
+ */
+#define I40E_DPDK_SIGNATURE     ('D' << 24 | 'P' << 16 | 'D' << 8 | 'K')
+#define I40E_DPDK_VERSION_MAJOR I40E_DPDK_SIGNATURE
+#define I40E_DPDK_VERSION_MINOR 0
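The signature above doubles as a capability probe: a DPDK VF can compare the major version in the PF's reply against I40E_DPDK_SIGNATURE to decide whether the extended DPDK-only opcodes are available. A minimal sketch of that check on the VF side (the version_info structure is a simplified stand-in for the virtchnl one):

#include <stdbool.h>
#include <stdint.h>

#define DPDK_SIGNATURE ('D' << 24 | 'P' << 16 | 'D' << 8 | 'K')

struct version_info {
	uint32_t major;
	uint32_t minor;
};

/* True if the PF identified itself as a DPDK host rather than the
 * kernel driver, i.e. the extended opcodes may be used.
 */
static bool
pf_is_dpdk_host(const struct version_info *reply)
{
	return reply->major == (uint32_t)DPDK_SIGNATURE;
}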
+
+/* Default setting on the number of VSIs that a VF can contain */
+#define I40E_DEFAULT_VF_VSI_NUM 1
+
+#define I40E_DPDK_OFFSET  0x100
+
+enum i40e_pf_vfr_state {
+	I40E_PF_VFR_INPROGRESS = 0,
+	I40E_PF_VFR_COMPLETED = 1,
+};
+
+/* DPDK PF driver specific commands to the VF */
+enum i40e_virtchnl_ops_dpdk {
+	/*
+	 * Keep some gap between the Linux PF commands and
+	 * the DPDK PF extended commands.
+	 */
+	I40E_VIRTCHNL_OP_GET_LINK_STAT = I40E_VIRTCHNL_OP_VERSION +
+		I40E_DPDK_OFFSET,
+	I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD,
+	I40E_VIRTCHNL_OP_CFG_VLAN_PVID,
+	I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT,
+};
+
+/* A structure to support extended info of a receive queue. */
+struct i40e_virtchnl_rxq_ext_info {
+	uint8_t crcstrip;
+};
+
+/*
+ * A structure to support extended info of queue pairs; one field is
+ * added compared to the original 'struct i40e_virtchnl_queue_pair_info'.
+ */
+struct i40e_virtchnl_queue_pair_ext_info {
+	/* vsi_id and queue_id should be identical for both rx and tx queues.*/
+	struct i40e_virtchnl_txq_info txq;
+	struct i40e_virtchnl_rxq_info rxq;
+	struct i40e_virtchnl_rxq_ext_info rxq_ext;
+};
+
+/*
+ * A structure to support extended info of VSI queue pairs;
+ * 'struct i40e_virtchnl_queue_pair_ext_info' is used here instead of
+ * the original 'struct i40e_virtchnl_queue_pair_info'.
+ */
+struct i40e_virtchnl_vsi_queue_config_ext_info {
+	uint16_t vsi_id;
+	uint16_t num_queue_pairs;
+	struct i40e_virtchnl_queue_pair_ext_info qpair[0];
+};
+
+struct i40e_virtchnl_vlan_offload_info {
+	uint16_t vsi_id;
+	uint8_t enable_vlan_strip;
+	uint8_t reserved;
+};
+
+/*
+ * Macro to calculate the memory size for configuring VSI queues
+ * via virtual channel.
+ */
+#define I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(x, n) \
+	(sizeof(*(x)) + sizeof((x)->qpair[0]) * (n))
+
+/*
+ * I40E_VIRTCHNL_OP_CFG_VLAN_PVID
+ * The VF sends this message to enable/disable the PVID; for the enable
+ * op, the PVID needs to be specified. The PF returns a status code
+ * in retval.
+ */
+struct i40e_virtchnl_pvid_info {
+	uint16_t vsi_id;
+	struct i40e_vsi_vlan_pvid_info info;
+};
+
+int i40e_pf_host_vf_reset(struct i40e_pf_vf *vf, bool do_hw_reset);
+void i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev,
+				uint16_t abs_vf_id, uint32_t opcode,
+				__rte_unused uint32_t retval,
+				uint8_t *msg, uint16_t msglen);
+int i40e_pf_host_init(struct rte_eth_dev *dev);
+
+#endif /* _I40E_PF_H_ */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
new file mode 100644
index 0000000..29551b9
--- /dev/null
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -0,0 +1,2709 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_tcp.h>
+#include <rte_sctp.h>
+#include <rte_udp.h>
+
+#include "i40e_logs.h"
+#include "base/i40e_prototype.h"
+#include "base/i40e_type.h"
+#include "i40e_ethdev.h"
+#include "i40e_rxtx.h"
+
+#define I40E_MIN_RING_DESC	64
+#define I40E_MAX_RING_DESC	4096
+#define I40E_ALIGN		128
+#define DEFAULT_TX_RS_THRESH	32
+#define DEFAULT_TX_FREE_THRESH	32
+#define I40E_MAX_PKT_TYPE	256
+
+#define I40E_VLAN_TAG_SIZE 4
+#define I40E_TX_MAX_BURST 32
+
+#define I40E_DMA_MEM_ALIGN 4096
+
+#define I40E_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \
+					ETH_TXQ_FLAGS_NOOFFLOADS)
+
+#define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS)
+
+#define I40E_TX_CKSUM_OFFLOAD_MASK (		 \
+		PKT_TX_IP_CKSUM |		 \
+		PKT_TX_L4_MASK |		 \
+		PKT_TX_OUTER_IP_CKSUM)
+
+#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
+	(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)
+
+#define RTE_MBUF_DATA_DMA_ADDR(mb) \
+	((uint64_t)((mb)->buf_physaddr + (mb)->data_off))
+
+static const struct rte_memzone *
+i40e_ring_dma_zone_reserve(struct rte_eth_dev *dev,
+			   const char *ring_name,
+			   uint16_t queue_id,
+			   uint32_t ring_size,
+			   int socket_id);
+static uint16_t i40e_xmit_pkts_simple(void *tx_queue,
+				      struct rte_mbuf **tx_pkts,
+				      uint16_t nb_pkts);
+
+/* Translate the rx descriptor status to pkt flags */
+static inline uint64_t
+i40e_rxd_status_to_pkt_flags(uint64_t qword)
+{
+	uint64_t flags;
+
+	/* Check if VLAN packet */
+	flags = qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
+							PKT_RX_VLAN_PKT : 0;
+
+	/* Check if RSS_HASH */
+	flags |= (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) &
+					I40E_RX_DESC_FLTSTAT_RSS_HASH) ==
+			I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0;
+
+	/* Check if FDIR Match */
+	flags |= (qword & (1 << I40E_RX_DESC_STATUS_FLM_SHIFT) ? 
+ PKT_RX_FDIR : 0); + + return flags; +} + +static inline uint64_t +i40e_rxd_error_to_pkt_flags(uint64_t qword) +{ + uint64_t flags = 0; + uint64_t error_bits = (qword >> I40E_RXD_QW1_ERROR_SHIFT); + +#define I40E_RX_ERR_BITS 0x3f + if (likely((error_bits & I40E_RX_ERR_BITS) == 0)) + return flags; + /* If RXE bit set, all other status bits are meaningless */ + if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_RXE_SHIFT))) { + flags |= PKT_RX_MAC_ERR; + return flags; + } + + /* If RECIPE bit set, all other status indications should be ignored */ + if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_RECIPE_SHIFT))) { + flags |= PKT_RX_RECIP_ERR; + return flags; + } + if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_HBO_SHIFT))) + flags |= PKT_RX_HBUF_OVERFLOW; + if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_IPE_SHIFT))) + flags |= PKT_RX_IP_CKSUM_BAD; + if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_L4E_SHIFT))) + flags |= PKT_RX_L4_CKSUM_BAD; + if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_EIPE_SHIFT))) + flags |= PKT_RX_EIP_CKSUM_BAD; + if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_OVERSIZE_SHIFT))) + flags |= PKT_RX_OVERSIZE; + + return flags; +} + +/* Translate pkt types to pkt flags */ +static inline uint64_t +i40e_rxd_ptype_to_pkt_flags(uint64_t qword) +{ + uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >> + I40E_RXD_QW1_PTYPE_SHIFT); + static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = { + 0, /* PTYPE 0 */ + 0, /* PTYPE 1 */ + 0, /* PTYPE 2 */ + 0, /* PTYPE 3 */ + 0, /* PTYPE 4 */ + 0, /* PTYPE 5 */ + 0, /* PTYPE 6 */ + 0, /* PTYPE 7 */ + 0, /* PTYPE 8 */ + 0, /* PTYPE 9 */ + 0, /* PTYPE 10 */ + 0, /* PTYPE 11 */ + 0, /* PTYPE 12 */ + 0, /* PTYPE 13 */ + 0, /* PTYPE 14 */ + 0, /* PTYPE 15 */ + 0, /* PTYPE 16 */ + 0, /* PTYPE 17 */ + 0, /* PTYPE 18 */ + 0, /* PTYPE 19 */ + 0, /* PTYPE 20 */ + 0, /* PTYPE 21 */ + PKT_RX_IPV4_HDR, /* PTYPE 22 */ + PKT_RX_IPV4_HDR, /* PTYPE 23 */ + PKT_RX_IPV4_HDR, /* PTYPE 24 */ + 0, /* PTYPE 25 */ + PKT_RX_IPV4_HDR, /* PTYPE 26 */ + PKT_RX_IPV4_HDR, /* PTYPE 27 */ + PKT_RX_IPV4_HDR, /* PTYPE 28 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */ + 0, /* PTYPE 32 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */ + 0, /* PTYPE 39 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */ + 0, /* PTYPE 47 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */ + 0, /* PTYPE 54 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */ + 0, /* PTYPE 62 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */ + PKT_RX_TUNNEL_IPV4_HDR, /* 
PTYPE 68 */ + 0, /* PTYPE 69 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */ + 0, /* PTYPE 77 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */ + 0, /* PTYPE 84 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */ + PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */ + PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */ + PKT_RX_IPV6_HDR, /* PTYPE 88 */ + PKT_RX_IPV6_HDR, /* PTYPE 89 */ + PKT_RX_IPV6_HDR, /* PTYPE 90 */ + 0, /* PTYPE 91 */ + PKT_RX_IPV6_HDR, /* PTYPE 92 */ + PKT_RX_IPV6_HDR, /* PTYPE 93 */ + PKT_RX_IPV6_HDR, /* PTYPE 94 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */ + 0, /* PTYPE 98 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */ + 0, /* PTYPE 105 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */ + 0, /* PTYPE 113 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */ + 0, /* PTYPE 120 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */ + 0, /* PTYPE 128 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */ + 0, /* PTYPE 135 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */ + 0, /* PTYPE 143 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */ + 0, /* PTYPE 150 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */ + PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */ + PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */ + 0, /* PTYPE 154 */ + 0, /* PTYPE 155 */ + 0, /* PTYPE 156 */ + 0, /* PTYPE 157 */ + 0, /* PTYPE 158 */ + 0, /* PTYPE 159 */ + 0, /* PTYPE 160 */ + 0, /* PTYPE 161 */ + 0, /* PTYPE 162 */ + 0, /* PTYPE 163 */ + 0, /* PTYPE 164 */ + 0, /* PTYPE 165 */ + 0, /* PTYPE 166 */ + 0, /* PTYPE 167 */ + 0, /* PTYPE 168 */ + 0, /* PTYPE 169 */ + 0, /* PTYPE 170 */ + 0, /* PTYPE 171 */ + 
0, /* PTYPE 172 */ + 0, /* PTYPE 173 */ + 0, /* PTYPE 174 */ + 0, /* PTYPE 175 */ + 0, /* PTYPE 176 */ + 0, /* PTYPE 177 */ + 0, /* PTYPE 178 */ + 0, /* PTYPE 179 */ + 0, /* PTYPE 180 */ + 0, /* PTYPE 181 */ + 0, /* PTYPE 182 */ + 0, /* PTYPE 183 */ + 0, /* PTYPE 184 */ + 0, /* PTYPE 185 */ + 0, /* PTYPE 186 */ + 0, /* PTYPE 187 */ + 0, /* PTYPE 188 */ + 0, /* PTYPE 189 */ + 0, /* PTYPE 190 */ + 0, /* PTYPE 191 */ + 0, /* PTYPE 192 */ + 0, /* PTYPE 193 */ + 0, /* PTYPE 194 */ + 0, /* PTYPE 195 */ + 0, /* PTYPE 196 */ + 0, /* PTYPE 197 */ + 0, /* PTYPE 198 */ + 0, /* PTYPE 199 */ + 0, /* PTYPE 200 */ + 0, /* PTYPE 201 */ + 0, /* PTYPE 202 */ + 0, /* PTYPE 203 */ + 0, /* PTYPE 204 */ + 0, /* PTYPE 205 */ + 0, /* PTYPE 206 */ + 0, /* PTYPE 207 */ + 0, /* PTYPE 208 */ + 0, /* PTYPE 209 */ + 0, /* PTYPE 210 */ + 0, /* PTYPE 211 */ + 0, /* PTYPE 212 */ + 0, /* PTYPE 213 */ + 0, /* PTYPE 214 */ + 0, /* PTYPE 215 */ + 0, /* PTYPE 216 */ + 0, /* PTYPE 217 */ + 0, /* PTYPE 218 */ + 0, /* PTYPE 219 */ + 0, /* PTYPE 220 */ + 0, /* PTYPE 221 */ + 0, /* PTYPE 222 */ + 0, /* PTYPE 223 */ + 0, /* PTYPE 224 */ + 0, /* PTYPE 225 */ + 0, /* PTYPE 226 */ + 0, /* PTYPE 227 */ + 0, /* PTYPE 228 */ + 0, /* PTYPE 229 */ + 0, /* PTYPE 230 */ + 0, /* PTYPE 231 */ + 0, /* PTYPE 232 */ + 0, /* PTYPE 233 */ + 0, /* PTYPE 234 */ + 0, /* PTYPE 235 */ + 0, /* PTYPE 236 */ + 0, /* PTYPE 237 */ + 0, /* PTYPE 238 */ + 0, /* PTYPE 239 */ + 0, /* PTYPE 240 */ + 0, /* PTYPE 241 */ + 0, /* PTYPE 242 */ + 0, /* PTYPE 243 */ + 0, /* PTYPE 244 */ + 0, /* PTYPE 245 */ + 0, /* PTYPE 246 */ + 0, /* PTYPE 247 */ + 0, /* PTYPE 248 */ + 0, /* PTYPE 249 */ + 0, /* PTYPE 250 */ + 0, /* PTYPE 251 */ + 0, /* PTYPE 252 */ + 0, /* PTYPE 253 */ + 0, /* PTYPE 254 */ + 0, /* PTYPE 255 */ + }; + + return ip_ptype_map[ptype]; +} + +#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03 +#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01 +#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX 0x02 +#define I40E_RX_DESC_EXT_STATUS_FLEXBL_MASK 0x03 +#define I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX 0x01 + +static inline uint64_t +i40e_rxd_build_fdir(volatile union i40e_rx_desc *rxdp, struct rte_mbuf *mb) +{ + uint64_t flags = 0; +#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC + uint16_t flexbh, flexbl; + + flexbh = (rte_le_to_cpu_32(rxdp->wb.qword2.ext_status) >> + I40E_RX_DESC_EXT_STATUS_FLEXBH_SHIFT) & + I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK; + flexbl = (rte_le_to_cpu_32(rxdp->wb.qword2.ext_status) >> + I40E_RX_DESC_EXT_STATUS_FLEXBL_SHIFT) & + I40E_RX_DESC_EXT_STATUS_FLEXBL_MASK; + + + if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) { + mb->hash.fdir.hi = + rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id); + flags |= PKT_RX_FDIR_ID; + } else if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX) { + mb->hash.fdir.hi = + rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.flex_bytes_hi); + flags |= PKT_RX_FDIR_FLX; + } + if (flexbl == I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX) { + mb->hash.fdir.lo = + rte_le_to_cpu_32(rxdp->wb.qword3.lo_dword.flex_bytes_lo); + flags |= PKT_RX_FDIR_FLX; + } +#else + mb->hash.fdir.hi = + rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id); + flags |= PKT_RX_FDIR_ID; +#endif + return flags; +} +static inline void +i40e_txd_enable_checksum(uint64_t ol_flags, + uint32_t *td_cmd, + uint32_t *td_offset, + union i40e_tx_offload tx_offload, + uint32_t *cd_tunneling) +{ + /* UDP tunneling packet TX checksum offload */ + if (ol_flags & PKT_TX_OUTER_IP_CKSUM) { + + *td_offset |= (tx_offload.outer_l2_len >> 1) + << I40E_TX_DESC_LENGTH_MACLEN_SHIFT; + + if (ol_flags & 
PKT_TX_OUTER_IP_CKSUM) + *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4; + else if (ol_flags & PKT_TX_OUTER_IPV4) + *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM; + else if (ol_flags & PKT_TX_OUTER_IPV6) + *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6; + + /* Now set the ctx descriptor fields */ + *cd_tunneling |= (tx_offload.outer_l3_len >> 2) << + I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT | + (tx_offload.l2_len >> 1) << + I40E_TXD_CTX_QW0_NATLEN_SHIFT; + + } else + *td_offset |= (tx_offload.l2_len >> 1) + << I40E_TX_DESC_LENGTH_MACLEN_SHIFT; + + /* Enable L3 checksum offloads */ + if (ol_flags & PKT_TX_IP_CKSUM) { + *td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM; + *td_offset |= (tx_offload.l3_len >> 2) + << I40E_TX_DESC_LENGTH_IPLEN_SHIFT; + } else if (ol_flags & PKT_TX_IPV4) { + *td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4; + *td_offset |= (tx_offload.l3_len >> 2) + << I40E_TX_DESC_LENGTH_IPLEN_SHIFT; + } else if (ol_flags & PKT_TX_IPV6) { + *td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV6; + *td_offset |= (tx_offload.l3_len >> 2) + << I40E_TX_DESC_LENGTH_IPLEN_SHIFT; + } + + if (ol_flags & PKT_TX_TCP_SEG) { + *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP; + *td_offset |= (tx_offload.l4_len >> 2) + << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + return; + } + + /* Enable L4 checksum offloads */ + switch (ol_flags & PKT_TX_L4_MASK) { + case PKT_TX_TCP_CKSUM: + *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP; + *td_offset |= (sizeof(struct tcp_hdr) >> 2) << + I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + break; + case PKT_TX_SCTP_CKSUM: + *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_SCTP; + *td_offset |= (sizeof(struct sctp_hdr) >> 2) << + I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + break; + case PKT_TX_UDP_CKSUM: + *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_UDP; + *td_offset |= (sizeof(struct udp_hdr) >> 2) << + I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; + break; + default: + break; + } +} + +static inline struct rte_mbuf * +rte_rxmbuf_alloc(struct rte_mempool *mp) +{ + struct rte_mbuf *m; + + m = __rte_mbuf_raw_alloc(mp); + __rte_mbuf_sanity_check_raw(m, 0); + + return m; +} + +/* Construct the tx flags */ +static inline uint64_t +i40e_build_ctob(uint32_t td_cmd, + uint32_t td_offset, + unsigned int size, + uint32_t td_tag) +{ + return rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DATA | + ((uint64_t)td_cmd << I40E_TXD_QW1_CMD_SHIFT) | + ((uint64_t)td_offset << I40E_TXD_QW1_OFFSET_SHIFT) | + ((uint64_t)size << I40E_TXD_QW1_TX_BUF_SZ_SHIFT) | + ((uint64_t)td_tag << I40E_TXD_QW1_L2TAG1_SHIFT)); +} + +static inline int +i40e_xmit_cleanup(struct i40e_tx_queue *txq) +{ + struct i40e_tx_entry *sw_ring = txq->sw_ring; + volatile struct i40e_tx_desc *txd = txq->tx_ring; + uint16_t last_desc_cleaned = txq->last_desc_cleaned; + uint16_t nb_tx_desc = txq->nb_tx_desc; + uint16_t desc_to_clean_to; + uint16_t nb_tx_to_clean; + + desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh); + if (desc_to_clean_to >= nb_tx_desc) + desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc); + + desc_to_clean_to = sw_ring[desc_to_clean_to].last_id; + if (!(txd[desc_to_clean_to].cmd_type_offset_bsz & + rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))) { + PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done " + "(port=%d queue=%d)", desc_to_clean_to, + txq->port_id, txq->queue_id); + return -1; + } + + if (last_desc_cleaned > desc_to_clean_to) + nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) + + desc_to_clean_to); + else + nb_tx_to_clean = (uint16_t)(desc_to_clean_to - + last_desc_cleaned); + + txd[desc_to_clean_to].cmd_type_offset_bsz = 0; + + txq->last_desc_cleaned = 
desc_to_clean_to; + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean); + + return 0; +} + +static inline int +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC +check_rx_burst_bulk_alloc_preconditions(struct i40e_rx_queue *rxq) +#else +check_rx_burst_bulk_alloc_preconditions(__rte_unused struct i40e_rx_queue *rxq) +#endif +{ + int ret = 0; + +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC + if (!(rxq->rx_free_thresh >= RTE_PMD_I40E_RX_MAX_BURST)) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " + "rxq->rx_free_thresh=%d, " + "RTE_PMD_I40E_RX_MAX_BURST=%d", + rxq->rx_free_thresh, RTE_PMD_I40E_RX_MAX_BURST); + ret = -EINVAL; + } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " + "rxq->rx_free_thresh=%d, " + "rxq->nb_rx_desc=%d", + rxq->rx_free_thresh, rxq->nb_rx_desc); + ret = -EINVAL; + } else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " + "rxq->nb_rx_desc=%d, " + "rxq->rx_free_thresh=%d", + rxq->nb_rx_desc, rxq->rx_free_thresh); + ret = -EINVAL; + } else if (!(rxq->nb_rx_desc < (I40E_MAX_RING_DESC - + RTE_PMD_I40E_RX_MAX_BURST))) { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " + "rxq->nb_rx_desc=%d, " + "I40E_MAX_RING_DESC=%d, " + "RTE_PMD_I40E_RX_MAX_BURST=%d", + rxq->nb_rx_desc, I40E_MAX_RING_DESC, + RTE_PMD_I40E_RX_MAX_BURST); + ret = -EINVAL; + } +#else + ret = -EINVAL; +#endif + + return ret; +} + +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC +#define I40E_LOOK_AHEAD 8 +#if (I40E_LOOK_AHEAD != 8) +#error "PMD I40E: I40E_LOOK_AHEAD must be 8\n" +#endif +static inline int +i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq) +{ + volatile union i40e_rx_desc *rxdp; + struct i40e_rx_entry *rxep; + struct rte_mbuf *mb; + uint16_t pkt_len; + uint64_t qword1; + uint32_t rx_status; + int32_t s[I40E_LOOK_AHEAD], nb_dd; + int32_t i, j, nb_rx = 0; + uint64_t pkt_flags; + + rxdp = &rxq->rx_ring[rxq->rx_tail]; + rxep = &rxq->sw_ring[rxq->rx_tail]; + + qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); + rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> + I40E_RXD_QW1_STATUS_SHIFT; + + /* Make sure there is at least 1 packet to receive */ + if (!(rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT))) + return 0; + + /** + * Scan LOOK_AHEAD descriptors at a time to determine which + * descriptors reference packets that are ready to be received. + */ + for (i = 0; i < RTE_PMD_I40E_RX_MAX_BURST; i+=I40E_LOOK_AHEAD, + rxdp += I40E_LOOK_AHEAD, rxep += I40E_LOOK_AHEAD) { + /* Read desc statuses backwards to avoid race condition */ + for (j = I40E_LOOK_AHEAD - 1; j >= 0; j--) { + qword1 = rte_le_to_cpu_64(\ + rxdp[j].wb.qword1.status_error_len); + s[j] = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> + I40E_RXD_QW1_STATUS_SHIFT; + } + + /* Compute how many status bits were set */ + for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) + nb_dd += s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT); + + nb_rx += nb_dd; + + /* Translate descriptor info to mbuf parameters */ + for (j = 0; j < nb_dd; j++) { + mb = rxep[j].mbuf; + qword1 = rte_le_to_cpu_64(\ + rxdp[j].wb.qword1.status_error_len); + rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> + I40E_RXD_QW1_STATUS_SHIFT; + pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> + I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len; + mb->data_len = pkt_len; + mb->pkt_len = pkt_len; + mb->vlan_tci = rx_status & + (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ? 
+				rte_le_to_cpu_16(\
+				rxdp[j].wb.qword0.lo_dword.l2tag1) : 0;
+			pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
+			pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+			pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
+
+			mb->packet_type = (uint16_t)((qword1 &
+					I40E_RXD_QW1_PTYPE_MASK) >>
+					I40E_RXD_QW1_PTYPE_SHIFT);
+			if (pkt_flags & PKT_RX_RSS_HASH)
+				mb->hash.rss = rte_le_to_cpu_32(\
+					rxdp[j].wb.qword0.hi_dword.rss);
+			if (pkt_flags & PKT_RX_FDIR)
+				pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb);
+
+			mb->ol_flags = pkt_flags;
+		}
+
+		for (j = 0; j < I40E_LOOK_AHEAD; j++)
+			rxq->rx_stage[i + j] = rxep[j].mbuf;
+
+		if (nb_dd != I40E_LOOK_AHEAD)
+			break;
+	}
+
+	/* Clear software ring entries */
+	for (i = 0; i < nb_rx; i++)
+		rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL;
+
+	return nb_rx;
+}
+
+static inline uint16_t
+i40e_rx_fill_from_stage(struct i40e_rx_queue *rxq,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
+{
+	uint16_t i;
+	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
+
+	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
+
+	for (i = 0; i < nb_pkts; i++)
+		rx_pkts[i] = stage[i];
+
+	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
+	rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts);
+
+	return nb_pkts;
+}
+
+static inline int
+i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq)
+{
+	volatile union i40e_rx_desc *rxdp;
+	struct i40e_rx_entry *rxep;
+	struct rte_mbuf *mb;
+	uint16_t alloc_idx, i;
+	uint64_t dma_addr;
+	int diag;
+
+	/* Allocate buffers in bulk */
+	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
+				(rxq->rx_free_thresh - 1));
+	rxep = &(rxq->sw_ring[alloc_idx]);
+	diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep,
+					rxq->rx_free_thresh);
+	if (unlikely(diag != 0)) {
+		PMD_DRV_LOG(ERR, "Failed to get mbufs in bulk");
+		return -ENOMEM;
+	}
+
+	rxdp = &rxq->rx_ring[alloc_idx];
+	for (i = 0; i < rxq->rx_free_thresh; i++) {
+		mb = rxep[i].mbuf;
+		rte_mbuf_refcnt_set(mb, 1);
+		mb->next = NULL;
+		mb->data_off = RTE_PKTMBUF_HEADROOM;
+		mb->nb_segs = 1;
+		mb->port = rxq->port_id;
+		dma_addr = rte_cpu_to_le_64(\
+			RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb));
+		rxdp[i].read.hdr_addr = dma_addr;
+		rxdp[i].read.pkt_addr = dma_addr;
+	}
+
+	/* Update the RX tail register */
+	rte_wmb();
+	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger);
+
+	rxq->rx_free_trigger =
+		(uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh);
+	if (rxq->rx_free_trigger >= rxq->nb_rx_desc)
+		rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	return 0;
+}
+
+static inline uint16_t
+rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct i40e_rx_queue *rxq = (struct i40e_rx_queue *)rx_queue;
+	uint16_t nb_rx = 0;
+
+	if (!nb_pkts)
+		return 0;
+
+	if (rxq->rx_nb_avail)
+		return i40e_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	nb_rx = (uint16_t)i40e_rx_scan_hw_ring(rxq);
+	rxq->rx_next_avail = 0;
+	rxq->rx_nb_avail = nb_rx;
+	rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx);
+
+	if (rxq->rx_tail > rxq->rx_free_trigger) {
+		if (i40e_rx_alloc_bufs(rxq) != 0) {
+			uint16_t i, j;
+
+			PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for "
+				   "port_id=%u, queue_id=%u",
+				   rxq->port_id, rxq->queue_id);
+			rxq->rx_nb_avail = 0;
+			rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx);
+			for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++)
+				rxq->sw_ring[j].mbuf = rxq->rx_stage[i];
+
+			return 0;
+		}
+	}
+
+	if (rxq->rx_tail >= rxq->nb_rx_desc)
+		rxq->rx_tail = 0;
+
+	if (rxq->rx_nb_avail)
+		return i40e_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
+
+	return 0;
+}
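One subtlety in i40e_rx_alloc_bufs() above is the refill trigger: it advances by rx_free_thresh descriptors per refill and wraps back to (rx_free_thresh - 1) at the end of the ring. A standalone sketch of that index arithmetic, assuming the threshold evenly divides the ring size as the bulk-alloc preconditions require:

#include <stdint.h>

/* Advance the refill trigger by one threshold worth of descriptors,
 * wrapping back to the first trigger position at the ring end.
 */
static uint16_t
next_free_trigger(uint16_t trigger, uint16_t thresh, uint16_t nb_desc)
{
	trigger = (uint16_t)(trigger + thresh);
	if (trigger >= nb_desc)
		trigger = (uint16_t)(thresh - 1);
	return trigger;
}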
+
+static uint16_t
+i40e_recv_pkts_bulk_alloc(void *rx_queue,
+			  struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts)
+{
+	uint16_t nb_rx = 0, n, count;
+
+	if (unlikely(nb_pkts == 0))
+		return 0;
+
+	if (likely(nb_pkts <= RTE_PMD_I40E_RX_MAX_BURST))
+		return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts);
+
+	while (nb_pkts) {
+		n = RTE_MIN(nb_pkts, RTE_PMD_I40E_RX_MAX_BURST);
+		count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n);
+		nb_rx = (uint16_t)(nb_rx + count);
+		nb_pkts = (uint16_t)(nb_pkts - count);
+		if (count < n)
+			break;
+	}
+
+	return nb_rx;
+}
+#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
+
+uint16_t
+i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct i40e_rx_queue *rxq;
+	volatile union i40e_rx_desc *rx_ring;
+	volatile union i40e_rx_desc *rxdp;
+	union i40e_rx_desc rxd;
+	struct i40e_rx_entry *sw_ring;
+	struct i40e_rx_entry *rxe;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	uint16_t nb_rx;
+	uint32_t rx_status;
+	uint64_t qword1;
+	uint16_t rx_packet_len;
+	uint16_t rx_id, nb_hold;
+	uint64_t dma_addr;
+	uint64_t pkt_flags;
+
+	nb_rx = 0;
+	nb_hold = 0;
+	rxq = rx_queue;
+	rx_id = rxq->rx_tail;
+	rx_ring = rxq->rx_ring;
+	sw_ring = rxq->sw_ring;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
+		rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK)
+				>> I40E_RXD_QW1_STATUS_SHIFT;
+		/* Check the DD bit first */
+		if (!(rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT)))
+			break;
+
+		nmb = rte_rxmbuf_alloc(rxq->mp);
+		if (unlikely(!nmb))
+			break;
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxe = &sw_ring[rx_id];
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(sw_ring[rx_id].mbuf);
+
+		/**
+		 * When the next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(&sw_ring[rx_id]);
+		}
+		rxm = rxe->mbuf;
+		rxe->mbuf = nmb;
+		dma_addr =
+			rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb));
+		rxdp->read.hdr_addr = dma_addr;
+		rxdp->read.pkt_addr = dma_addr;
+
+		rx_packet_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
+				I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len;
+
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+		rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+		rxm->nb_segs = 1;
+		rxm->next = NULL;
+		rxm->pkt_len = rx_packet_len;
+		rxm->data_len = rx_packet_len;
+		rxm->port = rxq->port_id;
+
+		rxm->vlan_tci = rx_status &
+			(1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ?
+			rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+		pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
+		pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+		pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
+		rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >>
+				I40E_RXD_QW1_PTYPE_SHIFT);
+		if (pkt_flags & PKT_RX_RSS_HASH)
+			rxm->hash.rss =
+				rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+		if (pkt_flags & PKT_RX_FDIR)
+			pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
+
+		rxm->ol_flags = pkt_flags;
+
+		rx_pkts[nb_rx++] = rxm;
+	}
+	rxq->rx_tail = rx_id;
+
+	/**
+	 * If the number of free RX descriptors is greater than the RX free
+	 * threshold of the queue, advance the receive tail register of the
+	 * queue. Update that register with the value of the last processed
+	 * RX descriptor minus 1.
+ */ + nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold); + if (nb_hold > rxq->rx_free_thresh) { + rx_id = (uint16_t) ((rx_id == 0) ? + (rxq->nb_rx_desc - 1) : (rx_id - 1)); + I40E_PCI_REG_WRITE(rxq->qrx_tail, rx_id); + nb_hold = 0; + } + rxq->nb_rx_hold = nb_hold; + + return nb_rx; +} + +uint16_t +i40e_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct i40e_rx_queue *rxq = rx_queue; + volatile union i40e_rx_desc *rx_ring = rxq->rx_ring; + volatile union i40e_rx_desc *rxdp; + union i40e_rx_desc rxd; + struct i40e_rx_entry *sw_ring = rxq->sw_ring; + struct i40e_rx_entry *rxe; + struct rte_mbuf *first_seg = rxq->pkt_first_seg; + struct rte_mbuf *last_seg = rxq->pkt_last_seg; + struct rte_mbuf *nmb, *rxm; + uint16_t rx_id = rxq->rx_tail; + uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len; + uint32_t rx_status; + uint64_t qword1; + uint64_t dma_addr; + uint64_t pkt_flags; + + while (nb_rx < nb_pkts) { + rxdp = &rx_ring[rx_id]; + qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); + rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> + I40E_RXD_QW1_STATUS_SHIFT; + /* Check the DD bit */ + if (!(rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT))) + break; + + nmb = rte_rxmbuf_alloc(rxq->mp); + if (unlikely(!nmb)) + break; + rxd = *rxdp; + nb_hold++; + rxe = &sw_ring[rx_id]; + rx_id++; + if (rx_id == rxq->nb_rx_desc) + rx_id = 0; + + /* Prefetch next mbuf */ + rte_prefetch0(sw_ring[rx_id].mbuf); + + /** + * When next RX descriptor is on a cache line boundary, + * prefetch the next 4 RX descriptors and next 8 pointers + * to mbufs. + */ + if ((rx_id & 0x3) == 0) { + rte_prefetch0(&rx_ring[rx_id]); + rte_prefetch0(&sw_ring[rx_id]); + } + + rxm = rxe->mbuf; + rxe->mbuf = nmb; + dma_addr = + rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb)); + + /* Set data buffer address and data length of the mbuf */ + rxdp->read.hdr_addr = dma_addr; + rxdp->read.pkt_addr = dma_addr; + rx_packet_len = (qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> + I40E_RXD_QW1_LENGTH_PBUF_SHIFT; + rxm->data_len = rx_packet_len; + rxm->data_off = RTE_PKTMBUF_HEADROOM; + + /** + * If this is the first buffer of the received packet, set the + * pointer to the first mbuf of the packet and initialize its + * context. Otherwise, update the total length and the number + * of segments of the current scattered packet, and update the + * pointer to the last mbuf of the current packet. + */ + if (!first_seg) { + first_seg = rxm; + first_seg->nb_segs = 1; + first_seg->pkt_len = rx_packet_len; + } else { + first_seg->pkt_len = + (uint16_t)(first_seg->pkt_len + + rx_packet_len); + first_seg->nb_segs++; + last_seg->next = rxm; + } + + /** + * If this is not the last buffer of the received packet, + * update the pointer to the last mbuf of the current scattered + * packet and continue to parse the RX ring. + */ + if (!(rx_status & (1 << I40E_RX_DESC_STATUS_EOF_SHIFT))) { + last_seg = rxm; + continue; + } + + /** + * This is the last buffer of the received packet. If the CRC + * is not stripped by the hardware: + * - Subtract the CRC length from the total packet length. + * - If the last buffer only contains the whole CRC or a part + * of it, free the mbuf associated to the last buffer. If part + * of the CRC is also contained in the previous mbuf, subtract + * the length of that CRC part from the data length of the + * previous mbuf. 
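+ * Example, assuming the usual 4-byte CRC: if the last buffer holds
+ * only 2 CRC bytes, that buffer is freed outright and the remaining
+ * 2 CRC bytes are trimmed from the data length of the previous mbuf.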
+ */
+ rxm->next = NULL;
+ if (unlikely(rxq->crc_len > 0)) {
+ first_seg->pkt_len -= ETHER_CRC_LEN;
+ if (rx_packet_len <= ETHER_CRC_LEN) {
+ rte_pktmbuf_free_seg(rxm);
+ first_seg->nb_segs--;
+ last_seg->data_len =
+ (uint16_t)(last_seg->data_len -
+ (ETHER_CRC_LEN - rx_packet_len));
+ last_seg->next = NULL;
+ } else
+ rxm->data_len = (uint16_t)(rx_packet_len -
+ ETHER_CRC_LEN);
+ }
+
+ first_seg->port = rxq->port_id;
+ first_seg->vlan_tci = (rx_status &
+ (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
+ rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0;
+ pkt_flags = i40e_rxd_status_to_pkt_flags(qword1);
+ pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1);
+ pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1);
+ first_seg->packet_type = (uint16_t)((qword1 &
+ I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT);
+ if (pkt_flags & PKT_RX_RSS_HASH)
+ rxm->hash.rss =
+ rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss);
+ if (pkt_flags & PKT_RX_FDIR)
+ pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm);
+
+ first_seg->ol_flags = pkt_flags;
+
+ /* Prefetch data of first segment, if configured to do so. */
+ rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+ first_seg->data_off));
+ rx_pkts[nb_rx++] = first_seg;
+ first_seg = NULL;
+ }
+
+ /* Record index of the next RX descriptor to probe. */
+ rxq->rx_tail = rx_id;
+ rxq->pkt_first_seg = first_seg;
+ rxq->pkt_last_seg = last_seg;
+
+ /**
+ * If the number of free RX descriptors is greater than the RX free
+ * threshold of the queue, advance the Receive Descriptor Tail (RDT)
+ * register. Update the RDT with the value of the last processed RX
+ * descriptor minus 1, to guarantee that the RDT register is never
+ * equal to the RDH register, which creates a "full" ring situation
+ * from the hardware point of view.
+ */
+ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+ if (nb_hold > rxq->rx_free_thresh) {
+ rx_id = (uint16_t)(rx_id == 0 ?
+ (rxq->nb_rx_desc - 1) : (rx_id - 1));
+ I40E_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+ nb_hold = 0;
+ }
+ rxq->nb_rx_hold = nb_hold;
+
+ return nb_rx;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+i40e_calc_context_desc(uint64_t flags)
+{
+ uint64_t mask = 0ULL;
+
+ mask |= (PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG);
+
+#ifdef RTE_LIBRTE_IEEE1588
+ mask |= PKT_TX_IEEE1588_TMST;
+#endif
+ if (flags & mask)
+ return 1;
+
+ return 0;
+}
+
+/* set i40e TSO context descriptor */
+static inline uint64_t
+i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
+{
+ uint64_t ctx_desc = 0;
+ uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_DRV_LOG(DEBUG, "L4 length set to 0");
+ return ctx_desc;
+ }
+
+ /**
+ * For a non-tunnelled packet, outer_l2_len and outer_l3_len are
+ * expected to be 0.
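+ *
+ * Worked example with assumed header sizes: a TSO mbuf with
+ * l2_len = 14, l3_len = 20, l4_len = 20 and zero outer lengths
+ * gives hdr_len = 54 below, so cd_tso_len = pkt_len - 54 and
+ * tso_segsz is written into the MSS field of the context
+ * descriptor.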
+ */ + hdr_len = tx_offload.outer_l2_len + + tx_offload.outer_l3_len + + tx_offload.l2_len + + tx_offload.l3_len + + tx_offload.l4_len; + + cd_cmd = I40E_TX_CTX_DESC_TSO; + cd_tso_len = mbuf->pkt_len - hdr_len; + ctx_desc |= ((uint64_t)cd_cmd << I40E_TXD_CTX_QW1_CMD_SHIFT) | + ((uint64_t)cd_tso_len << + I40E_TXD_CTX_QW1_TSO_LEN_SHIFT) | + ((uint64_t)mbuf->tso_segsz << + I40E_TXD_CTX_QW1_MSS_SHIFT); + + return ctx_desc; +} + +uint16_t +i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct i40e_tx_queue *txq; + struct i40e_tx_entry *sw_ring; + struct i40e_tx_entry *txe, *txn; + volatile struct i40e_tx_desc *txd; + volatile struct i40e_tx_desc *txr; + struct rte_mbuf *tx_pkt; + struct rte_mbuf *m_seg; + uint32_t cd_tunneling_params; + uint16_t tx_id; + uint16_t nb_tx; + uint32_t td_cmd; + uint32_t td_offset; + uint32_t tx_flags; + uint32_t td_tag; + uint64_t ol_flags; + uint16_t nb_used; + uint16_t nb_ctx; + uint16_t tx_last; + uint16_t slen; + uint64_t buf_dma_addr; + union i40e_tx_offload tx_offload = {0}; + + txq = tx_queue; + sw_ring = txq->sw_ring; + txr = txq->tx_ring; + tx_id = txq->tx_tail; + txe = &sw_ring[tx_id]; + + /* Check if the descriptor ring needs to be cleaned. */ + if ((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh) + i40e_xmit_cleanup(txq); + + for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { + td_cmd = 0; + td_tag = 0; + td_offset = 0; + tx_flags = 0; + + tx_pkt = *tx_pkts++; + RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf); + + ol_flags = tx_pkt->ol_flags; + tx_offload.l2_len = tx_pkt->l2_len; + tx_offload.l3_len = tx_pkt->l3_len; + tx_offload.outer_l2_len = tx_pkt->outer_l2_len; + tx_offload.outer_l3_len = tx_pkt->outer_l3_len; + tx_offload.l4_len = tx_pkt->l4_len; + tx_offload.tso_segsz = tx_pkt->tso_segsz; + + /* Calculate the number of context descriptors needed. */ + nb_ctx = i40e_calc_context_desc(ol_flags); + + /** + * The number of descriptors that must be allocated for + * a packet equals to the number of the segments of that + * packet plus 1 context descriptor if needed. 
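+ * For instance, a three-segment packet that also needs a context
+ * descriptor uses nb_used = 4 descriptors in total.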
+ */ + nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx); + tx_last = (uint16_t)(tx_id + nb_used - 1); + + /* Circular ring */ + if (tx_last >= txq->nb_tx_desc) + tx_last = (uint16_t)(tx_last - txq->nb_tx_desc); + + if (nb_used > txq->nb_tx_free) { + if (i40e_xmit_cleanup(txq) != 0) { + if (nb_tx == 0) + return 0; + goto end_of_tx; + } + if (unlikely(nb_used > txq->tx_rs_thresh)) { + while (nb_used > txq->nb_tx_free) { + if (i40e_xmit_cleanup(txq) != 0) { + if (nb_tx == 0) + return 0; + goto end_of_tx; + } + } + } + } + + /* Descriptor based VLAN insertion */ + if (ol_flags & PKT_TX_VLAN_PKT) { + tx_flags |= tx_pkt->vlan_tci << + I40E_TX_FLAG_L2TAG1_SHIFT; + tx_flags |= I40E_TX_FLAG_INSERT_VLAN; + td_cmd |= I40E_TX_DESC_CMD_IL2TAG1; + td_tag = (tx_flags & I40E_TX_FLAG_L2TAG1_MASK) >> + I40E_TX_FLAG_L2TAG1_SHIFT; + } + + /* Always enable CRC offload insertion */ + td_cmd |= I40E_TX_DESC_CMD_ICRC; + + /* Enable checksum offloading */ + cd_tunneling_params = 0; + if (unlikely(ol_flags & I40E_TX_CKSUM_OFFLOAD_MASK)) { + i40e_txd_enable_checksum(ol_flags, &td_cmd, &td_offset, + tx_offload, &cd_tunneling_params); + } + + if (unlikely(nb_ctx)) { + /* Setup TX context descriptor if required */ + volatile struct i40e_tx_context_desc *ctx_txd = + (volatile struct i40e_tx_context_desc *)\ + &txr[tx_id]; + uint16_t cd_l2tag2 = 0; + uint64_t cd_type_cmd_tso_mss = + I40E_TX_DESC_DTYPE_CONTEXT; + + txn = &sw_ring[txe->next_id]; + RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf); + if (txe->mbuf != NULL) { + rte_pktmbuf_free_seg(txe->mbuf); + txe->mbuf = NULL; + } + + /* TSO enabled means no timestamp */ + if (ol_flags & PKT_TX_TCP_SEG) + cd_type_cmd_tso_mss |= + i40e_set_tso_ctx(tx_pkt, tx_offload); + else { +#ifdef RTE_LIBRTE_IEEE1588 + if (ol_flags & PKT_TX_IEEE1588_TMST) + cd_type_cmd_tso_mss |= + ((uint64_t)I40E_TX_CTX_DESC_TSYN << + I40E_TXD_CTX_QW1_CMD_SHIFT); +#endif + } + + ctx_txd->tunneling_params = + rte_cpu_to_le_32(cd_tunneling_params); + ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2); + ctx_txd->type_cmd_tso_mss = + rte_cpu_to_le_64(cd_type_cmd_tso_mss); + + PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]:\n" + "tunneling_params: %#x;\n" + "l2tag2: %#hx;\n" + "rsvd: %#hx;\n" + "type_cmd_tso_mss: %#lx;\n", + tx_pkt, tx_id, + ctx_txd->tunneling_params, + ctx_txd->l2tag2, + ctx_txd->rsvd, + ctx_txd->type_cmd_tso_mss); + + txe->last_id = tx_last; + tx_id = txe->next_id; + txe = txn; + } + + m_seg = tx_pkt; + do { + txd = &txr[tx_id]; + txn = &sw_ring[txe->next_id]; + + if (txe->mbuf) + rte_pktmbuf_free_seg(txe->mbuf); + txe->mbuf = m_seg; + + /* Setup TX Descriptor */ + slen = m_seg->data_len; + buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg); + + PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n" + "buf_dma_addr: %#"PRIx64";\n" + "td_cmd: %#x;\n" + "td_offset: %#x;\n" + "td_len: %u;\n" + "td_tag: %#x;\n", + tx_pkt, tx_id, buf_dma_addr, + td_cmd, td_offset, slen, td_tag); + + txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr); + txd->cmd_type_offset_bsz = i40e_build_ctob(td_cmd, + td_offset, slen, td_tag); + txe->last_id = tx_last; + tx_id = txe->next_id; + txe = txn; + m_seg = m_seg->next; + } while (m_seg != NULL); + + /* The last packet data descriptor needs End Of Packet (EOP) */ + td_cmd |= I40E_TX_DESC_CMD_EOP; + txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used); + + if (txq->nb_tx_used >= txq->tx_rs_thresh) { + PMD_TX_FREE_LOG(DEBUG, + "Setting RS bit on TXD id=" + "%4u (port=%d queue=%d)", + tx_last, txq->port_id, txq->queue_id); + + td_cmd |= 
I40E_TX_DESC_CMD_RS; + + /* Update txq RS bit counters */ + txq->nb_tx_used = 0; + } + + txd->cmd_type_offset_bsz |= + rte_cpu_to_le_64(((uint64_t)td_cmd) << + I40E_TXD_QW1_CMD_SHIFT); + } + +end_of_tx: + rte_wmb(); + + PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u", + (unsigned) txq->port_id, (unsigned) txq->queue_id, + (unsigned) tx_id, (unsigned) nb_tx); + + I40E_PCI_REG_WRITE(txq->qtx_tail, tx_id); + txq->tx_tail = tx_id; + + return nb_tx; +} + +static inline int __attribute__((always_inline)) +i40e_tx_free_bufs(struct i40e_tx_queue *txq) +{ + struct i40e_tx_entry *txep; + uint16_t i; + + if (!(txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))) + return 0; + + txep = &(txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)]); + + for (i = 0; i < txq->tx_rs_thresh; i++) + rte_prefetch0((txep + i)->mbuf); + + if (!(txq->txq_flags & (uint32_t)ETH_TXQ_FLAGS_NOREFCOUNT)) { + for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) { + rte_mempool_put(txep->mbuf->pool, txep->mbuf); + txep->mbuf = NULL; + } + } else { + for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) { + rte_pktmbuf_free_seg(txep->mbuf); + txep->mbuf = NULL; + } + } + + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); + if (txq->tx_next_dd >= txq->nb_tx_desc) + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); + + return txq->tx_rs_thresh; +} + +#define I40E_TD_CMD (I40E_TX_DESC_CMD_ICRC |\ + I40E_TX_DESC_CMD_EOP) + +/* Populate 4 descriptors with data from 4 mbufs */ +static inline void +tx4(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts) +{ + uint64_t dma_addr; + uint32_t i; + + for (i = 0; i < 4; i++, txdp++, pkts++) { + dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts); + txdp->buffer_addr = rte_cpu_to_le_64(dma_addr); + txdp->cmd_type_offset_bsz = + i40e_build_ctob((uint32_t)I40E_TD_CMD, 0, + (*pkts)->data_len, 0); + } +} + +/* Populate 1 descriptor with data from 1 mbuf */ +static inline void +tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts) +{ + uint64_t dma_addr; + + dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts); + txdp->buffer_addr = rte_cpu_to_le_64(dma_addr); + txdp->cmd_type_offset_bsz = + i40e_build_ctob((uint32_t)I40E_TD_CMD, 0, + (*pkts)->data_len, 0); +} + +/* Fill hardware descriptor ring with mbuf data */ +static inline void +i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq, + struct rte_mbuf **pkts, + uint16_t nb_pkts) +{ + volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]); + struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]); + const int N_PER_LOOP = 4; + const int N_PER_LOOP_MASK = N_PER_LOOP - 1; + int mainpart, leftover; + int i, j; + + mainpart = (nb_pkts & ((uint32_t) ~N_PER_LOOP_MASK)); + leftover = (nb_pkts & ((uint32_t) N_PER_LOOP_MASK)); + for (i = 0; i < mainpart; i += N_PER_LOOP) { + for (j = 0; j < N_PER_LOOP; ++j) { + (txep + i + j)->mbuf = *(pkts + i + j); + } + tx4(txdp + i, pkts + i); + } + if (unlikely(leftover > 0)) { + for (i = 0; i < leftover; ++i) { + (txep + mainpart + i)->mbuf = *(pkts + mainpart + i); + tx1(txdp + mainpart + i, pkts + mainpart + i); + } + } +} + +static inline uint16_t +tx_xmit_pkts(struct i40e_tx_queue *txq, + struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + volatile struct i40e_tx_desc *txr = txq->tx_ring; + uint16_t n = 0; + + /** + * Begin scanning the H/W ring for done descriptors when the number + * of available descriptors drops below tx_free_thresh. 
For each done
+ * descriptor, free the associated buffer.
+ */
+ if (txq->nb_tx_free < txq->tx_free_thresh)
+ i40e_tx_free_bufs(txq);
+
+ /* Use available descriptors only */
+ nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
+ if (unlikely(!nb_pkts))
+ return 0;
+
+ txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
+ if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
+ n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
+ i40e_tx_fill_hw_ring(txq, tx_pkts, n);
+ txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+ rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
+ I40E_TXD_QW1_CMD_SHIFT);
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+ txq->tx_tail = 0;
+ }
+
+ /* Fill hardware descriptor ring with mbuf data */
+ i40e_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
+ txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
+
+ /* Determine if the RS bit needs to be set */
+ if (txq->tx_tail > txq->tx_next_rs) {
+ txr[txq->tx_next_rs].cmd_type_offset_bsz |=
+ rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
+ I40E_TXD_QW1_CMD_SHIFT);
+ txq->tx_next_rs =
+ (uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
+ if (txq->tx_next_rs >= txq->nb_tx_desc)
+ txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
+ }
+
+ if (txq->tx_tail >= txq->nb_tx_desc)
+ txq->tx_tail = 0;
+
+ /* Update the tx tail register */
+ rte_wmb();
+ I40E_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
+
+ return nb_pkts;
+}
+
+static uint16_t
+i40e_xmit_pkts_simple(void *tx_queue,
+ struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ uint16_t nb_tx = 0;
+
+ if (likely(nb_pkts <= I40E_TX_MAX_BURST))
+ return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ tx_pkts, nb_pkts);
+
+ while (nb_pkts) {
+ uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
+ I40E_TX_MAX_BURST);
+
+ ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
+ &tx_pkts[nb_tx], num);
+ nb_tx = (uint16_t)(nb_tx + ret);
+ nb_pkts = (uint16_t)(nb_pkts - ret);
+ if (ret < num)
+ break;
+ }
+
+ return nb_tx;
+}
+
+/*
+ * Find the VSI a queue belongs to. 'queue_idx' is the queue index the
+ * application uses, and the application assumes indexes are sequential.
+ * From the driver's perspective the layout is different: for example,
+ * q0 belongs to the FDIR VSI, q1-q64 to the MAIN VSI, q65-q96 to SRIOV
+ * VSIs and q97-q128 to VMDQ VSIs. An application running on the host
+ * can then use q1-q64 and q97-q128, 96 queues in total, addressed
+ * through queue_idx 0 to 95, while the real queue indexes differ. This
+ * function maps a queue_idx to the VSI the queue belongs to.
+ */
+static struct i40e_vsi*
+i40e_pf_get_vsi_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue is in the MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return pf->main_vsi;
+
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ /* queue_idx is greater than the VMDQ VSIs range */
+ if (queue_idx > pf->nb_cfg_vmdq_vsi * pf->vmdq_nb_qps - 1) {
+ PMD_INIT_LOG(ERR, "queue_idx out of range.
VMDQ configured?");
+ return NULL;
+ }
+
+ return pf->vmdq[queue_idx / pf->vmdq_nb_qps].vsi;
+}
+
+static uint16_t
+i40e_get_queue_offset_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
+{
+ /* the queue is in the MAIN VSI range */
+ if (queue_idx < pf->main_vsi->nb_qps)
+ return queue_idx;
+
+ /* It's a VMDQ queue */
+ queue_idx -= pf->main_vsi->nb_qps;
+
+ if (pf->nb_cfg_vmdq_vsi)
+ return queue_idx % pf->vmdq_nb_qps;
+ else {
+ PMD_INIT_LOG(ERR, "Failed to get queue offset");
+ return (uint16_t)(-1);
+ }
+}
+
+int
+i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct i40e_rx_queue *rxq;
+ int err = -1;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (rx_queue_id < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ err = i40e_alloc_rx_queue_mbufs(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+ return err;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, TRUE);
+
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+ rx_queue_id);
+
+ i40e_rx_queue_release_mbufs(rxq);
+ i40e_reset_rx_queue(rxq);
+ }
+ }
+
+ return err;
+}
+
+int
+i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct i40e_rx_queue *rxq;
+ int err;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (rx_queue_id < dev->data->nb_rx_queues) {
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ /*
+ * rx_queue_id is the queue id the application refers to,
+ * while rxq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_rx_queue(hw, rxq->reg_idx, FALSE);
+
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+ rx_queue_id);
+ return err;
+ }
+ i40e_rx_queue_release_mbufs(rxq);
+ i40e_reset_rx_queue(rxq);
+ }
+
+ return 0;
+}
+
+int
+i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ int err = -1;
+ struct i40e_tx_queue *txq;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (tx_queue_id < dev->data->nb_tx_queues) {
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /*
+ * tx_queue_id is the queue id the application refers to,
+ * while txq->reg_idx is the real queue index.
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, TRUE);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+ tx_queue_id);
+ }
+
+ return err;
+}
+
+int
+i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct i40e_tx_queue *txq;
+ int err;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+
+ if (tx_queue_id < dev->data->nb_tx_queues) {
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /*
+ * tx_queue_id is the queue id the application refers to,
+ * while txq->reg_idx is the real queue index.
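+ * (Illustration with assumed sizes: if the main VSI owns 64 queue
+ * pairs and each VMDQ VSI owns 4, application queue 66 resolves to
+ * VMDQ VSI 0 at offset 2, so reg_idx is that VSI's base_queue + 2.)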
+ */
+ err = i40e_switch_tx_queue(hw, txq->reg_idx, FALSE);
+
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+ tx_queue_id);
+ return err;
+ }
+
+ i40e_tx_queue_release_mbufs(txq);
+ i40e_reset_tx_queue(txq);
+ }
+
+ return 0;
+}
+
+int
+i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct i40e_vsi *vsi;
+ struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+ struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
+ struct i40e_rx_queue *rxq;
+ const struct rte_memzone *rz;
+ uint32_t ring_size;
+ uint16_t len;
+ int use_def_burst_func = 1;
+
+ if (hw->mac.type == I40E_MAC_VF) {
+ struct i40e_vf *vf =
+ I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ vsi = &vf->vsi;
+ } else
+ vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
+
+ if (vsi == NULL) {
+ PMD_DRV_LOG(ERR, "VSI not available or queue "
+ "index exceeds the maximum");
+ return I40E_ERR_PARAM;
+ }
+ if (((nb_desc * sizeof(union i40e_rx_desc)) % I40E_ALIGN) != 0 ||
+ (nb_desc > I40E_MAX_RING_DESC) ||
+ (nb_desc < I40E_MIN_RING_DESC)) {
+ PMD_DRV_LOG(ERR, "Number (%u) of receive descriptors is "
+ "invalid", nb_desc);
+ return I40E_ERR_PARAM;
+ }
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ i40e_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the rx queue data structure */
+ rxq = rte_zmalloc_socket("i40e rx queue",
+ sizeof(struct i40e_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for "
+ "rx queue data structure");
+ return (-ENOMEM);
+ }
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+ rxq->queue_id = queue_idx;
+ if (hw->mac.type == I40E_MAC_VF)
+ rxq->reg_idx = queue_idx;
+ else /* PF device */
+ rxq->reg_idx = vsi->base_queue +
+ i40e_get_queue_offset_by_qindex(pf, queue_idx);
+
+ rxq->port_id = dev->data->port_id;
+ rxq->crc_len = (uint8_t) ((dev->data->dev_conf.rxmode.hw_strip_crc) ?
+ 0 : ETHER_CRC_LEN);
+ rxq->drop_en = rx_conf->rx_drop_en;
+ rxq->vsi = vsi;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+
+ /* Allocate the maximum number of RX ring hardware descriptors. */
+ ring_size = sizeof(union i40e_rx_desc) * I40E_MAX_RING_DESC;
+ ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
+ rz = i40e_ring_dma_zone_reserve(dev,
+ "rx_ring",
+ queue_idx,
+ ring_size,
+ socket_id);
+ if (!rz) {
+ i40e_dev_rx_queue_release(rxq);
+ PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX");
+ return (-ENOMEM);
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(rz->addr, 0, ring_size);
+
+#ifdef RTE_LIBRTE_XEN_DOM0
+ rxq->rx_ring_phys_addr = rte_mem_phy2mch(rz->memseg_id, rz->phys_addr);
+#else
+ rxq->rx_ring_phys_addr = (uint64_t)rz->phys_addr;
+#endif
+
+ rxq->rx_ring = (union i40e_rx_desc *)rz->addr;
+
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+ len = (uint16_t)(nb_desc + RTE_PMD_I40E_RX_MAX_BURST);
+#else
+ len = nb_desc;
+#endif
+
+ /* Allocate the software ring.
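+ * When bulk allocation is compiled in, the ring is over-sized by
+ * RTE_PMD_I40E_RX_MAX_BURST entries so that the look-ahead scan may
+ * read past the ring end; i40e_reset_rx_queue() points those extra
+ * slots at the queue's fake_mbuf.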
*/ + rxq->sw_ring = + rte_zmalloc_socket("i40e rx sw ring", + sizeof(struct i40e_rx_entry) * len, + RTE_CACHE_LINE_SIZE, + socket_id); + if (!rxq->sw_ring) { + i40e_dev_rx_queue_release(rxq); + PMD_DRV_LOG(ERR, "Failed to allocate memory for SW ring"); + return (-ENOMEM); + } + + i40e_reset_rx_queue(rxq); + rxq->q_set = TRUE; + dev->data->rx_queues[queue_idx] = rxq; + + use_def_burst_func = check_rx_burst_bulk_alloc_preconditions(rxq); + + if (!use_def_burst_func && !dev->data->scattered_rx) { +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " + "satisfied. Rx Burst Bulk Alloc function will be " + "used on port=%d, queue=%d.", + rxq->port_id, rxq->queue_id); + dev->rx_pkt_burst = i40e_recv_pkts_bulk_alloc; +#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */ + } else { + PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " + "not satisfied, Scattered Rx is requested, " + "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is " + "not enabled on port=%d, queue=%d.", + rxq->port_id, rxq->queue_id); + } + + return 0; +} + +void +i40e_dev_rx_queue_release(void *rxq) +{ + struct i40e_rx_queue *q = (struct i40e_rx_queue *)rxq; + + if (!q) { + PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL"); + return; + } + + i40e_rx_queue_release_mbufs(q); + rte_free(q->sw_ring); + rte_free(q); +} + +uint32_t +i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) +{ +#define I40E_RXQ_SCAN_INTERVAL 4 + volatile union i40e_rx_desc *rxdp; + struct i40e_rx_queue *rxq; + uint16_t desc = 0; + + if (unlikely(rx_queue_id >= dev->data->nb_rx_queues)) { + PMD_DRV_LOG(ERR, "Invalid RX queue id %u", rx_queue_id); + return 0; + } + + rxq = dev->data->rx_queues[rx_queue_id]; + rxdp = &(rxq->rx_ring[rxq->rx_tail]); + while ((desc < rxq->nb_rx_desc) && + ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) & + I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) & + (1 << I40E_RX_DESC_STATUS_DD_SHIFT)) { + /** + * Check the DD bit of a rx descriptor of each 4 in a group, + * to avoid checking too frequently and downgrading performance + * too much. 
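+ * The returned count is therefore a multiple of
+ * I40E_RXQ_SCAN_INTERVAL and may over-report by up to
+ * I40E_RXQ_SCAN_INTERVAL - 1 descriptors.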
+ */ + desc += I40E_RXQ_SCAN_INTERVAL; + rxdp += I40E_RXQ_SCAN_INTERVAL; + if (rxq->rx_tail + desc >= rxq->nb_rx_desc) + rxdp = &(rxq->rx_ring[rxq->rx_tail + + desc - rxq->nb_rx_desc]); + } + + return desc; +} + +int +i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset) +{ + volatile union i40e_rx_desc *rxdp; + struct i40e_rx_queue *rxq = rx_queue; + uint16_t desc; + int ret; + + if (unlikely(offset >= rxq->nb_rx_desc)) { + PMD_DRV_LOG(ERR, "Invalid RX queue id %u", offset); + return 0; + } + + desc = rxq->rx_tail + offset; + if (desc >= rxq->nb_rx_desc) + desc -= rxq->nb_rx_desc; + + rxdp = &(rxq->rx_ring[desc]); + + ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) & + I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) & + (1 << I40E_RX_DESC_STATUS_DD_SHIFT)); + + return ret; +} + +int +i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) +{ + struct i40e_vsi *vsi; + struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); + struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); + struct i40e_tx_queue *txq; + const struct rte_memzone *tz; + uint32_t ring_size; + uint16_t tx_rs_thresh, tx_free_thresh; + + if (hw->mac.type == I40E_MAC_VF) { + struct i40e_vf *vf = + I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); + vsi = &vf->vsi; + } else + vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx); + + if (vsi == NULL) { + PMD_DRV_LOG(ERR, "VSI is NULL, or queue index (%u) " + "exceeds the maximum", queue_idx); + return I40E_ERR_PARAM; + } + + if (((nb_desc * sizeof(struct i40e_tx_desc)) % I40E_ALIGN) != 0 || + (nb_desc > I40E_MAX_RING_DESC) || + (nb_desc < I40E_MIN_RING_DESC)) { + PMD_DRV_LOG(ERR, "Number (%u) of transmit descriptors is " + "invalid", nb_desc); + return I40E_ERR_PARAM; + } + + /** + * The following two parameters control the setting of the RS bit on + * transmit descriptors. TX descriptors will have their RS bit set + * after txq->tx_rs_thresh descriptors have been used. The TX + * descriptor ring will be cleaned after txq->tx_free_thresh + * descriptors are used or if the number of descriptors required to + * transmit a packet is greater than the number of free TX descriptors. + * + * The following constraints must be satisfied: + * - tx_rs_thresh must be greater than 0. + * - tx_rs_thresh must be less than the size of the ring minus 2. + * - tx_rs_thresh must be less than or equal to tx_free_thresh. + * - tx_rs_thresh must be a divisor of the ring size. + * - tx_free_thresh must be greater than 0. + * - tx_free_thresh must be less than the size of the ring minus 3. + * + * One descriptor in the TX ring is used as a sentinel to avoid a H/W + * race condition, hence the maximum threshold constraints. When set + * to zero use default values. + */ + tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ? + tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH); + tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ? + tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH); + if (tx_rs_thresh >= (nb_desc - 2)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the " + "number of TX descriptors minus 2. " + "(tx_rs_thresh=%u port=%d queue=%d)", + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return I40E_ERR_PARAM; + } + if (tx_free_thresh >= (nb_desc - 3)) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the " + "tx_free_thresh must be less than the " + "number of TX descriptors minus 3. 
" + "(tx_free_thresh=%u port=%d queue=%d)", + (unsigned int)tx_free_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return I40E_ERR_PARAM; + } + if (tx_rs_thresh > tx_free_thresh) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or " + "equal to tx_free_thresh. (tx_free_thresh=%u" + " tx_rs_thresh=%u port=%d queue=%d)", + (unsigned int)tx_free_thresh, + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return I40E_ERR_PARAM; + } + if ((nb_desc % tx_rs_thresh) != 0) { + PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the " + "number of TX descriptors. (tx_rs_thresh=%u" + " port=%d queue=%d)", + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return I40E_ERR_PARAM; + } + if ((tx_rs_thresh > 1) && (tx_conf->tx_thresh.wthresh != 0)) { + PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if " + "tx_rs_thresh is greater than 1. " + "(tx_rs_thresh=%u port=%d queue=%d)", + (unsigned int)tx_rs_thresh, + (int)dev->data->port_id, + (int)queue_idx); + return I40E_ERR_PARAM; + } + + /* Free memory if needed. */ + if (dev->data->tx_queues[queue_idx]) { + i40e_dev_tx_queue_release(dev->data->tx_queues[queue_idx]); + dev->data->tx_queues[queue_idx] = NULL; + } + + /* Allocate the TX queue data structure. */ + txq = rte_zmalloc_socket("i40e tx queue", + sizeof(struct i40e_tx_queue), + RTE_CACHE_LINE_SIZE, + socket_id); + if (!txq) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for " + "tx queue structure"); + return (-ENOMEM); + } + + /* Allocate TX hardware ring descriptors. */ + ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC; + ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN); + tz = i40e_ring_dma_zone_reserve(dev, + "tx_ring", + queue_idx, + ring_size, + socket_id); + if (!tz) { + i40e_dev_tx_queue_release(txq); + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX"); + return (-ENOMEM); + } + + txq->nb_tx_desc = nb_desc; + txq->tx_rs_thresh = tx_rs_thresh; + txq->tx_free_thresh = tx_free_thresh; + txq->pthresh = tx_conf->tx_thresh.pthresh; + txq->hthresh = tx_conf->tx_thresh.hthresh; + txq->wthresh = tx_conf->tx_thresh.wthresh; + txq->queue_id = queue_idx; + if (hw->mac.type == I40E_MAC_VF) + txq->reg_idx = queue_idx; + else /* PF device */ + txq->reg_idx = vsi->base_queue + + i40e_get_queue_offset_by_qindex(pf, queue_idx); + + txq->port_id = dev->data->port_id; + txq->txq_flags = tx_conf->txq_flags; + txq->vsi = vsi; + txq->tx_deferred_start = tx_conf->tx_deferred_start; + +#ifdef RTE_LIBRTE_XEN_DOM0 + txq->tx_ring_phys_addr = rte_mem_phy2mch(tz->memseg_id, tz->phys_addr); +#else + txq->tx_ring_phys_addr = (uint64_t)tz->phys_addr; +#endif + txq->tx_ring = (struct i40e_tx_desc *)tz->addr; + + /* Allocate software ring */ + txq->sw_ring = + rte_zmalloc_socket("i40e tx sw ring", + sizeof(struct i40e_tx_entry) * nb_desc, + RTE_CACHE_LINE_SIZE, + socket_id); + if (!txq->sw_ring) { + i40e_dev_tx_queue_release(txq); + PMD_DRV_LOG(ERR, "Failed to allocate memory for SW TX ring"); + return (-ENOMEM); + } + + i40e_reset_tx_queue(txq); + txq->q_set = TRUE; + dev->data->tx_queues[queue_idx] = txq; + + /* Use a simple TX queue without offloads or multi segs if possible */ + if (((txq->txq_flags & I40E_SIMPLE_FLAGS) == I40E_SIMPLE_FLAGS) && + (txq->tx_rs_thresh >= I40E_TX_MAX_BURST)) { + PMD_INIT_LOG(INFO, "Using simple tx path"); + dev->tx_pkt_burst = i40e_xmit_pkts_simple; + } else { + PMD_INIT_LOG(INFO, "Using full-featured tx path"); + dev->tx_pkt_burst = i40e_xmit_pkts; + } + + return 0; +} + +void 
+i40e_dev_tx_queue_release(void *txq) +{ + struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq; + + if (!q) { + PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL"); + return; + } + + i40e_tx_queue_release_mbufs(q); + rte_free(q->sw_ring); + rte_free(q); +} + +static const struct rte_memzone * +i40e_ring_dma_zone_reserve(struct rte_eth_dev *dev, + const char *ring_name, + uint16_t queue_id, + uint32_t ring_size, + int socket_id) +{ + char z_name[RTE_MEMZONE_NAMESIZE]; + const struct rte_memzone *mz; + + snprintf(z_name, sizeof(z_name), "%s_%s_%d_%d", + dev->driver->pci_drv.name, ring_name, + dev->data->port_id, queue_id); + mz = rte_memzone_lookup(z_name); + if (mz) + return mz; + +#ifdef RTE_LIBRTE_XEN_DOM0 + return rte_memzone_reserve_bounded(z_name, ring_size, + socket_id, 0, I40E_ALIGN, RTE_PGSIZE_2M); +#else + return rte_memzone_reserve_aligned(z_name, ring_size, + socket_id, 0, I40E_ALIGN); +#endif +} + +const struct rte_memzone * +i40e_memzone_reserve(const char *name, uint32_t len, int socket_id) +{ + const struct rte_memzone *mz = NULL; + + mz = rte_memzone_lookup(name); + if (mz) + return mz; +#ifdef RTE_LIBRTE_XEN_DOM0 + mz = rte_memzone_reserve_bounded(name, len, + socket_id, 0, I40E_ALIGN, RTE_PGSIZE_2M); +#else + mz = rte_memzone_reserve_aligned(name, len, + socket_id, 0, I40E_ALIGN); +#endif + return mz; +} + +void +i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq) +{ + uint16_t i; + + if (!rxq || !rxq->sw_ring) { + PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL"); + return; + } + + for (i = 0; i < rxq->nb_rx_desc; i++) { + if (rxq->sw_ring[i].mbuf) { + rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf); + rxq->sw_ring[i].mbuf = NULL; + } + } +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC + if (rxq->rx_nb_avail == 0) + return; + for (i = 0; i < rxq->rx_nb_avail; i++) { + struct rte_mbuf *mbuf; + + mbuf = rxq->rx_stage[rxq->rx_next_avail + i]; + rte_pktmbuf_free_seg(mbuf); + } + rxq->rx_nb_avail = 0; +#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */ +} + +void +i40e_reset_rx_queue(struct i40e_rx_queue *rxq) +{ + unsigned i; + uint16_t len; + +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC + if (check_rx_burst_bulk_alloc_preconditions(rxq) == 0) + len = (uint16_t)(rxq->nb_rx_desc + RTE_PMD_I40E_RX_MAX_BURST); + else +#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */ + len = rxq->nb_rx_desc; + + for (i = 0; i < len * sizeof(union i40e_rx_desc); i++) + ((volatile char *)rxq->rx_ring)[i] = 0; + +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC + memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf)); + for (i = 0; i < RTE_PMD_I40E_RX_MAX_BURST; ++i) + rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf; + + rxq->rx_nb_avail = 0; + rxq->rx_next_avail = 0; + rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1); +#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */ + rxq->rx_tail = 0; + rxq->nb_rx_hold = 0; + rxq->pkt_first_seg = NULL; + rxq->pkt_last_seg = NULL; +} + +void +i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq) +{ + uint16_t i; + + if (!txq || !txq->sw_ring) { + PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL"); + return; + } + + for (i = 0; i < txq->nb_tx_desc; i++) { + if (txq->sw_ring[i].mbuf) { + rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf); + txq->sw_ring[i].mbuf = NULL; + } + } +} + +void +i40e_reset_tx_queue(struct i40e_tx_queue *txq) +{ + struct i40e_tx_entry *txe; + uint16_t i, prev, size; + + if (!txq) { + PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL"); + return; + } + + txe = txq->sw_ring; + size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc; + for (i = 0; 
i < size; i++) + ((volatile char *)txq->tx_ring)[i] = 0; + + prev = (uint16_t)(txq->nb_tx_desc - 1); + for (i = 0; i < txq->nb_tx_desc; i++) { + volatile struct i40e_tx_desc *txd = &txq->tx_ring[i]; + + txd->cmd_type_offset_bsz = + rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE); + txe[i].mbuf = NULL; + txe[i].last_id = i; + txe[prev].next_id = i; + prev = i; + } + + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); + txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1); + + txq->tx_tail = 0; + txq->nb_tx_used = 0; + + txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1); + txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1); +} + +/* Init the TX queue in hardware */ +int +i40e_tx_queue_init(struct i40e_tx_queue *txq) +{ + enum i40e_status_code err = I40E_SUCCESS; + struct i40e_vsi *vsi = txq->vsi; + struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); + uint16_t pf_q = txq->reg_idx; + struct i40e_hmc_obj_txq tx_ctx; + uint32_t qtx_ctl; + + /* clear the context structure first */ + memset(&tx_ctx, 0, sizeof(tx_ctx)); + tx_ctx.new_context = 1; + tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT; + tx_ctx.qlen = txq->nb_tx_desc; + tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[0]); + if (vsi->type == I40E_VSI_FDIR) + tx_ctx.fd_ena = TRUE; + + err = i40e_clear_lan_tx_queue_context(hw, pf_q); + if (err != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failure of clean lan tx queue context"); + return err; + } + + err = i40e_set_lan_tx_queue_context(hw, pf_q, &tx_ctx); + if (err != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failure of set lan tx queue context"); + return err; + } + + /* Now associate this queue with this PCI function */ + qtx_ctl = I40E_QTX_CTL_PF_QUEUE; + qtx_ctl |= ((hw->pf_id << I40E_QTX_CTL_PF_INDX_SHIFT) & + I40E_QTX_CTL_PF_INDX_MASK); + I40E_WRITE_REG(hw, I40E_QTX_CTL(pf_q), qtx_ctl); + I40E_WRITE_FLUSH(hw); + + txq->qtx_tail = hw->hw_addr + I40E_QTX_TAIL(pf_q); + + return err; +} + +int +i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq) +{ + struct i40e_rx_entry *rxe = rxq->sw_ring; + uint64_t dma_addr; + uint16_t i; + + for (i = 0; i < rxq->nb_rx_desc; i++) { + volatile union i40e_rx_desc *rxd; + struct rte_mbuf *mbuf = rte_rxmbuf_alloc(rxq->mp); + + if (unlikely(!mbuf)) { + PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX"); + return -ENOMEM; + } + + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + mbuf->data_off = RTE_PKTMBUF_HEADROOM; + mbuf->nb_segs = 1; + mbuf->port = rxq->port_id; + + dma_addr = + rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf)); + + rxd = &rxq->rx_ring[i]; + rxd->read.pkt_addr = dma_addr; + rxd->read.hdr_addr = dma_addr; +#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC + rxd->read.rsvd1 = 0; + rxd->read.rsvd2 = 0; +#endif /* RTE_LIBRTE_I40E_16BYTE_RX_DESC */ + + rxe[i].mbuf = mbuf; + } + + return 0; +} + +/* + * Calculate the buffer length, and check the jumbo frame + * and maximum packet length. 
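+ * Example with assumed values: 2048-byte mbuf data rooms minus a
+ * 128-byte RTE_PKTMBUF_HEADROOM give buf_size = 1920; with a
+ * hardware buffer chain length of 5 that allows 9600 bytes, and
+ * max_pkt_len is the smaller of that and the configured
+ * max_rx_pkt_len.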
+ */ +static int +i40e_rx_queue_config(struct i40e_rx_queue *rxq) +{ + struct i40e_pf *pf = I40E_VSI_TO_PF(rxq->vsi); + struct i40e_hw *hw = I40E_VSI_TO_HW(rxq->vsi); + struct rte_eth_dev_data *data = pf->dev_data; + uint16_t buf_size, len; + + buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - + RTE_PKTMBUF_HEADROOM); + + switch (pf->flags & (I40E_FLAG_HEADER_SPLIT_DISABLED | + I40E_FLAG_HEADER_SPLIT_ENABLED)) { + case I40E_FLAG_HEADER_SPLIT_ENABLED: /* Not supported */ + rxq->rx_hdr_len = RTE_ALIGN(I40E_RXBUF_SZ_1024, + (1 << I40E_RXQ_CTX_HBUFF_SHIFT)); + rxq->rx_buf_len = RTE_ALIGN(I40E_RXBUF_SZ_2048, + (1 << I40E_RXQ_CTX_DBUFF_SHIFT)); + rxq->hs_mode = i40e_header_split_enabled; + break; + case I40E_FLAG_HEADER_SPLIT_DISABLED: + default: + rxq->rx_hdr_len = 0; + rxq->rx_buf_len = RTE_ALIGN(buf_size, + (1 << I40E_RXQ_CTX_DBUFF_SHIFT)); + rxq->hs_mode = i40e_header_split_none; + break; + } + + len = hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len; + rxq->max_pkt_len = RTE_MIN(len, data->dev_conf.rxmode.max_rx_pkt_len); + if (data->dev_conf.rxmode.jumbo_frame == 1) { + if (rxq->max_pkt_len <= ETHER_MAX_LEN || + rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) { + PMD_DRV_LOG(ERR, "maximum packet length must " + "be larger than %u and smaller than %u," + "as jumbo frame is enabled", + (uint32_t)ETHER_MAX_LEN, + (uint32_t)I40E_FRAME_SIZE_MAX); + return I40E_ERR_CONFIG; + } + } else { + if (rxq->max_pkt_len < ETHER_MIN_LEN || + rxq->max_pkt_len > ETHER_MAX_LEN) { + PMD_DRV_LOG(ERR, "maximum packet length must be " + "larger than %u and smaller than %u, " + "as jumbo frame is disabled", + (uint32_t)ETHER_MIN_LEN, + (uint32_t)ETHER_MAX_LEN); + return I40E_ERR_CONFIG; + } + } + + return 0; +} + +/* Init the RX queue in hardware */ +int +i40e_rx_queue_init(struct i40e_rx_queue *rxq) +{ + int err = I40E_SUCCESS; + struct i40e_hw *hw = I40E_VSI_TO_HW(rxq->vsi); + struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(rxq->vsi); + struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(rxq->vsi); + uint16_t pf_q = rxq->reg_idx; + uint16_t buf_size; + struct i40e_hmc_obj_rxq rx_ctx; + + err = i40e_rx_queue_config(rxq); + if (err < 0) { + PMD_DRV_LOG(ERR, "Failed to config RX queue"); + return err; + } + + /* Clear the context structure first */ + memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq)); + rx_ctx.dbuff = rxq->rx_buf_len >> I40E_RXQ_CTX_DBUFF_SHIFT; + rx_ctx.hbuff = rxq->rx_hdr_len >> I40E_RXQ_CTX_HBUFF_SHIFT; + + rx_ctx.base = rxq->rx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT; + rx_ctx.qlen = rxq->nb_rx_desc; +#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC + rx_ctx.dsize = 1; +#endif + rx_ctx.dtype = rxq->hs_mode; + if (rxq->hs_mode) + rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_ALL; + else + rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_NONE; + rx_ctx.rxmax = rxq->max_pkt_len; + rx_ctx.tphrdesc_ena = 1; + rx_ctx.tphwdesc_ena = 1; + rx_ctx.tphdata_ena = 1; + rx_ctx.tphhead_ena = 1; + rx_ctx.lrxqthresh = 2; + rx_ctx.crcstrip = (rxq->crc_len == 0) ? 
1 : 0; + rx_ctx.l2tsel = 1; + rx_ctx.showiv = 1; + rx_ctx.prefena = 1; + + err = i40e_clear_lan_rx_queue_context(hw, pf_q); + if (err != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to clear LAN RX queue context"); + return err; + } + err = i40e_set_lan_rx_queue_context(hw, pf_q, &rx_ctx); + if (err != I40E_SUCCESS) { + PMD_DRV_LOG(ERR, "Failed to set LAN RX queue context"); + return err; + } + + rxq->qrx_tail = hw->hw_addr + I40E_QRX_TAIL(pf_q); + + buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - + RTE_PKTMBUF_HEADROOM); + + /* Check if scattered RX needs to be used. */ + if ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size) { + dev_data->scattered_rx = 1; + dev->rx_pkt_burst = i40e_recv_scattered_pkts; + } + + /* Init the RX tail regieter. */ + I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); + + return 0; +} + +void +i40e_dev_clear_queues(struct rte_eth_dev *dev) +{ + uint16_t i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < dev->data->nb_tx_queues; i++) { + i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]); + i40e_reset_tx_queue(dev->data->tx_queues[i]); + } + + for (i = 0; i < dev->data->nb_rx_queues; i++) { + i40e_rx_queue_release_mbufs(dev->data->rx_queues[i]); + i40e_reset_rx_queue(dev->data->rx_queues[i]); + } +} + +#define I40E_FDIR_NUM_TX_DESC I40E_MIN_RING_DESC +#define I40E_FDIR_NUM_RX_DESC I40E_MIN_RING_DESC + +enum i40e_status_code +i40e_fdir_setup_tx_resources(struct i40e_pf *pf) +{ + struct i40e_tx_queue *txq; + const struct rte_memzone *tz = NULL; + uint32_t ring_size; + struct rte_eth_dev *dev = pf->adapter->eth_dev; + + if (!pf) { + PMD_DRV_LOG(ERR, "PF is not available"); + return I40E_ERR_BAD_PTR; + } + + /* Allocate the TX queue data structure. */ + txq = rte_zmalloc_socket("i40e fdir tx queue", + sizeof(struct i40e_tx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!txq) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for " + "tx queue structure."); + return I40E_ERR_NO_MEMORY; + } + + /* Allocate TX hardware ring descriptors. */ + ring_size = sizeof(struct i40e_tx_desc) * I40E_FDIR_NUM_TX_DESC; + ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN); + + tz = i40e_ring_dma_zone_reserve(dev, + "fdir_tx_ring", + I40E_FDIR_QUEUE_ID, + ring_size, + SOCKET_ID_ANY); + if (!tz) { + i40e_dev_tx_queue_release(txq); + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX."); + return I40E_ERR_NO_MEMORY; + } + + txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC; + txq->queue_id = I40E_FDIR_QUEUE_ID; + txq->reg_idx = pf->fdir.fdir_vsi->base_queue; + txq->vsi = pf->fdir.fdir_vsi; + +#ifdef RTE_LIBRTE_XEN_DOM0 + txq->tx_ring_phys_addr = rte_mem_phy2mch(tz->memseg_id, tz->phys_addr); +#else + txq->tx_ring_phys_addr = (uint64_t)tz->phys_addr; +#endif + txq->tx_ring = (struct i40e_tx_desc *)tz->addr; + /* + * don't need to allocate software ring and reset for the fdir + * program queue just set the queue has been configured. + */ + txq->q_set = TRUE; + pf->fdir.txq = txq; + + return I40E_SUCCESS; +} + +enum i40e_status_code +i40e_fdir_setup_rx_resources(struct i40e_pf *pf) +{ + struct i40e_rx_queue *rxq; + const struct rte_memzone *rz = NULL; + uint32_t ring_size; + struct rte_eth_dev *dev = pf->adapter->eth_dev; + + if (!pf) { + PMD_DRV_LOG(ERR, "PF is not available"); + return I40E_ERR_BAD_PTR; + } + + /* Allocate the RX queue data structure. 
*/ + rxq = rte_zmalloc_socket("i40e fdir rx queue", + sizeof(struct i40e_rx_queue), + RTE_CACHE_LINE_SIZE, + SOCKET_ID_ANY); + if (!rxq) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for " + "rx queue structure."); + return I40E_ERR_NO_MEMORY; + } + + /* Allocate RX hardware ring descriptors. */ + ring_size = sizeof(union i40e_rx_desc) * I40E_FDIR_NUM_RX_DESC; + ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN); + + rz = i40e_ring_dma_zone_reserve(dev, + "fdir_rx_ring", + I40E_FDIR_QUEUE_ID, + ring_size, + SOCKET_ID_ANY); + if (!rz) { + i40e_dev_rx_queue_release(rxq); + PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX."); + return I40E_ERR_NO_MEMORY; + } + + rxq->nb_rx_desc = I40E_FDIR_NUM_RX_DESC; + rxq->queue_id = I40E_FDIR_QUEUE_ID; + rxq->reg_idx = pf->fdir.fdir_vsi->base_queue; + rxq->vsi = pf->fdir.fdir_vsi; + +#ifdef RTE_LIBRTE_XEN_DOM0 + rxq->rx_ring_phys_addr = rte_mem_phy2mch(rz->memseg_id, rz->phys_addr); +#else + rxq->rx_ring_phys_addr = (uint64_t)rz->phys_addr; +#endif + rxq->rx_ring = (union i40e_rx_desc *)rz->addr; + + /* + * Don't need to allocate software ring and reset for the fdir + * rx queue, just set the queue has been configured. + */ + rxq->q_set = TRUE; + pf->fdir.rxq = rxq; + + return I40E_SUCCESS; +} diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h new file mode 100644 index 0000000..5b2984a --- /dev/null +++ b/drivers/net/i40e/i40e_rxtx.h @@ -0,0 +1,211 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef _I40E_RXTX_H_ +#define _I40E_RXTX_H_ + +/** + * 32 bits tx flags, high 16 bits for L2TAG1 (VLAN), + * low 16 bits for others. 
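+ *
+ * For example, a VLAN tag of 0x0123 is carried as
+ * tx_flags = (0x0123 << I40E_TX_FLAG_L2TAG1_SHIFT) |
+ * I40E_TX_FLAG_INSERT_VLAN, i.e. 0x01230002.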
+ */ +#define I40E_TX_FLAG_L2TAG1_SHIFT 16 +#define I40E_TX_FLAG_L2TAG1_MASK 0xffff0000 +#define I40E_TX_FLAG_CSUM ((uint32_t)(1 << 0)) +#define I40E_TX_FLAG_INSERT_VLAN ((uint32_t)(1 << 1)) +#define I40E_TX_FLAG_TSYN ((uint32_t)(1 << 2)) + +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC +#define RTE_PMD_I40E_RX_MAX_BURST 32 +#endif + +#define I40E_RXBUF_SZ_1024 1024 +#define I40E_RXBUF_SZ_2048 2048 + +enum i40e_header_split_mode { + i40e_header_split_none = 0, + i40e_header_split_enabled = 1, + i40e_header_split_always = 2, + i40e_header_split_reserved +}; + +#define I40E_HEADER_SPLIT_NONE ((uint8_t)0) +#define I40E_HEADER_SPLIT_L2 ((uint8_t)(1 << 0)) +#define I40E_HEADER_SPLIT_IP ((uint8_t)(1 << 1)) +#define I40E_HEADER_SPLIT_UDP_TCP ((uint8_t)(1 << 2)) +#define I40E_HEADER_SPLIT_SCTP ((uint8_t)(1 << 3)) +#define I40E_HEADER_SPLIT_ALL (I40E_HEADER_SPLIT_L2 | \ + I40E_HEADER_SPLIT_IP | \ + I40E_HEADER_SPLIT_UDP_TCP | \ + I40E_HEADER_SPLIT_SCTP) + +/* HW desc structure, both 16-byte and 32-byte types are supported */ +#ifdef RTE_LIBRTE_I40E_16BYTE_RX_DESC +#define i40e_rx_desc i40e_16byte_rx_desc +#else +#define i40e_rx_desc i40e_32byte_rx_desc +#endif + +struct i40e_rx_entry { + struct rte_mbuf *mbuf; +}; + +/* + * Structure associated with each RX queue. + */ +struct i40e_rx_queue { + struct rte_mempool *mp; /**< mbuf pool to populate RX ring */ + volatile union i40e_rx_desc *rx_ring;/**< RX ring virtual address */ + uint64_t rx_ring_phys_addr; /**< RX ring DMA address */ + struct i40e_rx_entry *sw_ring; /**< address of RX soft ring */ + uint16_t nb_rx_desc; /**< number of RX descriptors */ + uint16_t rx_free_thresh; /**< max free RX desc to hold */ + uint16_t rx_tail; /**< current value of tail */ + uint16_t nb_rx_hold; /**< number of held free RX desc */ + struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */ + struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */ +#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC + uint16_t rx_nb_avail; /**< number of staged packets ready */ + uint16_t rx_next_avail; /**< index of next staged packets */ + uint16_t rx_free_trigger; /**< triggers rx buffer allocation */ + struct rte_mbuf fake_mbuf; /**< dummy mbuf */ + struct rte_mbuf *rx_stage[RTE_PMD_I40E_RX_MAX_BURST * 2]; +#endif + uint8_t port_id; /**< device port ID */ + uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise */ + uint16_t queue_id; /**< RX queue index */ + uint16_t reg_idx; /**< RX queue register index */ + uint8_t drop_en; /**< if not 0, set register bit */ + volatile uint8_t *qrx_tail; /**< register address of tail */ + struct i40e_vsi *vsi; /**< the VSI this queue belongs to */ + uint16_t rx_buf_len; /* The packet buffer size */ + uint16_t rx_hdr_len; /* The header buffer size */ + uint16_t max_pkt_len; /* Maximum packet length */ + uint8_t hs_mode; /* Header Split mode */ + bool q_set; /**< indicate if rx queue has been configured */ + bool rx_deferred_start; /**< don't start this queue in dev start */ +}; + +struct i40e_tx_entry { + struct rte_mbuf *mbuf; + uint16_t next_id; + uint16_t last_id; +}; + +/* + * Structure associated with each TX queue. 
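+ * tx_next_rs and tx_next_dd walk the ring in steps of tx_rs_thresh:
+ * tx_next_rs is the next descriptor that will carry an RS bit,
+ * tx_next_dd the next one polled for completion, and both are reset
+ * to tx_rs_thresh - 1 by i40e_reset_tx_queue().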
+ */ +struct i40e_tx_queue { + uint16_t nb_tx_desc; /**< number of TX descriptors */ + uint64_t tx_ring_phys_addr; /**< TX ring DMA address */ + volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */ + struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */ + uint16_t tx_tail; /**< current value of tail register */ + volatile uint8_t *qtx_tail; /**< register address of tail */ + uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */ + /**< index to last TX descriptor to have been cleaned */ + uint16_t last_desc_cleaned; + /**< Total number of TX descriptors ready to be allocated. */ + uint16_t nb_tx_free; + /**< Number of TX descriptors to use before RS bit is set. */ + uint16_t tx_free_thresh; /**< minimum TX before freeing. */ + /** Number of TX descriptors to use before RS bit is set. */ + uint16_t tx_rs_thresh; + uint8_t pthresh; /**< Prefetch threshold register. */ + uint8_t hthresh; /**< Host threshold register. */ + uint8_t wthresh; /**< Write-back threshold reg. */ + uint8_t port_id; /**< Device port identifier. */ + uint16_t queue_id; /**< TX queue index. */ + uint16_t reg_idx; + uint32_t txq_flags; + struct i40e_vsi *vsi; /**< the VSI this queue belongs to */ + uint16_t tx_next_dd; + uint16_t tx_next_rs; + bool q_set; /**< indicate if tx queue has been configured */ + bool tx_deferred_start; /**< don't start this queue in dev start */ +}; + +/** Offload features */ +union i40e_tx_offload { + uint64_t data; + struct { + uint64_t l2_len:7; /**< L2 (MAC) Header Length. */ + uint64_t l3_len:9; /**< L3 (IP) Header Length. */ + uint64_t l4_len:8; /**< L4 Header Length. */ + uint64_t tso_segsz:16; /**< TCP TSO segment size */ + uint64_t outer_l2_len:8; /**< outer L2 Header Length */ + uint64_t outer_l3_len:16; /**< outer L3 Header Length */ + }; +}; + +int i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id); +int i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id); +int i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id); +int i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id); +int i40e_dev_rx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); +int i40e_dev_tx_queue_setup(struct rte_eth_dev *dev, + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_txconf *tx_conf); +void i40e_dev_rx_queue_release(void *rxq); +void i40e_dev_tx_queue_release(void *txq); +uint16_t i40e_recv_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); +uint16_t i40e_recv_scattered_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts); +uint16_t i40e_xmit_pkts(void *tx_queue, + struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); +int i40e_tx_queue_init(struct i40e_tx_queue *txq); +int i40e_rx_queue_init(struct i40e_rx_queue *rxq); +void i40e_free_tx_resources(struct i40e_tx_queue *txq); +void i40e_free_rx_resources(struct i40e_rx_queue *rxq); +void i40e_dev_clear_queues(struct rte_eth_dev *dev); +void i40e_reset_rx_queue(struct i40e_rx_queue *rxq); +void i40e_reset_tx_queue(struct i40e_tx_queue *txq); +void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq); +int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq); +void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq); + +uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev, + uint16_t rx_queue_id); +int i40e_dev_rx_descriptor_done(void *rx_queue, 
uint16_t offset); + +#endif /* _I40E_RXTX_H_ */ diff --git a/drivers/net/i40e/rte_pmd_i40e_version.map b/drivers/net/i40e/rte_pmd_i40e_version.map new file mode 100644 index 0000000..ef35398 --- /dev/null +++ b/drivers/net/i40e/rte_pmd_i40e_version.map @@ -0,0 +1,4 @@ +DPDK_2.0 { + + local: *; +}; diff --git a/lib/Makefile b/lib/Makefile index 73a0e09..73e0bd1 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -42,7 +42,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe -DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += librte_pmd_mlx4 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += librte_pmd_pcap diff --git a/lib/librte_pmd_i40e/Makefile b/lib/librte_pmd_i40e/Makefile deleted file mode 100644 index 050dd44..0000000 --- a/lib/librte_pmd_i40e/Makefile +++ /dev/null @@ -1,105 +0,0 @@ -# BSD LICENSE -# -# Copyright(c) 2010-2014 Intel Corporation. All rights reserved. -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions -# are met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in -# the documentation and/or other materials provided with the -# distribution. -# * Neither the name of Intel Corporation nor the names of its -# contributors may be used to endorse or promote products derived -# from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- -include $(RTE_SDK)/mk/rte.vars.mk - -# -# library name -# -LIB = librte_pmd_i40e.a - -CFLAGS += -O3 -CFLAGS += $(WERROR_FLAGS) - -EXPORT_MAP := rte_pmd_i40e_version.map - -LIBABIVER := 1 - -# -# Add extra flags for base driver files (also known as shared code) -# to disable warnings -# -ifeq ($(CC), icc) -CFLAGS_BASE_DRIVER = -wd593 -else ifeq ($(CC), clang) -CFLAGS_BASE_DRIVER += -Wno-sign-compare -CFLAGS_BASE_DRIVER += -Wno-unused-value -CFLAGS_BASE_DRIVER += -Wno-unused-parameter -CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -CFLAGS_BASE_DRIVER += -Wno-format -CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers -CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast -CFLAGS_BASE_DRIVER += -Wno-format-nonliteral -else -CFLAGS_BASE_DRIVER = -Wno-sign-compare -CFLAGS_BASE_DRIVER += -Wno-unused-value -CFLAGS_BASE_DRIVER += -Wno-unused-parameter -CFLAGS_BASE_DRIVER += -Wno-strict-aliasing -CFLAGS_BASE_DRIVER += -Wno-format -CFLAGS_BASE_DRIVER += -Wno-missing-field-initializers -CFLAGS_BASE_DRIVER += -Wno-pointer-to-int-cast -CFLAGS_BASE_DRIVER += -Wno-format-nonliteral -CFLAGS_BASE_DRIVER += -Wno-format-security - -ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1) -CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable -endif - -CFLAGS_i40e_lan_hmc.o += -Wno-error -endif -OBJS_BASE_DRIVER=$(patsubst %.c,%.o,$(notdir $(wildcard $(SRCDIR)/i40e/*.c))) -$(foreach obj, $(OBJS_BASE_DRIVER), $(eval CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER))) - -VPATH += $(SRCDIR)/i40e - -# -# all source are stored in SRCS-y -# -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_adminq.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_common.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_diag.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_hmc.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_lan_hmc.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_nvm.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_dcb.c - -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_ethdev.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_rxtx.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_ethdev_vf.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_pf.c -SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_fdir.c - -# this lib depends upon: -DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_eal lib/librte_ether -DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_mempool lib/librte_mbuf -DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_net lib/librte_malloc - -include $(RTE_SDK)/mk/rte.lib.mk diff --git a/lib/librte_pmd_i40e/i40e/i40e_adminq.c b/lib/librte_pmd_i40e/i40e/i40e_adminq.c deleted file mode 100644 index e098ed6..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_adminq.c +++ /dev/null @@ -1,1084 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. 
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#include "i40e_status.h" -#include "i40e_type.h" -#include "i40e_register.h" -#include "i40e_adminq.h" -#include "i40e_prototype.h" - -#ifndef VF_DRIVER -/** - * i40e_is_nvm_update_op - return true if this is an NVM update operation - * @desc: API request descriptor - **/ -STATIC INLINE bool i40e_is_nvm_update_op(struct i40e_aq_desc *desc) -{ - return (desc->opcode == CPU_TO_LE16(i40e_aqc_opc_nvm_erase) || - desc->opcode == CPU_TO_LE16(i40e_aqc_opc_nvm_update)); -} - -#endif /* VF_DRIVER */ -/** - * i40e_adminq_init_regs - Initialize AdminQ registers - * @hw: pointer to the hardware structure - * - * This assumes the alloc_asq and alloc_arq functions have already been called - **/ -STATIC void i40e_adminq_init_regs(struct i40e_hw *hw) -{ - /* set head and tail registers in our local struct */ - if (hw->mac.type == I40E_MAC_VF) { - hw->aq.asq.tail = I40E_VF_ATQT1; - hw->aq.asq.head = I40E_VF_ATQH1; - hw->aq.asq.len = I40E_VF_ATQLEN1; - hw->aq.asq.bal = I40E_VF_ATQBAL1; - hw->aq.asq.bah = I40E_VF_ATQBAH1; - hw->aq.arq.tail = I40E_VF_ARQT1; - hw->aq.arq.head = I40E_VF_ARQH1; - hw->aq.arq.len = I40E_VF_ARQLEN1; - hw->aq.arq.bal = I40E_VF_ARQBAL1; - hw->aq.arq.bah = I40E_VF_ARQBAH1; - } else { - hw->aq.asq.tail = I40E_PF_ATQT; - hw->aq.asq.head = I40E_PF_ATQH; - hw->aq.asq.len = I40E_PF_ATQLEN; - hw->aq.asq.bal = I40E_PF_ATQBAL; - hw->aq.asq.bah = I40E_PF_ATQBAH; - hw->aq.arq.tail = I40E_PF_ARQT; - hw->aq.arq.head = I40E_PF_ARQH; - hw->aq.arq.len = I40E_PF_ARQLEN; - hw->aq.arq.bal = I40E_PF_ARQBAL; - hw->aq.arq.bah = I40E_PF_ARQBAH; - } -} - -/** - * i40e_alloc_adminq_asq_ring - Allocate Admin Queue send rings - * @hw: pointer to the hardware structure - **/ -enum i40e_status_code i40e_alloc_adminq_asq_ring(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code; - - ret_code = i40e_allocate_dma_mem(hw, &hw->aq.asq.desc_buf, - i40e_mem_atq_ring, - (hw->aq.num_asq_entries * - sizeof(struct i40e_aq_desc)), - I40E_ADMINQ_DESC_ALIGNMENT); - if (ret_code) - return ret_code; - - ret_code = i40e_allocate_virt_mem(hw, &hw->aq.asq.cmd_buf, - (hw->aq.num_asq_entries * - sizeof(struct i40e_asq_cmd_details))); - if (ret_code) { - i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf); - return ret_code; - } - - return ret_code; -} - -/** - * i40e_alloc_adminq_arq_ring - Allocate Admin Queue receive rings - * @hw: pointer to the hardware structure - **/ -enum i40e_status_code i40e_alloc_adminq_arq_ring(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code; - - ret_code = i40e_allocate_dma_mem(hw, &hw->aq.arq.desc_buf, - i40e_mem_arq_ring, - (hw->aq.num_arq_entries * - sizeof(struct i40e_aq_desc)), - I40E_ADMINQ_DESC_ALIGNMENT); - - return ret_code; -} - -/** - * 
i40e_free_adminq_asq - Free Admin Queue send rings - * @hw: pointer to the hardware structure - * - * This assumes the posted send buffers have already been cleaned - * and de-allocated - **/ -void i40e_free_adminq_asq(struct i40e_hw *hw) -{ - i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf); -} - -/** - * i40e_free_adminq_arq - Free Admin Queue receive rings - * @hw: pointer to the hardware structure - * - * This assumes the posted receive buffers have already been cleaned - * and de-allocated - **/ -void i40e_free_adminq_arq(struct i40e_hw *hw) -{ - i40e_free_dma_mem(hw, &hw->aq.arq.desc_buf); -} - -/** - * i40e_alloc_arq_bufs - Allocate pre-posted buffers for the receive queue - * @hw: pointer to the hardware structure - **/ -STATIC enum i40e_status_code i40e_alloc_arq_bufs(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code; - struct i40e_aq_desc *desc; - struct i40e_dma_mem *bi; - int i; - - /* We'll be allocating the buffer info memory first, then we can - * allocate the mapped buffers for the event processing - */ - - /* buffer_info structures do not need alignment */ - ret_code = i40e_allocate_virt_mem(hw, &hw->aq.arq.dma_head, - (hw->aq.num_arq_entries * sizeof(struct i40e_dma_mem))); - if (ret_code) - goto alloc_arq_bufs; - hw->aq.arq.r.arq_bi = (struct i40e_dma_mem *)hw->aq.arq.dma_head.va; - - /* allocate the mapped buffers */ - for (i = 0; i < hw->aq.num_arq_entries; i++) { - bi = &hw->aq.arq.r.arq_bi[i]; - ret_code = i40e_allocate_dma_mem(hw, bi, - i40e_mem_arq_buf, - hw->aq.arq_buf_size, - I40E_ADMINQ_DESC_ALIGNMENT); - if (ret_code) - goto unwind_alloc_arq_bufs; - - /* now configure the descriptors for use */ - desc = I40E_ADMINQ_DESC(hw->aq.arq, i); - - desc->flags = CPU_TO_LE16(I40E_AQ_FLAG_BUF); - if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF) - desc->flags |= CPU_TO_LE16(I40E_AQ_FLAG_LB); - desc->opcode = 0; - /* This is in accordance with Admin queue design, there is no - * register for buffer size configuration - */ - desc->datalen = CPU_TO_LE16((u16)bi->size); - desc->retval = 0; - desc->cookie_high = 0; - desc->cookie_low = 0; - desc->params.external.addr_high = - CPU_TO_LE32(I40E_HI_DWORD(bi->pa)); - desc->params.external.addr_low = - CPU_TO_LE32(I40E_LO_DWORD(bi->pa)); - desc->params.external.param0 = 0; - desc->params.external.param1 = 0; - } - -alloc_arq_bufs: - return ret_code; - -unwind_alloc_arq_bufs: - /* don't try to free the one that failed... */ - i--; - for (; i >= 0; i--) - i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]); - i40e_free_virt_mem(hw, &hw->aq.arq.dma_head); - - return ret_code; -} - -/** - * i40e_alloc_asq_bufs - Allocate empty buffer structs for the send queue - * @hw: pointer to the hardware structure - **/ -STATIC enum i40e_status_code i40e_alloc_asq_bufs(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code; - struct i40e_dma_mem *bi; - int i; - - /* No mapped memory needed yet, just the buffer info structures */ - ret_code = i40e_allocate_virt_mem(hw, &hw->aq.asq.dma_head, - (hw->aq.num_asq_entries * sizeof(struct i40e_dma_mem))); - if (ret_code) - goto alloc_asq_bufs; - hw->aq.asq.r.asq_bi = (struct i40e_dma_mem *)hw->aq.asq.dma_head.va; - - /* allocate the mapped buffers */ - for (i = 0; i < hw->aq.num_asq_entries; i++) { - bi = &hw->aq.asq.r.asq_bi[i]; - ret_code = i40e_allocate_dma_mem(hw, bi, - i40e_mem_asq_buf, - hw->aq.asq_buf_size, - I40E_ADMINQ_DESC_ALIGNMENT); - if (ret_code) - goto unwind_alloc_asq_bufs; - } -alloc_asq_bufs: - return ret_code; - -unwind_alloc_asq_bufs: - /* don't try to free the one that failed... 
*/ - i--; - for (; i >= 0; i--) - i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]); - i40e_free_virt_mem(hw, &hw->aq.asq.dma_head); - - return ret_code; -} - -/** - * i40e_free_arq_bufs - Free receive queue buffer info elements - * @hw: pointer to the hardware structure - **/ -STATIC void i40e_free_arq_bufs(struct i40e_hw *hw) -{ - int i; - - /* free descriptors */ - for (i = 0; i < hw->aq.num_arq_entries; i++) - i40e_free_dma_mem(hw, &hw->aq.arq.r.arq_bi[i]); - - /* free the descriptor memory */ - i40e_free_dma_mem(hw, &hw->aq.arq.desc_buf); - - /* free the dma header */ - i40e_free_virt_mem(hw, &hw->aq.arq.dma_head); -} - -/** - * i40e_free_asq_bufs - Free send queue buffer info elements - * @hw: pointer to the hardware structure - **/ -STATIC void i40e_free_asq_bufs(struct i40e_hw *hw) -{ - int i; - - /* only unmap if the address is non-NULL */ - for (i = 0; i < hw->aq.num_asq_entries; i++) - if (hw->aq.asq.r.asq_bi[i].pa) - i40e_free_dma_mem(hw, &hw->aq.asq.r.asq_bi[i]); - - /* free the buffer info list */ - i40e_free_virt_mem(hw, &hw->aq.asq.cmd_buf); - - /* free the descriptor memory */ - i40e_free_dma_mem(hw, &hw->aq.asq.desc_buf); - - /* free the dma header */ - i40e_free_virt_mem(hw, &hw->aq.asq.dma_head); -} - -/** - * i40e_config_asq_regs - configure ASQ registers - * @hw: pointer to the hardware structure - * - * Configure base address and length registers for the transmit queue - **/ -STATIC enum i40e_status_code i40e_config_asq_regs(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u32 reg = 0; - - /* Clear Head and Tail */ - wr32(hw, hw->aq.asq.head, 0); - wr32(hw, hw->aq.asq.tail, 0); - - /* set starting point */ - wr32(hw, hw->aq.asq.len, (hw->aq.num_asq_entries | - I40E_PF_ATQLEN_ATQENABLE_MASK)); - wr32(hw, hw->aq.asq.bal, I40E_LO_DWORD(hw->aq.asq.desc_buf.pa)); - wr32(hw, hw->aq.asq.bah, I40E_HI_DWORD(hw->aq.asq.desc_buf.pa)); - - /* Check one register to verify that config was applied */ - reg = rd32(hw, hw->aq.asq.bal); - if (reg != I40E_LO_DWORD(hw->aq.asq.desc_buf.pa)) - ret_code = I40E_ERR_ADMIN_QUEUE_ERROR; - - return ret_code; -} - -/** - * i40e_config_arq_regs - ARQ register configuration - * @hw: pointer to the hardware structure - * - * Configure base address and length registers for the receive (event queue) - **/ -STATIC enum i40e_status_code i40e_config_arq_regs(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u32 reg = 0; - - /* Clear Head and Tail */ - wr32(hw, hw->aq.arq.head, 0); - wr32(hw, hw->aq.arq.tail, 0); - - /* set starting point */ - wr32(hw, hw->aq.arq.len, (hw->aq.num_arq_entries | - I40E_PF_ARQLEN_ARQENABLE_MASK)); - wr32(hw, hw->aq.arq.bal, I40E_LO_DWORD(hw->aq.arq.desc_buf.pa)); - wr32(hw, hw->aq.arq.bah, I40E_HI_DWORD(hw->aq.arq.desc_buf.pa)); - - /* Update tail in the HW to post pre-allocated buffers */ - wr32(hw, hw->aq.arq.tail, hw->aq.num_arq_entries - 1); - - /* Check one register to verify that config was applied */ - reg = rd32(hw, hw->aq.arq.bal); - if (reg != I40E_LO_DWORD(hw->aq.arq.desc_buf.pa)) - ret_code = I40E_ERR_ADMIN_QUEUE_ERROR; - - return ret_code; -} - -/** - * i40e_init_asq - main initialization routine for ASQ - * @hw: pointer to the hardware structure - * - * This is the main initialization routine for the Admin Send Queue - * Prior to calling this function, drivers *MUST* set the following fields - * in the hw->aq structure: - * - hw->aq.num_asq_entries - * - hw->aq.arq_buf_size - * - * Do *NOT* hold the lock when calling this as the memory allocation routines - * 
called are not going to be atomic context safe - **/ -enum i40e_status_code i40e_init_asq(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (hw->aq.asq.count > 0) { - /* queue already initialized */ - ret_code = I40E_ERR_NOT_READY; - goto init_adminq_exit; - } - - /* verify input for valid configuration */ - if ((hw->aq.num_asq_entries == 0) || - (hw->aq.asq_buf_size == 0)) { - ret_code = I40E_ERR_CONFIG; - goto init_adminq_exit; - } - - hw->aq.asq.next_to_use = 0; - hw->aq.asq.next_to_clean = 0; - hw->aq.asq.count = hw->aq.num_asq_entries; - - /* allocate the ring memory */ - ret_code = i40e_alloc_adminq_asq_ring(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_exit; - - /* allocate buffers in the rings */ - ret_code = i40e_alloc_asq_bufs(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_free_rings; - - /* initialize base registers */ - ret_code = i40e_config_asq_regs(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_free_rings; - - /* success! */ - goto init_adminq_exit; - -init_adminq_free_rings: - i40e_free_adminq_asq(hw); - -init_adminq_exit: - return ret_code; -} - -/** - * i40e_init_arq - initialize ARQ - * @hw: pointer to the hardware structure - * - * The main initialization routine for the Admin Receive (Event) Queue. - * Prior to calling this function, drivers *MUST* set the following fields - * in the hw->aq structure: - * - hw->aq.num_asq_entries - * - hw->aq.arq_buf_size - * - * Do *NOT* hold the lock when calling this as the memory allocation routines - * called are not going to be atomic context safe - **/ -enum i40e_status_code i40e_init_arq(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (hw->aq.arq.count > 0) { - /* queue already initialized */ - ret_code = I40E_ERR_NOT_READY; - goto init_adminq_exit; - } - - /* verify input for valid configuration */ - if ((hw->aq.num_arq_entries == 0) || - (hw->aq.arq_buf_size == 0)) { - ret_code = I40E_ERR_CONFIG; - goto init_adminq_exit; - } - - hw->aq.arq.next_to_use = 0; - hw->aq.arq.next_to_clean = 0; - hw->aq.arq.count = hw->aq.num_arq_entries; - - /* allocate the ring memory */ - ret_code = i40e_alloc_adminq_arq_ring(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_exit; - - /* allocate buffers in the rings */ - ret_code = i40e_alloc_arq_bufs(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_free_rings; - - /* initialize base registers */ - ret_code = i40e_config_arq_regs(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_free_rings; - - /* success! 
*/ - goto init_adminq_exit; - -init_adminq_free_rings: - i40e_free_adminq_arq(hw); - -init_adminq_exit: - return ret_code; -} - -/** - * i40e_shutdown_asq - shutdown the ASQ - * @hw: pointer to the hardware structure - * - * The main shutdown routine for the Admin Send Queue - **/ -enum i40e_status_code i40e_shutdown_asq(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (hw->aq.asq.count == 0) - return I40E_ERR_NOT_READY; - - /* Stop firmware AdminQ processing */ - wr32(hw, hw->aq.asq.head, 0); - wr32(hw, hw->aq.asq.tail, 0); - wr32(hw, hw->aq.asq.len, 0); - wr32(hw, hw->aq.asq.bal, 0); - wr32(hw, hw->aq.asq.bah, 0); - - /* make sure spinlock is available */ - i40e_acquire_spinlock(&hw->aq.asq_spinlock); - - hw->aq.asq.count = 0; /* to indicate uninitialized queue */ - - /* free ring buffers */ - i40e_free_asq_bufs(hw); - - i40e_release_spinlock(&hw->aq.asq_spinlock); - - return ret_code; -} - -/** - * i40e_shutdown_arq - shutdown ARQ - * @hw: pointer to the hardware structure - * - * The main shutdown routine for the Admin Receive Queue - **/ -enum i40e_status_code i40e_shutdown_arq(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (hw->aq.arq.count == 0) - return I40E_ERR_NOT_READY; - - /* Stop firmware AdminQ processing */ - wr32(hw, hw->aq.arq.head, 0); - wr32(hw, hw->aq.arq.tail, 0); - wr32(hw, hw->aq.arq.len, 0); - wr32(hw, hw->aq.arq.bal, 0); - wr32(hw, hw->aq.arq.bah, 0); - - /* make sure spinlock is available */ - i40e_acquire_spinlock(&hw->aq.arq_spinlock); - - hw->aq.arq.count = 0; /* to indicate uninitialized queue */ - - /* free ring buffers */ - i40e_free_arq_bufs(hw); - - i40e_release_spinlock(&hw->aq.arq_spinlock); - - return ret_code; -} - -/** - * i40e_init_adminq - main initialization routine for Admin Queue - * @hw: pointer to the hardware structure - * - * Prior to calling this function, drivers *MUST* set the following fields - * in the hw->aq structure: - * - hw->aq.num_asq_entries - * - hw->aq.num_arq_entries - * - hw->aq.arq_buf_size - * - hw->aq.asq_buf_size - **/ -enum i40e_status_code i40e_init_adminq(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code; -#ifndef VF_DRIVER - u16 eetrack_lo, eetrack_hi; - int retry = 0; -#endif - - /* verify input for valid configuration */ - if ((hw->aq.num_arq_entries == 0) || - (hw->aq.num_asq_entries == 0) || - (hw->aq.arq_buf_size == 0) || - (hw->aq.asq_buf_size == 0)) { - ret_code = I40E_ERR_CONFIG; - goto init_adminq_exit; - } - - /* initialize spin locks */ - i40e_init_spinlock(&hw->aq.asq_spinlock); - i40e_init_spinlock(&hw->aq.arq_spinlock); - - /* Set up register offsets */ - i40e_adminq_init_regs(hw); - - /* setup ASQ command write back timeout */ - hw->aq.asq_cmd_timeout = I40E_ASQ_CMD_TIMEOUT; - - /* allocate the ASQ */ - ret_code = i40e_init_asq(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_destroy_spinlocks; - - /* allocate the ARQ */ - ret_code = i40e_init_arq(hw); - if (ret_code != I40E_SUCCESS) - goto init_adminq_free_asq; - -#ifndef VF_DRIVER - /* There are some cases where the firmware may not be quite ready - * for AdminQ operations, so we retry the AdminQ setup a few times - * if we see timeouts in this first AQ call. 
- */ - do { - ret_code = i40e_aq_get_firmware_version(hw, - &hw->aq.fw_maj_ver, - &hw->aq.fw_min_ver, - &hw->aq.api_maj_ver, - &hw->aq.api_min_ver, - NULL); - if (ret_code != I40E_ERR_ADMIN_QUEUE_TIMEOUT) - break; - retry++; - i40e_msec_delay(100); - i40e_resume_aq(hw); - } while (retry < 10); - if (ret_code != I40E_SUCCESS) - goto init_adminq_free_arq; - - /* get the NVM version info */ - i40e_read_nvm_word(hw, I40E_SR_NVM_IMAGE_VERSION, &hw->nvm.version); - i40e_read_nvm_word(hw, I40E_SR_NVM_EETRACK_LO, &eetrack_lo); - i40e_read_nvm_word(hw, I40E_SR_NVM_EETRACK_HI, &eetrack_hi); - hw->nvm.eetrack = (eetrack_hi << 16) | eetrack_lo; - - if (hw->aq.api_maj_ver > I40E_FW_API_VERSION_MAJOR) { - ret_code = I40E_ERR_FIRMWARE_API_VERSION; - goto init_adminq_free_arq; - } - - /* pre-emptive resource lock release */ - i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0, NULL); - hw->aq.nvm_busy = false; - - ret_code = i40e_aq_set_hmc_resource_profile(hw, - I40E_HMC_PROFILE_DEFAULT, - 0, - NULL); - ret_code = I40E_SUCCESS; - -#endif /* VF_DRIVER */ - /* success! */ - goto init_adminq_exit; - -#ifndef VF_DRIVER -init_adminq_free_arq: - i40e_shutdown_arq(hw); -#endif -init_adminq_free_asq: - i40e_shutdown_asq(hw); -init_adminq_destroy_spinlocks: - i40e_destroy_spinlock(&hw->aq.asq_spinlock); - i40e_destroy_spinlock(&hw->aq.arq_spinlock); - -init_adminq_exit: - return ret_code; -} - -/** - * i40e_shutdown_adminq - shutdown routine for the Admin Queue - * @hw: pointer to the hardware structure - **/ -enum i40e_status_code i40e_shutdown_adminq(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (i40e_check_asq_alive(hw)) - i40e_aq_queue_shutdown(hw, true); - - i40e_shutdown_asq(hw); - i40e_shutdown_arq(hw); - - /* destroy the spinlocks */ - i40e_destroy_spinlock(&hw->aq.asq_spinlock); - i40e_destroy_spinlock(&hw->aq.arq_spinlock); - - return ret_code; -} - -/** - * i40e_clean_asq - cleans Admin send queue - * @hw: pointer to the hardware structure - * - * returns the number of free desc - **/ -u16 i40e_clean_asq(struct i40e_hw *hw) -{ - struct i40e_adminq_ring *asq = &(hw->aq.asq); - struct i40e_asq_cmd_details *details; - u16 ntc = asq->next_to_clean; - struct i40e_aq_desc desc_cb; - struct i40e_aq_desc *desc; - - desc = I40E_ADMINQ_DESC(*asq, ntc); - details = I40E_ADMINQ_DETAILS(*asq, ntc); - while (rd32(hw, hw->aq.asq.head) != ntc) { - i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, - "%s: ntc %d head %d.\n", __FUNCTION__, ntc, - rd32(hw, hw->aq.asq.head)); - - if (details->callback) { - I40E_ADMINQ_CALLBACK cb_func = - (I40E_ADMINQ_CALLBACK)details->callback; - i40e_memcpy(&desc_cb, desc, - sizeof(struct i40e_aq_desc), I40E_DMA_TO_DMA); - cb_func(hw, &desc_cb); - } - i40e_memset(desc, 0, sizeof(*desc), I40E_DMA_MEM); - i40e_memset(details, 0, sizeof(*details), I40E_NONDMA_MEM); - ntc++; - if (ntc == asq->count) - ntc = 0; - desc = I40E_ADMINQ_DESC(*asq, ntc); - details = I40E_ADMINQ_DETAILS(*asq, ntc); - } - - asq->next_to_clean = ntc; - - return I40E_DESC_UNUSED(asq); -} - -/** - * i40e_asq_done - check if FW has processed the Admin Send Queue - * @hw: pointer to the hw struct - * - * Returns true if the firmware has processed all descriptors on the - * admin send queue. Returns false if there are still requests pending. 
- **/ -bool i40e_asq_done(struct i40e_hw *hw) -{ - /* AQ designers suggest use of head for better - * timing reliability than DD bit - */ - return rd32(hw, hw->aq.asq.head) == hw->aq.asq.next_to_use; - -} - -/** - * i40e_asq_send_command - send command to Admin Queue - * @hw: pointer to the hw struct - * @desc: prefilled descriptor describing the command (non DMA mem) - * @buff: buffer to use for indirect commands - * @buff_size: size of buffer for indirect commands - * @cmd_details: pointer to command details structure - * - * This is the main send command driver routine for the Admin Queue send - * queue. It runs the queue, cleans the queue, etc - **/ -enum i40e_status_code i40e_asq_send_command(struct i40e_hw *hw, - struct i40e_aq_desc *desc, - void *buff, /* can be NULL */ - u16 buff_size, - struct i40e_asq_cmd_details *cmd_details) -{ - enum i40e_status_code status = I40E_SUCCESS; - struct i40e_dma_mem *dma_buff = NULL; - struct i40e_asq_cmd_details *details; - struct i40e_aq_desc *desc_on_ring; - bool cmd_completed = false; - u16 retval = 0; - u32 val = 0; - - val = rd32(hw, hw->aq.asq.head); - if (val >= hw->aq.num_asq_entries) { - i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, - "AQTX: head overrun at %d\n", val); - status = I40E_ERR_QUEUE_EMPTY; - goto asq_send_command_exit; - } - - if (hw->aq.asq.count == 0) { - i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, - "AQTX: Admin queue not initialized.\n"); - status = I40E_ERR_QUEUE_EMPTY; - goto asq_send_command_exit; - } - -#ifndef VF_DRIVER - if (i40e_is_nvm_update_op(desc) && hw->aq.nvm_busy) { - i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: NVM busy.\n"); - status = I40E_ERR_NVM; - goto asq_send_command_exit; - } - -#endif - details = I40E_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use); - if (cmd_details) { - i40e_memcpy(details, - cmd_details, - sizeof(struct i40e_asq_cmd_details), - I40E_NONDMA_TO_NONDMA); - - /* If the cmd_details are defined copy the cookie. 
The - * CPU_TO_LE32 is not needed here because the data is ignored - * by the FW, only used by the driver - */ - if (details->cookie) { - desc->cookie_high = - CPU_TO_LE32(I40E_HI_DWORD(details->cookie)); - desc->cookie_low = - CPU_TO_LE32(I40E_LO_DWORD(details->cookie)); - } - } else { - i40e_memset(details, 0, - sizeof(struct i40e_asq_cmd_details), - I40E_NONDMA_MEM); - } - - /* clear requested flags and then set additional flags if defined */ - desc->flags &= ~CPU_TO_LE16(details->flags_dis); - desc->flags |= CPU_TO_LE16(details->flags_ena); - - i40e_acquire_spinlock(&hw->aq.asq_spinlock); - - if (buff_size > hw->aq.asq_buf_size) { - i40e_debug(hw, - I40E_DEBUG_AQ_MESSAGE, - "AQTX: Invalid buffer size: %d.\n", - buff_size); - status = I40E_ERR_INVALID_SIZE; - goto asq_send_command_error; - } - - if (details->postpone && !details->async) { - i40e_debug(hw, - I40E_DEBUG_AQ_MESSAGE, - "AQTX: Async flag not set along with postpone flag"); - status = I40E_ERR_PARAM; - goto asq_send_command_error; - } - - /* call clean and check queue available function to reclaim the - * descriptors that were processed by FW, the function returns the - * number of desc available - */ - /* the clean function called here could be called in a separate thread - * in case of asynchronous completions - */ - if (i40e_clean_asq(hw) == 0) { - i40e_debug(hw, - I40E_DEBUG_AQ_MESSAGE, - "AQTX: Error queue is full.\n"); - status = I40E_ERR_ADMIN_QUEUE_FULL; - goto asq_send_command_error; - } - - /* initialize the temp desc pointer with the right desc */ - desc_on_ring = I40E_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use); - - /* if the desc is available copy the temp desc to the right place */ - i40e_memcpy(desc_on_ring, desc, sizeof(struct i40e_aq_desc), - I40E_NONDMA_TO_DMA); - - /* if buff is not NULL assume indirect command */ - if (buff != NULL) { - dma_buff = &(hw->aq.asq.r.asq_bi[hw->aq.asq.next_to_use]); - /* copy the user buff into the respective DMA buff */ - i40e_memcpy(dma_buff->va, buff, buff_size, - I40E_NONDMA_TO_DMA); - desc_on_ring->datalen = CPU_TO_LE16(buff_size); - - /* Update the address values in the desc with the pa value - * for respective buffer - */ - desc_on_ring->params.external.addr_high = - CPU_TO_LE32(I40E_HI_DWORD(dma_buff->pa)); - desc_on_ring->params.external.addr_low = - CPU_TO_LE32(I40E_LO_DWORD(dma_buff->pa)); - } - - /* bump the tail */ - i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQTX: desc and buffer:\n"); - i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc_on_ring, - buff, buff_size); - (hw->aq.asq.next_to_use)++; - if (hw->aq.asq.next_to_use == hw->aq.asq.count) - hw->aq.asq.next_to_use = 0; - if (!details->postpone) - wr32(hw, hw->aq.asq.tail, hw->aq.asq.next_to_use); - - /* if cmd_details are not defined or async flag is not set, - * we need to wait for desc write back - */ - if (!details->async && !details->postpone) { - u32 total_delay = 0; - - do { - /* AQ designers suggest use of head for better - * timing reliability than DD bit - */ - if (i40e_asq_done(hw)) - break; - /* ugh! 
delay while spin_lock */ - i40e_msec_delay(1); - total_delay++; - } while (total_delay < hw->aq.asq_cmd_timeout); - } - - /* if ready, copy the desc back to temp */ - if (i40e_asq_done(hw)) { - i40e_memcpy(desc, desc_on_ring, sizeof(struct i40e_aq_desc), - I40E_DMA_TO_NONDMA); - if (buff != NULL) - i40e_memcpy(buff, dma_buff->va, buff_size, - I40E_DMA_TO_NONDMA); - retval = LE16_TO_CPU(desc->retval); - if (retval != 0) { - i40e_debug(hw, - I40E_DEBUG_AQ_MESSAGE, - "AQTX: Command completed with error 0x%X.\n", - retval); - - /* strip off FW internal code */ - retval &= 0xff; - } - cmd_completed = true; - if ((enum i40e_admin_queue_err)retval == I40E_AQ_RC_OK) - status = I40E_SUCCESS; - else - status = I40E_ERR_ADMIN_QUEUE_ERROR; - hw->aq.asq_last_status = (enum i40e_admin_queue_err)retval; - } - - i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, - "AQTX: desc and buffer writeback:\n"); - i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, buff, buff_size); - - /* update the error if time out occurred */ - if ((!cmd_completed) && - (!details->async && !details->postpone)) { - i40e_debug(hw, - I40E_DEBUG_AQ_MESSAGE, - "AQTX: Writeback timeout.\n"); - status = I40E_ERR_ADMIN_QUEUE_TIMEOUT; - } - -#ifndef VF_DRIVER - if (!status && i40e_is_nvm_update_op(desc)) - hw->aq.nvm_busy = true; - -#endif /* VF_DRIVER */ -asq_send_command_error: - i40e_release_spinlock(&hw->aq.asq_spinlock); -asq_send_command_exit: - return status; -} - -/** - * i40e_fill_default_direct_cmd_desc - AQ descriptor helper function - * @desc: pointer to the temp descriptor (non DMA mem) - * @opcode: the opcode can be used to decide which flags to turn off or on - * - * Fill the desc with default values - **/ -void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, - u16 opcode) -{ - /* zero out the desc */ - i40e_memset((void *)desc, 0, sizeof(struct i40e_aq_desc), - I40E_NONDMA_MEM); - desc->opcode = CPU_TO_LE16(opcode); - desc->flags = CPU_TO_LE16(I40E_AQ_FLAG_SI); -} - -/** - * i40e_clean_arq_element - * @hw: pointer to the hw struct - * @e: event info from the receive descriptor, includes any buffers - * @pending: number of events that could be left to process - * - * This function cleans one Admin Receive Queue element and returns - * the contents through e. 
It can also return how many events are - * left to process through 'pending' - **/ -enum i40e_status_code i40e_clean_arq_element(struct i40e_hw *hw, - struct i40e_arq_event_info *e, - u16 *pending) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u16 ntc = hw->aq.arq.next_to_clean; - struct i40e_aq_desc *desc; - struct i40e_dma_mem *bi; - u16 desc_idx; - u16 datalen; - u16 flags; - u16 ntu; - - /* take the lock before we start messing with the ring */ - i40e_acquire_spinlock(&hw->aq.arq_spinlock); - - /* set next_to_use to head */ - ntu = (rd32(hw, hw->aq.arq.head) & I40E_PF_ARQH_ARQH_MASK); - if (ntu == ntc) { - /* nothing to do - shouldn't need to update ring's values */ - i40e_debug(hw, - I40E_DEBUG_AQ_MESSAGE, - "AQRX: Queue is empty.\n"); - ret_code = I40E_ERR_ADMIN_QUEUE_NO_WORK; - goto clean_arq_element_out; - } - - /* now clean the next descriptor */ - desc = I40E_ADMINQ_DESC(hw->aq.arq, ntc); - desc_idx = ntc; - - flags = LE16_TO_CPU(desc->flags); - if (flags & I40E_AQ_FLAG_ERR) { - ret_code = I40E_ERR_ADMIN_QUEUE_ERROR; - hw->aq.arq_last_status = - (enum i40e_admin_queue_err)LE16_TO_CPU(desc->retval); - i40e_debug(hw, - I40E_DEBUG_AQ_MESSAGE, - "AQRX: Event received with error 0x%X.\n", - hw->aq.arq_last_status); - } - - i40e_memcpy(&e->desc, desc, sizeof(struct i40e_aq_desc), - I40E_DMA_TO_NONDMA); - datalen = LE16_TO_CPU(desc->datalen); - e->msg_len = min(datalen, e->buf_len); - if (e->msg_buf != NULL && (e->msg_len != 0)) - i40e_memcpy(e->msg_buf, - hw->aq.arq.r.arq_bi[desc_idx].va, - e->msg_len, I40E_DMA_TO_NONDMA); - - i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, "AQRX: desc and buffer:\n"); - i40e_debug_aq(hw, I40E_DEBUG_AQ_COMMAND, (void *)desc, e->msg_buf, - hw->aq.arq_buf_size); - - /* Restore the original datalen and buffer address in the desc, - * FW updates datalen to indicate the event message - * size - */ - bi = &hw->aq.arq.r.arq_bi[ntc]; - i40e_memset((void *)desc, 0, sizeof(struct i40e_aq_desc), I40E_DMA_MEM); - - desc->flags = CPU_TO_LE16(I40E_AQ_FLAG_BUF); - if (hw->aq.arq_buf_size > I40E_AQ_LARGE_BUF) - desc->flags |= CPU_TO_LE16(I40E_AQ_FLAG_LB); - desc->datalen = CPU_TO_LE16((u16)bi->size); - desc->params.external.addr_high = CPU_TO_LE32(I40E_HI_DWORD(bi->pa)); - desc->params.external.addr_low = CPU_TO_LE32(I40E_LO_DWORD(bi->pa)); - - /* set tail = the last cleaned desc index. */ - wr32(hw, hw->aq.arq.tail, ntc); - /* ntc is updated to tail + 1 */ - ntc++; - if (ntc == hw->aq.num_arq_entries) - ntc = 0; - hw->aq.arq.next_to_clean = ntc; - hw->aq.arq.next_to_use = ntu; - -clean_arq_element_out: - /* Set pending if needed, unlock and return */ - if (pending != NULL) - *pending = (ntc > ntu ? 
hw->aq.arq.count : 0) + (ntu - ntc); - i40e_release_spinlock(&hw->aq.arq_spinlock); - -#ifndef VF_DRIVER - if (i40e_is_nvm_update_op(&e->desc)) { - hw->aq.nvm_busy = false; - if (hw->aq.nvm_release_on_done) { - i40e_release_nvm(hw); - hw->aq.nvm_release_on_done = false; - } - } - -#endif - return ret_code; -} - -void i40e_resume_aq(struct i40e_hw *hw) -{ - /* Registers are reset after PF reset */ - hw->aq.asq.next_to_use = 0; - hw->aq.asq.next_to_clean = 0; - -#if (I40E_VF_ATQLEN_ATQENABLE_MASK != I40E_PF_ATQLEN_ATQENABLE_MASK) -#error I40E_VF_ATQLEN_ATQENABLE_MASK != I40E_PF_ATQLEN_ATQENABLE_MASK -#endif - i40e_config_asq_regs(hw); - - hw->aq.arq.next_to_use = 0; - hw->aq.arq.next_to_clean = 0; - - i40e_config_arq_regs(hw); -} diff --git a/lib/librte_pmd_i40e/i40e/i40e_adminq.h b/lib/librte_pmd_i40e/i40e/i40e_adminq.h deleted file mode 100644 index ea611bd..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_adminq.h +++ /dev/null @@ -1,157 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
- -***************************************************************************/ - -#ifndef _I40E_ADMINQ_H_ -#define _I40E_ADMINQ_H_ - -#include "i40e_osdep.h" -#include "i40e_adminq_cmd.h" - -#define I40E_ADMINQ_DESC(R, i) \ - (&(((struct i40e_aq_desc *)((R).desc_buf.va))[i])) - -#define I40E_ADMINQ_DESC_ALIGNMENT 4096 - -struct i40e_adminq_ring { - struct i40e_virt_mem dma_head; /* space for dma structures */ - struct i40e_dma_mem desc_buf; /* descriptor ring memory */ - struct i40e_virt_mem cmd_buf; /* command buffer memory */ - - union { - struct i40e_dma_mem *asq_bi; - struct i40e_dma_mem *arq_bi; - } r; - - u16 count; /* Number of descriptors */ - u16 rx_buf_len; /* Admin Receive Queue buffer length */ - - /* used for interrupt processing */ - u16 next_to_use; - u16 next_to_clean; - - /* used for queue tracking */ - u32 head; - u32 tail; - u32 len; - u32 bah; - u32 bal; -}; - -/* ASQ transaction details */ -struct i40e_asq_cmd_details { - void *callback; /* cast from type I40E_ADMINQ_CALLBACK */ - u64 cookie; - u16 flags_ena; - u16 flags_dis; - bool async; - bool postpone; -}; - -#define I40E_ADMINQ_DETAILS(R, i) \ - (&(((struct i40e_asq_cmd_details *)((R).cmd_buf.va))[i])) - -/* ARQ event information */ -struct i40e_arq_event_info { - struct i40e_aq_desc desc; - u16 msg_len; - u16 buf_len; - u8 *msg_buf; -}; - -/* Admin Queue information */ -struct i40e_adminq_info { - struct i40e_adminq_ring arq; /* receive queue */ - struct i40e_adminq_ring asq; /* send queue */ - u32 asq_cmd_timeout; /* send queue cmd write back timeout*/ - u16 num_arq_entries; /* receive queue depth */ - u16 num_asq_entries; /* send queue depth */ - u16 arq_buf_size; /* receive queue buffer size */ - u16 asq_buf_size; /* send queue buffer size */ - u16 fw_maj_ver; /* firmware major version */ - u16 fw_min_ver; /* firmware minor version */ - u16 api_maj_ver; /* api major version */ - u16 api_min_ver; /* api minor version */ - bool nvm_busy; - bool nvm_release_on_done; - - struct i40e_spinlock asq_spinlock; /* Send queue spinlock */ - struct i40e_spinlock arq_spinlock; /* Receive queue spinlock */ - - /* last status values on send and receive queues */ - enum i40e_admin_queue_err asq_last_status; - enum i40e_admin_queue_err arq_last_status; -}; - -/** - * i40e_aq_rc_to_posix - convert errors to user-land codes - * aq_rc: AdminQ error code to convert - **/ -STATIC inline int i40e_aq_rc_to_posix(u16 aq_rc) -{ - int aq_to_posix[] = { - 0, /* I40E_AQ_RC_OK */ - -EPERM, /* I40E_AQ_RC_EPERM */ - -ENOENT, /* I40E_AQ_RC_ENOENT */ - -ESRCH, /* I40E_AQ_RC_ESRCH */ - -EINTR, /* I40E_AQ_RC_EINTR */ - -EIO, /* I40E_AQ_RC_EIO */ - -ENXIO, /* I40E_AQ_RC_ENXIO */ - -E2BIG, /* I40E_AQ_RC_E2BIG */ - -EAGAIN, /* I40E_AQ_RC_EAGAIN */ - -ENOMEM, /* I40E_AQ_RC_ENOMEM */ - -EACCES, /* I40E_AQ_RC_EACCES */ - -EFAULT, /* I40E_AQ_RC_EFAULT */ - -EBUSY, /* I40E_AQ_RC_EBUSY */ - -EEXIST, /* I40E_AQ_RC_EEXIST */ - -EINVAL, /* I40E_AQ_RC_EINVAL */ - -ENOTTY, /* I40E_AQ_RC_ENOTTY */ - -ENOSPC, /* I40E_AQ_RC_ENOSPC */ - -ENOSYS, /* I40E_AQ_RC_ENOSYS */ - -ERANGE, /* I40E_AQ_RC_ERANGE */ - -EPIPE, /* I40E_AQ_RC_EFLUSHED */ - -ESPIPE, /* I40E_AQ_RC_BAD_ADDR */ - -EROFS, /* I40E_AQ_RC_EMODE */ - -EFBIG, /* I40E_AQ_RC_EFBIG */ - }; - - return aq_to_posix[aq_rc]; -} - -/* general information */ -#define I40E_AQ_LARGE_BUF 512 -#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */ - -void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc, - u16 opcode); - -#endif /* _I40E_ADMINQ_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_adminq_cmd.h 
b/lib/librte_pmd_i40e/i40e/i40e_adminq_cmd.h deleted file mode 100644 index 5ea9b7d..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_adminq_cmd.h +++ /dev/null @@ -1,2179 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#ifndef _I40E_ADMINQ_CMD_H_ -#define _I40E_ADMINQ_CMD_H_ - -/* This header file defines the i40e Admin Queue commands and is shared between - * i40e Firmware and Software. - * - * This file needs to comply with the Linux Kernel coding style. 
- */ - -#define I40E_FW_API_VERSION_MAJOR 0x0001 -#define I40E_FW_API_VERSION_MINOR 0x0002 - -struct i40e_aq_desc { - __le16 flags; - __le16 opcode; - __le16 datalen; - __le16 retval; - __le32 cookie_high; - __le32 cookie_low; - union { - struct { - __le32 param0; - __le32 param1; - __le32 param2; - __le32 param3; - } internal; - struct { - __le32 param0; - __le32 param1; - __le32 addr_high; - __le32 addr_low; - } external; - u8 raw[16]; - } params; -}; - -/* Flags sub-structure - * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 | - * |DD |CMP|ERR|VFE| * * RESERVED * * |LB |RD |VFC|BUF|SI |EI |FE | - */ - -/* command flags and offsets*/ -#define I40E_AQ_FLAG_DD_SHIFT 0 -#define I40E_AQ_FLAG_CMP_SHIFT 1 -#define I40E_AQ_FLAG_ERR_SHIFT 2 -#define I40E_AQ_FLAG_VFE_SHIFT 3 -#define I40E_AQ_FLAG_LB_SHIFT 9 -#define I40E_AQ_FLAG_RD_SHIFT 10 -#define I40E_AQ_FLAG_VFC_SHIFT 11 -#define I40E_AQ_FLAG_BUF_SHIFT 12 -#define I40E_AQ_FLAG_SI_SHIFT 13 -#define I40E_AQ_FLAG_EI_SHIFT 14 -#define I40E_AQ_FLAG_FE_SHIFT 15 - -#define I40E_AQ_FLAG_DD (1 << I40E_AQ_FLAG_DD_SHIFT) /* 0x1 */ -#define I40E_AQ_FLAG_CMP (1 << I40E_AQ_FLAG_CMP_SHIFT) /* 0x2 */ -#define I40E_AQ_FLAG_ERR (1 << I40E_AQ_FLAG_ERR_SHIFT) /* 0x4 */ -#define I40E_AQ_FLAG_VFE (1 << I40E_AQ_FLAG_VFE_SHIFT) /* 0x8 */ -#define I40E_AQ_FLAG_LB (1 << I40E_AQ_FLAG_LB_SHIFT) /* 0x200 */ -#define I40E_AQ_FLAG_RD (1 << I40E_AQ_FLAG_RD_SHIFT) /* 0x400 */ -#define I40E_AQ_FLAG_VFC (1 << I40E_AQ_FLAG_VFC_SHIFT) /* 0x800 */ -#define I40E_AQ_FLAG_BUF (1 << I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */ -#define I40E_AQ_FLAG_SI (1 << I40E_AQ_FLAG_SI_SHIFT) /* 0x2000 */ -#define I40E_AQ_FLAG_EI (1 << I40E_AQ_FLAG_EI_SHIFT) /* 0x4000 */ -#define I40E_AQ_FLAG_FE (1 << I40E_AQ_FLAG_FE_SHIFT) /* 0x8000 */ - -/* error codes */ -enum i40e_admin_queue_err { - I40E_AQ_RC_OK = 0, /* success */ - I40E_AQ_RC_EPERM = 1, /* Operation not permitted */ - I40E_AQ_RC_ENOENT = 2, /* No such element */ - I40E_AQ_RC_ESRCH = 3, /* Bad opcode */ - I40E_AQ_RC_EINTR = 4, /* operation interrupted */ - I40E_AQ_RC_EIO = 5, /* I/O error */ - I40E_AQ_RC_ENXIO = 6, /* No such resource */ - I40E_AQ_RC_E2BIG = 7, /* Arg too long */ - I40E_AQ_RC_EAGAIN = 8, /* Try again */ - I40E_AQ_RC_ENOMEM = 9, /* Out of memory */ - I40E_AQ_RC_EACCES = 10, /* Permission denied */ - I40E_AQ_RC_EFAULT = 11, /* Bad address */ - I40E_AQ_RC_EBUSY = 12, /* Device or resource busy */ - I40E_AQ_RC_EEXIST = 13, /* object already exists */ - I40E_AQ_RC_EINVAL = 14, /* Invalid argument */ - I40E_AQ_RC_ENOTTY = 15, /* Not a typewriter */ - I40E_AQ_RC_ENOSPC = 16, /* No space left or alloc failure */ - I40E_AQ_RC_ENOSYS = 17, /* Function not implemented */ - I40E_AQ_RC_ERANGE = 18, /* Parameter out of range */ - I40E_AQ_RC_EFLUSHED = 19, /* Cmd flushed due to prev cmd error */ - I40E_AQ_RC_BAD_ADDR = 20, /* Descriptor contains a bad pointer */ - I40E_AQ_RC_EMODE = 21, /* Op not allowed in current dev mode */ - I40E_AQ_RC_EFBIG = 22, /* File too large */ -}; - -/* Admin Queue command opcodes */ -enum i40e_admin_queue_opc { - /* aq commands */ - i40e_aqc_opc_get_version = 0x0001, - i40e_aqc_opc_driver_version = 0x0002, - i40e_aqc_opc_queue_shutdown = 0x0003, - i40e_aqc_opc_set_pf_context = 0x0004, - - /* resource ownership */ - i40e_aqc_opc_request_resource = 0x0008, - i40e_aqc_opc_release_resource = 0x0009, - - i40e_aqc_opc_list_func_capabilities = 0x000A, - i40e_aqc_opc_list_dev_capabilities = 0x000B, - - i40e_aqc_opc_set_cppm_configuration = 0x0103, - i40e_aqc_opc_set_arp_proxy_entry = 0x0104, - 
i40e_aqc_opc_set_ns_proxy_entry = 0x0105, - - /* LAA */ - i40e_aqc_opc_mng_laa = 0x0106, /* AQ obsolete */ - i40e_aqc_opc_mac_address_read = 0x0107, - i40e_aqc_opc_mac_address_write = 0x0108, - - /* PXE */ - i40e_aqc_opc_clear_pxe_mode = 0x0110, - - /* internal switch commands */ - i40e_aqc_opc_get_switch_config = 0x0200, - i40e_aqc_opc_add_statistics = 0x0201, - i40e_aqc_opc_remove_statistics = 0x0202, - i40e_aqc_opc_set_port_parameters = 0x0203, - i40e_aqc_opc_get_switch_resource_alloc = 0x0204, - - i40e_aqc_opc_add_vsi = 0x0210, - i40e_aqc_opc_update_vsi_parameters = 0x0211, - i40e_aqc_opc_get_vsi_parameters = 0x0212, - - i40e_aqc_opc_add_pv = 0x0220, - i40e_aqc_opc_update_pv_parameters = 0x0221, - i40e_aqc_opc_get_pv_parameters = 0x0222, - - i40e_aqc_opc_add_veb = 0x0230, - i40e_aqc_opc_update_veb_parameters = 0x0231, - i40e_aqc_opc_get_veb_parameters = 0x0232, - - i40e_aqc_opc_delete_element = 0x0243, - - i40e_aqc_opc_add_macvlan = 0x0250, - i40e_aqc_opc_remove_macvlan = 0x0251, - i40e_aqc_opc_add_vlan = 0x0252, - i40e_aqc_opc_remove_vlan = 0x0253, - i40e_aqc_opc_set_vsi_promiscuous_modes = 0x0254, - i40e_aqc_opc_add_tag = 0x0255, - i40e_aqc_opc_remove_tag = 0x0256, - i40e_aqc_opc_add_multicast_etag = 0x0257, - i40e_aqc_opc_remove_multicast_etag = 0x0258, - i40e_aqc_opc_update_tag = 0x0259, - i40e_aqc_opc_add_control_packet_filter = 0x025A, - i40e_aqc_opc_remove_control_packet_filter = 0x025B, - i40e_aqc_opc_add_cloud_filters = 0x025C, - i40e_aqc_opc_remove_cloud_filters = 0x025D, - - i40e_aqc_opc_add_mirror_rule = 0x0260, - i40e_aqc_opc_delete_mirror_rule = 0x0261, - - /* DCB commands */ - i40e_aqc_opc_dcb_ignore_pfc = 0x0301, - i40e_aqc_opc_dcb_updated = 0x0302, - - /* TX scheduler */ - i40e_aqc_opc_configure_vsi_bw_limit = 0x0400, - i40e_aqc_opc_configure_vsi_ets_sla_bw_limit = 0x0406, - i40e_aqc_opc_configure_vsi_tc_bw = 0x0407, - i40e_aqc_opc_query_vsi_bw_config = 0x0408, - i40e_aqc_opc_query_vsi_ets_sla_config = 0x040A, - i40e_aqc_opc_configure_switching_comp_bw_limit = 0x0410, - - i40e_aqc_opc_enable_switching_comp_ets = 0x0413, - i40e_aqc_opc_modify_switching_comp_ets = 0x0414, - i40e_aqc_opc_disable_switching_comp_ets = 0x0415, - i40e_aqc_opc_configure_switching_comp_ets_bw_limit = 0x0416, - i40e_aqc_opc_configure_switching_comp_bw_config = 0x0417, - i40e_aqc_opc_query_switching_comp_ets_config = 0x0418, - i40e_aqc_opc_query_port_ets_config = 0x0419, - i40e_aqc_opc_query_switching_comp_bw_config = 0x041A, - i40e_aqc_opc_suspend_port_tx = 0x041B, - i40e_aqc_opc_resume_port_tx = 0x041C, - i40e_aqc_opc_configure_partition_bw = 0x041D, - - /* hmc */ - i40e_aqc_opc_query_hmc_resource_profile = 0x0500, - i40e_aqc_opc_set_hmc_resource_profile = 0x0501, - - /* phy commands*/ - i40e_aqc_opc_get_phy_abilities = 0x0600, - i40e_aqc_opc_set_phy_config = 0x0601, - i40e_aqc_opc_set_mac_config = 0x0603, - i40e_aqc_opc_set_link_restart_an = 0x0605, - i40e_aqc_opc_get_link_status = 0x0607, - i40e_aqc_opc_set_phy_int_mask = 0x0613, - i40e_aqc_opc_get_local_advt_reg = 0x0614, - i40e_aqc_opc_set_local_advt_reg = 0x0615, - i40e_aqc_opc_get_partner_advt = 0x0616, - i40e_aqc_opc_set_lb_modes = 0x0618, - i40e_aqc_opc_get_phy_wol_caps = 0x0621, - i40e_aqc_opc_set_phy_debug = 0x0622, - i40e_aqc_opc_upload_ext_phy_fm = 0x0625, - - /* NVM commands */ - i40e_aqc_opc_nvm_read = 0x0701, - i40e_aqc_opc_nvm_erase = 0x0702, - i40e_aqc_opc_nvm_update = 0x0703, - i40e_aqc_opc_nvm_config_read = 0x0704, - i40e_aqc_opc_nvm_config_write = 0x0705, - - /* virtualization commands */ - i40e_aqc_opc_send_msg_to_pf = 
0x0801, - i40e_aqc_opc_send_msg_to_vf = 0x0802, - i40e_aqc_opc_send_msg_to_peer = 0x0803, - - /* alternate structure */ - i40e_aqc_opc_alternate_write = 0x0900, - i40e_aqc_opc_alternate_write_indirect = 0x0901, - i40e_aqc_opc_alternate_read = 0x0902, - i40e_aqc_opc_alternate_read_indirect = 0x0903, - i40e_aqc_opc_alternate_write_done = 0x0904, - i40e_aqc_opc_alternate_set_mode = 0x0905, - i40e_aqc_opc_alternate_clear_port = 0x0906, - - /* LLDP commands */ - i40e_aqc_opc_lldp_get_mib = 0x0A00, - i40e_aqc_opc_lldp_update_mib = 0x0A01, - i40e_aqc_opc_lldp_add_tlv = 0x0A02, - i40e_aqc_opc_lldp_update_tlv = 0x0A03, - i40e_aqc_opc_lldp_delete_tlv = 0x0A04, - i40e_aqc_opc_lldp_stop = 0x0A05, - i40e_aqc_opc_lldp_start = 0x0A06, - - /* Tunnel commands */ - i40e_aqc_opc_add_udp_tunnel = 0x0B00, - i40e_aqc_opc_del_udp_tunnel = 0x0B01, - i40e_aqc_opc_tunnel_key_structure = 0x0B10, - - /* Async Events */ - i40e_aqc_opc_event_lan_overflow = 0x1001, - - /* OEM commands */ - i40e_aqc_opc_oem_parameter_change = 0xFE00, - i40e_aqc_opc_oem_device_status_change = 0xFE01, - - /* debug commands */ - i40e_aqc_opc_debug_get_deviceid = 0xFF00, - i40e_aqc_opc_debug_set_mode = 0xFF01, - i40e_aqc_opc_debug_read_reg = 0xFF03, - i40e_aqc_opc_debug_write_reg = 0xFF04, - i40e_aqc_opc_debug_modify_reg = 0xFF07, - i40e_aqc_opc_debug_dump_internals = 0xFF08, - i40e_aqc_opc_debug_modify_internals = 0xFF09, -}; - -/* command structures and indirect data structures */ - -/* Structure naming conventions: - * - no suffix for direct command descriptor structures - * - _data for indirect sent data - * - _resp for indirect return data (data which is both will use _data) - * - _completion for direct return data - * - _element_ for repeated elements (may also be _data or _resp) - * - * Command structures are expected to overlay the params.raw member of the basic - * descriptor, and as such cannot exceed 16 bytes in length. - */ - -/* This macro is used to generate a compilation error if a structure - * is not exactly the correct length. It gives a divide by zero error if the - * structure is not of the correct size, otherwise it creates an enum that is - * never used. - */ -#define I40E_CHECK_STRUCT_LEN(n, X) enum i40e_static_assert_enum_##X \ - { i40e_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) } - -/* This macro is used extensively to ensure that command structures are 16 - * bytes in length as they have to map to the raw array of that size. 
- */ -#define I40E_CHECK_CMD_LENGTH(X) I40E_CHECK_STRUCT_LEN(16, X) - -/* internal (0x00XX) commands */ - -/* Get version (direct 0x0001) */ -struct i40e_aqc_get_version { - __le32 rom_ver; - __le32 fw_build; - __le16 fw_major; - __le16 fw_minor; - __le16 api_major; - __le16 api_minor; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_get_version); - -/* Send driver version (indirect 0x0002) */ -struct i40e_aqc_driver_version { - u8 driver_major_ver; - u8 driver_minor_ver; - u8 driver_build_ver; - u8 driver_subbuild_ver; - u8 reserved[4]; - __le32 address_high; - __le32 address_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_driver_version); - -/* Queue Shutdown (direct 0x0003) */ -struct i40e_aqc_queue_shutdown { - __le32 driver_unloading; -#define I40E_AQ_DRIVER_UNLOADING 0x1 - u8 reserved[12]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_queue_shutdown); - -/* Set PF context (0x0004, direct) */ -struct i40e_aqc_set_pf_context { - u8 pf_id; - u8 reserved[15]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_set_pf_context); - -/* Request resource ownership (direct 0x0008) - * Release resource ownership (direct 0x0009) - */ -#define I40E_AQ_RESOURCE_NVM 1 -#define I40E_AQ_RESOURCE_SDP 2 -#define I40E_AQ_RESOURCE_ACCESS_READ 1 -#define I40E_AQ_RESOURCE_ACCESS_WRITE 2 -#define I40E_AQ_RESOURCE_NVM_READ_TIMEOUT 3000 -#define I40E_AQ_RESOURCE_NVM_WRITE_TIMEOUT 180000 - -struct i40e_aqc_request_resource { - __le16 resource_id; - __le16 access_type; - __le32 timeout; - __le32 resource_number; - u8 reserved[4]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_request_resource); - -/* Get function capabilities (indirect 0x000A) - * Get device capabilities (indirect 0x000B) - */ -struct i40e_aqc_list_capabilites { - u8 command_flags; -#define I40E_AQ_LIST_CAP_PF_INDEX_EN 1 - u8 pf_index; - u8 reserved[2]; - __le32 count; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_list_capabilites); - -struct i40e_aqc_list_capabilities_element_resp { - __le16 id; - u8 major_rev; - u8 minor_rev; - __le32 number; - __le32 logical_id; - __le32 phys_id; - u8 reserved[16]; -}; - -/* list of caps */ - -#define I40E_AQ_CAP_ID_SWITCH_MODE 0x0001 -#define I40E_AQ_CAP_ID_MNG_MODE 0x0002 -#define I40E_AQ_CAP_ID_NPAR_ACTIVE 0x0003 -#define I40E_AQ_CAP_ID_OS2BMC_CAP 0x0004 -#define I40E_AQ_CAP_ID_FUNCTIONS_VALID 0x0005 -#define I40E_AQ_CAP_ID_ALTERNATE_RAM 0x0006 -#define I40E_AQ_CAP_ID_SRIOV 0x0012 -#define I40E_AQ_CAP_ID_VF 0x0013 -#define I40E_AQ_CAP_ID_VMDQ 0x0014 -#define I40E_AQ_CAP_ID_8021QBG 0x0015 -#define I40E_AQ_CAP_ID_8021QBR 0x0016 -#define I40E_AQ_CAP_ID_VSI 0x0017 -#define I40E_AQ_CAP_ID_DCB 0x0018 -#define I40E_AQ_CAP_ID_FCOE 0x0021 -#define I40E_AQ_CAP_ID_RSS 0x0040 -#define I40E_AQ_CAP_ID_RXQ 0x0041 -#define I40E_AQ_CAP_ID_TXQ 0x0042 -#define I40E_AQ_CAP_ID_MSIX 0x0043 -#define I40E_AQ_CAP_ID_VF_MSIX 0x0044 -#define I40E_AQ_CAP_ID_FLOW_DIRECTOR 0x0045 -#define I40E_AQ_CAP_ID_1588 0x0046 -#define I40E_AQ_CAP_ID_IWARP 0x0051 -#define I40E_AQ_CAP_ID_LED 0x0061 -#define I40E_AQ_CAP_ID_SDP 0x0062 -#define I40E_AQ_CAP_ID_MDIO 0x0063 -#define I40E_AQ_CAP_ID_FLEX10 0x00F1 -#define I40E_AQ_CAP_ID_CEM 0x00F2 - -/* Set CPPM Configuration (direct 0x0103) */ -struct i40e_aqc_cppm_configuration { - __le16 command_flags; -#define I40E_AQ_CPPM_EN_LTRC 0x0800 -#define I40E_AQ_CPPM_EN_DMCTH 0x1000 -#define I40E_AQ_CPPM_EN_DMCTLX 0x2000 -#define I40E_AQ_CPPM_EN_HPTC 0x4000 -#define I40E_AQ_CPPM_EN_DMARC 0x8000 - __le16 ttlx; - __le32 dmacr; - __le16 dmcth; - u8 hptc; - u8 reserved; - __le32 pfltrc; -}; - 
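[Aside for reviewers, not part of the patch: the I40E_CHECK_CMD_LENGTH invocations that follow each command structure (such as the one immediately below) rely on the divide-by-zero trick described above. A sketch with a hypothetical structure, assuming the u8/__le16 typedefs provided via i40e_osdep.h:]

/* A hypothetical 16-byte direct command: it must overlay params.raw,
 * so its size must be exactly 16 bytes (2 + 2 + 12). */
struct i40e_aqc_example_cmd {
	__le16	flags;
	__le16	sub_opcode;
	u8	reserved[12];
};

/* Expands to an enum whose initializer is 16 / 1; harmless, and the
 * enum constant is simply never used. */
I40E_CHECK_CMD_LENGTH(i40e_aqc_example_cmd);

/* Had a field been mis-sized (say reserved[14], making the struct 18
 * bytes), the initializer would become a constant 16 / 0 and the
 * compiler would reject the build: a compile-time size assertion in
 * pre-C11 C. */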
-I40E_CHECK_CMD_LENGTH(i40e_aqc_cppm_configuration); - -/* Set ARP Proxy command / response (indirect 0x0104) */ -struct i40e_aqc_arp_proxy_data { - __le16 command_flags; -#define I40E_AQ_ARP_INIT_IPV4 0x0008 -#define I40E_AQ_ARP_UNSUP_CTL 0x0010 -#define I40E_AQ_ARP_ENA 0x0020 -#define I40E_AQ_ARP_ADD_IPV4 0x0040 -#define I40E_AQ_ARP_DEL_IPV4 0x0080 - __le16 table_id; - __le32 pfpm_proxyfc; - __le32 ip_addr; - u8 mac_addr[6]; -}; - -/* Set NS Proxy Table Entry Command (indirect 0x0105) */ -struct i40e_aqc_ns_proxy_data { - __le16 table_idx_mac_addr_0; - __le16 table_idx_mac_addr_1; - __le16 table_idx_ipv6_0; - __le16 table_idx_ipv6_1; - __le16 control; -#define I40E_AQ_NS_PROXY_ADD_0 0x0100 -#define I40E_AQ_NS_PROXY_DEL_0 0x0200 -#define I40E_AQ_NS_PROXY_ADD_1 0x0400 -#define I40E_AQ_NS_PROXY_DEL_1 0x0800 -#define I40E_AQ_NS_PROXY_ADD_IPV6_0 0x1000 -#define I40E_AQ_NS_PROXY_DEL_IPV6_0 0x2000 -#define I40E_AQ_NS_PROXY_ADD_IPV6_1 0x4000 -#define I40E_AQ_NS_PROXY_DEL_IPV6_1 0x8000 -#define I40E_AQ_NS_PROXY_COMMAND_SEQ 0x0001 -#define I40E_AQ_NS_PROXY_INIT_IPV6_TBL 0x0002 -#define I40E_AQ_NS_PROXY_INIT_MAC_TBL 0x0004 - u8 mac_addr_0[6]; - u8 mac_addr_1[6]; - u8 local_mac_addr[6]; - u8 ipv6_addr_0[16]; /* Warning! spec specifies BE byte order */ - u8 ipv6_addr_1[16]; -}; - -/* Manage LAA Command (0x0106) - obsolete */ -struct i40e_aqc_mng_laa { - __le16 command_flags; -#define I40E_AQ_LAA_FLAG_WR 0x8000 - u8 reserved[2]; - __le32 sal; - __le16 sah; - u8 reserved2[6]; -}; - -/* Manage MAC Address Read Command (indirect 0x0107) */ -struct i40e_aqc_mac_address_read { - __le16 command_flags; -#define I40E_AQC_LAN_ADDR_VALID 0x10 -#define I40E_AQC_SAN_ADDR_VALID 0x20 -#define I40E_AQC_PORT_ADDR_VALID 0x40 -#define I40E_AQC_WOL_ADDR_VALID 0x80 -#define I40E_AQC_ADDR_VALID_MASK 0xf0 - u8 reserved[6]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_mac_address_read); - -struct i40e_aqc_mac_address_read_data { - u8 pf_lan_mac[6]; - u8 pf_san_mac[6]; - u8 port_mac[6]; - u8 pf_wol_mac[6]; -}; - -I40E_CHECK_STRUCT_LEN(24, i40e_aqc_mac_address_read_data); - -/* Manage MAC Address Write Command (0x0108) */ -struct i40e_aqc_mac_address_write { - __le16 command_flags; -#define I40E_AQC_WRITE_TYPE_LAA_ONLY 0x0000 -#define I40E_AQC_WRITE_TYPE_LAA_WOL 0x4000 -#define I40E_AQC_WRITE_TYPE_PORT 0x8000 -#define I40E_AQC_WRITE_TYPE_MASK 0xc000 - __le16 mac_sah; - __le32 mac_sal; - u8 reserved[8]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_mac_address_write); - -/* PXE commands (0x011x) */ - -/* Clear PXE Command and response (direct 0x0110) */ -struct i40e_aqc_clear_pxe { - u8 rx_cnt; - u8 reserved[15]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_clear_pxe); - -/* Switch configuration commands (0x02xx) */ - -/* Used by many indirect commands that only pass an seid and a buffer in the - * command - */ -struct i40e_aqc_switch_seid { - __le16 seid; - u8 reserved[6]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_switch_seid); - -/* Get Switch Configuration command (indirect 0x0200) - * uses i40e_aqc_switch_seid for the descriptor - */ -struct i40e_aqc_get_switch_config_header_resp { - __le16 num_reported; - __le16 num_total; - u8 reserved[12]; -}; - -struct i40e_aqc_switch_config_element_resp { - u8 element_type; -#define I40E_AQ_SW_ELEM_TYPE_MAC 1 -#define I40E_AQ_SW_ELEM_TYPE_PF 2 -#define I40E_AQ_SW_ELEM_TYPE_VF 3 -#define I40E_AQ_SW_ELEM_TYPE_EMP 4 -#define I40E_AQ_SW_ELEM_TYPE_BMC 5 -#define I40E_AQ_SW_ELEM_TYPE_PV 16 -#define I40E_AQ_SW_ELEM_TYPE_VEB 17 -#define 
I40E_AQ_SW_ELEM_TYPE_PA 18 -#define I40E_AQ_SW_ELEM_TYPE_VSI 19 - u8 revision; -#define I40E_AQ_SW_ELEM_REV_1 1 - __le16 seid; - __le16 uplink_seid; - __le16 downlink_seid; - u8 reserved[3]; - u8 connection_type; -#define I40E_AQ_CONN_TYPE_REGULAR 0x1 -#define I40E_AQ_CONN_TYPE_DEFAULT 0x2 -#define I40E_AQ_CONN_TYPE_CASCADED 0x3 - __le16 scheduler_id; - __le16 element_info; -}; - -/* Get Switch Configuration (indirect 0x0200) - * an array of elements are returned in the response buffer - * the first in the array is the header, remainder are elements - */ -struct i40e_aqc_get_switch_config_resp { - struct i40e_aqc_get_switch_config_header_resp header; - struct i40e_aqc_switch_config_element_resp element[1]; -}; - -/* Add Statistics (direct 0x0201) - * Remove Statistics (direct 0x0202) - */ -struct i40e_aqc_add_remove_statistics { - __le16 seid; - __le16 vlan; - __le16 stat_index; - u8 reserved[10]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_statistics); - -/* Set Port Parameters command (direct 0x0203) */ -struct i40e_aqc_set_port_parameters { - __le16 command_flags; -#define I40E_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS 1 -#define I40E_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS 2 /* must set! */ -#define I40E_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA 4 - __le16 bad_frame_vsi; - __le16 default_seid; /* reserved for command */ - u8 reserved[10]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_set_port_parameters); - -/* Get Switch Resource Allocation (indirect 0x0204) */ -struct i40e_aqc_get_switch_resource_alloc { - u8 num_entries; /* reserved for command */ - u8 reserved[7]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_get_switch_resource_alloc); - -/* expect an array of these structs in the response buffer */ -struct i40e_aqc_switch_resource_alloc_element_resp { - u8 resource_type; -#define I40E_AQ_RESOURCE_TYPE_VEB 0x0 -#define I40E_AQ_RESOURCE_TYPE_VSI 0x1 -#define I40E_AQ_RESOURCE_TYPE_MACADDR 0x2 -#define I40E_AQ_RESOURCE_TYPE_STAG 0x3 -#define I40E_AQ_RESOURCE_TYPE_ETAG 0x4 -#define I40E_AQ_RESOURCE_TYPE_MULTICAST_HASH 0x5 -#define I40E_AQ_RESOURCE_TYPE_UNICAST_HASH 0x6 -#define I40E_AQ_RESOURCE_TYPE_VLAN 0x7 -#define I40E_AQ_RESOURCE_TYPE_VSI_LIST_ENTRY 0x8 -#define I40E_AQ_RESOURCE_TYPE_ETAG_LIST_ENTRY 0x9 -#define I40E_AQ_RESOURCE_TYPE_VLAN_STAT_POOL 0xA -#define I40E_AQ_RESOURCE_TYPE_MIRROR_RULE 0xB -#define I40E_AQ_RESOURCE_TYPE_QUEUE_SETS 0xC -#define I40E_AQ_RESOURCE_TYPE_VLAN_FILTERS 0xD -#define I40E_AQ_RESOURCE_TYPE_INNER_MAC_FILTERS 0xF -#define I40E_AQ_RESOURCE_TYPE_IP_FILTERS 0x10 -#define I40E_AQ_RESOURCE_TYPE_GRE_VN_KEYS 0x11 -#define I40E_AQ_RESOURCE_TYPE_VN2_KEYS 0x12 -#define I40E_AQ_RESOURCE_TYPE_TUNNEL_PORTS 0x13 - u8 reserved1; - __le16 guaranteed; - __le16 total; - __le16 used; - __le16 total_unalloced; - u8 reserved2[6]; -}; - -/* Add VSI (indirect 0x0210) - * this indirect command uses struct i40e_aqc_vsi_properties_data - * as the indirect buffer (128 bytes) - * - * Update VSI (indirect 0x211) - * uses the same data structure as Add VSI - * - * Get VSI (indirect 0x0212) - * uses the same completion and data structure as Add VSI - */ -struct i40e_aqc_add_get_update_vsi { - __le16 uplink_seid; - u8 connection_type; -#define I40E_AQ_VSI_CONN_TYPE_NORMAL 0x1 -#define I40E_AQ_VSI_CONN_TYPE_DEFAULT 0x2 -#define I40E_AQ_VSI_CONN_TYPE_CASCADED 0x3 - u8 reserved1; - u8 vf_id; - u8 reserved2; - __le16 vsi_flags; -#define I40E_AQ_VSI_TYPE_SHIFT 0x0 -#define I40E_AQ_VSI_TYPE_MASK (0x3 << I40E_AQ_VSI_TYPE_SHIFT) -#define I40E_AQ_VSI_TYPE_VF 0x0 -#define 
I40E_AQ_VSI_TYPE_VMDQ2 0x1 -#define I40E_AQ_VSI_TYPE_PF 0x2 -#define I40E_AQ_VSI_TYPE_EMP_MNG 0x3 -#define I40E_AQ_VSI_FLAG_CASCADED_PV 0x4 - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_get_update_vsi); - -struct i40e_aqc_add_get_update_vsi_completion { - __le16 seid; - __le16 vsi_number; - __le16 vsi_used; - __le16 vsi_free; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_get_update_vsi_completion); - -struct i40e_aqc_vsi_properties_data { - /* first 96 byte are written by SW */ - __le16 valid_sections; -#define I40E_AQ_VSI_PROP_SWITCH_VALID 0x0001 -#define I40E_AQ_VSI_PROP_SECURITY_VALID 0x0002 -#define I40E_AQ_VSI_PROP_VLAN_VALID 0x0004 -#define I40E_AQ_VSI_PROP_CAS_PV_VALID 0x0008 -#define I40E_AQ_VSI_PROP_INGRESS_UP_VALID 0x0010 -#define I40E_AQ_VSI_PROP_EGRESS_UP_VALID 0x0020 -#define I40E_AQ_VSI_PROP_QUEUE_MAP_VALID 0x0040 -#define I40E_AQ_VSI_PROP_QUEUE_OPT_VALID 0x0080 -#define I40E_AQ_VSI_PROP_OUTER_UP_VALID 0x0100 -#define I40E_AQ_VSI_PROP_SCHED_VALID 0x0200 - /* switch section */ - __le16 switch_id; /* 12bit id combined with flags below */ -#define I40E_AQ_VSI_SW_ID_SHIFT 0x0000 -#define I40E_AQ_VSI_SW_ID_MASK (0xFFF << I40E_AQ_VSI_SW_ID_SHIFT) -#define I40E_AQ_VSI_SW_ID_FLAG_NOT_STAG 0x1000 -#define I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB 0x2000 -#define I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB 0x4000 - u8 sw_reserved[2]; - /* security section */ - u8 sec_flags; -#define I40E_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD 0x01 -#define I40E_AQ_VSI_SEC_FLAG_ENABLE_VLAN_CHK 0x02 -#define I40E_AQ_VSI_SEC_FLAG_ENABLE_MAC_CHK 0x04 - u8 sec_reserved; - /* VLAN section */ - __le16 pvid; /* VLANS include priority bits */ - __le16 fcoe_pvid; - u8 port_vlan_flags; -#define I40E_AQ_VSI_PVLAN_MODE_SHIFT 0x00 -#define I40E_AQ_VSI_PVLAN_MODE_MASK (0x03 << \ - I40E_AQ_VSI_PVLAN_MODE_SHIFT) -#define I40E_AQ_VSI_PVLAN_MODE_TAGGED 0x01 -#define I40E_AQ_VSI_PVLAN_MODE_UNTAGGED 0x02 -#define I40E_AQ_VSI_PVLAN_MODE_ALL 0x03 -#define I40E_AQ_VSI_PVLAN_INSERT_PVID 0x04 -#define I40E_AQ_VSI_PVLAN_EMOD_SHIFT 0x03 -#define I40E_AQ_VSI_PVLAN_EMOD_MASK (0x3 << \ - I40E_AQ_VSI_PVLAN_EMOD_SHIFT) -#define I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH 0x0 -#define I40E_AQ_VSI_PVLAN_EMOD_STR_UP 0x08 -#define I40E_AQ_VSI_PVLAN_EMOD_STR 0x10 -#define I40E_AQ_VSI_PVLAN_EMOD_NOTHING 0x18 - u8 pvlan_reserved[3]; - /* ingress egress up sections */ - __le32 ingress_table; /* bitmap, 3 bits per up */ -#define I40E_AQ_VSI_UP_TABLE_UP0_SHIFT 0 -#define I40E_AQ_VSI_UP_TABLE_UP0_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP0_SHIFT) -#define I40E_AQ_VSI_UP_TABLE_UP1_SHIFT 3 -#define I40E_AQ_VSI_UP_TABLE_UP1_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP1_SHIFT) -#define I40E_AQ_VSI_UP_TABLE_UP2_SHIFT 6 -#define I40E_AQ_VSI_UP_TABLE_UP2_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP2_SHIFT) -#define I40E_AQ_VSI_UP_TABLE_UP3_SHIFT 9 -#define I40E_AQ_VSI_UP_TABLE_UP3_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP3_SHIFT) -#define I40E_AQ_VSI_UP_TABLE_UP4_SHIFT 12 -#define I40E_AQ_VSI_UP_TABLE_UP4_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP4_SHIFT) -#define I40E_AQ_VSI_UP_TABLE_UP5_SHIFT 15 -#define I40E_AQ_VSI_UP_TABLE_UP5_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP5_SHIFT) -#define I40E_AQ_VSI_UP_TABLE_UP6_SHIFT 18 -#define I40E_AQ_VSI_UP_TABLE_UP6_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP6_SHIFT) -#define I40E_AQ_VSI_UP_TABLE_UP7_SHIFT 21 -#define I40E_AQ_VSI_UP_TABLE_UP7_MASK (0x7 << \ - I40E_AQ_VSI_UP_TABLE_UP7_SHIFT) - __le32 egress_table; /* same defines as for ingress table */ - /* cascaded PV section */ - __le16 cas_pv_tag; 
- u8 cas_pv_flags; -#define I40E_AQ_VSI_CAS_PV_TAGX_SHIFT 0x00 -#define I40E_AQ_VSI_CAS_PV_TAGX_MASK (0x03 << \ - I40E_AQ_VSI_CAS_PV_TAGX_SHIFT) -#define I40E_AQ_VSI_CAS_PV_TAGX_LEAVE 0x00 -#define I40E_AQ_VSI_CAS_PV_TAGX_REMOVE 0x01 -#define I40E_AQ_VSI_CAS_PV_TAGX_COPY 0x02 -#define I40E_AQ_VSI_CAS_PV_INSERT_TAG 0x10 -#define I40E_AQ_VSI_CAS_PV_ETAG_PRUNE 0x20 -#define I40E_AQ_VSI_CAS_PV_ACCEPT_HOST_TAG 0x40 - u8 cas_pv_reserved; - /* queue mapping section */ - __le16 mapping_flags; -#define I40E_AQ_VSI_QUE_MAP_CONTIG 0x0 -#define I40E_AQ_VSI_QUE_MAP_NONCONTIG 0x1 - __le16 queue_mapping[16]; -#define I40E_AQ_VSI_QUEUE_SHIFT 0x0 -#define I40E_AQ_VSI_QUEUE_MASK (0x7FF << I40E_AQ_VSI_QUEUE_SHIFT) - __le16 tc_mapping[8]; -#define I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT 0 -#define I40E_AQ_VSI_TC_QUE_OFFSET_MASK (0x1FF << \ - I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) -#define I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT 9 -#define I40E_AQ_VSI_TC_QUE_NUMBER_MASK (0x7 << \ - I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT) - /* queueing option section */ - u8 queueing_opt_flags; -#define I40E_AQ_VSI_QUE_OPT_TCP_ENA 0x10 -#define I40E_AQ_VSI_QUE_OPT_FCOE_ENA 0x20 - u8 queueing_opt_reserved[3]; - /* scheduler section */ - u8 up_enable_bits; - u8 sched_reserved; - /* outer up section */ - __le32 outer_up_table; /* same structure and defines as ingress table */ - u8 cmd_reserved[8]; - /* last 32 bytes are written by FW */ - __le16 qs_handle[8]; -#define I40E_AQ_VSI_QS_HANDLE_INVALID 0xFFFF - __le16 stat_counter_idx; - __le16 sched_id; - u8 resp_reserved[12]; -}; - -I40E_CHECK_STRUCT_LEN(128, i40e_aqc_vsi_properties_data); - -/* Add Port Virtualizer (direct 0x0220) - * also used for update PV (direct 0x0221) but only flags are used - * (IS_CTRL_PORT only works on add PV) - */ -struct i40e_aqc_add_update_pv { - __le16 command_flags; -#define I40E_AQC_PV_FLAG_PV_TYPE 0x1 -#define I40E_AQC_PV_FLAG_FWD_UNKNOWN_STAG_EN 0x2 -#define I40E_AQC_PV_FLAG_FWD_UNKNOWN_ETAG_EN 0x4 -#define I40E_AQC_PV_FLAG_IS_CTRL_PORT 0x8 - __le16 uplink_seid; - __le16 connected_seid; - u8 reserved[10]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_update_pv); - -struct i40e_aqc_add_update_pv_completion { - /* reserved for update; for add also encodes error if rc == ENOSPC */ - __le16 pv_seid; -#define I40E_AQC_PV_ERR_FLAG_NO_PV 0x1 -#define I40E_AQC_PV_ERR_FLAG_NO_SCHED 0x2 -#define I40E_AQC_PV_ERR_FLAG_NO_COUNTER 0x4 -#define I40E_AQC_PV_ERR_FLAG_NO_ENTRY 0x8 - u8 reserved[14]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_update_pv_completion); - -/* Get PV Params (direct 0x0222) - * uses i40e_aqc_switch_seid for the descriptor - */ - -struct i40e_aqc_get_pv_params_completion { - __le16 seid; - __le16 default_stag; - __le16 pv_flags; /* same flags as add_pv */ -#define I40E_AQC_GET_PV_PV_TYPE 0x1 -#define I40E_AQC_GET_PV_FRWD_UNKNOWN_STAG 0x2 -#define I40E_AQC_GET_PV_FRWD_UNKNOWN_ETAG 0x4 - u8 reserved[8]; - __le16 default_port_seid; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_get_pv_params_completion); - -/* Add VEB (direct 0x0230) */ -struct i40e_aqc_add_veb { - __le16 uplink_seid; - __le16 downlink_seid; - __le16 veb_flags; -#define I40E_AQC_ADD_VEB_FLOATING 0x1 -#define I40E_AQC_ADD_VEB_PORT_TYPE_SHIFT 1 -#define I40E_AQC_ADD_VEB_PORT_TYPE_MASK (0x3 << \ - I40E_AQC_ADD_VEB_PORT_TYPE_SHIFT) -#define I40E_AQC_ADD_VEB_PORT_TYPE_DEFAULT 0x2 -#define I40E_AQC_ADD_VEB_PORT_TYPE_DATA 0x4 -#define I40E_AQC_ADD_VEB_ENABLE_L2_FILTER 0x8 - u8 enable_tcs; - u8 reserved[9]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_veb); - -struct i40e_aqc_add_veb_completion { - u8 reserved[6]; - __le16 
switch_seid; - /* also encodes error if rc == ENOSPC; codes are the same as add_pv */ - __le16 veb_seid; -#define I40E_AQC_VEB_ERR_FLAG_NO_VEB 0x1 -#define I40E_AQC_VEB_ERR_FLAG_NO_SCHED 0x2 -#define I40E_AQC_VEB_ERR_FLAG_NO_COUNTER 0x4 -#define I40E_AQC_VEB_ERR_FLAG_NO_ENTRY 0x8 - __le16 statistic_index; - __le16 vebs_used; - __le16 vebs_free; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_veb_completion); - -/* Get VEB Parameters (direct 0x0232) - * uses i40e_aqc_switch_seid for the descriptor - */ -struct i40e_aqc_get_veb_parameters_completion { - __le16 seid; - __le16 switch_id; - __le16 veb_flags; /* only the first/last flags from 0x0230 is valid */ - __le16 statistic_index; - __le16 vebs_used; - __le16 vebs_free; - u8 reserved[4]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_get_veb_parameters_completion); - -/* Delete Element (direct 0x0243) - * uses the generic i40e_aqc_switch_seid - */ - -/* Add MAC-VLAN (indirect 0x0250) */ - -/* used for the command for most vlan commands */ -struct i40e_aqc_macvlan { - __le16 num_addresses; - __le16 seid[3]; -#define I40E_AQC_MACVLAN_CMD_SEID_NUM_SHIFT 0 -#define I40E_AQC_MACVLAN_CMD_SEID_NUM_MASK (0x3FF << \ - I40E_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) -#define I40E_AQC_MACVLAN_CMD_SEID_VALID 0x8000 - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_macvlan); - -/* indirect data for command and response */ -struct i40e_aqc_add_macvlan_element_data { - u8 mac_addr[6]; - __le16 vlan_tag; - __le16 flags; -#define I40E_AQC_MACVLAN_ADD_PERFECT_MATCH 0x0001 -#define I40E_AQC_MACVLAN_ADD_HASH_MATCH 0x0002 -#define I40E_AQC_MACVLAN_ADD_IGNORE_VLAN 0x0004 -#define I40E_AQC_MACVLAN_ADD_TO_QUEUE 0x0008 - __le16 queue_number; -#define I40E_AQC_MACVLAN_CMD_QUEUE_SHIFT 0 -#define I40E_AQC_MACVLAN_CMD_QUEUE_MASK (0x7FF << \ - I40E_AQC_MACVLAN_CMD_SEID_NUM_SHIFT) - /* response section */ - u8 match_method; -#define I40E_AQC_MM_PERFECT_MATCH 0x01 -#define I40E_AQC_MM_HASH_MATCH 0x02 -#define I40E_AQC_MM_ERR_NO_RES 0xFF - u8 reserved1[3]; -}; - -struct i40e_aqc_add_remove_macvlan_completion { - __le16 perfect_mac_used; - __le16 perfect_mac_free; - __le16 unicast_hash_free; - __le16 multicast_hash_free; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_macvlan_completion); - -/* Remove MAC-VLAN (indirect 0x0251) - * uses i40e_aqc_macvlan for the descriptor - * data points to an array of num_addresses of elements - */ - -struct i40e_aqc_remove_macvlan_element_data { - u8 mac_addr[6]; - __le16 vlan_tag; - u8 flags; -#define I40E_AQC_MACVLAN_DEL_PERFECT_MATCH 0x01 -#define I40E_AQC_MACVLAN_DEL_HASH_MATCH 0x02 -#define I40E_AQC_MACVLAN_DEL_IGNORE_VLAN 0x08 -#define I40E_AQC_MACVLAN_DEL_ALL_VSIS 0x10 - u8 reserved[3]; - /* reply section */ - u8 error_code; -#define I40E_AQC_REMOVE_MACVLAN_SUCCESS 0x0 -#define I40E_AQC_REMOVE_MACVLAN_FAIL 0xFF - u8 reply_reserved[3]; -}; - -/* Add VLAN (indirect 0x0252) - * Remove VLAN (indirect 0x0253) - * use the generic i40e_aqc_macvlan for the command - */ -struct i40e_aqc_add_remove_vlan_element_data { - __le16 vlan_tag; - u8 vlan_flags; -/* flags for add VLAN */ -#define I40E_AQC_ADD_VLAN_LOCAL 0x1 -#define I40E_AQC_ADD_PVLAN_TYPE_SHIFT 1 -#define I40E_AQC_ADD_PVLAN_TYPE_MASK (0x3 << I40E_AQC_ADD_PVLAN_TYPE_SHIFT) -#define I40E_AQC_ADD_PVLAN_TYPE_REGULAR 0x0 -#define I40E_AQC_ADD_PVLAN_TYPE_PRIMARY 0x2 -#define I40E_AQC_ADD_PVLAN_TYPE_SECONDARY 0x4 -#define I40E_AQC_VLAN_PTYPE_SHIFT 3 -#define I40E_AQC_VLAN_PTYPE_MASK (0x3 << I40E_AQC_VLAN_PTYPE_SHIFT) -#define 
I40E_AQC_VLAN_PTYPE_REGULAR_VSI 0x0 -#define I40E_AQC_VLAN_PTYPE_PROMISC_VSI 0x8 -#define I40E_AQC_VLAN_PTYPE_COMMUNITY_VSI 0x10 -#define I40E_AQC_VLAN_PTYPE_ISOLATED_VSI 0x18 -/* flags for remove VLAN */ -#define I40E_AQC_REMOVE_VLAN_ALL 0x1 - u8 reserved; - u8 result; -/* flags for add VLAN */ -#define I40E_AQC_ADD_VLAN_SUCCESS 0x0 -#define I40E_AQC_ADD_VLAN_FAIL_REQUEST 0xFE -#define I40E_AQC_ADD_VLAN_FAIL_RESOURCE 0xFF -/* flags for remove VLAN */ -#define I40E_AQC_REMOVE_VLAN_SUCCESS 0x0 -#define I40E_AQC_REMOVE_VLAN_FAIL 0xFF - u8 reserved1[3]; -}; - -struct i40e_aqc_add_remove_vlan_completion { - u8 reserved[4]; - __le16 vlans_used; - __le16 vlans_free; - __le32 addr_high; - __le32 addr_low; -}; - -/* Set VSI Promiscuous Modes (direct 0x0254) */ -struct i40e_aqc_set_vsi_promiscuous_modes { - __le16 promiscuous_flags; - __le16 valid_flags; -/* flags used for both fields above */ -#define I40E_AQC_SET_VSI_PROMISC_UNICAST 0x01 -#define I40E_AQC_SET_VSI_PROMISC_MULTICAST 0x02 -#define I40E_AQC_SET_VSI_PROMISC_BROADCAST 0x04 -#define I40E_AQC_SET_VSI_DEFAULT 0x08 -#define I40E_AQC_SET_VSI_PROMISC_VLAN 0x10 - __le16 seid; -#define I40E_AQC_VSI_PROM_CMD_SEID_MASK 0x3FF - __le16 vlan_tag; -#define I40E_AQC_SET_VSI_VLAN_VALID 0x8000 - u8 reserved[8]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_set_vsi_promiscuous_modes); - -/* Add S/E-tag command (direct 0x0255) - * Uses generic i40e_aqc_add_remove_tag_completion for completion - */ -struct i40e_aqc_add_tag { - __le16 flags; -#define I40E_AQC_ADD_TAG_FLAG_TO_QUEUE 0x0001 - __le16 seid; -#define I40E_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT 0 -#define I40E_AQC_ADD_TAG_CMD_SEID_NUM_MASK (0x3FF << \ - I40E_AQC_ADD_TAG_CMD_SEID_NUM_SHIFT) - __le16 tag; - __le16 queue_number; - u8 reserved[8]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_tag); - -struct i40e_aqc_add_remove_tag_completion { - u8 reserved[12]; - __le16 tags_used; - __le16 tags_free; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_tag_completion); - -/* Remove S/E-tag command (direct 0x0256) - * Uses generic i40e_aqc_add_remove_tag_completion for completion - */ -struct i40e_aqc_remove_tag { - __le16 seid; -#define I40E_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT 0 -#define I40E_AQC_REMOVE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ - I40E_AQC_REMOVE_TAG_CMD_SEID_NUM_SHIFT) - __le16 tag; - u8 reserved[12]; -}; - -/* Add multicast E-Tag (direct 0x0257) - * del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields - * and no external data - */ -struct i40e_aqc_add_remove_mcast_etag { - __le16 pv_seid; - __le16 etag; - u8 num_unicast_etags; - u8 reserved[3]; - __le32 addr_high; /* address of array of 2-byte s-tags */ - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_mcast_etag); - -struct i40e_aqc_add_remove_mcast_etag_completion { - u8 reserved[4]; - __le16 mcast_etags_used; - __le16 mcast_etags_free; - __le32 addr_high; - __le32 addr_low; - -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_mcast_etag_completion); - -/* Update S/E-Tag (direct 0x0259) */ -struct i40e_aqc_update_tag { - __le16 seid; -#define I40E_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT 0 -#define I40E_AQC_UPDATE_TAG_CMD_SEID_NUM_MASK (0x3FF << \ - I40E_AQC_UPDATE_TAG_CMD_SEID_NUM_SHIFT) - __le16 old_tag; - __le16 new_tag; - u8 reserved[10]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_update_tag); - -struct i40e_aqc_update_tag_completion { - u8 reserved[12]; - __le16 tags_used; - __le16 tags_free; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_update_tag_completion); - -/* Add Control Packet filter (direct 0x025A) - * Remove Control Packet filter 
(direct 0x025B) - * uses the i40e_aqc_add_oveb_cloud, - * and the generic direct completion structure - */ -struct i40e_aqc_add_remove_control_packet_filter { - u8 mac[6]; - __le16 etype; - __le16 flags; -#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC 0x0001 -#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP 0x0002 -#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE 0x0004 -#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TX 0x0008 -#define I40E_AQC_ADD_CONTROL_PACKET_FLAGS_RX 0x0000 - __le16 seid; -#define I40E_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT 0 -#define I40E_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_MASK (0x3FF << \ - I40E_AQC_ADD_CONTROL_PACKET_CMD_SEID_NUM_SHIFT) - __le16 queue; - u8 reserved[2]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_control_packet_filter); - -struct i40e_aqc_add_remove_control_packet_filter_completion { - __le16 mac_etype_used; - __le16 etype_used; - __le16 mac_etype_free; - __le16 etype_free; - u8 reserved[8]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_control_packet_filter_completion); - -/* Add Cloud filters (indirect 0x025C) - * Remove Cloud filters (indirect 0x025D) - * uses the i40e_aqc_add_remove_cloud_filters, - * and the generic indirect completion structure - */ -struct i40e_aqc_add_remove_cloud_filters { - u8 num_filters; - u8 reserved; - __le16 seid; -#define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT 0 -#define I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_MASK (0x3FF << \ - I40E_AQC_ADD_CLOUD_CMD_SEID_NUM_SHIFT) - u8 reserved2[4]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_remove_cloud_filters); - -struct i40e_aqc_add_remove_cloud_filters_element_data { - u8 outer_mac[6]; - u8 inner_mac[6]; - __le16 inner_vlan; - union { - struct { - u8 reserved[12]; - u8 data[4]; - } v4; - struct { - u8 data[16]; - } v6; - } ipaddr; - __le16 flags; -#define I40E_AQC_ADD_CLOUD_FILTER_SHIFT 0 -#define I40E_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \ - I40E_AQC_ADD_CLOUD_FILTER_SHIFT) -/* 0x0000 reserved */ -#define I40E_AQC_ADD_CLOUD_FILTER_OIP 0x0001 -/* 0x0002 reserved */ -#define I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN 0x0003 -#define I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID 0x0004 -/* 0x0005 reserved */ -#define I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID 0x0006 -/* 0x0007 reserved */ -/* 0x0008 reserved */ -#define I40E_AQC_ADD_CLOUD_FILTER_OMAC 0x0009 -#define I40E_AQC_ADD_CLOUD_FILTER_IMAC 0x000A -#define I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC 0x000B -#define I40E_AQC_ADD_CLOUD_FILTER_IIP 0x000C - -#define I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE 0x0080 -#define I40E_AQC_ADD_CLOUD_VNK_SHIFT 6 -#define I40E_AQC_ADD_CLOUD_VNK_MASK 0x00C0 -#define I40E_AQC_ADD_CLOUD_FLAGS_IPV4 0 -#define I40E_AQC_ADD_CLOUD_FLAGS_IPV6 0x0100 - -#define I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT 9 -#define I40E_AQC_ADD_CLOUD_TNL_TYPE_MASK 0x1E00 -#define I40E_AQC_ADD_CLOUD_TNL_TYPE_XVLAN 0 -#define I40E_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC 1 -#define I40E_AQC_ADD_CLOUD_TNL_TYPE_NGE 2 -#define I40E_AQC_ADD_CLOUD_TNL_TYPE_IP 3 - - __le32 tenant_id; - u8 reserved[4]; - __le16 queue_number; -#define I40E_AQC_ADD_CLOUD_QUEUE_SHIFT 0 -#define I40E_AQC_ADD_CLOUD_QUEUE_MASK (0x3F << \ - I40E_AQC_ADD_CLOUD_QUEUE_SHIFT) - u8 reserved2[14]; - /* response section */ - u8 allocation_result; -#define I40E_AQC_ADD_CLOUD_FILTER_SUCCESS 0x0 -#define I40E_AQC_ADD_CLOUD_FILTER_FAIL 0xFF - u8 response_reserved[7]; -}; - -struct i40e_aqc_remove_cloud_filters_completion { - __le16 perfect_ovlan_used; - __le16 perfect_ovlan_free; - __le16 vlan_used; - __le16 vlan_free; - __le32 addr_high; - __le32 
addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_cloud_filters_completion); - -/* Add Mirror Rule (indirect or direct 0x0260) - * Delete Mirror Rule (indirect or direct 0x0261) - * note: some rule types (4,5) do not use an external buffer. - * take care to set the flags correctly. - */ -struct i40e_aqc_add_delete_mirror_rule { - __le16 seid; - __le16 rule_type; -#define I40E_AQC_MIRROR_RULE_TYPE_SHIFT 0 -#define I40E_AQC_MIRROR_RULE_TYPE_MASK (0x7 << \ - I40E_AQC_MIRROR_RULE_TYPE_SHIFT) -#define I40E_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS 1 -#define I40E_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS 2 -#define I40E_AQC_MIRROR_RULE_TYPE_VLAN 3 -#define I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS 4 -#define I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS 5 - __le16 num_entries; - __le16 destination; /* VSI for add, rule id for delete */ - __le32 addr_high; /* address of array of 2-byte VSI or VLAN ids */ - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_delete_mirror_rule); - -struct i40e_aqc_add_delete_mirror_rule_completion { - u8 reserved[2]; - __le16 rule_id; /* only used on add */ - __le16 mirror_rules_used; - __le16 mirror_rules_free; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_delete_mirror_rule_completion); - -/* DCB 0x03xx*/ - -/* PFC Ignore (direct 0x0301) - * the command and response use the same descriptor structure - */ -struct i40e_aqc_pfc_ignore { - u8 tc_bitmap; - u8 command_flags; /* unused on response */ -#define I40E_AQC_PFC_IGNORE_SET 0x80 -#define I40E_AQC_PFC_IGNORE_CLEAR 0x0 - u8 reserved[14]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_pfc_ignore); - -/* DCB Update (direct 0x0302) uses the i40e_aq_desc structure - * with no parameters - */ - -/* TX scheduler 0x04xx */ - -/* Almost all the indirect commands use - * this generic struct to pass the SEID in param0 - */ -struct i40e_aqc_tx_sched_ind { - __le16 vsi_seid; - u8 reserved[6]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_tx_sched_ind); - -/* Several commands respond with a set of queue set handles */ -struct i40e_aqc_qs_handles_resp { - __le16 qs_handles[8]; -}; - -/* Configure VSI BW limits (direct 0x0400) */ -struct i40e_aqc_configure_vsi_bw_limit { - __le16 vsi_seid; - u8 reserved[2]; - __le16 credit; - u8 reserved1[2]; - u8 max_credit; /* 0-3, limit = 2^max */ - u8 reserved2[7]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_configure_vsi_bw_limit); - -/* Configure VSI Bandwidth Limit per Traffic Type (indirect 0x0406) - * responds with i40e_aqc_qs_handles_resp - */ -struct i40e_aqc_configure_vsi_ets_sla_bw_data { - u8 tc_valid_bits; - u8 reserved[15]; - __le16 tc_bw_credits[8]; /* FW writesback QS handles here */ - - /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */ - __le16 tc_bw_max[2]; - u8 reserved1[28]; -}; - -/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407) - * responds with i40e_aqc_qs_handles_resp - */ -struct i40e_aqc_configure_vsi_tc_bw_data { - u8 tc_valid_bits; - u8 reserved[3]; - u8 tc_bw_credits[8]; - u8 reserved1[4]; - __le16 qs_handles[8]; -}; - -/* Query vsi bw configuration (indirect 0x0408) */ -struct i40e_aqc_query_vsi_bw_config_resp { - u8 tc_valid_bits; - u8 tc_suspended_bits; - u8 reserved[14]; - __le16 qs_handles[8]; - u8 reserved1[4]; - __le16 port_bw_limit; - u8 reserved2[2]; - u8 max_bw; /* 0-3, limit = 2^max */ - u8 reserved3[23]; -}; - -/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */ -struct i40e_aqc_query_vsi_ets_sla_config_resp { - u8 tc_valid_bits; - u8 reserved[3]; - u8 
share_credits[8]; - __le16 credits[8]; - - /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */ - __le16 tc_bw_max[2]; -}; - -/* Configure Switching Component Bandwidth Limit (direct 0x0410) */ -struct i40e_aqc_configure_switching_comp_bw_limit { - __le16 seid; - u8 reserved[2]; - __le16 credit; - u8 reserved1[2]; - u8 max_bw; /* 0-3, limit = 2^max */ - u8 reserved2[7]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_configure_switching_comp_bw_limit); - -/* Enable Physical Port ETS (indirect 0x0413) - * Modify Physical Port ETS (indirect 0x0414) - * Disable Physical Port ETS (indirect 0x0415) - */ -struct i40e_aqc_configure_switching_comp_ets_data { - u8 reserved[4]; - u8 tc_valid_bits; - u8 seepage; -#define I40E_AQ_ETS_SEEPAGE_EN_MASK 0x1 - u8 tc_strict_priority_flags; - u8 reserved1[17]; - u8 tc_bw_share_credits[8]; - u8 reserved2[96]; -}; - -/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */ -struct i40e_aqc_configure_switching_comp_ets_bw_limit_data { - u8 tc_valid_bits; - u8 reserved[15]; - __le16 tc_bw_credit[8]; - - /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */ - __le16 tc_bw_max[2]; - u8 reserved1[28]; -}; - -/* Configure Switching Component Bandwidth Allocation per Tc - * (indirect 0x0417) - */ -struct i40e_aqc_configure_switching_comp_bw_config_data { - u8 tc_valid_bits; - u8 reserved[2]; - u8 absolute_credits; /* bool */ - u8 tc_bw_share_credits[8]; - u8 reserved1[20]; -}; - -/* Query Switching Component Configuration (indirect 0x0418) */ -struct i40e_aqc_query_switching_comp_ets_config_resp { - u8 tc_valid_bits; - u8 reserved[35]; - __le16 port_bw_limit; - u8 reserved1[2]; - u8 tc_bw_max; /* 0-3, limit = 2^max */ - u8 reserved2[23]; -}; - -/* Query PhysicalPort ETS Configuration (indirect 0x0419) */ -struct i40e_aqc_query_port_ets_config_resp { - u8 reserved[4]; - u8 tc_valid_bits; - u8 reserved1; - u8 tc_strict_priority_bits; - u8 reserved2; - u8 tc_bw_share_credits[8]; - __le16 tc_bw_limits[8]; - - /* 4 bits per tc 0-7, 4th bit reserved, limit = 2^max */ - __le16 tc_bw_max[2]; - u8 reserved3[32]; -}; - -/* Query Switching Component Bandwidth Allocation per Traffic Type - * (indirect 0x041A) - */ -struct i40e_aqc_query_switching_comp_bw_config_resp { - u8 tc_valid_bits; - u8 reserved[2]; - u8 absolute_credits_enable; /* bool */ - u8 tc_bw_share_credits[8]; - __le16 tc_bw_limits[8]; - - /* 4 bits per tc 0-7, 4th bit is reserved, limit = 2^max */ - __le16 tc_bw_max[2]; -}; - -/* Suspend/resume port TX traffic - * (direct 0x041B and 0x041C) uses the generic SEID struct - */ - -/* Configure partition BW - * (indirect 0x041D) - */ -struct i40e_aqc_configure_partition_bw_data { - __le16 pf_valid_bits; - u8 min_bw[16]; /* guaranteed bandwidth */ - u8 max_bw[16]; /* bandwidth limit */ -}; - -/* Get and set the active HMC resource profile and status. 
- * (direct 0x0500) and (direct 0x0501) - */ -struct i40e_aq_get_set_hmc_resource_profile { - u8 pm_profile; - u8 pe_vf_enabled; - u8 reserved[14]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aq_get_set_hmc_resource_profile); - -enum i40e_aq_hmc_profile { - /* I40E_HMC_PROFILE_NO_CHANGE = 0, reserved */ - I40E_HMC_PROFILE_DEFAULT = 1, - I40E_HMC_PROFILE_FAVOR_VF = 2, - I40E_HMC_PROFILE_EQUAL = 3, -}; - -#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK 0xF -#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK 0x3F - -/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */ - -/* set in param0 for get phy abilities to report qualified modules */ -#define I40E_AQ_PHY_REPORT_QUALIFIED_MODULES 0x0001 -#define I40E_AQ_PHY_REPORT_INITIAL_VALUES 0x0002 - -enum i40e_aq_phy_type { - I40E_PHY_TYPE_SGMII = 0x0, - I40E_PHY_TYPE_1000BASE_KX = 0x1, - I40E_PHY_TYPE_10GBASE_KX4 = 0x2, - I40E_PHY_TYPE_10GBASE_KR = 0x3, - I40E_PHY_TYPE_40GBASE_KR4 = 0x4, - I40E_PHY_TYPE_XAUI = 0x5, - I40E_PHY_TYPE_XFI = 0x6, - I40E_PHY_TYPE_SFI = 0x7, - I40E_PHY_TYPE_XLAUI = 0x8, - I40E_PHY_TYPE_XLPPI = 0x9, - I40E_PHY_TYPE_40GBASE_CR4_CU = 0xA, - I40E_PHY_TYPE_10GBASE_CR1_CU = 0xB, - I40E_PHY_TYPE_10GBASE_AOC = 0xC, - I40E_PHY_TYPE_40GBASE_AOC = 0xD, - I40E_PHY_TYPE_100BASE_TX = 0x11, - I40E_PHY_TYPE_1000BASE_T = 0x12, - I40E_PHY_TYPE_10GBASE_T = 0x13, - I40E_PHY_TYPE_10GBASE_SR = 0x14, - I40E_PHY_TYPE_10GBASE_LR = 0x15, - I40E_PHY_TYPE_10GBASE_SFPP_CU = 0x16, - I40E_PHY_TYPE_10GBASE_CR1 = 0x17, - I40E_PHY_TYPE_40GBASE_CR4 = 0x18, - I40E_PHY_TYPE_40GBASE_SR4 = 0x19, - I40E_PHY_TYPE_40GBASE_LR4 = 0x1A, - I40E_PHY_TYPE_1000BASE_SX = 0x1B, - I40E_PHY_TYPE_1000BASE_LX = 0x1C, - I40E_PHY_TYPE_1000BASE_T_OPTICAL = 0x1D, - I40E_PHY_TYPE_20GBASE_KR2 = 0x1E, - I40E_PHY_TYPE_MAX -}; - -#define I40E_LINK_SPEED_100MB_SHIFT 0x1 -#define I40E_LINK_SPEED_1000MB_SHIFT 0x2 -#define I40E_LINK_SPEED_10GB_SHIFT 0x3 -#define I40E_LINK_SPEED_40GB_SHIFT 0x4 -#define I40E_LINK_SPEED_20GB_SHIFT 0x5 - -enum i40e_aq_link_speed { - I40E_LINK_SPEED_UNKNOWN = 0, - I40E_LINK_SPEED_100MB = (1 << I40E_LINK_SPEED_100MB_SHIFT), - I40E_LINK_SPEED_1GB = (1 << I40E_LINK_SPEED_1000MB_SHIFT), - I40E_LINK_SPEED_10GB = (1 << I40E_LINK_SPEED_10GB_SHIFT), - I40E_LINK_SPEED_40GB = (1 << I40E_LINK_SPEED_40GB_SHIFT), - I40E_LINK_SPEED_20GB = (1 << I40E_LINK_SPEED_20GB_SHIFT) -}; - -struct i40e_aqc_module_desc { - u8 oui[3]; - u8 reserved1; - u8 part_number[16]; - u8 revision[4]; - u8 reserved2[8]; -}; - -struct i40e_aq_get_phy_abilities_resp { - __le32 phy_type; /* bitmap using the above enum for offsets */ - u8 link_speed; /* bitmap using the above enum bit patterns */ - u8 abilities; -#define I40E_AQ_PHY_FLAG_PAUSE_TX 0x01 -#define I40E_AQ_PHY_FLAG_PAUSE_RX 0x02 -#define I40E_AQ_PHY_FLAG_LOW_POWER 0x04 -#define I40E_AQ_PHY_LINK_ENABLED 0x08 -#define I40E_AQ_PHY_AN_ENABLED 0x10 -#define I40E_AQ_PHY_FLAG_MODULE_QUAL 0x20 - __le16 eee_capability; -#define I40E_AQ_EEE_100BASE_TX 0x0002 -#define I40E_AQ_EEE_1000BASE_T 0x0004 -#define I40E_AQ_EEE_10GBASE_T 0x0008 -#define I40E_AQ_EEE_1000BASE_KX 0x0010 -#define I40E_AQ_EEE_10GBASE_KX4 0x0020 -#define I40E_AQ_EEE_10GBASE_KR 0x0040 - __le32 eeer_val; - u8 d3_lpan; -#define I40E_AQ_SET_PHY_D3_LPAN_ENA 0x01 - u8 reserved[3]; - u8 phy_id[4]; - u8 module_type[3]; - u8 qualified_module_count; -#define I40E_AQ_PHY_MAX_QMS 16 - struct i40e_aqc_module_desc qualified_module[I40E_AQ_PHY_MAX_QMS]; -}; - -/* Set PHY Config (direct 0x0601) */ -struct i40e_aq_set_phy_config { /* same bits as above in all */ - __le32 phy_type; - u8 
link_speed; - u8 abilities; -/* bits 0-2 use the values from get_phy_abilities_resp */ -#define I40E_AQ_PHY_ENABLE_LINK 0x08 -#define I40E_AQ_PHY_ENABLE_AN 0x10 -#define I40E_AQ_PHY_ENABLE_ATOMIC_LINK 0x20 - __le16 eee_capability; - __le32 eeer; - u8 low_power_ctrl; - u8 reserved[3]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aq_set_phy_config); - -/* Set MAC Config command data structure (direct 0x0603) */ -struct i40e_aq_set_mac_config { - __le16 max_frame_size; - u8 params; -#define I40E_AQ_SET_MAC_CONFIG_CRC_EN 0x04 -#define I40E_AQ_SET_MAC_CONFIG_PACING_MASK 0x78 -#define I40E_AQ_SET_MAC_CONFIG_PACING_SHIFT 3 -#define I40E_AQ_SET_MAC_CONFIG_PACING_NONE 0x0 -#define I40E_AQ_SET_MAC_CONFIG_PACING_1B_13TX 0xF -#define I40E_AQ_SET_MAC_CONFIG_PACING_1DW_9TX 0x9 -#define I40E_AQ_SET_MAC_CONFIG_PACING_1DW_4TX 0x8 -#define I40E_AQ_SET_MAC_CONFIG_PACING_3DW_7TX 0x7 -#define I40E_AQ_SET_MAC_CONFIG_PACING_2DW_3TX 0x6 -#define I40E_AQ_SET_MAC_CONFIG_PACING_1DW_1TX 0x5 -#define I40E_AQ_SET_MAC_CONFIG_PACING_3DW_2TX 0x4 -#define I40E_AQ_SET_MAC_CONFIG_PACING_7DW_3TX 0x3 -#define I40E_AQ_SET_MAC_CONFIG_PACING_4DW_1TX 0x2 -#define I40E_AQ_SET_MAC_CONFIG_PACING_9DW_1TX 0x1 - u8 tx_timer_priority; /* bitmap */ - __le16 tx_timer_value; - __le16 fc_refresh_threshold; - u8 reserved[8]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aq_set_mac_config); - -/* Restart Auto-Negotiation (direct 0x605) */ -struct i40e_aqc_set_link_restart_an { - u8 command; -#define I40E_AQ_PHY_RESTART_AN 0x02 -#define I40E_AQ_PHY_LINK_ENABLE 0x04 - u8 reserved[15]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_set_link_restart_an); - -/* Get Link Status cmd & response data structure (direct 0x0607) */ -struct i40e_aqc_get_link_status { - __le16 command_flags; /* only field set on command */ -#define I40E_AQ_LSE_MASK 0x3 -#define I40E_AQ_LSE_NOP 0x0 -#define I40E_AQ_LSE_DISABLE 0x2 -#define I40E_AQ_LSE_ENABLE 0x3 -/* only response uses this flag */ -#define I40E_AQ_LSE_IS_ENABLED 0x1 - u8 phy_type; /* i40e_aq_phy_type */ - u8 link_speed; /* i40e_aq_link_speed */ - u8 link_info; -#define I40E_AQ_LINK_UP 0x01 -#define I40E_AQ_LINK_FAULT 0x02 -#define I40E_AQ_LINK_FAULT_TX 0x04 -#define I40E_AQ_LINK_FAULT_RX 0x08 -#define I40E_AQ_LINK_FAULT_REMOTE 0x10 -#define I40E_AQ_MEDIA_AVAILABLE 0x40 -#define I40E_AQ_SIGNAL_DETECT 0x80 - u8 an_info; -#define I40E_AQ_AN_COMPLETED 0x01 -#define I40E_AQ_LP_AN_ABILITY 0x02 -#define I40E_AQ_PD_FAULT 0x04 -#define I40E_AQ_FEC_EN 0x08 -#define I40E_AQ_PHY_LOW_POWER 0x10 -#define I40E_AQ_LINK_PAUSE_TX 0x20 -#define I40E_AQ_LINK_PAUSE_RX 0x40 -#define I40E_AQ_QUALIFIED_MODULE 0x80 - u8 ext_info; -#define I40E_AQ_LINK_PHY_TEMP_ALARM 0x01 -#define I40E_AQ_LINK_XCESSIVE_ERRORS 0x02 -#define I40E_AQ_LINK_TX_SHIFT 0x02 -#define I40E_AQ_LINK_TX_MASK (0x03 << I40E_AQ_LINK_TX_SHIFT) -#define I40E_AQ_LINK_TX_ACTIVE 0x00 -#define I40E_AQ_LINK_TX_DRAINED 0x01 -#define I40E_AQ_LINK_TX_FLUSHED 0x03 -#define I40E_AQ_LINK_FORCED_40G 0x10 - u8 loopback; /* use defines from i40e_aqc_set_lb_mode */ - __le16 max_frame_size; - u8 config; -#define I40E_AQ_CONFIG_CRC_ENA 0x04 -#define I40E_AQ_CONFIG_PACING_MASK 0x78 - u8 reserved[5]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_get_link_status); - -/* Set event mask command (direct 0x613) */ -struct i40e_aqc_set_phy_int_mask { - u8 reserved[8]; - __le16 event_mask; -#define I40E_AQ_EVENT_LINK_UPDOWN 0x0002 -#define I40E_AQ_EVENT_MEDIA_NA 0x0004 -#define I40E_AQ_EVENT_LINK_FAULT 0x0008 -#define I40E_AQ_EVENT_PHY_TEMP_ALARM 0x0010 -#define I40E_AQ_EVENT_EXCESSIVE_ERRORS 0x0020 -#define 
I40E_AQ_EVENT_SIGNAL_DETECT 0x0040 -#define I40E_AQ_EVENT_AN_COMPLETED 0x0080 -#define I40E_AQ_EVENT_MODULE_QUAL_FAIL 0x0100 -#define I40E_AQ_EVENT_PORT_TX_SUSPENDED 0x0200 - u8 reserved1[6]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_set_phy_int_mask); - -/* Get Local AN advt register (direct 0x0614) - * Set Local AN advt register (direct 0x0615) - * Get Link Partner AN advt register (direct 0x0616) - */ -struct i40e_aqc_an_advt_reg { - __le32 local_an_reg0; - __le16 local_an_reg1; - u8 reserved[10]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_an_advt_reg); - -/* Set Loopback mode (0x0618) */ -struct i40e_aqc_set_lb_mode { - __le16 lb_mode; -#define I40E_AQ_LB_PHY_LOCAL 0x01 -#define I40E_AQ_LB_PHY_REMOTE 0x02 -#define I40E_AQ_LB_MAC_LOCAL 0x04 - u8 reserved[14]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_set_lb_mode); - -/* Set PHY Debug command (0x0622) */ -struct i40e_aqc_set_phy_debug { - u8 command_flags; -#define I40E_AQ_PHY_DEBUG_RESET_INTERNAL 0x02 -#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT 2 -#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_MASK (0x03 << \ - I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_SHIFT) -#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE 0x00 -#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD 0x01 -#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT 0x02 -#define I40E_AQ_PHY_DEBUG_DISABLE_LINK_FW 0x10 - u8 reserved[15]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_set_phy_debug); - -enum i40e_aq_phy_reg_type { - I40E_AQC_PHY_REG_INTERNAL = 0x1, - I40E_AQC_PHY_REG_EXERNAL_BASET = 0x2, - I40E_AQC_PHY_REG_EXERNAL_MODULE = 0x3 -}; - -/* NVM Read command (indirect 0x0701) - * NVM Erase commands (direct 0x0702) - * NVM Update commands (indirect 0x0703) - */ -struct i40e_aqc_nvm_update { - u8 command_flags; -#define I40E_AQ_NVM_LAST_CMD 0x01 -#define I40E_AQ_NVM_FLASH_ONLY 0x80 - u8 module_pointer; - __le16 length; - __le32 offset; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_update); - -/* NVM Config Read (indirect 0x0704) */ -struct i40e_aqc_nvm_config_read { - __le16 cmd_flags; -#define ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1 -#define ANVM_READ_SINGLE_FEATURE 0 -#define ANVM_READ_MULTIPLE_FEATURES 1 - __le16 element_count; - __le16 element_id; /* Feature/field ID */ - u8 reserved[2]; - __le32 address_high; - __le32 address_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_read); - -/* NVM Config Write (indirect 0x0705) */ -struct i40e_aqc_nvm_config_write { - __le16 cmd_flags; - __le16 element_count; - u8 reserved[4]; - __le32 address_high; - __le32 address_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write); - -struct i40e_aqc_nvm_config_data_feature { - __le16 feature_id; - __le16 instance_id; - __le16 feature_options; - __le16 feature_selection; -}; - -struct i40e_aqc_nvm_config_data_immediate_field { -#define ANVM_FEATURE_OR_IMMEDIATE_MASK 0x2 - __le16 field_id; - __le16 instance_id; - __le16 field_options; - __le16 field_value; -}; - -/* Send to PF command (indirect 0x0801) id is only used by PF - * Send to VF command (indirect 0x0802) id is only used by PF - * Send to Peer PF command (indirect 0x0803) - */ -struct i40e_aqc_pf_vf_message { - __le32 id; - u8 reserved[4]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_pf_vf_message); - -/* Alternate structure */ - -/* Direct write (direct 0x0900) - * Direct read (direct 0x0902) - */ -struct i40e_aqc_alternate_write { - __le32 address0; - __le32 data0; - __le32 address1; - __le32 data1; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_write); - -/* Indirect write (indirect 0x0901) - * 
Indirect read (indirect 0x0903) - */ - -struct i40e_aqc_alternate_ind_write { - __le32 address; - __le32 length; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_ind_write); - -/* Done alternate write (direct 0x0904) - * uses i40e_aq_desc - */ -struct i40e_aqc_alternate_write_done { - __le16 cmd_flags; -#define I40E_AQ_ALTERNATE_MODE_BIOS_MASK 1 -#define I40E_AQ_ALTERNATE_MODE_BIOS_LEGACY 0 -#define I40E_AQ_ALTERNATE_MODE_BIOS_UEFI 1 -#define I40E_AQ_ALTERNATE_RESET_NEEDED 2 - u8 reserved[14]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_write_done); - -/* Set OEM mode (direct 0x0905) */ -struct i40e_aqc_alternate_set_mode { - __le32 mode; -#define I40E_AQ_ALTERNATE_MODE_NONE 0 -#define I40E_AQ_ALTERNATE_MODE_OEM 1 - u8 reserved[12]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_alternate_set_mode); - -/* Clear port Alternate RAM (direct 0x0906) uses i40e_aq_desc */ - -/* async events 0x10xx */ - -/* Lan Queue Overflow Event (direct, 0x1001) */ -struct i40e_aqc_lan_overflow { - __le32 prtdcb_rupto; - __le32 otx_ctl; - u8 reserved[8]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_lan_overflow); - -/* Get LLDP MIB (indirect 0x0A00) */ -struct i40e_aqc_lldp_get_mib { - u8 type; - u8 reserved1; -#define I40E_AQ_LLDP_MIB_TYPE_MASK 0x3 -#define I40E_AQ_LLDP_MIB_LOCAL 0x0 -#define I40E_AQ_LLDP_MIB_REMOTE 0x1 -#define I40E_AQ_LLDP_MIB_LOCAL_AND_REMOTE 0x2 -#define I40E_AQ_LLDP_BRIDGE_TYPE_MASK 0xC -#define I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT 0x2 -#define I40E_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE 0x0 -#define I40E_AQ_LLDP_BRIDGE_TYPE_NON_TPMR 0x1 -#define I40E_AQ_LLDP_TX_SHIFT 0x4 -#define I40E_AQ_LLDP_TX_MASK (0x03 << I40E_AQ_LLDP_TX_SHIFT) -/* TX pause flags use I40E_AQ_LINK_TX_* above */ - __le16 local_len; - __le16 remote_len; - u8 reserved2[2]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_get_mib); - -/* Configure LLDP MIB Change Event (direct 0x0A01) - * also used for the event (with type in the command field) - */ -struct i40e_aqc_lldp_update_mib { - u8 command; -#define I40E_AQ_LLDP_MIB_UPDATE_ENABLE 0x0 -#define I40E_AQ_LLDP_MIB_UPDATE_DISABLE 0x1 - u8 reserved[7]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_update_mib); - -/* Add LLDP TLV (indirect 0x0A02) - * Delete LLDP TLV (indirect 0x0A04) - */ -struct i40e_aqc_lldp_add_tlv { - u8 type; /* only nearest bridge and non-TPMR from 0x0A00 */ - u8 reserved1[1]; - __le16 len; - u8 reserved2[4]; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_add_tlv); - -/* Update LLDP TLV (indirect 0x0A03) */ -struct i40e_aqc_lldp_update_tlv { - u8 type; /* only nearest bridge and non-TPMR from 0x0A00 */ - u8 reserved; - __le16 old_len; - __le16 new_offset; - __le16 new_len; - __le32 addr_high; - __le32 addr_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_update_tlv); - -/* Stop LLDP (direct 0x0A05) */ -struct i40e_aqc_lldp_stop { - u8 command; -#define I40E_AQ_LLDP_AGENT_STOP 0x0 -#define I40E_AQ_LLDP_AGENT_SHUTDOWN 0x1 - u8 reserved[15]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_stop); - -/* Start LLDP (direct 0x0A06) */ - -struct i40e_aqc_lldp_start { - u8 command; -#define I40E_AQ_LLDP_AGENT_START 0x1 - u8 reserved[15]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_start); - -/* Apply MIB changes (0x0A07) - * uses the generic struc as it contains no data - */ - -/* Add Udp Tunnel command and completion (direct 0x0B00) */ -struct i40e_aqc_add_udp_tunnel { - __le16 udp_port; - u8 reserved0[3]; - u8 protocol_type; -#define 
I40E_AQC_TUNNEL_TYPE_VXLAN 0x00 -#define I40E_AQC_TUNNEL_TYPE_NGE 0x01 -#define I40E_AQC_TUNNEL_TYPE_TEREDO 0x10 - u8 reserved1[10]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_udp_tunnel); - -struct i40e_aqc_add_udp_tunnel_completion { - __le16 udp_port; - u8 filter_entry_index; - u8 multiple_pfs; -#define I40E_AQC_SINGLE_PF 0x0 -#define I40E_AQC_MULTIPLE_PFS 0x1 - u8 total_filters; - u8 reserved[11]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_add_udp_tunnel_completion); - -/* remove UDP Tunnel command (0x0B01) */ -struct i40e_aqc_remove_udp_tunnel { - u8 reserved[2]; - u8 index; /* 0 to 15 */ - u8 reserved2[13]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_udp_tunnel); - -struct i40e_aqc_del_udp_tunnel_completion { - __le16 udp_port; - u8 index; /* 0 to 15 */ - u8 multiple_pfs; - u8 total_filters_used; - u8 reserved1[11]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_del_udp_tunnel_completion); - -/* tunnel key structure 0x0B10 */ - -struct i40e_aqc_tunnel_key_structure { - u8 key1_off; - u8 key2_off; - u8 key1_len; /* 0 to 15 */ - u8 key2_len; /* 0 to 15 */ - u8 flags; -#define I40E_AQC_TUNNEL_KEY_STRUCT_OVERRIDE 0x01 -/* response flags */ -#define I40E_AQC_TUNNEL_KEY_STRUCT_SUCCESS 0x01 -#define I40E_AQC_TUNNEL_KEY_STRUCT_MODIFIED 0x02 -#define I40E_AQC_TUNNEL_KEY_STRUCT_OVERRIDDEN 0x03 - u8 network_key_index; -#define I40E_AQC_NETWORK_KEY_INDEX_VXLAN 0x0 -#define I40E_AQC_NETWORK_KEY_INDEX_NGE 0x1 -#define I40E_AQC_NETWORK_KEY_INDEX_FLEX_MAC_IN_UDP 0x2 -#define I40E_AQC_NETWORK_KEY_INDEX_GRE 0x3 - u8 reserved[10]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_tunnel_key_structure); - -/* OEM mode commands (direct 0xFE0x) */ -struct i40e_aqc_oem_param_change { - __le32 param_type; -#define I40E_AQ_OEM_PARAM_TYPE_PF_CTL 0 -#define I40E_AQ_OEM_PARAM_TYPE_BW_CTL 1 -#define I40E_AQ_OEM_PARAM_MAC 2 - __le32 param_value1; - u8 param_value2[8]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_param_change); - -struct i40e_aqc_oem_state_change { - __le32 state; -#define I40E_AQ_OEM_STATE_LINK_DOWN 0x0 -#define I40E_AQ_OEM_STATE_LINK_UP 0x1 - u8 reserved[12]; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_state_change); - -/* debug commands */ - -/* get device id (0xFF00) uses the generic structure */ - -/* set test more (0xFF01, internal) */ - -struct i40e_acq_set_test_mode { - u8 mode; -#define I40E_AQ_TEST_PARTIAL 0 -#define I40E_AQ_TEST_FULL 1 -#define I40E_AQ_TEST_NVM 2 - u8 reserved[3]; - u8 command; -#define I40E_AQ_TEST_OPEN 0 -#define I40E_AQ_TEST_CLOSE 1 -#define I40E_AQ_TEST_INC 2 - u8 reserved2[3]; - __le32 address_high; - __le32 address_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_acq_set_test_mode); - -/* Debug Read Register command (0xFF03) - * Debug Write Register command (0xFF04) - */ -struct i40e_aqc_debug_reg_read_write { - __le32 reserved; - __le32 address; - __le32 value_high; - __le32 value_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_reg_read_write); - -/* Scatter/gather Reg Read (indirect 0xFF05) - * Scatter/gather Reg Write (indirect 0xFF06) - */ - -/* i40e_aq_desc is used for the command */ -struct i40e_aqc_debug_reg_sg_element_data { - __le32 address; - __le32 value; -}; - -/* Debug Modify register (direct 0xFF07) */ -struct i40e_aqc_debug_modify_reg { - __le32 address; - __le32 value; - __le32 clear_mask; - __le32 set_mask; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_modify_reg); - -/* dump internal data (0xFF08, indirect) */ - -#define I40E_AQ_CLUSTER_ID_AUX 0 -#define I40E_AQ_CLUSTER_ID_SWITCH_FLU 1 -#define I40E_AQ_CLUSTER_ID_TXSCHED 2 -#define I40E_AQ_CLUSTER_ID_HMC 3 -#define 
I40E_AQ_CLUSTER_ID_MAC0 4 -#define I40E_AQ_CLUSTER_ID_MAC1 5 -#define I40E_AQ_CLUSTER_ID_MAC2 6 -#define I40E_AQ_CLUSTER_ID_MAC3 7 -#define I40E_AQ_CLUSTER_ID_DCB 8 -#define I40E_AQ_CLUSTER_ID_EMP_MEM 9 -#define I40E_AQ_CLUSTER_ID_PKT_BUF 10 -#define I40E_AQ_CLUSTER_ID_ALTRAM 11 - -struct i40e_aqc_debug_dump_internals { - u8 cluster_id; - u8 table_id; - __le16 data_size; - __le32 idx; - __le32 address_high; - __le32 address_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_dump_internals); - -struct i40e_aqc_debug_modify_internals { - u8 cluster_id; - u8 cluster_specific_params[7]; - __le32 address_high; - __le32 address_low; -}; - -I40E_CHECK_CMD_LENGTH(i40e_aqc_debug_modify_internals); - -#endif diff --git a/lib/librte_pmd_i40e/i40e/i40e_alloc.h b/lib/librte_pmd_i40e/i40e/i40e_alloc.h deleted file mode 100644 index 6e81cd5..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_alloc.h +++ /dev/null @@ -1,65 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
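
An aside on the Debug Dump Internals (0xFF08) layout above: cluster_id selects the internal block, and table_id/idx walk within it. A hedged usage sketch follows; the opcode enumerator i40e_aqc_opc_debug_dump_internals and the two helper calls are assumed from elsewhere in this patch, and the response buffer is caller-supplied.

static enum i40e_status_code dump_hmc_internals(struct i40e_hw *hw,
						void *buf, u16 buf_len)
{
	struct i40e_aq_desc desc;
	struct i40e_aqc_debug_dump_internals *cmd =
		(struct i40e_aqc_debug_dump_internals *)&desc.params.raw;

	i40e_fill_default_direct_cmd_desc(&desc,
					  i40e_aqc_opc_debug_dump_internals);
	cmd->cluster_id = I40E_AQ_CLUSTER_ID_HMC;	/* block to dump */
	cmd->table_id = 0;				/* first table */
	cmd->idx = CPU_TO_LE32(0);			/* start of the table */
	/* firmware writes into buf; data_size reports the length used */
	return i40e_asq_send_command(hw, &desc, buf, buf_len, NULL);
}
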
- -***************************************************************************/ - -#ifndef _I40E_ALLOC_H_ -#define _I40E_ALLOC_H_ - -struct i40e_hw; - -/* Memory allocation types */ -enum i40e_memory_type { - i40e_mem_arq_buf = 0, /* ARQ indirect command buffer */ - i40e_mem_asq_buf = 1, - i40e_mem_atq_buf = 2, /* ATQ indirect command buffer */ - i40e_mem_arq_ring = 3, /* ARQ descriptor ring */ - i40e_mem_atq_ring = 4, /* ATQ descriptor ring */ - i40e_mem_pd = 5, /* Page Descriptor */ - i40e_mem_bp = 6, /* Backing Page - 4KB */ - i40e_mem_bp_jumbo = 7, /* Backing Page - > 4KB */ - i40e_mem_reserved -}; - -/* prototype for functions used for dynamic memory allocation */ -enum i40e_status_code i40e_allocate_dma_mem(struct i40e_hw *hw, - struct i40e_dma_mem *mem, - enum i40e_memory_type type, - u64 size, u32 alignment); -enum i40e_status_code i40e_free_dma_mem(struct i40e_hw *hw, - struct i40e_dma_mem *mem); -enum i40e_status_code i40e_allocate_virt_mem(struct i40e_hw *hw, - struct i40e_virt_mem *mem, - u32 size); -enum i40e_status_code i40e_free_virt_mem(struct i40e_hw *hw, - struct i40e_virt_mem *mem); - -#endif /* _I40E_ALLOC_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_common.c b/lib/librte_pmd_i40e/i40e/i40e_common.c deleted file mode 100644 index ffaa777..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_common.c +++ /dev/null @@ -1,4793 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#include "i40e_type.h" -#include "i40e_adminq.h" -#include "i40e_prototype.h" -#include "i40e_virtchnl.h" - -/** - * i40e_set_mac_type - Sets MAC type - * @hw: pointer to the HW structure - * - * This function sets the mac type of the adapter based on the - * vendor ID and device ID stored in the hw structure. 
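
The four prototypes just above are the entire OS-abstraction contract for memory in the base driver: each host environment supplies its own definitions. Below is a minimal heap-backed sketch of the virtual-memory pair; it assumes struct i40e_virt_mem exposes va and size members (they live in i40e_osdep.h, outside this hunk) and that callers expect zero-filled memory.

#include <stdlib.h>

enum i40e_status_code i40e_allocate_virt_mem(struct i40e_hw *hw,
					     struct i40e_virt_mem *mem,
					     u32 size)
{
	(void)hw;			/* unused for plain heap memory */
	mem->size = size;
	mem->va = calloc(1, size);	/* zero-filled, by assumption */
	return mem->va != NULL ? I40E_SUCCESS : I40E_ERR_NO_MEMORY;
}

enum i40e_status_code i40e_free_virt_mem(struct i40e_hw *hw,
					 struct i40e_virt_mem *mem)
{
	(void)hw;
	free(mem->va);
	mem->va = NULL;
	mem->size = 0;
	return I40E_SUCCESS;
}
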
- **/ -#ifdef VF_DRIVER -enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw) -#else -STATIC enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw) -#endif -{ - enum i40e_status_code status = I40E_SUCCESS; - - DEBUGFUNC("i40e_set_mac_type\n"); - - if (hw->vendor_id == I40E_INTEL_VENDOR_ID) { - switch (hw->device_id) { - case I40E_DEV_ID_SFP_XL710: - case I40E_DEV_ID_QEMU: - case I40E_DEV_ID_KX_A: - case I40E_DEV_ID_KX_B: - case I40E_DEV_ID_KX_C: - case I40E_DEV_ID_QSFP_A: - case I40E_DEV_ID_QSFP_B: - case I40E_DEV_ID_QSFP_C: - case I40E_DEV_ID_10G_BASE_T: - hw->mac.type = I40E_MAC_XL710; - break; - case I40E_DEV_ID_VF: - case I40E_DEV_ID_VF_HV: - hw->mac.type = I40E_MAC_VF; - break; - default: - hw->mac.type = I40E_MAC_GENERIC; - break; - } - } else { - status = I40E_ERR_DEVICE_NOT_SUPPORTED; - } - - DEBUGOUT2("i40e_set_mac_type found mac: %d, returns: %d\n", - hw->mac.type, status); - return status; -} - -/** - * i40e_debug_aq - * @hw: debug mask related to admin queue - * @mask: debug mask - * @desc: pointer to admin queue descriptor - * @buffer: pointer to command buffer - * @buf_len: max length of buffer - * - * Dumps debug log about adminq command with descriptor contents. - **/ -void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, void *desc, - void *buffer, u16 buf_len) -{ - struct i40e_aq_desc *aq_desc = (struct i40e_aq_desc *)desc; - u16 len = LE16_TO_CPU(aq_desc->datalen); - u8 *aq_buffer = (u8 *)buffer; - u32 data[4]; - u32 i = 0; - - if ((!(mask & hw->debug_mask)) || (desc == NULL)) - return; - - i40e_debug(hw, mask, - "AQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n", - aq_desc->opcode, aq_desc->flags, aq_desc->datalen, - aq_desc->retval); - i40e_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n", - aq_desc->cookie_high, aq_desc->cookie_low); - i40e_debug(hw, mask, "\tparam (0,1) 0x%08X 0x%08X\n", - aq_desc->params.internal.param0, - aq_desc->params.internal.param1); - i40e_debug(hw, mask, "\taddr (h,l) 0x%08X 0x%08X\n", - aq_desc->params.external.addr_high, - aq_desc->params.external.addr_low); - - if ((buffer != NULL) && (aq_desc->datalen != 0)) { - i40e_memset(data, 0, sizeof(data), I40E_NONDMA_MEM); - i40e_debug(hw, mask, "AQ CMD Buffer:\n"); - if (buf_len < len) - len = buf_len; - for (i = 0; i < len; i++) { - data[((i % 16) / 4)] |= - ((u32)aq_buffer[i]) << (8 * (i % 4)); - if ((i % 16) == 15) { - i40e_debug(hw, mask, - "\t0x%04X %08X %08X %08X %08X\n", - i - 15, data[0], data[1], data[2], - data[3]); - i40e_memset(data, 0, sizeof(data), - I40E_NONDMA_MEM); - } - } - if ((i % 16) != 0) - i40e_debug(hw, mask, "\t0x%04X %08X %08X %08X %08X\n", - i - (i % 16), data[0], data[1], data[2], - data[3]); - } -} - -/** - * i40e_check_asq_alive - * @hw: pointer to the hw struct - * - * Returns true if Queue is enabled else false. - **/ -bool i40e_check_asq_alive(struct i40e_hw *hw) -{ - if (hw->aq.asq.len) - return !!(rd32(hw, hw->aq.asq.len) & I40E_PF_ATQLEN_ATQENABLE_MASK); - else - return false; -} - -/** - * i40e_aq_queue_shutdown - * @hw: pointer to the hw struct - * @unloading: is the driver unloading itself - * - * Tell the Firmware that we're shutting down the AdminQ and whether - * or not the driver is unloading as well. 
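
A usage note, not part of this hunk: the shutdown helper documented here is normally the last command a driver posts, and it is typically guarded by i40e_check_asq_alive() above so nothing is written to a send queue that is already disabled. A sketch of that teardown pattern, with unloading assumed true for a full driver unload:

if (i40e_check_asq_alive(hw))
	(void)i40e_aq_queue_shutdown(hw, true);	/* tell FW the driver is unloading */
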
- **/ -enum i40e_status_code i40e_aq_queue_shutdown(struct i40e_hw *hw, - bool unloading) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_queue_shutdown *cmd = - (struct i40e_aqc_queue_shutdown *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_queue_shutdown); - - if (unloading) - cmd->driver_unloading = CPU_TO_LE32(I40E_AQ_DRIVER_UNLOADING); - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - - return status; -} - -/* The i40e_ptype_lookup table is used to convert from the 8-bit ptype in the - * hardware to a bit-field that can be used by SW to more easily determine the - * packet type. - * - * Macros are used to shorten the table lines and make this table human - * readable. - * - * We store the PTYPE in the top byte of the bit field - this is just so that - * we can check that the table doesn't have a row missing, as the index into - * the table should be the PTYPE. - * - * Typical work flow: - * - * IF NOT i40e_ptype_lookup[ptype].known - * THEN - * Packet is unknown - * ELSE IF i40e_ptype_lookup[ptype].outer_ip == I40E_RX_PTYPE_OUTER_IP - * Use the rest of the fields to look at the tunnels, inner protocols, etc - * ELSE - * Use the enum i40e_rx_l2_ptype to decode the packet type - * ENDIF - */ - -/* macro to make the table lines short */ -#define I40E_PTT(PTYPE, OUTER_IP, OUTER_IP_VER, OUTER_FRAG, T, TE, TEF, I, PL)\ - { PTYPE, \ - 1, \ - I40E_RX_PTYPE_OUTER_##OUTER_IP, \ - I40E_RX_PTYPE_OUTER_##OUTER_IP_VER, \ - I40E_RX_PTYPE_##OUTER_FRAG, \ - I40E_RX_PTYPE_TUNNEL_##T, \ - I40E_RX_PTYPE_TUNNEL_END_##TE, \ - I40E_RX_PTYPE_##TEF, \ - I40E_RX_PTYPE_INNER_PROT_##I, \ - I40E_RX_PTYPE_PAYLOAD_LAYER_##PL } - -#define I40E_PTT_UNUSED_ENTRY(PTYPE) \ - { PTYPE, 0, 0, 0, 0, 0, 0, 0, 0, 0 } - -/* shorter macros makes the table fit but are terse */ -#define I40E_RX_PTYPE_NOF I40E_RX_PTYPE_NOT_FRAG -#define I40E_RX_PTYPE_FRG I40E_RX_PTYPE_FRAG -#define I40E_RX_PTYPE_INNER_PROT_TS I40E_RX_PTYPE_INNER_PROT_TIMESYNC - -/* Lookup table mapping the HW PTYPE to the bit field for decoding */ -struct i40e_rx_ptype_decoded i40e_ptype_lookup[] = { - /* L2 Packet types */ - I40E_PTT_UNUSED_ENTRY(0), - I40E_PTT(1, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(2, L2, NONE, NOF, NONE, NONE, NOF, TS, PAY2), - I40E_PTT(3, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT_UNUSED_ENTRY(4), - I40E_PTT_UNUSED_ENTRY(5), - I40E_PTT(6, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(7, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT_UNUSED_ENTRY(8), - I40E_PTT_UNUSED_ENTRY(9), - I40E_PTT(10, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY2), - I40E_PTT(11, L2, NONE, NOF, NONE, NONE, NOF, NONE, NONE), - I40E_PTT(12, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(13, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(14, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(15, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(16, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(17, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(18, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(19, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(20, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(21, L2, NONE, NOF, NONE, NONE, NOF, NONE, PAY3), - - /* Non Tunneled IPv4 */ - I40E_PTT(22, IP, IPV4, FRG, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(23, IP, IPV4, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(24, IP, IPV4, NOF, NONE, NONE, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(25), - 
I40E_PTT(26, IP, IPV4, NOF, NONE, NONE, NOF, TCP, PAY4), - I40E_PTT(27, IP, IPV4, NOF, NONE, NONE, NOF, SCTP, PAY4), - I40E_PTT(28, IP, IPV4, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv4 --> IPv4 */ - I40E_PTT(29, IP, IPV4, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - I40E_PTT(30, IP, IPV4, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - I40E_PTT(31, IP, IPV4, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(32), - I40E_PTT(33, IP, IPV4, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - I40E_PTT(34, IP, IPV4, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - I40E_PTT(35, IP, IPV4, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> IPv6 */ - I40E_PTT(36, IP, IPV4, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - I40E_PTT(37, IP, IPV4, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - I40E_PTT(38, IP, IPV4, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(39), - I40E_PTT(40, IP, IPV4, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - I40E_PTT(41, IP, IPV4, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - I40E_PTT(42, IP, IPV4, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT */ - I40E_PTT(43, IP, IPV4, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> IPv4 */ - I40E_PTT(44, IP, IPV4, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - I40E_PTT(45, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - I40E_PTT(46, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(47), - I40E_PTT(48, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - I40E_PTT(49, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - I40E_PTT(50, IP, IPV4, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> IPv6 */ - I40E_PTT(51, IP, IPV4, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - I40E_PTT(52, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - I40E_PTT(53, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(54), - I40E_PTT(55, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - I40E_PTT(56, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - I40E_PTT(57, IP, IPV4, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC */ - I40E_PTT(58, IP, IPV4, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv4 --> GRE/NAT --> MAC --> IPv4 */ - I40E_PTT(59, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - I40E_PTT(60, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - I40E_PTT(61, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(62), - I40E_PTT(63, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - I40E_PTT(64, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - I40E_PTT(65, IP, IPV4, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT -> MAC --> IPv6 */ - I40E_PTT(66, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - I40E_PTT(67, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - I40E_PTT(68, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(69), - I40E_PTT(70, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - I40E_PTT(71, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - I40E_PTT(72, IP, IPV4, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv4 --> GRE/NAT --> MAC/VLAN */ - I40E_PTT(73, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv4 ---> GRE/NAT -> MAC/VLAN --> IPv4 */ - I40E_PTT(74, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - I40E_PTT(75, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - I40E_PTT(76, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(77), - I40E_PTT(78, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, 
NOF, TCP, PAY4), - I40E_PTT(79, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - I40E_PTT(80, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv4 -> GRE/NAT -> MAC/VLAN --> IPv6 */ - I40E_PTT(81, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - I40E_PTT(82, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - I40E_PTT(83, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(84), - I40E_PTT(85, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - I40E_PTT(86, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - I40E_PTT(87, IP, IPV4, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* Non Tunneled IPv6 */ - I40E_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3), - I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY3), - I40E_PTT_UNUSED_ENTRY(91), - I40E_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4), - I40E_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4), - I40E_PTT(94, IP, IPV6, NOF, NONE, NONE, NOF, ICMP, PAY4), - - /* IPv6 --> IPv4 */ - I40E_PTT(95, IP, IPV6, NOF, IP_IP, IPV4, FRG, NONE, PAY3), - I40E_PTT(96, IP, IPV6, NOF, IP_IP, IPV4, NOF, NONE, PAY3), - I40E_PTT(97, IP, IPV6, NOF, IP_IP, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(98), - I40E_PTT(99, IP, IPV6, NOF, IP_IP, IPV4, NOF, TCP, PAY4), - I40E_PTT(100, IP, IPV6, NOF, IP_IP, IPV4, NOF, SCTP, PAY4), - I40E_PTT(101, IP, IPV6, NOF, IP_IP, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> IPv6 */ - I40E_PTT(102, IP, IPV6, NOF, IP_IP, IPV6, FRG, NONE, PAY3), - I40E_PTT(103, IP, IPV6, NOF, IP_IP, IPV6, NOF, NONE, PAY3), - I40E_PTT(104, IP, IPV6, NOF, IP_IP, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(105), - I40E_PTT(106, IP, IPV6, NOF, IP_IP, IPV6, NOF, TCP, PAY4), - I40E_PTT(107, IP, IPV6, NOF, IP_IP, IPV6, NOF, SCTP, PAY4), - I40E_PTT(108, IP, IPV6, NOF, IP_IP, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT */ - I40E_PTT(109, IP, IPV6, NOF, IP_GRENAT, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> IPv4 */ - I40E_PTT(110, IP, IPV6, NOF, IP_GRENAT, IPV4, FRG, NONE, PAY3), - I40E_PTT(111, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, NONE, PAY3), - I40E_PTT(112, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(113), - I40E_PTT(114, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, TCP, PAY4), - I40E_PTT(115, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, SCTP, PAY4), - I40E_PTT(116, IP, IPV6, NOF, IP_GRENAT, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> IPv6 */ - I40E_PTT(117, IP, IPV6, NOF, IP_GRENAT, IPV6, FRG, NONE, PAY3), - I40E_PTT(118, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, NONE, PAY3), - I40E_PTT(119, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(120), - I40E_PTT(121, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, TCP, PAY4), - I40E_PTT(122, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, SCTP, PAY4), - I40E_PTT(123, IP, IPV6, NOF, IP_GRENAT, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC */ - I40E_PTT(124, IP, IPV6, NOF, IP_GRENAT_MAC, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC -> IPv4 */ - I40E_PTT(125, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, FRG, NONE, PAY3), - I40E_PTT(126, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, NONE, PAY3), - I40E_PTT(127, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(128), - I40E_PTT(129, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, TCP, PAY4), - I40E_PTT(130, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, SCTP, PAY4), - I40E_PTT(131, IP, IPV6, NOF, IP_GRENAT_MAC, IPV4, NOF, ICMP, PAY4), - - /* 
IPv6 --> GRE/NAT -> MAC -> IPv6 */ - I40E_PTT(132, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, FRG, NONE, PAY3), - I40E_PTT(133, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, NONE, PAY3), - I40E_PTT(134, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(135), - I40E_PTT(136, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, TCP, PAY4), - I40E_PTT(137, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, SCTP, PAY4), - I40E_PTT(138, IP, IPV6, NOF, IP_GRENAT_MAC, IPV6, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN */ - I40E_PTT(139, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, NONE, NOF, NONE, PAY3), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv4 */ - I40E_PTT(140, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, FRG, NONE, PAY3), - I40E_PTT(141, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, NONE, PAY3), - I40E_PTT(142, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(143), - I40E_PTT(144, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, TCP, PAY4), - I40E_PTT(145, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, SCTP, PAY4), - I40E_PTT(146, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV4, NOF, ICMP, PAY4), - - /* IPv6 --> GRE/NAT -> MAC/VLAN --> IPv6 */ - I40E_PTT(147, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, FRG, NONE, PAY3), - I40E_PTT(148, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, NONE, PAY3), - I40E_PTT(149, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, UDP, PAY4), - I40E_PTT_UNUSED_ENTRY(150), - I40E_PTT(151, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, TCP, PAY4), - I40E_PTT(152, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, SCTP, PAY4), - I40E_PTT(153, IP, IPV6, NOF, IP_GRENAT_MAC_VLAN, IPV6, NOF, ICMP, PAY4), - - /* unused entries */ - I40E_PTT_UNUSED_ENTRY(154), - I40E_PTT_UNUSED_ENTRY(155), - I40E_PTT_UNUSED_ENTRY(156), - I40E_PTT_UNUSED_ENTRY(157), - I40E_PTT_UNUSED_ENTRY(158), - I40E_PTT_UNUSED_ENTRY(159), - - I40E_PTT_UNUSED_ENTRY(160), - I40E_PTT_UNUSED_ENTRY(161), - I40E_PTT_UNUSED_ENTRY(162), - I40E_PTT_UNUSED_ENTRY(163), - I40E_PTT_UNUSED_ENTRY(164), - I40E_PTT_UNUSED_ENTRY(165), - I40E_PTT_UNUSED_ENTRY(166), - I40E_PTT_UNUSED_ENTRY(167), - I40E_PTT_UNUSED_ENTRY(168), - I40E_PTT_UNUSED_ENTRY(169), - - I40E_PTT_UNUSED_ENTRY(170), - I40E_PTT_UNUSED_ENTRY(171), - I40E_PTT_UNUSED_ENTRY(172), - I40E_PTT_UNUSED_ENTRY(173), - I40E_PTT_UNUSED_ENTRY(174), - I40E_PTT_UNUSED_ENTRY(175), - I40E_PTT_UNUSED_ENTRY(176), - I40E_PTT_UNUSED_ENTRY(177), - I40E_PTT_UNUSED_ENTRY(178), - I40E_PTT_UNUSED_ENTRY(179), - - I40E_PTT_UNUSED_ENTRY(180), - I40E_PTT_UNUSED_ENTRY(181), - I40E_PTT_UNUSED_ENTRY(182), - I40E_PTT_UNUSED_ENTRY(183), - I40E_PTT_UNUSED_ENTRY(184), - I40E_PTT_UNUSED_ENTRY(185), - I40E_PTT_UNUSED_ENTRY(186), - I40E_PTT_UNUSED_ENTRY(187), - I40E_PTT_UNUSED_ENTRY(188), - I40E_PTT_UNUSED_ENTRY(189), - - I40E_PTT_UNUSED_ENTRY(190), - I40E_PTT_UNUSED_ENTRY(191), - I40E_PTT_UNUSED_ENTRY(192), - I40E_PTT_UNUSED_ENTRY(193), - I40E_PTT_UNUSED_ENTRY(194), - I40E_PTT_UNUSED_ENTRY(195), - I40E_PTT_UNUSED_ENTRY(196), - I40E_PTT_UNUSED_ENTRY(197), - I40E_PTT_UNUSED_ENTRY(198), - I40E_PTT_UNUSED_ENTRY(199), - - I40E_PTT_UNUSED_ENTRY(200), - I40E_PTT_UNUSED_ENTRY(201), - I40E_PTT_UNUSED_ENTRY(202), - I40E_PTT_UNUSED_ENTRY(203), - I40E_PTT_UNUSED_ENTRY(204), - I40E_PTT_UNUSED_ENTRY(205), - I40E_PTT_UNUSED_ENTRY(206), - I40E_PTT_UNUSED_ENTRY(207), - I40E_PTT_UNUSED_ENTRY(208), - I40E_PTT_UNUSED_ENTRY(209), - - I40E_PTT_UNUSED_ENTRY(210), - I40E_PTT_UNUSED_ENTRY(211), - I40E_PTT_UNUSED_ENTRY(212), - I40E_PTT_UNUSED_ENTRY(213), - I40E_PTT_UNUSED_ENTRY(214), - I40E_PTT_UNUSED_ENTRY(215), - 
I40E_PTT_UNUSED_ENTRY(216), - I40E_PTT_UNUSED_ENTRY(217), - I40E_PTT_UNUSED_ENTRY(218), - I40E_PTT_UNUSED_ENTRY(219), - - I40E_PTT_UNUSED_ENTRY(220), - I40E_PTT_UNUSED_ENTRY(221), - I40E_PTT_UNUSED_ENTRY(222), - I40E_PTT_UNUSED_ENTRY(223), - I40E_PTT_UNUSED_ENTRY(224), - I40E_PTT_UNUSED_ENTRY(225), - I40E_PTT_UNUSED_ENTRY(226), - I40E_PTT_UNUSED_ENTRY(227), - I40E_PTT_UNUSED_ENTRY(228), - I40E_PTT_UNUSED_ENTRY(229), - - I40E_PTT_UNUSED_ENTRY(230), - I40E_PTT_UNUSED_ENTRY(231), - I40E_PTT_UNUSED_ENTRY(232), - I40E_PTT_UNUSED_ENTRY(233), - I40E_PTT_UNUSED_ENTRY(234), - I40E_PTT_UNUSED_ENTRY(235), - I40E_PTT_UNUSED_ENTRY(236), - I40E_PTT_UNUSED_ENTRY(237), - I40E_PTT_UNUSED_ENTRY(238), - I40E_PTT_UNUSED_ENTRY(239), - - I40E_PTT_UNUSED_ENTRY(240), - I40E_PTT_UNUSED_ENTRY(241), - I40E_PTT_UNUSED_ENTRY(242), - I40E_PTT_UNUSED_ENTRY(243), - I40E_PTT_UNUSED_ENTRY(244), - I40E_PTT_UNUSED_ENTRY(245), - I40E_PTT_UNUSED_ENTRY(246), - I40E_PTT_UNUSED_ENTRY(247), - I40E_PTT_UNUSED_ENTRY(248), - I40E_PTT_UNUSED_ENTRY(249), - - I40E_PTT_UNUSED_ENTRY(250), - I40E_PTT_UNUSED_ENTRY(251), - I40E_PTT_UNUSED_ENTRY(252), - I40E_PTT_UNUSED_ENTRY(253), - I40E_PTT_UNUSED_ENTRY(254), - I40E_PTT_UNUSED_ENTRY(255) -}; - -#ifndef VF_DRIVER - -/** - * i40e_init_shared_code - Initialize the shared code - * @hw: pointer to hardware structure - * - * This assigns the MAC type and PHY code and inits the NVM. - * Does not touch the hardware. This function must be called prior to any - * other function in the shared code. The i40e_hw structure should be - * memset to 0 prior to calling this function. The following fields in - * hw structure should be filled in prior to calling this function: - * hw_addr, back, device_id, vendor_id, subsystem_device_id, - * subsystem_vendor_id, and revision_id - **/ -enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw) -{ - enum i40e_status_code status = I40E_SUCCESS; - u32 reg; - - DEBUGFUNC("i40e_init_shared_code"); - - i40e_set_mac_type(hw); - - switch (hw->mac.type) { - case I40E_MAC_XL710: - break; - default: - return I40E_ERR_DEVICE_NOT_SUPPORTED; - } - - hw->phy.get_link_info = true; - - /* Determine port number */ - reg = rd32(hw, I40E_PFGEN_PORTNUM); - reg = ((reg & I40E_PFGEN_PORTNUM_PORT_NUM_MASK) >> - I40E_PFGEN_PORTNUM_PORT_NUM_SHIFT); - hw->port = (u8)reg; - - /* Determine the PF number based on the PCI fn */ - reg = rd32(hw, I40E_GLPCI_CAPSUP); - if (reg & I40E_GLPCI_CAPSUP_ARI_EN_MASK) - hw->pf_id = (u8)((hw->bus.device << 3) | hw->bus.func); - else - hw->pf_id = (u8)hw->bus.func; - - status = i40e_init_nvm(hw); - return status; -} - -/** - * i40e_aq_mac_address_read - Retrieve the MAC addresses - * @hw: pointer to the hw struct - * @flags: a return indicator of what addresses were added to the addr store - * @addrs: the requestor's mac addr store - * @cmd_details: pointer to command details structure or NULL - **/ -STATIC enum i40e_status_code i40e_aq_mac_address_read(struct i40e_hw *hw, - u16 *flags, - struct i40e_aqc_mac_address_read_data *addrs, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_mac_address_read *cmd_data = - (struct i40e_aqc_mac_address_read *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_mac_address_read); - desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF); - - status = i40e_asq_send_command(hw, &desc, addrs, - sizeof(*addrs), cmd_details); - *flags = LE16_TO_CPU(cmd_data->command_flags); - - return status; -} - -/** - * i40e_aq_mac_address_write 
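For reference, receive-path code walks this table exactly as the workflow comment above describes. A minimal sketch (the helper name is hypothetical, and it assumes the i40e_rx_ptype_decoded layout from i40e_type.h):

        /* Hypothetical helper: true if the HW ptype decodes to a tunneled IP
         * packet, following the known/outer_ip workflow documented above. */
        static inline bool example_ptype_is_tunneled(u8 ptype)
        {
                struct i40e_rx_ptype_decoded decoded = i40e_ptype_lookup[ptype];

                if (!decoded.known)
                        return false;   /* unused table entry - packet type unknown */
                return decoded.outer_ip == I40E_RX_PTYPE_OUTER_IP &&
                       decoded.tunnel_type != I40E_RX_PTYPE_TUNNEL_NONE;
        }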
- Change the MAC addresses - * @hw: pointer to the hw struct - * @flags: indicates which MAC to be written - * @mac_addr: address to write - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_mac_address_write(struct i40e_hw *hw, - u16 flags, u8 *mac_addr, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_mac_address_write *cmd_data = - (struct i40e_aqc_mac_address_write *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_mac_address_write); - cmd_data->command_flags = CPU_TO_LE16(flags); - cmd_data->mac_sah = CPU_TO_LE16((u16)mac_addr[0] << 8 | mac_addr[1]); - cmd_data->mac_sal = CPU_TO_LE32(((u32)mac_addr[2] << 24) | - ((u32)mac_addr[3] << 16) | - ((u32)mac_addr[4] << 8) | - mac_addr[5]); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_get_mac_addr - get MAC address - * @hw: pointer to the HW structure - * @mac_addr: pointer to MAC address - * - * Reads the adapter's MAC address from register - **/ -enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr) -{ - struct i40e_aqc_mac_address_read_data addrs; - enum i40e_status_code status; - u16 flags = 0; - - status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL); - - if (flags & I40E_AQC_LAN_ADDR_VALID) - memcpy(mac_addr, &addrs.pf_lan_mac, sizeof(addrs.pf_lan_mac)); - - return status; -} - -/** - * i40e_get_port_mac_addr - get Port MAC address - * @hw: pointer to the HW structure - * @mac_addr: pointer to Port MAC address - * - * Reads the adapter's Port MAC address - **/ -enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr) -{ - struct i40e_aqc_mac_address_read_data addrs; - enum i40e_status_code status; - u16 flags = 0; - - status = i40e_aq_mac_address_read(hw, &flags, &addrs, NULL); - if (status) - return status; - - if (flags & I40E_AQC_PORT_ADDR_VALID) - memcpy(mac_addr, &addrs.port_mac, sizeof(addrs.port_mac)); - else - status = I40E_ERR_INVALID_MAC_ADDR; - - return status; -} - -/** - * i40e_pre_tx_queue_cfg - pre tx queue configure - * @hw: pointer to the HW structure - * @queue: target pf queue index - * @enable: state change request - * - * Handles hw requirement to indicate intention to enable - * or disable target queue. 
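As context for the MAC address helpers above, callers typically pair the read with the validation routine defined just below; a minimal sketch, assuming hw has already been through i40e_init_shared_code():

        u8 mac[6];

        if (i40e_get_mac_addr(hw, mac) == I40E_SUCCESS &&
            i40e_validate_mac_addr(mac) == I40E_SUCCESS) {
                /* mac[] now holds a usable PF LAN address */
        }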
- **/ -void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable) -{ - u32 abs_queue_idx = hw->func_caps.base_queue + queue; - u32 reg_block = 0; - u32 reg_val; - - if (abs_queue_idx >= 128) { - reg_block = abs_queue_idx / 128; - abs_queue_idx %= 128; - } - - reg_val = rd32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block)); - reg_val &= ~I40E_GLLAN_TXPRE_QDIS_QINDX_MASK; - reg_val |= (abs_queue_idx << I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT); - - if (enable) - reg_val |= I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_MASK; - else - reg_val |= I40E_GLLAN_TXPRE_QDIS_SET_QDIS_MASK; - - wr32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block), reg_val); -} - -/** - * i40e_validate_mac_addr - Validate unicast MAC address - * @mac_addr: pointer to MAC address - * - * Tests a MAC address to ensure it is a valid Individual Address - **/ -enum i40e_status_code i40e_validate_mac_addr(u8 *mac_addr) -{ - enum i40e_status_code status = I40E_SUCCESS; - - DEBUGFUNC("i40e_validate_mac_addr"); - - /* Broadcast addresses ARE multicast addresses - * Make sure it is not a multicast address - * Reject the zero address - */ - if (I40E_IS_MULTICAST(mac_addr) || - (mac_addr[0] == 0 && mac_addr[1] == 0 && mac_addr[2] == 0 && - mac_addr[3] == 0 && mac_addr[4] == 0 && mac_addr[5] == 0)) - status = I40E_ERR_INVALID_MAC_ADDR; - - return status; -} - -/** - * i40e_get_media_type - Gets media type - * @hw: pointer to the hardware structure - **/ -STATIC enum i40e_media_type i40e_get_media_type(struct i40e_hw *hw) -{ - enum i40e_media_type media; - - switch (hw->phy.link_info.phy_type) { - case I40E_PHY_TYPE_10GBASE_SR: - case I40E_PHY_TYPE_10GBASE_LR: - case I40E_PHY_TYPE_1000BASE_SX: - case I40E_PHY_TYPE_1000BASE_LX: - case I40E_PHY_TYPE_40GBASE_SR4: - case I40E_PHY_TYPE_40GBASE_LR4: - media = I40E_MEDIA_TYPE_FIBER; - break; - case I40E_PHY_TYPE_100BASE_TX: - case I40E_PHY_TYPE_1000BASE_T: - case I40E_PHY_TYPE_10GBASE_T: - media = I40E_MEDIA_TYPE_BASET; - break; - case I40E_PHY_TYPE_10GBASE_CR1_CU: - case I40E_PHY_TYPE_40GBASE_CR4_CU: - case I40E_PHY_TYPE_10GBASE_CR1: - case I40E_PHY_TYPE_40GBASE_CR4: - case I40E_PHY_TYPE_10GBASE_SFPP_CU: - media = I40E_MEDIA_TYPE_DA; - break; - case I40E_PHY_TYPE_1000BASE_KX: - case I40E_PHY_TYPE_10GBASE_KX4: - case I40E_PHY_TYPE_10GBASE_KR: - case I40E_PHY_TYPE_40GBASE_KR4: - media = I40E_MEDIA_TYPE_BACKPLANE; - break; - case I40E_PHY_TYPE_SGMII: - case I40E_PHY_TYPE_XAUI: - case I40E_PHY_TYPE_XFI: - case I40E_PHY_TYPE_XLAUI: - case I40E_PHY_TYPE_XLPPI: - default: - media = I40E_MEDIA_TYPE_UNKNOWN; - break; - } - - return media; -} - -#define I40E_PF_RESET_WAIT_COUNT 100 -/** - * i40e_pf_reset - Reset the PF - * @hw: pointer to the hardware structure - * - * Assuming someone else has triggered a global reset, - * assure the global reset is complete and then reset the PF - **/ -enum i40e_status_code i40e_pf_reset(struct i40e_hw *hw) -{ - u32 cnt = 0; - u32 cnt1 = 0; - u32 reg = 0; - u32 grst_del; - - /* Poll for Global Reset steady state in case of recent GRST. - * The grst delay value is in 100ms units, and we'll wait a - * couple counts longer to be sure we don't just miss the end. 
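The Tx pre-disable helper above is meant to run shortly before the queue-enable bit is cleared; a hedged sketch of that ordering (q is a PF-relative queue index, and the 400us settle time mirrors i40e_clear_hw() later in this file):

        i40e_pre_tx_queue_cfg(hw, q, false);    /* announce intent to disable */
        i40e_usec_delay(400);                   /* let HW react to the request */
        wr32(hw, I40E_QTX_ENA(q), 0);           /* actually disable the queue */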
- */ - grst_del = (rd32(hw, I40E_GLGEN_RSTCTL) & I40E_GLGEN_RSTCTL_GRSTDEL_MASK) - >> I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT; - for (cnt = 0; cnt < grst_del + 2; cnt++) { - reg = rd32(hw, I40E_GLGEN_RSTAT); - if (!(reg & I40E_GLGEN_RSTAT_DEVSTATE_MASK)) - break; - i40e_msec_delay(100); - } - if (reg & I40E_GLGEN_RSTAT_DEVSTATE_MASK) { - DEBUGOUT("Global reset polling failed to complete.\n"); - return I40E_ERR_RESET_FAILED; - } - - /* Now wait for the FW to be ready */ - for (cnt1 = 0; cnt1 < I40E_PF_RESET_WAIT_COUNT; cnt1++) { - reg = rd32(hw, I40E_GLNVM_ULD); - reg &= (I40E_GLNVM_ULD_CONF_CORE_DONE_MASK | - I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK); - if (reg == (I40E_GLNVM_ULD_CONF_CORE_DONE_MASK | - I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK)) { - DEBUGOUT1("Core and Global modules ready %d\n", cnt1); - break; - } - i40e_msec_delay(10); - } - if (!(reg & (I40E_GLNVM_ULD_CONF_CORE_DONE_MASK | - I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK))) { - DEBUGOUT("wait for FW reset to complete timed out\n"); - DEBUGOUT1("I40E_GLNVM_ULD = 0x%x\n", reg); - return I40E_ERR_RESET_FAILED; - } - - /* If there was a Global Reset in progress when we got here, - * we don't need to do the PF Reset - */ - if (!cnt) { - reg = rd32(hw, I40E_PFGEN_CTRL); - wr32(hw, I40E_PFGEN_CTRL, - (reg | I40E_PFGEN_CTRL_PFSWR_MASK)); - for (cnt = 0; cnt < I40E_PF_RESET_WAIT_COUNT; cnt++) { - reg = rd32(hw, I40E_PFGEN_CTRL); - if (!(reg & I40E_PFGEN_CTRL_PFSWR_MASK)) - break; - i40e_msec_delay(1); - } - if (reg & I40E_PFGEN_CTRL_PFSWR_MASK) { - DEBUGOUT("PF reset polling failed to complete.\n"); - return I40E_ERR_RESET_FAILED; - } - } - - i40e_clear_pxe_mode(hw); - - - return I40E_SUCCESS; -} - -/** - * i40e_clear_hw - clear out any leftover hw state - * @hw: pointer to the hw struct - * - * Clear queues and interrupts, typically called at init time, - * but after the capabilities have been found so we know how many - * queues and msix vectors have been allocated.
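Taken together, a plausible (simplified) bring-up order for these helpers is sketched below; the real PMD also initializes the admin queue and discovers capabilities before i40e_clear_hw(), as that function's comment above requires:

        if (i40e_init_shared_code(hw) != I40E_SUCCESS)
                return I40E_ERR_DEVICE_NOT_SUPPORTED;
        /* ... admin queue init and capability discovery go here ... */
        i40e_clear_hw(hw);                      /* quiesce leftover queues/interrupts */
        if (i40e_pf_reset(hw) != I40E_SUCCESS)
                return I40E_ERR_RESET_FAILED;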
- **/ -void i40e_clear_hw(struct i40e_hw *hw) -{ - u32 num_queues, base_queue; - u32 num_pf_int; - u32 num_vf_int; - u32 num_vfs; - u32 i, j; - u32 val; - u32 eol = 0x7ff; - - /* get number of interrupts, queues, and vfs */ - val = rd32(hw, I40E_GLPCI_CNF2); - num_pf_int = (val & I40E_GLPCI_CNF2_MSI_X_PF_N_MASK) >> - I40E_GLPCI_CNF2_MSI_X_PF_N_SHIFT; - num_vf_int = (val & I40E_GLPCI_CNF2_MSI_X_VF_N_MASK) >> - I40E_GLPCI_CNF2_MSI_X_VF_N_SHIFT; - - val = rd32(hw, I40E_PFLAN_QALLOC); - base_queue = (val & I40E_PFLAN_QALLOC_FIRSTQ_MASK) >> - I40E_PFLAN_QALLOC_FIRSTQ_SHIFT; - j = (val & I40E_PFLAN_QALLOC_LASTQ_MASK) >> - I40E_PFLAN_QALLOC_LASTQ_SHIFT; - if (val & I40E_PFLAN_QALLOC_VALID_MASK) - num_queues = (j - base_queue) + 1; - else - num_queues = 0; - - val = rd32(hw, I40E_PF_VT_PFALLOC); - i = (val & I40E_PF_VT_PFALLOC_FIRSTVF_MASK) >> - I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT; - j = (val & I40E_PF_VT_PFALLOC_LASTVF_MASK) >> - I40E_PF_VT_PFALLOC_LASTVF_SHIFT; - if (val & I40E_PF_VT_PFALLOC_VALID_MASK) - num_vfs = (j - i) + 1; - else - num_vfs = 0; - - /* stop all the interrupts */ - wr32(hw, I40E_PFINT_ICR0_ENA, 0); - val = 0x3 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT; - for (i = 0; i < num_pf_int - 2; i++) - wr32(hw, I40E_PFINT_DYN_CTLN(i), val); - - /* Set the FIRSTQ_INDX field to 0x7FF in PFINT_LNKLSTx */ - val = eol << I40E_PFINT_LNKLST0_FIRSTQ_INDX_SHIFT; - wr32(hw, I40E_PFINT_LNKLST0, val); - for (i = 0; i < num_pf_int - 2; i++) - wr32(hw, I40E_PFINT_LNKLSTN(i), val); - val = eol << I40E_VPINT_LNKLST0_FIRSTQ_INDX_SHIFT; - for (i = 0; i < num_vfs; i++) - wr32(hw, I40E_VPINT_LNKLST0(i), val); - for (i = 0; i < num_vf_int - 2; i++) - wr32(hw, I40E_VPINT_LNKLSTN(i), val); - - /* warn the HW of the coming Tx disables */ - for (i = 0; i < num_queues; i++) { - u32 abs_queue_idx = base_queue + i; - u32 reg_block = 0; - - if (abs_queue_idx >= 128) { - reg_block = abs_queue_idx / 128; - abs_queue_idx %= 128; - } - - val = rd32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block)); - val &= ~I40E_GLLAN_TXPRE_QDIS_QINDX_MASK; - val |= (abs_queue_idx << I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT); - val |= I40E_GLLAN_TXPRE_QDIS_SET_QDIS_MASK; - - wr32(hw, I40E_GLLAN_TXPRE_QDIS(reg_block), val); - } - i40e_usec_delay(400); - - /* stop all the queues */ - for (i = 0; i < num_queues; i++) { - wr32(hw, I40E_QINT_TQCTL(i), 0); - wr32(hw, I40E_QTX_ENA(i), 0); - wr32(hw, I40E_QINT_RQCTL(i), 0); - wr32(hw, I40E_QRX_ENA(i), 0); - } - - /* short wait for all queue disables to settle */ - i40e_usec_delay(50); -} - -/** - * i40e_clear_pxe_mode - clear pxe operations mode - * @hw: pointer to the hw struct - * - * Make sure all PXE mode settings are cleared, including things - * like descriptor fetch/write-back mode. 
- **/ -void i40e_clear_pxe_mode(struct i40e_hw *hw) -{ - if (i40e_check_asq_alive(hw)) - i40e_aq_clear_pxe_mode(hw, NULL); -} - -/** - * i40e_led_is_mine - helper to find matching led - * @hw: pointer to the hw struct - * @idx: index into GPIO registers - * - * returns: 0 if no match, otherwise the value of the GPIO_CTL register - */ -static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx) -{ - u32 gpio_val = 0; - u32 port; - - if (!hw->func_caps.led[idx]) - return 0; - - gpio_val = rd32(hw, I40E_GLGEN_GPIO_CTL(idx)); - port = (gpio_val & I40E_GLGEN_GPIO_CTL_PRT_NUM_MASK) >> - I40E_GLGEN_GPIO_CTL_PRT_NUM_SHIFT; - - /* if PRT_NUM_NA is 1 then this LED is not port specific, OR - * if it is not our port then ignore - */ - if ((gpio_val & I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_MASK) || - (port != hw->port)) - return 0; - - return gpio_val; -} - -#define I40E_LED0 22 -#define I40E_LINK_ACTIVITY 0xC - -/** - * i40e_led_get - return current on/off mode - * @hw: pointer to the hw struct - * - * The value returned is the 'mode' field as defined in the - * GPIO register definitions: 0x0 = off, 0xf = on, and other - * values are variations of possible behaviors relating to - * blink, link, and wire. - **/ -u32 i40e_led_get(struct i40e_hw *hw) -{ - u32 mode = 0; - int i; - - /* as per the documentation GPIO 22-29 are the LED - * GPIO pins named LED0..LED7 - */ - for (i = I40E_LED0; i <= I40E_GLGEN_GPIO_CTL_MAX_INDEX; i++) { - u32 gpio_val = i40e_led_is_mine(hw, i); - - if (!gpio_val) - continue; - - mode = (gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK) >> - I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT; - break; - } - - return mode; -} - -/** - * i40e_led_set - set new on/off mode - * @hw: pointer to the hw struct - * @mode: 0=off, 0xf=on (else see manual for mode details) - * @blink: true if the LED should blink when on, false if steady - * - * if this function is used to turn on the blink it should - * be used to disable the blink when restoring the original state. - **/ -void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink) -{ - int i; - - if (mode & 0xfffffff0) - DEBUGOUT1("invalid mode passed in %X\n", mode); - - /* as per the documentation GPIO 22-29 are the LED - * GPIO pins named LED0..LED7 - */ - for (i = I40E_LED0; i <= I40E_GLGEN_GPIO_CTL_MAX_INDEX; i++) { - u32 gpio_val = i40e_led_is_mine(hw, i); - - if (!gpio_val) - continue; - - gpio_val &= ~I40E_GLGEN_GPIO_CTL_LED_MODE_MASK; - /* this & is a bit of paranoia, but serves as a range check */ - gpio_val |= ((mode << I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT) & - I40E_GLGEN_GPIO_CTL_LED_MODE_MASK); - - if (mode == I40E_LINK_ACTIVITY) - blink = false; - - gpio_val |= (blink ? 1 : 0) << - I40E_GLGEN_GPIO_CTL_LED_BLINK_SHIFT; - - wr32(hw, I40E_GLGEN_GPIO_CTL(i), gpio_val); - break; - } -} - -/* Admin command wrappers */ - -/** - * i40e_aq_get_phy_capabilities - * @hw: pointer to the hw struct - * @abilities: structure for PHY capabilities to be filled - * @qualified_modules: report Qualified Modules - * @report_init: report init capabilities (active are default) - * @cmd_details: pointer to command details structure or NULL - * - * Returns the various PHY abilities supported on the Port. 
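A small usage sketch for the LED helpers above, e.g. a "port identify" blink that restores the original mode afterwards:

        u32 orig_mode = i40e_led_get(hw);

        i40e_led_set(hw, 0xf, true);            /* LED on, blinking */
        /* ... identification window ... */
        i40e_led_set(hw, orig_mode, false);     /* restore steady original mode */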
- **/ -enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw, - bool qualified_modules, bool report_init, - struct i40e_aq_get_phy_abilities_resp *abilities, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - enum i40e_status_code status; - u16 abilities_size = sizeof(struct i40e_aq_get_phy_abilities_resp); - - if (!abilities) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_phy_abilities); - - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - if (abilities_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - if (qualified_modules) - desc.params.external.param0 |= - CPU_TO_LE32(I40E_AQ_PHY_REPORT_QUALIFIED_MODULES); - - if (report_init) - desc.params.external.param0 |= - CPU_TO_LE32(I40E_AQ_PHY_REPORT_INITIAL_VALUES); - - status = i40e_asq_send_command(hw, &desc, abilities, abilities_size, - cmd_details); - - if (hw->aq.asq_last_status == I40E_AQ_RC_EIO) - status = I40E_ERR_UNKNOWN_PHY; - - return status; -} - -/** - * i40e_aq_set_phy_config - * @hw: pointer to the hw struct - * @config: structure with PHY configuration to be set - * @cmd_details: pointer to command details structure or NULL - * - * Set the various PHY configuration parameters - * supported on the Port. One or more of the Set PHY config parameters may be - * ignored in an MFP mode as the PF may not have the privilege to set some - * of the PHY Config parameters. This status will be indicated by the - * command response. - **/ -enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw, - struct i40e_aq_set_phy_config *config, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aq_set_phy_config *cmd = - (struct i40e_aq_set_phy_config *)&desc.params.raw; - enum i40e_status_code status; - - if (!config) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_phy_config); - - *cmd = *config; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_set_fc - * @hw: pointer to the hw struct - * @aq_failures: buffer to return the AQ failure flags (I40E_SET_FC_AQ_FAIL_*) - * @atomic_restart: whether to automatically restart the link after the change - * - * Set the requested flow control mode using set_phy_config.
- **/ -enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, - bool atomic_restart) -{ - enum i40e_fc_mode fc_mode = hw->fc.requested_mode; - struct i40e_aq_get_phy_abilities_resp abilities; - struct i40e_aq_set_phy_config config; - enum i40e_status_code status; - u8 pause_mask = 0x0; - - *aq_failures = 0x0; - - switch (fc_mode) { - case I40E_FC_FULL: - pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_TX; - pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_RX; - break; - case I40E_FC_RX_PAUSE: - pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_RX; - break; - case I40E_FC_TX_PAUSE: - pause_mask |= I40E_AQ_PHY_FLAG_PAUSE_TX; - break; - default: - break; - } - - /* Get the current phy config */ - status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities, - NULL); - if (status) { - *aq_failures |= I40E_SET_FC_AQ_FAIL_GET; - return status; - } - - memset(&config, 0, sizeof(struct i40e_aq_set_phy_config)); - /* clear the old pause settings */ - config.abilities = abilities.abilities & ~(I40E_AQ_PHY_FLAG_PAUSE_TX) & - ~(I40E_AQ_PHY_FLAG_PAUSE_RX); - /* set the new abilities */ - config.abilities |= pause_mask; - /* If the abilities have changed, then set the new config */ - if (config.abilities != abilities.abilities) { - /* Auto restart link so settings take effect */ - if (atomic_restart) - config.abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK; - /* Copy over all the old settings */ - config.phy_type = abilities.phy_type; - config.link_speed = abilities.link_speed; - config.eee_capability = abilities.eee_capability; - config.eeer = abilities.eeer_val; - config.low_power_ctrl = abilities.d3_lpan; - status = i40e_aq_set_phy_config(hw, &config, NULL); - - if (status) - *aq_failures |= I40E_SET_FC_AQ_FAIL_SET; - } - /* Update the link info */ - status = i40e_update_link_info(hw, true); - if (status) { - /* Wait a little bit (on 40G cards it sometimes takes a really - * long time for link to come back from the atomic reset) - * and try once more - */ - i40e_msec_delay(1000); - status = i40e_update_link_info(hw, true); - } - if (status) - *aq_failures |= I40E_SET_FC_AQ_FAIL_UPDATE; - - return status; -} - -/** - * i40e_aq_set_mac_config - * @hw: pointer to the hw struct - * @max_frame_size: Maximum Frame Size to be supported by the port - * @crc_en: Tell HW to append a CRC to outgoing frames - * @pacing: Pacing configurations - * @cmd_details: pointer to command details structure or NULL - * - * Configure MAC settings for frame size, jumbo frame support and the - * addition of a CRC by the hardware. 
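Callers drive i40e_set_fc() through hw->fc.requested_mode and inspect the returned aq_failures bits to see which AQ step failed; a sketch:

        u8 aq_failures = 0;
        enum i40e_status_code status;

        hw->fc.requested_mode = I40E_FC_FULL;
        status = i40e_set_fc(hw, &aq_failures, true /* atomic link restart */);
        if (aq_failures & I40E_SET_FC_AQ_FAIL_SET)
                DEBUGOUT("set_fc: Set PHY config AQ command failed\n");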
- **/ -enum i40e_status_code i40e_aq_set_mac_config(struct i40e_hw *hw, - u16 max_frame_size, - bool crc_en, u16 pacing, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aq_set_mac_config *cmd = - (struct i40e_aq_set_mac_config *)&desc.params.raw; - enum i40e_status_code status; - - if (max_frame_size == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_mac_config); - - cmd->max_frame_size = CPU_TO_LE16(max_frame_size); - cmd->params = ((u8)pacing & 0x0F) << 3; - if (crc_en) - cmd->params |= I40E_AQ_SET_MAC_CONFIG_CRC_EN; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_clear_pxe_mode - * @hw: pointer to the hw struct - * @cmd_details: pointer to command details structure or NULL - * - * Tell the firmware that the driver is taking over from PXE - **/ -enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details) -{ - enum i40e_status_code status; - struct i40e_aq_desc desc; - struct i40e_aqc_clear_pxe *cmd = - (struct i40e_aqc_clear_pxe *)&desc.params.raw; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_clear_pxe_mode); - - cmd->rx_cnt = 0x2; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - wr32(hw, I40E_GLLAN_RCTL_0, 0x1); - - return status; -} - -/** - * i40e_aq_set_link_restart_an - * @hw: pointer to the hw struct - * @enable_link: if true: enable link, if false: disable link - * @cmd_details: pointer to command details structure or NULL - * - * Sets up the link and restarts the Auto-Negotiation over the link. - **/ -enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw, - bool enable_link, struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_link_restart_an *cmd = - (struct i40e_aqc_set_link_restart_an *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_link_restart_an); - - cmd->command = I40E_AQ_PHY_RESTART_AN; - if (enable_link) - cmd->command |= I40E_AQ_PHY_LINK_ENABLE; - else - cmd->command &= ~I40E_AQ_PHY_LINK_ENABLE; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_get_link_info - * @hw: pointer to the hw struct - * @enable_lse: enable/disable LinkStatusEvent reporting - * @link: pointer to link status structure - optional - * @cmd_details: pointer to command details structure or NULL - * - * Returns the link status of the adapter. 
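For example, enabling jumbo frames with hardware CRC append and no pacing might look like the following sketch (9728 is used here as an assumed maximum frame size):

        status = i40e_aq_set_mac_config(hw, 9728, true /* crc_en */,
                                        0 /* no pacing */, NULL);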
- **/ -enum i40e_status_code i40e_aq_get_link_info(struct i40e_hw *hw, - bool enable_lse, struct i40e_link_status *link, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_get_link_status *resp = - (struct i40e_aqc_get_link_status *)&desc.params.raw; - struct i40e_link_status *hw_link_info = &hw->phy.link_info; - enum i40e_status_code status; - bool tx_pause, rx_pause; - u16 command_flags; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_link_status); - - if (enable_lse) - command_flags = I40E_AQ_LSE_ENABLE; - else - command_flags = I40E_AQ_LSE_DISABLE; - resp->command_flags = CPU_TO_LE16(command_flags); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (status != I40E_SUCCESS) - goto aq_get_link_info_exit; - - /* save off old link status information */ - i40e_memcpy(&hw->phy.link_info_old, hw_link_info, - sizeof(struct i40e_link_status), I40E_NONDMA_TO_NONDMA); - - /* update link status */ - hw_link_info->phy_type = (enum i40e_aq_phy_type)resp->phy_type; - hw->phy.media_type = i40e_get_media_type(hw); - hw_link_info->link_speed = (enum i40e_aq_link_speed)resp->link_speed; - hw_link_info->link_info = resp->link_info; - hw_link_info->an_info = resp->an_info; - hw_link_info->ext_info = resp->ext_info; - hw_link_info->loopback = resp->loopback; - hw_link_info->max_frame_size = LE16_TO_CPU(resp->max_frame_size); - hw_link_info->pacing = resp->config & I40E_AQ_CONFIG_PACING_MASK; - - /* update fc info */ - tx_pause = !!(resp->an_info & I40E_AQ_LINK_PAUSE_TX); - rx_pause = !!(resp->an_info & I40E_AQ_LINK_PAUSE_RX); - if (tx_pause & rx_pause) - hw->fc.current_mode = I40E_FC_FULL; - else if (tx_pause) - hw->fc.current_mode = I40E_FC_TX_PAUSE; - else if (rx_pause) - hw->fc.current_mode = I40E_FC_RX_PAUSE; - else - hw->fc.current_mode = I40E_FC_NONE; - - if (resp->config & I40E_AQ_CONFIG_CRC_ENA) - hw_link_info->crc_enable = true; - else - hw_link_info->crc_enable = false; - - if (resp->command_flags & CPU_TO_LE16(I40E_AQ_LSE_ENABLE)) - hw_link_info->lse_enable = true; - else - hw_link_info->lse_enable = false; - - /* save link status information */ - if (link) - i40e_memcpy(link, hw_link_info, sizeof(struct i40e_link_status), - I40E_NONDMA_TO_NONDMA); - - /* flag cleared so helper functions don't call AQ again */ - hw->phy.get_link_info = false; - -aq_get_link_info_exit: - return status; -} - -/** - * i40e_update_link_info - * @hw: pointer to the hw struct - * @enable_lse: enable/disable LinkStatusEvent reporting - * - * Returns the link status of the adapter - **/ -enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw, - bool enable_lse) -{ - struct i40e_aq_get_phy_abilities_resp abilities; - enum i40e_status_code status; - - status = i40e_aq_get_link_info(hw, enable_lse, NULL, NULL); - if (status) - return status; - - status = i40e_aq_get_phy_capabilities(hw, false, false, - &abilities, NULL); - if (status) - return status; - - if (abilities.abilities & I40E_AQ_PHY_AN_ENABLED) - hw->phy.link_info.an_enabled = true; - else - hw->phy.link_info.an_enabled = false; - - return status; -} - -/** - * i40e_aq_set_phy_int_mask - * @hw: pointer to the hw struct - * @mask: interrupt mask to be set - * @cmd_details: pointer to command details structure or NULL - * - * Set link interrupt mask. 
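A sketch of querying link state through this command, with LSE enabled so later changes arrive as AQ events:

        struct i40e_link_status link;

        if (i40e_aq_get_link_info(hw, true, &link, NULL) == I40E_SUCCESS &&
            (link.link_info & I40E_AQ_LINK_UP))
                DEBUGOUT1("link up, speed enum %d\n", link.link_speed);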
- **/ -enum i40e_status_code i40e_aq_set_phy_int_mask(struct i40e_hw *hw, - u16 mask, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_phy_int_mask *cmd = - (struct i40e_aqc_set_phy_int_mask *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_phy_int_mask); - - cmd->event_mask = CPU_TO_LE16(mask); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_get_local_advt_reg - * @hw: pointer to the hw struct - * @advt_reg: local AN advertisement register value - * @cmd_details: pointer to command details structure or NULL - * - * Get the Local AN advertisement register value. - **/ -enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw, - u64 *advt_reg, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_an_advt_reg *resp = - (struct i40e_aqc_an_advt_reg *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_local_advt_reg); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (status != I40E_SUCCESS) - goto aq_get_local_advt_reg_exit; - - *advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32; - *advt_reg |= LE32_TO_CPU(resp->local_an_reg0); - -aq_get_local_advt_reg_exit: - return status; -} - -/** - * i40e_aq_set_local_advt_reg - * @hw: pointer to the hw struct - * @advt_reg: local AN advertisement register value - * @cmd_details: pointer to command details structure or NULL - * - * Get the Local AN advertisement register value. - **/ -enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw, - u64 advt_reg, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_an_advt_reg *cmd = - (struct i40e_aqc_an_advt_reg *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_local_advt_reg); - - cmd->local_an_reg0 = CPU_TO_LE32(I40E_LO_DWORD(advt_reg)); - cmd->local_an_reg1 = CPU_TO_LE16(I40E_HI_DWORD(advt_reg)); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_get_partner_advt - * @hw: pointer to the hw struct - * @advt_reg: AN partner advertisement register value - * @cmd_details: pointer to command details structure or NULL - * - * Get the link partner AN advertisement register value. - **/ -enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw, - u64 *advt_reg, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_an_advt_reg *resp = - (struct i40e_aqc_an_advt_reg *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_partner_advt); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (status != I40E_SUCCESS) - goto aq_get_partner_advt_exit; - - *advt_reg = (u64)(LE16_TO_CPU(resp->local_an_reg1)) << 32; - *advt_reg |= LE32_TO_CPU(resp->local_an_reg0); - -aq_get_partner_advt_exit: - return status; -} - -/** - * i40e_aq_set_lb_modes - * @hw: pointer to the hw struct - * @lb_modes: loopback mode to be set - * @cmd_details: pointer to command details structure or NULL - * - * Sets loopback modes. 
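The advertisement-register getters above are typically read as a pair when debugging auto-negotiation; a sketch:

        u64 local_advt = 0, partner_advt = 0;

        if (i40e_aq_get_local_advt_reg(hw, &local_advt, NULL) == I40E_SUCCESS &&
            i40e_aq_get_partner_advt(hw, &partner_advt, NULL) == I40E_SUCCESS)
                DEBUGOUT("AN advertisement registers read OK\n");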
- **/ -enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw, - u16 lb_modes, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_lb_mode *cmd = - (struct i40e_aqc_set_lb_mode *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_lb_modes); - - cmd->lb_mode = CPU_TO_LE16(lb_modes); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_set_phy_debug - * @hw: pointer to the hw struct - * @cmd_flags: debug command flags - * @cmd_details: pointer to command details structure or NULL - * - * Reset the external PHY. - **/ -enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_phy_debug *cmd = - (struct i40e_aqc_set_phy_debug *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_phy_debug); - - cmd->command_flags = cmd_flags; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_add_vsi - * @hw: pointer to the hw struct - * @vsi_ctx: pointer to a vsi context struct - * @cmd_details: pointer to command details structure or NULL - * - * Add a VSI context to the hardware. -**/ -enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_get_update_vsi *cmd = - (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw; - struct i40e_aqc_add_get_update_vsi_completion *resp = - (struct i40e_aqc_add_get_update_vsi_completion *) - &desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_add_vsi); - - cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->uplink_seid); - cmd->connection_type = vsi_ctx->connection_type; - cmd->vf_id = vsi_ctx->vf_num; - cmd->vsi_flags = CPU_TO_LE16(vsi_ctx->flags); - - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - - status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info, - sizeof(vsi_ctx->info), cmd_details); - - if (status != I40E_SUCCESS) - goto aq_add_vsi_exit; - - vsi_ctx->seid = LE16_TO_CPU(resp->seid); - vsi_ctx->vsi_number = LE16_TO_CPU(resp->vsi_number); - vsi_ctx->vsis_allocated = LE16_TO_CPU(resp->vsi_used); - vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free); - -aq_add_vsi_exit: - return status; -} - -/** - * i40e_aq_set_default_vsi - * @hw: pointer to the hw struct - * @seid: vsi number - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw, - u16 seid, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_vsi_promiscuous_modes *cmd = - (struct i40e_aqc_set_vsi_promiscuous_modes *) - &desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_vsi_promiscuous_modes); - - cmd->promiscuous_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT); - cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_DEFAULT); - cmd->seid = CPU_TO_LE16(seid); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_set_vsi_unicast_promiscuous - * @hw: pointer to the hw struct - * @seid: vsi number - * @set: set unicast promiscuous enable/disable - * @cmd_details: pointer 
to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, - u16 seid, bool set, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_vsi_promiscuous_modes *cmd = - (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - enum i40e_status_code status; - u16 flags = 0; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_vsi_promiscuous_modes); - - if (set) - flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST; - - cmd->promiscuous_flags = CPU_TO_LE16(flags); - - cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_UNICAST); - - cmd->seid = CPU_TO_LE16(seid); - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_set_vsi_multicast_promiscuous - * @hw: pointer to the hw struct - * @seid: vsi number - * @set: set multicast promiscuous enable/disable - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, - u16 seid, bool set, struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_vsi_promiscuous_modes *cmd = - (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - enum i40e_status_code status; - u16 flags = 0; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_vsi_promiscuous_modes); - - if (set) - flags |= I40E_AQC_SET_VSI_PROMISC_MULTICAST; - - cmd->promiscuous_flags = CPU_TO_LE16(flags); - - cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_MULTICAST); - - cmd->seid = CPU_TO_LE16(seid); - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_set_vsi_broadcast - * @hw: pointer to the hw struct - * @seid: vsi number - * @set_filter: true to set filter, false to clear filter - * @cmd_details: pointer to command details structure or NULL - * - * Set or clear the broadcast promiscuous flag (filter) for a given VSI. 
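The unicast/multicast setters above and the broadcast variant implemented just after this comment are usually toggled together; e.g. to make one VSI fully promiscuous (seid assumed to come from a prior i40e_aq_add_vsi()):

        i40e_aq_set_vsi_unicast_promiscuous(hw, seid, true, NULL);
        i40e_aq_set_vsi_multicast_promiscuous(hw, seid, true, NULL);
        i40e_aq_set_vsi_broadcast(hw, seid, true, NULL);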
- **/ -enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, - u16 seid, bool set_filter, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_set_vsi_promiscuous_modes *cmd = - (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_vsi_promiscuous_modes); - - if (set_filter) - cmd->promiscuous_flags - |= CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST); - else - cmd->promiscuous_flags - &= CPU_TO_LE16(~I40E_AQC_SET_VSI_PROMISC_BROADCAST); - - cmd->valid_flags = CPU_TO_LE16(I40E_AQC_SET_VSI_PROMISC_BROADCAST); - cmd->seid = CPU_TO_LE16(seid); - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_get_vsi_params - get VSI configuration info - * @hw: pointer to the hw struct - * @vsi_ctx: pointer to a vsi context struct - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_get_update_vsi *cmd = - (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw; - struct i40e_aqc_add_get_update_vsi_completion *resp = - (struct i40e_aqc_add_get_update_vsi_completion *) - &desc.params.raw; - enum i40e_status_code status; - - UNREFERENCED_1PARAMETER(cmd_details); - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_vsi_parameters); - - cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->seid); - - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - - status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info, - sizeof(vsi_ctx->info), NULL); - - if (status != I40E_SUCCESS) - goto aq_get_vsi_params_exit; - - vsi_ctx->seid = LE16_TO_CPU(resp->seid); - vsi_ctx->vsi_number = LE16_TO_CPU(resp->vsi_number); - vsi_ctx->vsis_allocated = LE16_TO_CPU(resp->vsi_used); - vsi_ctx->vsis_unallocated = LE16_TO_CPU(resp->vsi_free); - -aq_get_vsi_params_exit: - return status; -} - -/** - * i40e_aq_update_vsi_params - * @hw: pointer to the hw struct - * @vsi_ctx: pointer to a vsi context struct - * @cmd_details: pointer to command details structure or NULL - * - * Update a VSI context.
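Get and update are designed as a read-modify-write pair over the same context struct (the update implementation follows below); a sketch, with vsi_seid assumed from a prior add:

        struct i40e_vsi_context ctx;

        memset(&ctx, 0, sizeof(ctx));
        ctx.seid = vsi_seid;
        if (i40e_aq_get_vsi_params(hw, &ctx, NULL) == I40E_SUCCESS) {
                /* adjust ctx.info fields here, then write them back */
                i40e_aq_update_vsi_params(hw, &ctx, NULL);
        }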
- **/ -enum i40e_status_code i40e_aq_update_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_get_update_vsi *cmd = - (struct i40e_aqc_add_get_update_vsi *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_update_vsi_parameters); - cmd->uplink_seid = CPU_TO_LE16(vsi_ctx->seid); - - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - - status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info, - sizeof(vsi_ctx->info), cmd_details); - - return status; -} - -/** - * i40e_aq_get_switch_config - * @hw: pointer to the hardware structure - * @buf: pointer to the result buffer - * @buf_size: length of input buffer - * @start_seid: seid to start for the report, 0 == beginning - * @cmd_details: pointer to command details structure or NULL - * - * Fill the buf with switch configuration returned from AdminQ command - **/ -enum i40e_status_code i40e_aq_get_switch_config(struct i40e_hw *hw, - struct i40e_aqc_get_switch_config_resp *buf, - u16 buf_size, u16 *start_seid, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_switch_seid *scfg = - (struct i40e_aqc_switch_seid *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_switch_config); - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - if (buf_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - scfg->seid = CPU_TO_LE16(*start_seid); - - status = i40e_asq_send_command(hw, &desc, buf, buf_size, cmd_details); - *start_seid = LE16_TO_CPU(scfg->seid); - - return status; -} - -/** - * i40e_aq_get_firmware_version - * @hw: pointer to the hw struct - * @fw_major_version: firmware major version - * @fw_minor_version: firmware minor version - * @api_major_version: major queue version - * @api_minor_version: minor queue version - * @cmd_details: pointer to command details structure or NULL - * - * Get the firmware version from the admin queue commands - **/ -enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw, - u16 *fw_major_version, u16 *fw_minor_version, - u16 *api_major_version, u16 *api_minor_version, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_get_version *resp = - (struct i40e_aqc_get_version *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_get_version); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (status == I40E_SUCCESS) { - if (fw_major_version != NULL) - *fw_major_version = LE16_TO_CPU(resp->fw_major); - if (fw_minor_version != NULL) - *fw_minor_version = LE16_TO_CPU(resp->fw_minor); - if (api_major_version != NULL) - *api_major_version = LE16_TO_CPU(resp->api_major); - if (api_minor_version != NULL) - *api_minor_version = LE16_TO_CPU(resp->api_minor); - - /* A workaround to fix the API version in SW */ - if (api_major_version && api_minor_version && - fw_major_version && fw_minor_version && - ((*api_major_version == 1) && (*api_minor_version == 1)) && - (((*fw_major_version == 4) && (*fw_minor_version >= 2)) || - (*fw_major_version > 4))) - *api_minor_version = 2; - } - - return status; -} - -/** - * i40e_aq_send_driver_version - * @hw: pointer to the hw struct - * @dv: driver's major, minor version - * @cmd_details: pointer to command details structure or NULL - * - * Send 
the driver version to the firmware - **/ -enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw, - struct i40e_driver_version *dv, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_driver_version *cmd = - (struct i40e_aqc_driver_version *)&desc.params.raw; - enum i40e_status_code status; - u16 len; - - if (dv == NULL) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_driver_version); - - desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_SI); - cmd->driver_major_ver = dv->major_version; - cmd->driver_minor_ver = dv->minor_version; - cmd->driver_build_ver = dv->build_version; - cmd->driver_subbuild_ver = dv->subbuild_version; - - len = 0; - while (len < sizeof(dv->driver_string) && - (dv->driver_string[len] < 0x80) && - dv->driver_string[len]) - len++; - status = i40e_asq_send_command(hw, &desc, dv->driver_string, - len, cmd_details); - - return status; -} - -/** - * i40e_get_link_status - get status of the HW network link - * @hw: pointer to the hw struct - * - * Returns true if link is up, false if link is down. - * - * Side effect: LinkStatusEvent reporting becomes enabled - **/ -bool i40e_get_link_status(struct i40e_hw *hw) -{ - enum i40e_status_code status = I40E_SUCCESS; - bool link_status = false; - - if (hw->phy.get_link_info) { - status = i40e_aq_get_link_info(hw, true, NULL, NULL); - - if (status != I40E_SUCCESS) - goto i40e_get_link_status_exit; - } - - link_status = hw->phy.link_info.link_info & I40E_AQ_LINK_UP; - -i40e_get_link_status_exit: - return link_status; -} - -/** - * i40e_get_link_speed - * @hw: pointer to the hw struct - * - * Returns the link speed of the adapter. - **/ -enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw) -{ - enum i40e_aq_link_speed speed = I40E_LINK_SPEED_UNKNOWN; - enum i40e_status_code status = I40E_SUCCESS; - - if (hw->phy.get_link_info) { - status = i40e_aq_get_link_info(hw, true, NULL, NULL); - - if (status != I40E_SUCCESS) - goto i40e_link_speed_exit; - } - - speed = hw->phy.link_info.link_speed; - -i40e_link_speed_exit: - return speed; -} - -/** - * i40e_aq_add_veb - Insert a VEB between the VSI and the MAC - * @hw: pointer to the hw struct - * @uplink_seid: the MAC or other gizmo SEID - * @downlink_seid: the VSI SEID - * @enabled_tc: bitmap of TCs to be enabled - * @default_port: true for default port VSI, false for control port - * @enable_l2_filtering: true to add L2 filter table rules to regular forwarding rules for cloud support - * @veb_seid: pointer to where to put the resulting VEB SEID - * @cmd_details: pointer to command details structure or NULL - * - * This asks the FW to add a VEB between the uplink and downlink - * elements. If the uplink SEID is 0, this will be a floating VEB. 
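The two polling helpers above, i40e_get_link_status() and i40e_get_link_speed(), simply consult the cached link info, refreshing it via the AQ when needed; a sketch:

        if (i40e_get_link_status(hw))
                DEBUGOUT1("link up, speed enum %d\n", i40e_get_link_speed(hw));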
- **/ -enum i40e_status_code i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, - u16 downlink_seid, u8 enabled_tc, - bool default_port, bool enable_l2_filtering, - u16 *veb_seid, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_veb *cmd = - (struct i40e_aqc_add_veb *)&desc.params.raw; - struct i40e_aqc_add_veb_completion *resp = - (struct i40e_aqc_add_veb_completion *)&desc.params.raw; - enum i40e_status_code status; - u16 veb_flags = 0; - - /* SEIDs need to either both be set or both be 0 for floating VEB */ - if (!!uplink_seid != !!downlink_seid) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_veb); - - cmd->uplink_seid = CPU_TO_LE16(uplink_seid); - cmd->downlink_seid = CPU_TO_LE16(downlink_seid); - cmd->enable_tcs = enabled_tc; - if (!uplink_seid) - veb_flags |= I40E_AQC_ADD_VEB_FLOATING; - if (default_port) - veb_flags |= I40E_AQC_ADD_VEB_PORT_TYPE_DEFAULT; - else - veb_flags |= I40E_AQC_ADD_VEB_PORT_TYPE_DATA; - - if (enable_l2_filtering) - veb_flags |= I40E_AQC_ADD_VEB_ENABLE_L2_FILTER; - - cmd->veb_flags = CPU_TO_LE16(veb_flags); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status && veb_seid) - *veb_seid = LE16_TO_CPU(resp->veb_seid); - - return status; -} - -/** - * i40e_aq_get_veb_parameters - Retrieve VEB parameters - * @hw: pointer to the hw struct - * @veb_seid: the SEID of the VEB to query - * @switch_id: the uplink switch id - * @floating: set to true if the VEB is floating - * @statistic_index: index of the stats counter block for this VEB - * @vebs_used: number of VEB's used by function - * @vebs_free: total VEB's not reserved by any function - * @cmd_details: pointer to command details structure or NULL - * - * This retrieves the parameters for a particular VEB, specified by - * uplink_seid, and returns them to the caller. 
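A sketch of inserting a non-floating VEB with TC0 enabled and then querying it back via i40e_aq_get_veb_parameters() (implemented just below); uplink_seid and vsi_seid are assumed to come from the switch configuration:

        u16 veb_seid = 0, switch_id, stat_idx, vebs_used, vebs_free;
        bool floating;

        status = i40e_aq_add_veb(hw, uplink_seid, vsi_seid, 0x1 /* TC0 only */,
                                 true /* default port */, false /* no L2 filter */,
                                 &veb_seid, NULL);
        if (status == I40E_SUCCESS)
                status = i40e_aq_get_veb_parameters(hw, veb_seid, &switch_id,
                                                    &floating, &stat_idx,
                                                    &vebs_used, &vebs_free, NULL);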
- **/ -enum i40e_status_code i40e_aq_get_veb_parameters(struct i40e_hw *hw, - u16 veb_seid, u16 *switch_id, - bool *floating, u16 *statistic_index, - u16 *vebs_used, u16 *vebs_free, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_get_veb_parameters_completion *cmd_resp = - (struct i40e_aqc_get_veb_parameters_completion *) - &desc.params.raw; - enum i40e_status_code status; - - if (veb_seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_veb_parameters); - cmd_resp->seid = CPU_TO_LE16(veb_seid); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - if (status) - goto get_veb_exit; - - if (switch_id) - *switch_id = LE16_TO_CPU(cmd_resp->switch_id); - if (statistic_index) - *statistic_index = LE16_TO_CPU(cmd_resp->statistic_index); - if (vebs_used) - *vebs_used = LE16_TO_CPU(cmd_resp->vebs_used); - if (vebs_free) - *vebs_free = LE16_TO_CPU(cmd_resp->vebs_free); - if (floating) { - u16 flags = LE16_TO_CPU(cmd_resp->veb_flags); - if (flags & I40E_AQC_ADD_VEB_FLOATING) - *floating = true; - else - *floating = false; - } - -get_veb_exit: - return status; -} - -/** - * i40e_aq_add_macvlan - * @hw: pointer to the hw struct - * @seid: VSI for the mac address - * @mv_list: list of macvlans to be added - * @count: length of the list - * @cmd_details: pointer to command details structure or NULL - * - * Add MAC/VLAN addresses to the HW filtering - **/ -enum i40e_status_code i40e_aq_add_macvlan(struct i40e_hw *hw, u16 seid, - struct i40e_aqc_add_macvlan_element_data *mv_list, - u16 count, struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_macvlan *cmd = - (struct i40e_aqc_macvlan *)&desc.params.raw; - enum i40e_status_code status; - u16 buf_size; - - if (count == 0 || !mv_list || !hw) - return I40E_ERR_PARAM; - - buf_size = count * sizeof(struct i40e_aqc_add_macvlan_element_data); - - /* prep the rest of the request */ - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_macvlan); - cmd->num_addresses = CPU_TO_LE16(count); - cmd->seid[0] = CPU_TO_LE16(I40E_AQC_MACVLAN_CMD_SEID_VALID | seid); - cmd->seid[1] = 0; - cmd->seid[2] = 0; - - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (buf_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, mv_list, buf_size, - cmd_details); - - return status; -} - -/** - * i40e_aq_remove_macvlan - * @hw: pointer to the hw struct - * @seid: VSI for the mac address - * @mv_list: list of macvlans to be removed - * @count: length of the list - * @cmd_details: pointer to command details structure or NULL - * - * Remove MAC/VLAN addresses from the HW filtering - **/ -enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 seid, - struct i40e_aqc_remove_macvlan_element_data *mv_list, - u16 count, struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_macvlan *cmd = - (struct i40e_aqc_macvlan *)&desc.params.raw; - enum i40e_status_code status; - u16 buf_size; - - if (count == 0 || !mv_list || !hw) - return I40E_ERR_PARAM; - - buf_size = count * sizeof(struct i40e_aqc_remove_macvlan_element_data); - - /* prep the rest of the request */ - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_macvlan); - cmd->num_addresses = CPU_TO_LE16(count); - cmd->seid[0] = CPU_TO_LE16(I40E_AQC_MACVLAN_CMD_SEID_VALID | seid); - cmd->seid[1] = 0; - cmd->seid[2] = 0; - - desc.flags |= 
CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (buf_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, mv_list, buf_size, - cmd_details); - - return status; -} - -/** - * i40e_aq_add_vlan - Add VLAN ids to the HW filtering - * @hw: pointer to the hw struct - * @seid: VSI for the vlan filters - * @v_list: list of vlan filters to be added - * @count: length of the list - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_add_vlan(struct i40e_hw *hw, u16 seid, - struct i40e_aqc_add_remove_vlan_element_data *v_list, - u8 count, struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_macvlan *cmd = - (struct i40e_aqc_macvlan *)&desc.params.raw; - enum i40e_status_code status; - u16 buf_size; - - if (count == 0 || !v_list || !hw) - return I40E_ERR_PARAM; - - buf_size = count * sizeof(struct i40e_aqc_add_remove_vlan_element_data); - - /* prep the rest of the request */ - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_vlan); - cmd->num_addresses = CPU_TO_LE16(count); - cmd->seid[0] = CPU_TO_LE16(seid | I40E_AQC_MACVLAN_CMD_SEID_VALID); - cmd->seid[1] = 0; - cmd->seid[2] = 0; - - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (buf_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, v_list, buf_size, - cmd_details); - - return status; -} - -/** - * i40e_aq_remove_vlan - Remove VLANs from the HW filtering - * @hw: pointer to the hw struct - * @seid: VSI for the vlan filters - * @v_list: list of macvlans to be removed - * @count: length of the list - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_remove_vlan(struct i40e_hw *hw, u16 seid, - struct i40e_aqc_add_remove_vlan_element_data *v_list, - u8 count, struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_macvlan *cmd = - (struct i40e_aqc_macvlan *)&desc.params.raw; - enum i40e_status_code status; - u16 buf_size; - - if (count == 0 || !v_list || !hw) - return I40E_ERR_PARAM; - - buf_size = count * sizeof(struct i40e_aqc_add_remove_vlan_element_data); - - /* prep the rest of the request */ - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_vlan); - cmd->num_addresses = CPU_TO_LE16(count); - cmd->seid[0] = CPU_TO_LE16(seid | I40E_AQC_MACVLAN_CMD_SEID_VALID); - cmd->seid[1] = 0; - cmd->seid[2] = 0; - - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (buf_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, v_list, buf_size, - cmd_details); - - return status; -} - -/** - * i40e_aq_send_msg_to_vf - * @hw: pointer to the hardware structure - * @vfid: vf id to send msg - * @v_opcode: opcodes for VF-PF communication - * @v_retval: return error code - * @msg: pointer to the msg buffer - * @msglen: msg length - * @cmd_details: pointer to command details - * - * send msg to vf - **/ -enum i40e_status_code i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, - u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_pf_vf_message *cmd = - (struct i40e_aqc_pf_vf_message *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_vf); - cmd->id = 
CPU_TO_LE32(vfid); - desc.cookie_high = CPU_TO_LE32(v_opcode); - desc.cookie_low = CPU_TO_LE32(v_retval); - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_SI); - if (msglen) { - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | - I40E_AQ_FLAG_RD)); - if (msglen > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - desc.datalen = CPU_TO_LE16(msglen); - } - status = i40e_asq_send_command(hw, &desc, msg, msglen, cmd_details); - - return status; -} - -/** - * i40e_aq_debug_write_register - * @hw: pointer to the hw struct - * @reg_addr: register address - * @reg_val: register value - * @cmd_details: pointer to command details structure or NULL - * - * Write to a register using the admin queue commands - **/ -enum i40e_status_code i40e_aq_debug_write_register(struct i40e_hw *hw, - u32 reg_addr, u64 reg_val, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_debug_reg_read_write *cmd = - (struct i40e_aqc_debug_reg_read_write *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_debug_write_reg); - - cmd->address = CPU_TO_LE32(reg_addr); - cmd->value_high = CPU_TO_LE32((u32)(reg_val >> 32)); - cmd->value_low = CPU_TO_LE32((u32)(reg_val & 0xFFFFFFFF)); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_get_hmc_resource_profile - * @hw: pointer to the hw struct - * @profile: type of profile the HMC is to be set as - * @pe_vf_enabled_count: the number of PE enabled VFs the system has - * @cmd_details: pointer to command details structure or NULL - * - * query the HMC profile of the device. - **/ -enum i40e_status_code i40e_aq_get_hmc_resource_profile(struct i40e_hw *hw, - enum i40e_aq_hmc_profile *profile, - u8 *pe_vf_enabled_count, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aq_get_set_hmc_resource_profile *resp = - (struct i40e_aq_get_set_hmc_resource_profile *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_query_hmc_resource_profile); - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - *profile = (enum i40e_aq_hmc_profile)(resp->pm_profile & - I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK); - *pe_vf_enabled_count = resp->pe_vf_enabled & - I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK; - - return status; -} - -/** - * i40e_aq_set_hmc_resource_profile - * @hw: pointer to the hw struct - * @profile: type of profile the HMC is to be set as - * @pe_vf_enabled_count: the number of PE enabled VFs the system has - * @cmd_details: pointer to command details structure or NULL - * - * set the HMC profile of the device. 
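Since the get/set pair above is symmetric, a hedged round-trip sketch (function name invented; error handling trimmed):

	/* Illustrative only: query the HMC profile, then write it back.
	 * A real caller would adjust the values between the two calls.
	 */
	static enum i40e_status_code example_refresh_hmc_profile(struct i40e_hw *hw)
	{
		enum i40e_aq_hmc_profile profile;
		u8 pe_vf_count;
		enum i40e_status_code status;

		status = i40e_aq_get_hmc_resource_profile(hw, &profile,
							  &pe_vf_count, NULL);
		if (status != I40E_SUCCESS)
			return status;

		return i40e_aq_set_hmc_resource_profile(hw, profile,
							pe_vf_count, NULL);
	}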
- **/ -enum i40e_status_code i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw, - enum i40e_aq_hmc_profile profile, - u8 pe_vf_enabled_count, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aq_get_set_hmc_resource_profile *cmd = - (struct i40e_aq_get_set_hmc_resource_profile *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_hmc_resource_profile); - - cmd->pm_profile = (u8)profile; - cmd->pe_vf_enabled = pe_vf_enabled_count; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_request_resource - * @hw: pointer to the hw struct - * @resource: resource id - * @access: access type - * @sdp_number: resource number - * @timeout: the maximum time in ms that the driver may hold the resource - * @cmd_details: pointer to command details structure or NULL - * - * requests common resource using the admin queue commands - **/ -enum i40e_status_code i40e_aq_request_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - enum i40e_aq_resource_access_type access, - u8 sdp_number, u64 *timeout, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_request_resource *cmd_resp = - (struct i40e_aqc_request_resource *)&desc.params.raw; - enum i40e_status_code status; - - DEBUGFUNC("i40e_aq_request_resource"); - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_request_resource); - - cmd_resp->resource_id = CPU_TO_LE16(resource); - cmd_resp->access_type = CPU_TO_LE16(access); - cmd_resp->resource_number = CPU_TO_LE32(sdp_number); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - /* The completion specifies the maximum time in ms that the driver - * may hold the resource in the Timeout field. - * If the resource is held by someone else, the command completes with - * busy return value and the timeout field indicates the maximum time - * the current owner of the resource has to free it. 
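The ownership/timeout protocol described above is clearest from the caller's side; a minimal sketch, assuming the I40E_NVM_RESOURCE_ID and I40E_RESOURCE_READ enumerators from i40e_type.h (retry on EBUSY omitted; function name invented):

	/* Illustrative only: bracket NVM access with the AQ resource lock. */
	static enum i40e_status_code example_with_nvm_lock(struct i40e_hw *hw)
	{
		u64 timeout = 0;
		enum i40e_status_code status;

		status = i40e_aq_request_resource(hw, I40E_NVM_RESOURCE_ID,
						  I40E_RESOURCE_READ, 0,
						  &timeout, NULL);
		if (status != I40E_SUCCESS)
			return status; /* on EBUSY, timeout bounds the wait */

		/* ... NVM reads would go here ... */

		return i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0,
						NULL);
	}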
- */ - if (status == I40E_SUCCESS || hw->aq.asq_last_status == I40E_AQ_RC_EBUSY) - *timeout = LE32_TO_CPU(cmd_resp->timeout); - - return status; -} - -/** - * i40e_aq_release_resource - * @hw: pointer to the hw struct - * @resource: resource id - * @sdp_number: resource number - * @cmd_details: pointer to command details structure or NULL - * - * release common resource using the admin queue commands - **/ -enum i40e_status_code i40e_aq_release_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - u8 sdp_number, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_request_resource *cmd = - (struct i40e_aqc_request_resource *)&desc.params.raw; - enum i40e_status_code status; - - DEBUGFUNC("i40e_aq_release_resource"); - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_release_resource); - - cmd->resource_id = CPU_TO_LE16(resource); - cmd->resource_number = CPU_TO_LE32(sdp_number); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_read_nvm - * @hw: pointer to the hw struct - * @module_pointer: module pointer location in words from the NVM beginning - * @offset: byte offset from the module beginning - * @length: length of the section to be read (in bytes from the offset) - * @data: command buffer (size [bytes] = length) - * @last_command: tells if this is the last command in a series - * @cmd_details: pointer to command details structure or NULL - * - * Read the NVM using the admin queue commands - **/ -enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_nvm_update *cmd = - (struct i40e_aqc_nvm_update *)&desc.params.raw; - enum i40e_status_code status; - - DEBUGFUNC("i40e_aq_read_nvm"); - - /* In offset the highest byte must be zeroed. */ - if (offset & 0xFF000000) { - status = I40E_ERR_PARAM; - goto i40e_aq_read_nvm_exit; - } - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_read); - - /* If this is the last command in a series, set the proper flag. */ - if (last_command) - cmd->command_flags |= I40E_AQ_NVM_LAST_CMD; - cmd->module_pointer = module_pointer; - cmd->offset = CPU_TO_LE32(offset); - cmd->length = CPU_TO_LE16(length); - - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - if (length > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, data, length, cmd_details); - -i40e_aq_read_nvm_exit: - return status; -} - -/** - * i40e_aq_erase_nvm - * @hw: pointer to the hw struct - * @module_pointer: module pointer location in words from the NVM beginning - * @offset: offset in the module (expressed in 4 KB from module's beginning) - * @length: length of the section to be erased (expressed in 4 KB) - * @last_command: tells if this is the last command in a series - * @cmd_details: pointer to command details structure or NULL - * - * Erase the NVM sector using the admin queue commands - **/ -enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, bool last_command, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_nvm_update *cmd = - (struct i40e_aqc_nvm_update *)&desc.params.raw; - enum i40e_status_code status; - - DEBUGFUNC("i40e_aq_erase_nvm"); - - /* In offset the highest byte must be zeroed. 
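Because a single admin-queue buffer is bounded, callers usually read the NVM in chunks and flag only the final transfer as last_command; a hedged sketch (the 4096-byte per-command cap and the function name are assumptions):

	/* Illustrative only: chunked NVM read, last_command on the tail. */
	static enum i40e_status_code example_read_nvm(struct i40e_hw *hw,
						      u8 module, u32 off,
						      u8 *dst, u32 total)
	{
		enum i40e_status_code status = I40E_SUCCESS;
		u16 chunk;

		while (total != 0) {
			chunk = (u16)((total > 4096) ? 4096 : total);
			status = i40e_aq_read_nvm(hw, module, off, chunk, dst,
						  total == chunk, NULL);
			if (status != I40E_SUCCESS)
				break;
			off += chunk;
			dst += chunk;
			total -= chunk;
		}
		return status;
	}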
*/ - if (offset & 0xFF000000) { - status = I40E_ERR_PARAM; - goto i40e_aq_erase_nvm_exit; - } - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_erase); - - /* If this is the last command in a series, set the proper flag. */ - if (last_command) - cmd->command_flags |= I40E_AQ_NVM_LAST_CMD; - cmd->module_pointer = module_pointer; - cmd->offset = CPU_TO_LE32(offset); - cmd->length = CPU_TO_LE16(length); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - -i40e_aq_erase_nvm_exit: - return status; -} - -#define I40E_DEV_FUNC_CAP_SWITCH_MODE 0x01 -#define I40E_DEV_FUNC_CAP_MGMT_MODE 0x02 -#define I40E_DEV_FUNC_CAP_NPAR 0x03 -#define I40E_DEV_FUNC_CAP_OS2BMC 0x04 -#define I40E_DEV_FUNC_CAP_VALID_FUNC 0x05 -#define I40E_DEV_FUNC_CAP_SRIOV_1_1 0x12 -#define I40E_DEV_FUNC_CAP_VF 0x13 -#define I40E_DEV_FUNC_CAP_VMDQ 0x14 -#define I40E_DEV_FUNC_CAP_802_1_QBG 0x15 -#define I40E_DEV_FUNC_CAP_802_1_QBH 0x16 -#define I40E_DEV_FUNC_CAP_VSI 0x17 -#define I40E_DEV_FUNC_CAP_DCB 0x18 -#define I40E_DEV_FUNC_CAP_FCOE 0x21 -#define I40E_DEV_FUNC_CAP_RSS 0x40 -#define I40E_DEV_FUNC_CAP_RX_QUEUES 0x41 -#define I40E_DEV_FUNC_CAP_TX_QUEUES 0x42 -#define I40E_DEV_FUNC_CAP_MSIX 0x43 -#define I40E_DEV_FUNC_CAP_MSIX_VF 0x44 -#define I40E_DEV_FUNC_CAP_FLOW_DIRECTOR 0x45 -#define I40E_DEV_FUNC_CAP_IEEE_1588 0x46 -#define I40E_DEV_FUNC_CAP_MFP_MODE_1 0xF1 -#define I40E_DEV_FUNC_CAP_CEM 0xF2 -#define I40E_DEV_FUNC_CAP_IWARP 0x51 -#define I40E_DEV_FUNC_CAP_LED 0x61 -#define I40E_DEV_FUNC_CAP_SDP 0x62 -#define I40E_DEV_FUNC_CAP_MDIO 0x63 - -/** - * i40e_parse_discover_capabilities - * @hw: pointer to the hw struct - * @buff: pointer to a buffer containing device/function capability records - * @cap_count: number of capability records in the list - * @list_type_opc: type of capabilities list to parse - * - * Parse the device/function capabilities list. 
- **/ -STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff, - u32 cap_count, - enum i40e_admin_queue_opc list_type_opc) -{ - struct i40e_aqc_list_capabilities_element_resp *cap; - u32 number, logical_id, phys_id; - struct i40e_hw_capabilities *p; - u32 i = 0; - u16 id; - - cap = (struct i40e_aqc_list_capabilities_element_resp *) buff; - - if (list_type_opc == i40e_aqc_opc_list_dev_capabilities) - p = (struct i40e_hw_capabilities *)&hw->dev_caps; - else if (list_type_opc == i40e_aqc_opc_list_func_capabilities) - p = (struct i40e_hw_capabilities *)&hw->func_caps; - else - return; - - for (i = 0; i < cap_count; i++, cap++) { - id = LE16_TO_CPU(cap->id); - number = LE32_TO_CPU(cap->number); - logical_id = LE32_TO_CPU(cap->logical_id); - phys_id = LE32_TO_CPU(cap->phys_id); - - switch (id) { - case I40E_DEV_FUNC_CAP_SWITCH_MODE: - p->switch_mode = number; - break; - case I40E_DEV_FUNC_CAP_MGMT_MODE: - p->management_mode = number; - break; - case I40E_DEV_FUNC_CAP_NPAR: - p->npar_enable = number; - break; - case I40E_DEV_FUNC_CAP_OS2BMC: - p->os2bmc = number; - break; - case I40E_DEV_FUNC_CAP_VALID_FUNC: - p->valid_functions = number; - break; - case I40E_DEV_FUNC_CAP_SRIOV_1_1: - if (number == 1) - p->sr_iov_1_1 = true; - break; - case I40E_DEV_FUNC_CAP_VF: - p->num_vfs = number; - p->vf_base_id = logical_id; - break; - case I40E_DEV_FUNC_CAP_VMDQ: - if (number == 1) - p->vmdq = true; - break; - case I40E_DEV_FUNC_CAP_802_1_QBG: - if (number == 1) - p->evb_802_1_qbg = true; - break; - case I40E_DEV_FUNC_CAP_802_1_QBH: - if (number == 1) - p->evb_802_1_qbh = true; - break; - case I40E_DEV_FUNC_CAP_VSI: - p->num_vsis = number; - break; - case I40E_DEV_FUNC_CAP_DCB: - if (number == 1) { - p->dcb = true; - p->enabled_tcmap = logical_id; - p->maxtc = phys_id; - } - break; - case I40E_DEV_FUNC_CAP_FCOE: - if (number == 1) - p->fcoe = true; - break; - case I40E_DEV_FUNC_CAP_RSS: - p->rss = true; - p->rss_table_size = number; - p->rss_table_entry_width = logical_id; - break; - case I40E_DEV_FUNC_CAP_RX_QUEUES: - p->num_rx_qp = number; - p->base_queue = phys_id; - break; - case I40E_DEV_FUNC_CAP_TX_QUEUES: - p->num_tx_qp = number; - p->base_queue = phys_id; - break; - case I40E_DEV_FUNC_CAP_MSIX: - p->num_msix_vectors = number; - break; - case I40E_DEV_FUNC_CAP_MSIX_VF: - p->num_msix_vectors_vf = number; - break; - case I40E_DEV_FUNC_CAP_MFP_MODE_1: - if (number == 1) - p->mfp_mode_1 = true; - break; - case I40E_DEV_FUNC_CAP_CEM: - if (number == 1) - p->mgmt_cem = true; - break; - case I40E_DEV_FUNC_CAP_IWARP: - if (number == 1) - p->iwarp = true; - break; - case I40E_DEV_FUNC_CAP_LED: - if (phys_id < I40E_HW_CAP_MAX_GPIO) - p->led[phys_id] = true; - break; - case I40E_DEV_FUNC_CAP_SDP: - if (phys_id < I40E_HW_CAP_MAX_GPIO) - p->sdp[phys_id] = true; - break; - case I40E_DEV_FUNC_CAP_MDIO: - if (number == 1) { - p->mdio_port_num = phys_id; - p->mdio_port_mode = logical_id; - } - break; - case I40E_DEV_FUNC_CAP_IEEE_1588: - if (number == 1) - p->ieee_1588 = true; - break; - case I40E_DEV_FUNC_CAP_FLOW_DIRECTOR: - p->fd = true; - p->fd_filters_guaranteed = number; - p->fd_filters_best_effort = logical_id; - break; - default: - break; - } - } - - /* Software override ensuring FCoE is disabled if npar or mfp - * mode because it is not supported in these modes. 
- */ - if (p->npar_enable || p->mfp_mode_1) - p->fcoe = false; - - /* additional HW specific goodies that might - * someday be HW version specific - */ - p->rx_buf_chain_len = I40E_MAX_CHAINED_RX_BUFFERS; -} - -/** - * i40e_aq_discover_capabilities - * @hw: pointer to the hw struct - * @buff: a virtual buffer to hold the capabilities - * @buff_size: Size of the virtual buffer - * @data_size: Size of the returned data, or buff size needed if AQ err==ENOMEM - * @list_type_opc: capabilities type to discover - pass in the command opcode - * @cmd_details: pointer to command details structure or NULL - * - * Get the device capabilities descriptions from the firmware - **/ -enum i40e_status_code i40e_aq_discover_capabilities(struct i40e_hw *hw, - void *buff, u16 buff_size, u16 *data_size, - enum i40e_admin_queue_opc list_type_opc, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aqc_list_capabilites *cmd; - struct i40e_aq_desc desc; - enum i40e_status_code status = I40E_SUCCESS; - - cmd = (struct i40e_aqc_list_capabilites *)&desc.params.raw; - - if (list_type_opc != i40e_aqc_opc_list_func_capabilities && - list_type_opc != i40e_aqc_opc_list_dev_capabilities) { - status = I40E_ERR_PARAM; - goto exit; - } - - i40e_fill_default_direct_cmd_desc(&desc, list_type_opc); - - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - if (buff_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); - *data_size = LE16_TO_CPU(desc.datalen); - - if (status) - goto exit; - - i40e_parse_discover_capabilities(hw, buff, LE32_TO_CPU(cmd->count), - list_type_opc); - -exit: - return status; -} - -/** - * i40e_aq_update_nvm - * @hw: pointer to the hw struct - * @module_pointer: module pointer location in words from the NVM beginning - * @offset: byte offset from the module beginning - * @length: length of the section to be written (in bytes from the offset) - * @data: command buffer (size [bytes] = length) - * @last_command: tells if this is the last command in a series - * @cmd_details: pointer to command details structure or NULL - * - * Update the NVM using the admin queue commands - **/ -enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_nvm_update *cmd = - (struct i40e_aqc_nvm_update *)&desc.params.raw; - enum i40e_status_code status; - - DEBUGFUNC("i40e_aq_update_nvm"); - - /* In offset the highest byte must be zeroed. */ - if (offset & 0xFF000000) { - status = I40E_ERR_PARAM; - goto i40e_aq_update_nvm_exit; - } - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_nvm_update); - - /* If this is the last command in a series, set the proper flag. 
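A common way to drive i40e_aq_discover_capabilities() is a size-probe loop: on ENOMEM the firmware reports the required size through data_size, so the caller reallocates and retries. A sketch, assuming the i40e_allocate_virt_mem()/i40e_free_virt_mem() helpers declared in i40e_alloc.h (function name invented):

	/* Illustrative only: retry capability discovery until it fits. */
	static enum i40e_status_code example_get_caps(struct i40e_hw *hw)
	{
		struct i40e_virt_mem mem;
		u16 buff_size = 1024, data_size = 0;
		enum i40e_status_code status;

		do {
			if (i40e_allocate_virt_mem(hw, &mem, buff_size))
				return I40E_ERR_NO_MEMORY;
			status = i40e_aq_discover_capabilities(hw, mem.va,
					buff_size, &data_size,
					i40e_aqc_opc_list_func_capabilities,
					NULL);
			i40e_free_virt_mem(hw, &mem);
			buff_size = data_size; /* size hinted by firmware */
		} while (hw->aq.asq_last_status == I40E_AQ_RC_ENOMEM);

		return status;
	}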
*/ - if (last_command) - cmd->command_flags |= I40E_AQ_NVM_LAST_CMD; - cmd->module_pointer = module_pointer; - cmd->offset = CPU_TO_LE32(offset); - cmd->length = CPU_TO_LE16(length); - - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (length > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, data, length, cmd_details); - -i40e_aq_update_nvm_exit: - return status; -} - -/** - * i40e_aq_get_lldp_mib - * @hw: pointer to the hw struct - * @bridge_type: type of bridge requested - * @mib_type: Local, Remote or both Local and Remote MIBs - * @buff: pointer to a user supplied buffer to store the MIB block - * @buff_size: size of the buffer (in bytes) - * @local_len : length of the returned Local LLDP MIB - * @remote_len: length of the returned Remote LLDP MIB - * @cmd_details: pointer to command details structure or NULL - * - * Requests the complete LLDP MIB (entire packet). - **/ -enum i40e_status_code i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, - u8 mib_type, void *buff, u16 buff_size, - u16 *local_len, u16 *remote_len, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_lldp_get_mib *cmd = - (struct i40e_aqc_lldp_get_mib *)&desc.params.raw; - struct i40e_aqc_lldp_get_mib *resp = - (struct i40e_aqc_lldp_get_mib *)&desc.params.raw; - enum i40e_status_code status; - - if (buff_size == 0 || !buff) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_get_mib); - /* Indirect Command */ - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - - cmd->type = mib_type & I40E_AQ_LLDP_MIB_TYPE_MASK; - cmd->type |= ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & - I40E_AQ_LLDP_BRIDGE_TYPE_MASK); - - desc.datalen = CPU_TO_LE16(buff_size); - - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - if (buff_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); - if (!status) { - if (local_len != NULL) - *local_len = LE16_TO_CPU(resp->local_len); - if (remote_len != NULL) - *remote_len = LE16_TO_CPU(resp->remote_len); - } - - return status; -} - -/** - * i40e_aq_cfg_lldp_mib_change_event - * @hw: pointer to the hw struct - * @enable_update: Enable or Disable event posting - * @cmd_details: pointer to command details structure or NULL - * - * Enable or Disable posting of an event on ARQ when LLDP MIB - * associated with the interface changes - **/ -enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, - bool enable_update, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_lldp_update_mib *cmd = - (struct i40e_aqc_lldp_update_mib *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_update_mib); - - if (!enable_update) - cmd->command |= I40E_AQ_LLDP_MIB_UPDATE_DISABLE; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_add_lldp_tlv - * @hw: pointer to the hw struct - * @bridge_type: type of bridge - * @buff: buffer with TLV to add - * @buff_size: length of the buffer - * @tlv_len: length of the TLV to be added - * @mib_len: length of the LLDP MIB returned in response - * @cmd_details: pointer to command details structure or NULL - * - * Add the specified TLV to LLDP Local MIB for the given bridge type, - * it is responsibility of the caller to make sure 
that the TLV is not - * already present in the LLDPDU. - * In return firmware will write the complete LLDP MIB with the newly - * added TLV in the response buffer. - **/ -enum i40e_status_code i40e_aq_add_lldp_tlv(struct i40e_hw *hw, u8 bridge_type, - void *buff, u16 buff_size, u16 tlv_len, - u16 *mib_len, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_lldp_add_tlv *cmd = - (struct i40e_aqc_lldp_add_tlv *)&desc.params.raw; - enum i40e_status_code status; - - if (buff_size == 0 || !buff || tlv_len == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_add_tlv); - - /* Indirect Command */ - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (buff_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - desc.datalen = CPU_TO_LE16(buff_size); - - cmd->type = ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & - I40E_AQ_LLDP_BRIDGE_TYPE_MASK); - cmd->len = CPU_TO_LE16(tlv_len); - - status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); - if (!status) { - if (mib_len != NULL) - *mib_len = LE16_TO_CPU(desc.datalen); - } - - return status; -} - -/** - * i40e_aq_update_lldp_tlv - * @hw: pointer to the hw struct - * @bridge_type: type of bridge - * @buff: buffer with TLV to update - * @buff_size: size of the buffer holding original and updated TLVs - * @old_len: Length of the Original TLV - * @new_len: Length of the Updated TLV - * @offset: offset of the updated TLV in the buff - * @mib_len: length of the returned LLDP MIB - * @cmd_details: pointer to command details structure or NULL - * - * Update the specified TLV to the LLDP Local MIB for the given bridge type. - * Firmware will place the complete LLDP MIB in response buffer with the - * updated TLV. - **/ -enum i40e_status_code i40e_aq_update_lldp_tlv(struct i40e_hw *hw, - u8 bridge_type, void *buff, u16 buff_size, - u16 old_len, u16 new_len, u16 offset, - u16 *mib_len, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_lldp_update_tlv *cmd = - (struct i40e_aqc_lldp_update_tlv *)&desc.params.raw; - enum i40e_status_code status; - - if (buff_size == 0 || !buff || offset == 0 || - old_len == 0 || new_len == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_update_tlv); - - /* Indirect Command */ - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (buff_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - desc.datalen = CPU_TO_LE16(buff_size); - - cmd->type = ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & - I40E_AQ_LLDP_BRIDGE_TYPE_MASK); - cmd->old_len = CPU_TO_LE16(old_len); - cmd->new_offset = CPU_TO_LE16(offset); - cmd->new_len = CPU_TO_LE16(new_len); - - status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); - if (!status) { - if (mib_len != NULL) - *mib_len = LE16_TO_CPU(desc.datalen); - } - - return status; -} - -/** - * i40e_aq_delete_lldp_tlv - * @hw: pointer to the hw struct - * @bridge_type: type of bridge - * @buff: pointer to a user supplied buffer that has the TLV - * @buff_size: length of the buffer - * @tlv_len: length of the TLV to be deleted - * @mib_len: length of the returned LLDP MIB - * @cmd_details: pointer to command details structure or NULL - * - * Delete the specified TLV from LLDP Local MIB for the given bridge type. - * The firmware places the entire LLDP MIB in the response buffer. 
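The TLV commands above all mark the descriptor the same way; distilled into one hedged helper (name invented) that mirrors what each of them open-codes:

	/* Illustrative only: the shared indirect-command flag recipe.
	 * BUF marks an indirect command, RD says the buffer carries data
	 * toward the device, and LB selects large-buffer mode when the
	 * payload exceeds I40E_AQ_LARGE_BUF.
	 */
	static void example_mark_indirect(struct i40e_aq_desc *desc,
					  u16 buff_size)
	{
		desc->flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF |
						 I40E_AQ_FLAG_RD));
		if (buff_size > I40E_AQ_LARGE_BUF)
			desc->flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB);
		desc->datalen = CPU_TO_LE16(buff_size);
	}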
- **/ -enum i40e_status_code i40e_aq_delete_lldp_tlv(struct i40e_hw *hw, - u8 bridge_type, void *buff, u16 buff_size, - u16 tlv_len, u16 *mib_len, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_lldp_add_tlv *cmd = - (struct i40e_aqc_lldp_add_tlv *)&desc.params.raw; - enum i40e_status_code status; - - if (buff_size == 0 || !buff) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_delete_tlv); - - /* Indirect Command */ - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (buff_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - desc.datalen = CPU_TO_LE16(buff_size); - cmd->len = CPU_TO_LE16(tlv_len); - cmd->type = ((bridge_type << I40E_AQ_LLDP_BRIDGE_TYPE_SHIFT) & - I40E_AQ_LLDP_BRIDGE_TYPE_MASK); - - status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); - if (!status) { - if (mib_len != NULL) - *mib_len = LE16_TO_CPU(desc.datalen); - } - - return status; -} - -/** - * i40e_aq_stop_lldp - * @hw: pointer to the hw struct - * @shutdown_agent: True if LLDP Agent needs to be Shutdown - * @cmd_details: pointer to command details structure or NULL - * - * Stop or Shutdown the embedded LLDP Agent - **/ -enum i40e_status_code i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_lldp_stop *cmd = - (struct i40e_aqc_lldp_stop *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_stop); - - if (shutdown_agent) - cmd->command |= I40E_AQ_LLDP_AGENT_SHUTDOWN; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_start_lldp - * @hw: pointer to the hw struct - * @cmd_details: pointer to command details structure or NULL - * - * Start the embedded LLDP Agent on all ports. 
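A typical consumer stops the embedded agent so the host stack owns LLDP, and restarts it to hand control back; a minimal toggle sketch (function name invented):

	/* Illustrative only: hand LLDP to software, or give it back. */
	static void example_toggle_lldp(struct i40e_hw *hw, bool software_lldp)
	{
		if (software_lldp)
			(void)i40e_aq_stop_lldp(hw, true, NULL);
		else
			(void)i40e_aq_start_lldp(hw, NULL);
	}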
- **/ -enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_lldp_start *cmd = - (struct i40e_aqc_lldp_start *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_lldp_start); - - cmd->command = I40E_AQ_LLDP_AGENT_START; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_add_udp_tunnel - * @hw: pointer to the hw struct - * @udp_port: the UDP port to add - * @protocol_index: protocol index type - * @filter_index: pointer to filter index - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_add_udp_tunnel(struct i40e_hw *hw, - u16 udp_port, u8 protocol_index, - u8 *filter_index, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_udp_tunnel *cmd = - (struct i40e_aqc_add_udp_tunnel *)&desc.params.raw; - struct i40e_aqc_del_udp_tunnel_completion *resp = - (struct i40e_aqc_del_udp_tunnel_completion *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_udp_tunnel); - - cmd->udp_port = CPU_TO_LE16(udp_port); - cmd->protocol_type = protocol_index; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status) - *filter_index = resp->index; - - return status; -} - -/** - * i40e_aq_del_udp_tunnel - * @hw: pointer to the hw struct - * @index: filter index - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_remove_udp_tunnel *cmd = - (struct i40e_aqc_remove_udp_tunnel *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_del_udp_tunnel); - - cmd->index = index; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_get_switch_resource_alloc (0x0204) - * @hw: pointer to the hw struct - * @num_entries: pointer to u8 to store the number of resource entries returned - * @buf: pointer to a user supplied buffer. This buffer must be large enough - * to store the resource information for all resource types. Each - * resource type is a i40e_aqc_switch_resource_alloc_data structure. - * @count: number of i40e_aqc_switch_resource_alloc_data entries the buffer provided can hold - * @cmd_details: pointer to command details structure or NULL - * - * Query the resources allocated to a function. 
- **/ -enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw, - u8 *num_entries, - struct i40e_aqc_switch_resource_alloc_element_resp *buf, - u16 count, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_get_switch_resource_alloc *cmd_resp = - (struct i40e_aqc_get_switch_resource_alloc *)&desc.params.raw; - enum i40e_status_code status; - u16 length = count - * sizeof(struct i40e_aqc_switch_resource_alloc_element_resp); - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_get_switch_resource_alloc); - - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - if (length > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, buf, length, cmd_details); - - if (!status) - *num_entries = cmd_resp->num_entries; - - return status; -} - -/** - * i40e_aq_delete_element - Delete switch element - * @hw: pointer to the hw struct - * @seid: the SEID to delete from the switch - * @cmd_details: pointer to command details structure or NULL - * - * This deletes a switch element from the switch. - **/ -enum i40e_status_code i40e_aq_delete_element(struct i40e_hw *hw, u16 seid, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_switch_seid *cmd = - (struct i40e_aqc_switch_seid *)&desc.params.raw; - enum i40e_status_code status; - - if (seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_delete_element); - - cmd->seid = CPU_TO_LE16(seid); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_add_pvirt - Instantiate a Port Virtualizer on a port - * @hw: pointer to the hw struct - * @flags: component flags - * @mac_seid: uplink seid (MAC SEID) - * @vsi_seid: connected vsi seid - * @ret_seid: seid of the created pv component - * - * This instantiates an i40e port virtualizer with specified flags. - * Depending on specified flags the port virtualizer can act as a - * 802.1Qbr port virtualizer or a 802.1Qbg S-component. - */ -enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags, - u16 mac_seid, u16 vsi_seid, - u16 *ret_seid) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_update_pv *cmd = - (struct i40e_aqc_add_update_pv *)&desc.params.raw; - struct i40e_aqc_add_update_pv_completion *resp = - (struct i40e_aqc_add_update_pv_completion *)&desc.params.raw; - enum i40e_status_code status; - - if (vsi_seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_pv); - cmd->command_flags = CPU_TO_LE16(flags); - cmd->uplink_seid = CPU_TO_LE16(mac_seid); - cmd->connected_seid = CPU_TO_LE16(vsi_seid); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - if (!status && ret_seid) - *ret_seid = LE16_TO_CPU(resp->pv_seid); - - return status; -} - -/** - * i40e_aq_add_tag - Add an S/E-tag - * @hw: pointer to the hw struct - * @direct_to_queue: should s-tag direct flow to a specific queue - * @vsi_seid: VSI SEID to use this tag - * @tag: value of the tag - * @queue_num: queue number, only valid if direct_to_queue is true - * @tags_used: return value, number of tags in use by this PF - * @tags_free: return value, number of unallocated tags - * @cmd_details: pointer to command details structure or NULL - * - * This associates an S- or E-tag to a VSI in the switch complex. It returns - * the number of tags allocated by the PF, and the number of unallocated - * tags available. 
- **/ -enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue, - u16 vsi_seid, u16 tag, u16 queue_num, - u16 *tags_used, u16 *tags_free, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_tag *cmd = - (struct i40e_aqc_add_tag *)&desc.params.raw; - struct i40e_aqc_add_remove_tag_completion *resp = - (struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw; - enum i40e_status_code status; - - if (vsi_seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_tag); - - cmd->seid = CPU_TO_LE16(vsi_seid); - cmd->tag = CPU_TO_LE16(tag); - if (direct_to_queue) { - cmd->flags = CPU_TO_LE16(I40E_AQC_ADD_TAG_FLAG_TO_QUEUE); - cmd->queue_number = CPU_TO_LE16(queue_num); - } - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status) { - if (tags_used != NULL) - *tags_used = LE16_TO_CPU(resp->tags_used); - if (tags_free != NULL) - *tags_free = LE16_TO_CPU(resp->tags_free); - } - - return status; -} - -/** - * i40e_aq_remove_tag - Remove an S- or E-tag - * @hw: pointer to the hw struct - * @vsi_seid: VSI SEID this tag is associated with - * @tag: value of the S-tag to delete - * @tags_used: return value, number of tags in use by this PF - * @tags_free: return value, number of unallocated tags - * @cmd_details: pointer to command details structure or NULL - * - * This deletes an S- or E-tag from a VSI in the switch complex. It returns - * the number of tags allocated by the PF, and the number of unallocated - * tags available. - **/ -enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid, - u16 tag, u16 *tags_used, u16 *tags_free, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_remove_tag *cmd = - (struct i40e_aqc_remove_tag *)&desc.params.raw; - struct i40e_aqc_add_remove_tag_completion *resp = - (struct i40e_aqc_add_remove_tag_completion *)&desc.params.raw; - enum i40e_status_code status; - - if (vsi_seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_remove_tag); - - cmd->seid = CPU_TO_LE16(vsi_seid); - cmd->tag = CPU_TO_LE16(tag); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status) { - if (tags_used != NULL) - *tags_used = LE16_TO_CPU(resp->tags_used); - if (tags_free != NULL) - *tags_free = LE16_TO_CPU(resp->tags_free); - } - - return status; -} - -/** - * i40e_aq_add_mcast_etag - Add a multicast E-tag - * @hw: pointer to the hw struct - * @pv_seid: Port Virtualizer of this SEID to associate E-tag with - * @etag: value of E-tag to add - * @num_tags_in_buf: number of unicast E-tags in indirect buffer - * @buf: address of indirect buffer - * @tags_used: return value, number of E-tags in use by this port - * @tags_free: return value, number of unallocated M-tags - * @cmd_details: pointer to command details structure or NULL - * - * This associates a multicast E-tag to a port virtualizer. It will return - * the number of tags allocated by the PF, and the number of unallocated - * tags available. - * - * The indirect buffer pointed to by buf is a list of 2-byte E-tags, - * num_tags_in_buf long. 
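The buffer layout spelled out above is a packed array of little-endian 16-bit tags; a short preparation sketch (tag values invented for illustration):

	/* Illustrative only: build the unicast E-tag list for
	 * i40e_aq_add_mcast_etag(); each entry is a 2-byte LE value.
	 */
	__le16 etag_buf[3];
	u16 i;
	u16 my_etags[3] = { 0x100, 0x101, 0x102 }; /* invented values */

	for (i = 0; i < 3; i++)
		etag_buf[i] = CPU_TO_LE16(my_etags[i]);
	/* then pass: num_tags_in_buf = 3, buf = etag_buf */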
- **/ -enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pv_seid, - u16 etag, u8 num_tags_in_buf, void *buf, - u16 *tags_used, u16 *tags_free, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_remove_mcast_etag *cmd = - (struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw; - struct i40e_aqc_add_remove_mcast_etag_completion *resp = - (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw; - enum i40e_status_code status; - u16 length = sizeof(u16) * num_tags_in_buf; - - if ((pv_seid == 0) || (buf == NULL) || (num_tags_in_buf == 0)) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_add_multicast_etag); - - cmd->pv_seid = CPU_TO_LE16(pv_seid); - cmd->etag = CPU_TO_LE16(etag); - cmd->num_unicast_etags = num_tags_in_buf; - - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - if (length > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - status = i40e_asq_send_command(hw, &desc, buf, length, cmd_details); - - if (!status) { - if (tags_used != NULL) - *tags_used = LE16_TO_CPU(resp->mcast_etags_used); - if (tags_free != NULL) - *tags_free = LE16_TO_CPU(resp->mcast_etags_free); - } - - return status; -} - -/** - * i40e_aq_remove_mcast_etag - Remove a multicast E-tag - * @hw: pointer to the hw struct - * @pv_seid: Port Virtualizer SEID this M-tag is associated with - * @etag: value of the E-tag to remove - * @tags_used: return value, number of tags in use by this port - * @tags_free: return value, number of unallocated tags - * @cmd_details: pointer to command details structure or NULL - * - * This deletes an E-tag from the port virtualizer. It will return - * the number of tags allocated by the port, and the number of unallocated - * tags available. - **/ -enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pv_seid, - u16 etag, u16 *tags_used, u16 *tags_free, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_remove_mcast_etag *cmd = - (struct i40e_aqc_add_remove_mcast_etag *)&desc.params.raw; - struct i40e_aqc_add_remove_mcast_etag_completion *resp = - (struct i40e_aqc_add_remove_mcast_etag_completion *)&desc.params.raw; - enum i40e_status_code status; - - - if (pv_seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_remove_multicast_etag); - - cmd->pv_seid = CPU_TO_LE16(pv_seid); - cmd->etag = CPU_TO_LE16(etag); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status) { - if (tags_used != NULL) - *tags_used = LE16_TO_CPU(resp->mcast_etags_used); - if (tags_free != NULL) - *tags_free = LE16_TO_CPU(resp->mcast_etags_free); - } - - return status; -} - -/** - * i40e_aq_update_tag - Update an S/E-tag - * @hw: pointer to the hw struct - * @vsi_seid: VSI SEID using this S-tag - * @old_tag: old tag value - * @new_tag: new tag value - * @tags_used: return value, number of tags in use by this PF - * @tags_free: return value, number of unallocated tags - * @cmd_details: pointer to command details structure or NULL - * - * This updates the value of the tag currently attached to this VSI - * in the switch complex. It will return the number of tags allocated - * by the PF, and the number of unallocated tags available. 
- **/ -enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid, - u16 old_tag, u16 new_tag, u16 *tags_used, - u16 *tags_free, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_update_tag *cmd = - (struct i40e_aqc_update_tag *)&desc.params.raw; - struct i40e_aqc_update_tag_completion *resp = - (struct i40e_aqc_update_tag_completion *)&desc.params.raw; - enum i40e_status_code status; - - if (vsi_seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_update_tag); - - cmd->seid = CPU_TO_LE16(vsi_seid); - cmd->old_tag = CPU_TO_LE16(old_tag); - cmd->new_tag = CPU_TO_LE16(new_tag); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status) { - if (tags_used != NULL) - *tags_used = LE16_TO_CPU(resp->tags_used); - if (tags_free != NULL) - *tags_free = LE16_TO_CPU(resp->tags_free); - } - - return status; -} - -/** - * i40e_aq_dcb_ignore_pfc - Ignore PFC for given TCs - * @hw: pointer to the hw struct - * @tcmap: TC map for request/release any ignore PFC condition - * @request: request or release ignore PFC condition - * @tcmap_ret: return TCs for which PFC is currently ignored - * @cmd_details: pointer to command details structure or NULL - * - * This sends out request/release to ignore PFC condition for a TC. - * It will return the TCs for which PFC is currently ignored. - **/ -enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw, u8 tcmap, - bool request, u8 *tcmap_ret, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_pfc_ignore *cmd_resp = - (struct i40e_aqc_pfc_ignore *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_ignore_pfc); - - if (request) - cmd_resp->command_flags = I40E_AQC_PFC_IGNORE_SET; - - cmd_resp->tc_bitmap = tcmap; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status) { - if (tcmap_ret != NULL) - *tcmap_ret = cmd_resp->tc_bitmap; - } - - return status; -} - -/** - * i40e_aq_dcb_updated - DCB Updated Command - * @hw: pointer to the hw struct - * @cmd_details: pointer to command details structure or NULL - * - * When LLDP is handled in PF this command is used by the PF - * to notify EMP that a DCB setting is modified. - * When LLDP is handled in EMP this command is used by the PF - * to notify EMP whenever one of the following parameters get - * modified: - * - PFCLinkDelayAllowance in PRTDCB_GENC.PFCLDA - * - PCIRTT in PRTDCB_GENC.PCIRTT - * - Maximum Frame Size for non-FCoE TCs set by PRTDCB_TDPUC.MAX_TXFRAME. - * EMP will return when the shared RPB settings have been - * recomputed and modified. The retval field in the descriptor - * will be set to 0 when RPB is modified. - **/ -enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_dcb_updated); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_add_statistics - Add a statistics block to a VLAN in a switch. 
- * @hw: pointer to the hw struct - * @seid: defines the SEID of the switch for which the stats are requested - * @vlan_id: the VLAN ID for which the statistics are requested - * @stat_index: index of the statistics counters block assigned to this VLAN - * @cmd_details: pointer to command details structure or NULL - * - * XL710 supports 128 smonVlanStats counters. This command is used to - * allocate a set of smonVlanStats counters to a specific VLAN in a specific - * switch. - **/ -enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid, - u16 vlan_id, u16 *stat_index, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_remove_statistics *cmd_resp = - (struct i40e_aqc_add_remove_statistics *)&desc.params.raw; - enum i40e_status_code status; - - if ((seid == 0) || (stat_index == NULL)) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_add_statistics); - - cmd_resp->seid = CPU_TO_LE16(seid); - cmd_resp->vlan = CPU_TO_LE16(vlan_id); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status) - *stat_index = LE16_TO_CPU(cmd_resp->stat_index); - - return status; -} - -/** - * i40e_aq_remove_statistics - Remove a statistics block from a VLAN in a switch. - * @hw: pointer to the hw struct - * @seid: defines the SEID of the switch for which the stats are requested - * @vlan_id: the VLAN ID for which the statistics are requested - * @stat_index: index of the statistics counters block assigned to this VLAN - * @cmd_details: pointer to command details structure or NULL - * - * XL710 supports 128 smonVlanStats counters. This command is used to - * deallocate a set of smonVlanStats counters from a specific VLAN in a specific - * switch. - **/ -enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid, - u16 vlan_id, u16 stat_index, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_remove_statistics *cmd = - (struct i40e_aqc_add_remove_statistics *)&desc.params.raw; - enum i40e_status_code status; - - if (seid == 0) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_remove_statistics); - - cmd->seid = CPU_TO_LE16(seid); - cmd->vlan = CPU_TO_LE16(vlan_id); - cmd->stat_index = CPU_TO_LE16(stat_index); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_set_port_parameters - set physical port parameters. 
- * @hw: pointer to the hw struct - * @bad_frame_vsi: defines the VSI to which bad frames are forwarded - * @save_bad_pac: if set packets with errors are forwarded to the bad frames VSI - * @pad_short_pac: if set transmit packets smaller than 60 bytes are padded - * @double_vlan: if set double VLAN is enabled - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_set_port_parameters(struct i40e_hw *hw, - u16 bad_frame_vsi, bool save_bad_pac, - bool pad_short_pac, bool double_vlan, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aqc_set_port_parameters *cmd; - enum i40e_status_code status; - struct i40e_aq_desc desc; - u16 command_flags = 0; - - cmd = (struct i40e_aqc_set_port_parameters *)&desc.params.raw; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_set_port_parameters); - - cmd->bad_frame_vsi = CPU_TO_LE16(bad_frame_vsi); - if (save_bad_pac) - command_flags |= I40E_AQ_SET_P_PARAMS_SAVE_BAD_PACKETS; - if (pad_short_pac) - command_flags |= I40E_AQ_SET_P_PARAMS_PAD_SHORT_PACKETS; - if (double_vlan) - command_flags |= I40E_AQ_SET_P_PARAMS_DOUBLE_VLAN_ENA; - cmd->command_flags = CPU_TO_LE16(command_flags); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_tx_sched_cmd - generic Tx scheduler AQ command handler - * @hw: pointer to the hw struct - * @seid: seid for the physical port/switching component/vsi - * @buff: Indirect buffer to hold data parameters and response - * @buff_size: Indirect buffer size - * @opcode: Tx scheduler AQ command opcode - * @cmd_details: pointer to command details structure or NULL - * - * Generic command handler for Tx scheduler AQ commands - **/ -static enum i40e_status_code i40e_aq_tx_sched_cmd(struct i40e_hw *hw, u16 seid, - void *buff, u16 buff_size, - enum i40e_admin_queue_opc opcode, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_tx_sched_ind *cmd = - (struct i40e_aqc_tx_sched_ind *)&desc.params.raw; - enum i40e_status_code status; - bool cmd_param_flag = false; - - switch (opcode) { - case i40e_aqc_opc_configure_vsi_ets_sla_bw_limit: - case i40e_aqc_opc_configure_vsi_tc_bw: - case i40e_aqc_opc_enable_switching_comp_ets: - case i40e_aqc_opc_modify_switching_comp_ets: - case i40e_aqc_opc_disable_switching_comp_ets: - case i40e_aqc_opc_configure_switching_comp_ets_bw_limit: - case i40e_aqc_opc_configure_switching_comp_bw_config: - cmd_param_flag = true; - break; - case i40e_aqc_opc_query_vsi_bw_config: - case i40e_aqc_opc_query_vsi_ets_sla_config: - case i40e_aqc_opc_query_switching_comp_ets_config: - case i40e_aqc_opc_query_port_ets_config: - case i40e_aqc_opc_query_switching_comp_bw_config: - cmd_param_flag = false; - break; - default: - return I40E_ERR_PARAM; - } - - i40e_fill_default_direct_cmd_desc(&desc, opcode); - - /* Indirect command */ - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - if (cmd_param_flag) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD); - if (buff_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - desc.datalen = CPU_TO_LE16(buff_size); - - cmd->vsi_seid = CPU_TO_LE16(seid); - - status = i40e_asq_send_command(hw, &desc, buff, buff_size, cmd_details); - - return status; -} - -/** - * i40e_aq_config_vsi_bw_limit - Configure VSI BW Limit - * @hw: pointer to the hw struct - * @seid: VSI seid - * @credit: BW limit credits (0 = disabled) - * @max_credit: Max BW limit credits - * @cmd_details: pointer to command details structure or 
NULL - **/ -enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw, - u16 seid, u16 credit, u8 max_credit, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_configure_vsi_bw_limit *cmd = - (struct i40e_aqc_configure_vsi_bw_limit *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_configure_vsi_bw_limit); - - cmd->vsi_seid = CPU_TO_LE16(seid); - cmd->credit = CPU_TO_LE16(credit); - cmd->max_credit = max_credit; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_config_switch_comp_bw_limit - Configure Switching component BW Limit - * @hw: pointer to the hw struct - * @seid: switching component seid - * @credit: BW limit credits (0 = disabled) - * @max_bw: Max BW limit credits - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw, - u16 seid, u16 credit, u8 max_bw, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_configure_switching_comp_bw_limit *cmd = - (struct i40e_aqc_configure_switching_comp_bw_limit *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_configure_switching_comp_bw_limit); - - cmd->seid = CPU_TO_LE16(seid); - cmd->credit = CPU_TO_LE16(credit); - cmd->max_bw = max_bw; - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_aq_config_vsi_ets_sla_bw_limit - Config VSI BW Limit per TC - * @hw: pointer to the hw struct - * @seid: VSI seid - * @bw_data: Buffer holding enabled TCs, per TC BW limit/credits - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_config_vsi_ets_sla_bw_limit(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_configure_vsi_ets_sla_bw_data *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_configure_vsi_ets_sla_bw_limit, - cmd_details); -} - -/** - * i40e_aq_config_vsi_tc_bw - Config VSI BW Allocation per TC - * @hw: pointer to the hw struct - * @seid: VSI seid - * @bw_data: Buffer holding enabled TCs, relative TC BW limit/credits - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_configure_vsi_tc_bw_data *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_configure_vsi_tc_bw, - cmd_details); -} - -/** - * i40e_aq_config_switch_comp_ets_bw_limit - Config Switch comp BW Limit per TC - * @hw: pointer to the hw struct - * @seid: seid of the switching component - * @bw_data: Buffer holding enabled TCs, per TC BW limit/credits - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit( - struct i40e_hw *hw, u16 seid, - struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_configure_switching_comp_ets_bw_limit, - cmd_details); -} - -/** - * i40e_aq_query_vsi_bw_config - Query VSI BW configuration - * @hw: pointer to the hw struct - * @seid: seid of the VSI - * @bw_data: Buffer to 
hold VSI BW configuration - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_query_vsi_bw_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_vsi_bw_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_query_vsi_bw_config, - cmd_details); -} - -/** - * i40e_aq_query_vsi_ets_sla_config - Query VSI BW configuration per TC - * @hw: pointer to the hw struct - * @seid: seid of the VSI - * @bw_data: Buffer to hold VSI BW configuration per TC - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_query_vsi_ets_sla_config, - cmd_details); -} - -/** - * i40e_aq_query_switch_comp_ets_config - Query Switch comp BW config per TC - * @hw: pointer to the hw struct - * @seid: seid of the switching component - * @bw_data: Buffer to hold switching component's per TC BW config - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_query_switching_comp_ets_config, - cmd_details); -} - -/** - * i40e_aq_query_port_ets_config - Query Physical Port ETS configuration - * @hw: pointer to the hw struct - * @seid: seid of the VSI or switching component connected to Physical Port - * @bw_data: Buffer to hold current ETS configuration for the Physical Port - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_port_ets_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_query_port_ets_config, - cmd_details); -} - -/** - * i40e_aq_query_switch_comp_bw_config - Query Switch comp BW configuration - * @hw: pointer to the hw struct - * @seid: seid of the switching component - * @bw_data: Buffer to hold switching component's BW configuration - * @cmd_details: pointer to command details structure or NULL - **/ -enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - return i40e_aq_tx_sched_cmd(hw, seid, (void *)bw_data, sizeof(*bw_data), - i40e_aqc_opc_query_switching_comp_bw_config, - cmd_details); -} - -/** - * i40e_validate_filter_settings - * @hw: pointer to the hardware structure - * @settings: Filter control settings - * - * Check and validate the filter control settings passed. - * The function checks for the valid filter/context sizes being - * passed for FCoE and PE. - * - * Returns I40E_SUCCESS if the values passed are valid and within - * range else returns an error. 
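/* A minimal sketch (editorial illustration, not part of this patch) of the
 * size encoding that i40e_validate_filter_settings() below checks: each
 * hash-filter / DMA-context size enum value is a shift count applied to a
 * base size, so every enum step doubles the table size.  Only the helper
 * name is invented; the constant comes from the code below. */
static u32 example_decode_hash_filt_size(u32 filt_num_enum)
{
	/* mirrors: filt_size = I40E_HASH_FILTER_BASE_SIZE << settings->fcoe_filt_num */
	return (u32)I40E_HASH_FILTER_BASE_SIZE << filt_num_enum;
}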
- **/ -STATIC enum i40e_status_code i40e_validate_filter_settings(struct i40e_hw *hw, - struct i40e_filter_control_settings *settings) -{ - u32 fcoe_cntx_size, fcoe_filt_size; - u32 pe_cntx_size, pe_filt_size; - u32 fcoe_fmax; - - u32 val; - - /* Validate FCoE settings passed */ - switch (settings->fcoe_filt_num) { - case I40E_HASH_FILTER_SIZE_1K: - case I40E_HASH_FILTER_SIZE_2K: - case I40E_HASH_FILTER_SIZE_4K: - case I40E_HASH_FILTER_SIZE_8K: - case I40E_HASH_FILTER_SIZE_16K: - case I40E_HASH_FILTER_SIZE_32K: - fcoe_filt_size = I40E_HASH_FILTER_BASE_SIZE; - fcoe_filt_size <<= (u32)settings->fcoe_filt_num; - break; - default: - return I40E_ERR_PARAM; - } - - switch (settings->fcoe_cntx_num) { - case I40E_DMA_CNTX_SIZE_512: - case I40E_DMA_CNTX_SIZE_1K: - case I40E_DMA_CNTX_SIZE_2K: - case I40E_DMA_CNTX_SIZE_4K: - fcoe_cntx_size = I40E_DMA_CNTX_BASE_SIZE; - fcoe_cntx_size <<= (u32)settings->fcoe_cntx_num; - break; - default: - return I40E_ERR_PARAM; - } - - /* Validate PE settings passed */ - switch (settings->pe_filt_num) { - case I40E_HASH_FILTER_SIZE_1K: - case I40E_HASH_FILTER_SIZE_2K: - case I40E_HASH_FILTER_SIZE_4K: - case I40E_HASH_FILTER_SIZE_8K: - case I40E_HASH_FILTER_SIZE_16K: - case I40E_HASH_FILTER_SIZE_32K: - case I40E_HASH_FILTER_SIZE_64K: - case I40E_HASH_FILTER_SIZE_128K: - case I40E_HASH_FILTER_SIZE_256K: - case I40E_HASH_FILTER_SIZE_512K: - case I40E_HASH_FILTER_SIZE_1M: - pe_filt_size = I40E_HASH_FILTER_BASE_SIZE; - pe_filt_size <<= (u32)settings->pe_filt_num; - break; - default: - return I40E_ERR_PARAM; - } - - switch (settings->pe_cntx_num) { - case I40E_DMA_CNTX_SIZE_512: - case I40E_DMA_CNTX_SIZE_1K: - case I40E_DMA_CNTX_SIZE_2K: - case I40E_DMA_CNTX_SIZE_4K: - case I40E_DMA_CNTX_SIZE_8K: - case I40E_DMA_CNTX_SIZE_16K: - case I40E_DMA_CNTX_SIZE_32K: - case I40E_DMA_CNTX_SIZE_64K: - case I40E_DMA_CNTX_SIZE_128K: - case I40E_DMA_CNTX_SIZE_256K: - pe_cntx_size = I40E_DMA_CNTX_BASE_SIZE; - pe_cntx_size <<= (u32)settings->pe_cntx_num; - break; - default: - return I40E_ERR_PARAM; - } - - /* FCHSIZE + FCDSIZE should not be greater than PMFCOEFMAX */ - val = rd32(hw, I40E_GLHMC_FCOEFMAX); - fcoe_fmax = (val & I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_MASK) - >> I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_SHIFT; - if (fcoe_filt_size + fcoe_cntx_size > fcoe_fmax) - return I40E_ERR_INVALID_SIZE; - - return I40E_SUCCESS; -} - -/** - * i40e_set_filter_control - * @hw: pointer to the hardware structure - * @settings: Filter control settings - * - * Set the Queue Filters for PE/FCoE and enable filters required - * for a single PF. It is expected that these settings are programmed - * at the driver initialization time. 
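/* Sketch of the caller side of i40e_set_filter_control() below, run once at
 * driver initialization as the comment above prescribes.  The field and
 * constant names are taken from the function bodies in this file; the
 * particular sizes chosen and the wrapper name are assumptions for
 * illustration only. */
static enum i40e_status_code example_init_pf_filters(struct i40e_hw *hw)
{
	struct i40e_filter_control_settings settings;

	i40e_memset(&settings, 0, sizeof(settings), I40E_NONDMA_MEM);
	settings.fcoe_filt_num = I40E_HASH_FILTER_SIZE_1K;
	settings.fcoe_cntx_num = I40E_DMA_CNTX_SIZE_512;
	settings.pe_filt_num = I40E_HASH_FILTER_SIZE_1K;
	settings.pe_cntx_num = I40E_DMA_CNTX_SIZE_512;
	settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
	settings.enable_fdir = true;
	settings.enable_ethtype = true;
	settings.enable_macvlan = true;

	/* validates the sizes, then programs I40E_PFQF_CTL_0 */
	return i40e_set_filter_control(hw, &settings);
}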
- **/ -enum i40e_status_code i40e_set_filter_control(struct i40e_hw *hw, - struct i40e_filter_control_settings *settings) -{ - enum i40e_status_code ret = I40E_SUCCESS; - u32 hash_lut_size = 0; - u32 val; - - if (!settings) - return I40E_ERR_PARAM; - - /* Validate the input settings */ - ret = i40e_validate_filter_settings(hw, settings); - if (ret) - return ret; - - /* Read the PF Queue Filter control register */ - val = rd32(hw, I40E_PFQF_CTL_0); - - /* Program required PE hash buckets for the PF */ - val &= ~I40E_PFQF_CTL_0_PEHSIZE_MASK; - val |= ((u32)settings->pe_filt_num << I40E_PFQF_CTL_0_PEHSIZE_SHIFT) & - I40E_PFQF_CTL_0_PEHSIZE_MASK; - /* Program required PE contexts for the PF */ - val &= ~I40E_PFQF_CTL_0_PEDSIZE_MASK; - val |= ((u32)settings->pe_cntx_num << I40E_PFQF_CTL_0_PEDSIZE_SHIFT) & - I40E_PFQF_CTL_0_PEDSIZE_MASK; - - /* Program required FCoE hash buckets for the PF */ - val &= ~I40E_PFQF_CTL_0_PFFCHSIZE_MASK; - val |= ((u32)settings->fcoe_filt_num << - I40E_PFQF_CTL_0_PFFCHSIZE_SHIFT) & - I40E_PFQF_CTL_0_PFFCHSIZE_MASK; - /* Program required FCoE DDP contexts for the PF */ - val &= ~I40E_PFQF_CTL_0_PFFCDSIZE_MASK; - val |= ((u32)settings->fcoe_cntx_num << - I40E_PFQF_CTL_0_PFFCDSIZE_SHIFT) & - I40E_PFQF_CTL_0_PFFCDSIZE_MASK; - - /* Program Hash LUT size for the PF */ - val &= ~I40E_PFQF_CTL_0_HASHLUTSIZE_MASK; - if (settings->hash_lut_size == I40E_HASH_LUT_SIZE_512) - hash_lut_size = 1; - val |= (hash_lut_size << I40E_PFQF_CTL_0_HASHLUTSIZE_SHIFT) & - I40E_PFQF_CTL_0_HASHLUTSIZE_MASK; - - /* Enable FDIR, Ethertype and MACVLAN filters for PF and VFs */ - if (settings->enable_fdir) - val |= I40E_PFQF_CTL_0_FD_ENA_MASK; - if (settings->enable_ethtype) - val |= I40E_PFQF_CTL_0_ETYPE_ENA_MASK; - if (settings->enable_macvlan) - val |= I40E_PFQF_CTL_0_MACVLAN_ENA_MASK; - - wr32(hw, I40E_PFQF_CTL_0, val); - - return I40E_SUCCESS; -} - -/** - * i40e_aq_add_rem_control_packet_filter - Add or Remove Control Packet Filter - * @hw: pointer to the hw struct - * @mac_addr: MAC address to use in the filter - * @ethtype: Ethertype to use in the filter - * @flags: Flags that needs to be applied to the filter - * @vsi_seid: seid of the control VSI - * @queue: VSI queue number to send the packet to - * @is_add: Add control packet filter if True else remove - * @stats: Structure to hold information on control filter counts - * @cmd_details: pointer to command details structure or NULL - * - * This command will Add or Remove control packet filter for a control VSI. - * In return it will update the total number of perfect filter count in - * the stats member. 
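/* Sketch: steering a control ethertype (LLDP, 0x88CC) to a queue of a
 * control VSI via the function below.  The TO_QUEUE flag name is an
 * assumption about i40e_adminq_cmd.h naming; the call signature matches
 * the prototype above. */
static enum i40e_status_code example_add_lldp_filter(struct i40e_hw *hw,
						     u16 vsi_seid, u16 queue)
{
	struct i40e_control_filter_stats stats;
	u16 flags = I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE; /* assumed name */

	return i40e_aq_add_rem_control_packet_filter(hw,
						     NULL /* no MAC match */,
						     0x88CC, flags, vsi_seid,
						     queue, true /* add */,
						     &stats, NULL);
}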
- **/ -enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw, - u8 *mac_addr, u16 ethtype, u16 flags, - u16 vsi_seid, u16 queue, bool is_add, - struct i40e_control_filter_stats *stats, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_remove_control_packet_filter *cmd = - (struct i40e_aqc_add_remove_control_packet_filter *) - &desc.params.raw; - struct i40e_aqc_add_remove_control_packet_filter_completion *resp = - (struct i40e_aqc_add_remove_control_packet_filter_completion *) - &desc.params.raw; - enum i40e_status_code status; - - if (vsi_seid == 0) - return I40E_ERR_PARAM; - - if (is_add) { - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_add_control_packet_filter); - cmd->queue = CPU_TO_LE16(queue); - } else { - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_remove_control_packet_filter); - } - - if (mac_addr) - i40e_memcpy(cmd->mac, mac_addr, I40E_ETH_LENGTH_OF_ADDRESS, - I40E_NONDMA_TO_NONDMA); - - cmd->etype = CPU_TO_LE16(ethtype); - cmd->flags = CPU_TO_LE16(flags); - cmd->seid = CPU_TO_LE16(vsi_seid); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - if (!status && stats) { - stats->mac_etype_used = LE16_TO_CPU(resp->mac_etype_used); - stats->etype_used = LE16_TO_CPU(resp->etype_used); - stats->mac_etype_free = LE16_TO_CPU(resp->mac_etype_free); - stats->etype_free = LE16_TO_CPU(resp->etype_free); - } - - return status; -} - -/** - * i40e_aq_add_cloud_filters - * @hw: pointer to the hardware structure - * @seid: VSI seid to add cloud filters from - * @filters: Buffer which contains the filters to be added - * @filter_count: number of filters contained in the buffer - * - * Set the cloud filters for a given VSI. The contents of the - * i40e_aqc_add_remove_cloud_filters_element_data are filled - * in by the caller of the function. - * - **/ -enum i40e_status_code i40e_aq_add_cloud_filters(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_add_remove_cloud_filters_element_data *filters, - u8 filter_count) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_remove_cloud_filters *cmd = - (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; - u16 buff_len; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_add_cloud_filters); - - buff_len = sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data) * - filter_count; - desc.datalen = CPU_TO_LE16(buff_len); - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - cmd->num_filters = filter_count; - cmd->seid = CPU_TO_LE16(seid); - - status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL); - - return status; -} - -/** - * i40e_aq_remove_cloud_filters - * @hw: pointer to the hardware structure - * @seid: VSI seid to remove cloud filters from - * @filters: Buffer which contains the filters to be removed - * @filter_count: number of filters contained in the buffer - * - * Remove the cloud filters for a given VSI. The contents of the - * i40e_aqc_add_remove_cloud_filters_element_data are filled - * in by the caller of the function. 
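/* Sketch: the caller-filled element that both cloud-filter calls take.
 * Per the comments above, the AQ functions only wrap the buffer in an
 * indirect command; the element field and flag names used here are
 * assumptions about i40e_aqc_add_remove_cloud_filters_element_data and
 * may differ from the actual layout. */
static enum i40e_status_code example_add_one_cloud_filter(struct i40e_hw *hw,
							  u16 vsi_seid)
{
	struct i40e_aqc_add_remove_cloud_filters_element_data elem;

	i40e_memset(&elem, 0, sizeof(elem), I40E_NONDMA_MEM);
	elem.flags = CPU_TO_LE16(I40E_AQC_ADD_CLOUD_FILTER_IMAC); /* assumed */
	elem.tenant_id = CPU_TO_LE32(100);                        /* e.g. a VNI */

	return i40e_aq_add_cloud_filters(hw, vsi_seid, &elem, 1);
}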
- * - **/ -enum i40e_status_code i40e_aq_remove_cloud_filters(struct i40e_hw *hw, - u16 seid, - struct i40e_aqc_add_remove_cloud_filters_element_data *filters, - u8 filter_count) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_add_remove_cloud_filters *cmd = - (struct i40e_aqc_add_remove_cloud_filters *)&desc.params.raw; - enum i40e_status_code status; - u16 buff_len; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_remove_cloud_filters); - - buff_len = sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data) * - filter_count; - desc.datalen = CPU_TO_LE16(buff_len); - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF | I40E_AQ_FLAG_RD)); - cmd->num_filters = filter_count; - cmd->seid = CPU_TO_LE16(seid); - - status = i40e_asq_send_command(hw, &desc, filters, buff_len, NULL); - - return status; -} - -/** - * i40e_aq_alternate_write - * @hw: pointer to the hardware structure - * @reg_addr0: address of first dword to be read - * @reg_val0: value to be written under 'reg_addr0' - * @reg_addr1: address of second dword to be read - * @reg_val1: value to be written under 'reg_addr1' - * - * Write one or two dwords to alternate structure. Fields are indicated - * by 'reg_addr0' and 'reg_addr1' register numbers. - * - **/ -enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw, - u32 reg_addr0, u32 reg_val0, - u32 reg_addr1, u32 reg_val1) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_alternate_write *cmd_resp = - (struct i40e_aqc_alternate_write *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_write); - cmd_resp->address0 = CPU_TO_LE32(reg_addr0); - cmd_resp->address1 = CPU_TO_LE32(reg_addr1); - cmd_resp->data0 = CPU_TO_LE32(reg_val0); - cmd_resp->data1 = CPU_TO_LE32(reg_val1); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - - return status; -} - -/** - * i40e_aq_alternate_write_indirect - * @hw: pointer to the hardware structure - * @addr: address of a first register to be modified - * @dw_count: number of alternate structure fields to write - * @buffer: pointer to the command buffer - * - * Write 'dw_count' dwords from 'buffer' to alternate structure - * starting at 'addr'. - * - **/ -enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw, - u32 addr, u32 dw_count, void *buffer) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_alternate_ind_write *cmd_resp = - (struct i40e_aqc_alternate_ind_write *)&desc.params.raw; - enum i40e_status_code status; - - if (buffer == NULL) - return I40E_ERR_PARAM; - - /* Indirect command */ - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_alternate_write_indirect); - - desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD); - desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF); - if (dw_count > (I40E_AQ_LARGE_BUF/4)) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - cmd_resp->address = CPU_TO_LE32(addr); - cmd_resp->length = CPU_TO_LE32(dw_count); - cmd_resp->addr_high = CPU_TO_LE32(I40E_HI_WORD((u64)buffer)); - cmd_resp->addr_low = CPU_TO_LE32(I40E_LO_DWORD((u64)buffer)); - - status = i40e_asq_send_command(hw, &desc, buffer, - I40E_LO_DWORD(4*dw_count), NULL); - - return status; -} - -/** - * i40e_aq_alternate_read - * @hw: pointer to the hardware structure - * @reg_addr0: address of first dword to be read - * @reg_val0: pointer for data read from 'reg_addr0' - * @reg_addr1: address of second dword to be read - * @reg_val1: pointer for data read from 'reg_addr1' - * - * Read one or two dwords from alternate structure. 
Fields are indicated - * by 'reg_addr0' and 'reg_addr1' register numbers. If 'reg_val1' pointer - * is not passed then only register at 'reg_addr0' is read. - * - **/ -enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw, - u32 reg_addr0, u32 *reg_val0, - u32 reg_addr1, u32 *reg_val1) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_alternate_write *cmd_resp = - (struct i40e_aqc_alternate_write *)&desc.params.raw; - enum i40e_status_code status; - - if (reg_val0 == NULL) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_alternate_read); - cmd_resp->address0 = CPU_TO_LE32(reg_addr0); - cmd_resp->address1 = CPU_TO_LE32(reg_addr1); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - - if (status == I40E_SUCCESS) { - *reg_val0 = LE32_TO_CPU(cmd_resp->data0); - - if (reg_val1 != NULL) - *reg_val1 = LE32_TO_CPU(cmd_resp->data1); - } - - return status; -} - -/** - * i40e_aq_alternate_read_indirect - * @hw: pointer to the hardware structure - * @addr: address of the alternate structure field - * @dw_count: number of alternate structure fields to read - * @buffer: pointer to the command buffer - * - * Read 'dw_count' dwords from alternate structure starting at 'addr' and - * place them in 'buffer'. The buffer should be allocated by caller. - * - **/ -enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw, - u32 addr, u32 dw_count, void *buffer) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_alternate_ind_write *cmd_resp = - (struct i40e_aqc_alternate_ind_write *)&desc.params.raw; - enum i40e_status_code status; - - if (buffer == NULL) - return I40E_ERR_PARAM; - - /* Indirect command */ - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_alternate_read_indirect); - - desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_RD); - desc.flags |= CPU_TO_LE16(I40E_AQ_FLAG_BUF); - if (dw_count > (I40E_AQ_LARGE_BUF/4)) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - cmd_resp->address = CPU_TO_LE32(addr); - cmd_resp->length = CPU_TO_LE32(dw_count); - cmd_resp->addr_high = CPU_TO_LE32(I40E_HI_DWORD((u64)buffer)); - cmd_resp->addr_low = CPU_TO_LE32(I40E_LO_DWORD((u64)buffer)); - - status = i40e_asq_send_command(hw, &desc, buffer, - I40E_LO_DWORD(4*dw_count), NULL); - - return status; -} - -/** - * i40e_aq_alternate_clear - * @hw: pointer to the HW structure. - * - * Clear the alternate structures of the port from which the function - * is called. - * - **/ -enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw) -{ - struct i40e_aq_desc desc; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_alternate_clear_port); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - - return status; -} - -/** - * i40e_aq_alternate_write_done - * @hw: pointer to the HW structure. - * @bios_mode: indicates whether the command is executed by UEFI or legacy BIOS - * @reset_needed: indicates the SW should trigger GLOBAL reset - * - * Indicates to the FW that alternate structures have been changed. 
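/* Sketch of the update sequence the comment above implies: write the
 * alternate-RAM dwords, then report completion so the firmware can ask for
 * a GLOBAL reset.  The addresses, values and the bios_mode 0 ("legacy")
 * argument are invented for illustration. */
static enum i40e_status_code example_update_alt_ram(struct i40e_hw *hw)
{
	enum i40e_status_code status;
	bool reset_needed = false;

	status = i40e_aq_alternate_write(hw, 0x10, 0x1, 0x11, 0x2);
	if (status != I40E_SUCCESS)
		return status;

	status = i40e_aq_alternate_write_done(hw, 0, &reset_needed);
	if (status == I40E_SUCCESS && reset_needed)
		DEBUGOUT("alternate RAM changed: trigger a GLOBAL reset\n");
	return status;
}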
- * - **/ -enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw, - u8 bios_mode, bool *reset_needed) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_alternate_write_done *cmd = - (struct i40e_aqc_alternate_write_done *)&desc.params.raw; - enum i40e_status_code status; - - if (reset_needed == NULL) - return I40E_ERR_PARAM; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_alternate_write_done); - - cmd->cmd_flags = CPU_TO_LE16(bios_mode); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - if (!status) - *reset_needed = ((LE16_TO_CPU(cmd->cmd_flags) & - I40E_AQ_ALTERNATE_RESET_NEEDED) != 0); - - return status; -} - -/** - * i40e_aq_set_oem_mode - * @hw: pointer to the HW structure. - * @oem_mode: the OEM mode to be used - * - * Sets the device to a specific operating mode. Currently the only supported - * mode is no_clp, which causes FW to refrain from using Alternate RAM. - * - **/ -enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw, - u8 oem_mode) -{ - struct i40e_aq_desc desc; - struct i40e_aqc_alternate_write_done *cmd = - (struct i40e_aqc_alternate_write_done *)&desc.params.raw; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_alternate_set_mode); - - cmd->cmd_flags = CPU_TO_LE16(oem_mode); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - - return status; -} - -/** - * i40e_aq_resume_port_tx - * @hw: pointer to the hardware structure - * @cmd_details: pointer to command details structure or NULL - * - * Resume port's Tx traffic - **/ -enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_resume_port_tx); - - status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details); - - return status; -} - -/** - * i40e_set_pci_config_data - store PCI bus info - * @hw: pointer to hardware structure - * @link_status: the link status word from PCI config space - * - * Stores the PCI bus info (speed, width, type) within the i40e_hw structure - **/ -void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status) -{ - hw->bus.type = i40e_bus_type_pci_express; - - switch (link_status & I40E_PCI_LINK_WIDTH) { - case I40E_PCI_LINK_WIDTH_1: - hw->bus.width = i40e_bus_width_pcie_x1; - break; - case I40E_PCI_LINK_WIDTH_2: - hw->bus.width = i40e_bus_width_pcie_x2; - break; - case I40E_PCI_LINK_WIDTH_4: - hw->bus.width = i40e_bus_width_pcie_x4; - break; - case I40E_PCI_LINK_WIDTH_8: - hw->bus.width = i40e_bus_width_pcie_x8; - break; - default: - hw->bus.width = i40e_bus_width_unknown; - break; - } - - switch (link_status & I40E_PCI_LINK_SPEED) { - case I40E_PCI_LINK_SPEED_2500: - hw->bus.speed = i40e_bus_speed_2500; - break; - case I40E_PCI_LINK_SPEED_5000: - hw->bus.speed = i40e_bus_speed_5000; - break; - case I40E_PCI_LINK_SPEED_8000: - hw->bus.speed = i40e_bus_speed_8000; - break; - default: - hw->bus.speed = i40e_bus_speed_unknown; - break; - } -} - -/** - * i40e_read_bw_from_alt_ram - * @hw: pointer to the hardware structure - * @max_bw: pointer for max_bw read - * @min_bw: pointer for min_bw read - * @min_valid: pointer for bool that is true if min_bw is a valid value - * @max_valid: pointer for bool that is true if max_bw is a valid value - * - * Read bw from the alternate ram for the given pf - **/ -enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw, - u32 *max_bw, u32 *min_bw, - bool *min_valid, bool 
*max_valid) -{ - enum i40e_status_code status; - u32 max_bw_addr, min_bw_addr; - - /* Calculate the address of the min/max bw registers */ - max_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET + - I40E_ALT_STRUCT_MAX_BW_OFFSET + - (I40E_ALT_STRUCT_DWORDS_PER_PF*hw->pf_id); - min_bw_addr = I40E_ALT_STRUCT_FIRST_PF_OFFSET + - I40E_ALT_STRUCT_MIN_BW_OFFSET + - (I40E_ALT_STRUCT_DWORDS_PER_PF*hw->pf_id); - - /* Read the bandwidths from alt ram */ - status = i40e_aq_alternate_read(hw, max_bw_addr, max_bw, - min_bw_addr, min_bw); - - if (*min_bw & I40E_ALT_BW_VALID_MASK) - *min_valid = true; - else - *min_valid = false; - - if (*max_bw & I40E_ALT_BW_VALID_MASK) - *max_valid = true; - else - *max_valid = false; - - return status; -} - -/** - * i40e_aq_configure_partition_bw - * @hw: pointer to the hardware structure - * @bw_data: Buffer holding valid pfs and bw limits - * @cmd_details: pointer to command details - * - * Configure partitions guaranteed/max bw - **/ -enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw, - struct i40e_aqc_configure_partition_bw_data *bw_data, - struct i40e_asq_cmd_details *cmd_details) -{ - enum i40e_status_code status; - struct i40e_aq_desc desc; - u16 bwd_size = sizeof(struct i40e_aqc_configure_partition_bw_data); - - i40e_fill_default_direct_cmd_desc(&desc, - i40e_aqc_opc_configure_partition_bw); - - /* Indirect command */ - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_BUF); - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_RD); - - if (bwd_size > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - - desc.datalen = CPU_TO_LE16(bwd_size); - - status = i40e_asq_send_command(hw, &desc, bw_data, bwd_size, cmd_details); - - return status; -} - -/** - * i40e_aq_send_msg_to_pf - * @hw: pointer to the hardware structure - * @v_opcode: opcodes for VF-PF communication - * @v_retval: return error code - * @msg: pointer to the msg buffer - * @msglen: msg length - * @cmd_details: pointer to command details - * - * Send message to PF driver using admin queue. By default, this message - * is sent asynchronously, i.e. i40e_asq_send_command() does not wait for - * completion before returning. - **/ -enum i40e_status_code i40e_aq_send_msg_to_pf(struct i40e_hw *hw, - enum i40e_virtchnl_ops v_opcode, - enum i40e_status_code v_retval, - u8 *msg, u16 msglen, - struct i40e_asq_cmd_details *cmd_details) -{ - struct i40e_aq_desc desc; - struct i40e_asq_cmd_details details; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_send_msg_to_pf); - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_SI); - desc.cookie_high = CPU_TO_LE32(v_opcode); - desc.cookie_low = CPU_TO_LE32(v_retval); - if (msglen) { - desc.flags |= CPU_TO_LE16((u16)(I40E_AQ_FLAG_BUF - | I40E_AQ_FLAG_RD)); - if (msglen > I40E_AQ_LARGE_BUF) - desc.flags |= CPU_TO_LE16((u16)I40E_AQ_FLAG_LB); - desc.datalen = CPU_TO_LE16(msglen); - } - if (!cmd_details) { - i40e_memset(&details, 0, sizeof(details), I40E_NONDMA_MEM); - details.async = true; - cmd_details = &details; - } - status = i40e_asq_send_command(hw, (struct i40e_aq_desc *)&desc, msg, - msglen, cmd_details); - return status; -} - -/** - * i40e_vf_parse_hw_config - * @hw: pointer to the hardware structure - * @msg: pointer to the virtual channel VF resource structure - * - * Given a VF resource message from the PF, populate the hw struct - * with appropriate information. 
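/* Sketch: the VF side of the virtchnl exchange that feeds
 * i40e_vf_parse_hw_config() below -- ask the PF for this VF's resources
 * with i40e_aq_send_msg_to_pf().  Only the wrapper name is invented; the
 * opcode is assumed to come from the i40e_virtchnl_ops enum named in the
 * prototype above. */
static enum i40e_status_code example_vf_request_resources(struct i40e_hw *hw)
{
	/* sent asynchronously; the PF's reply carries an
	 * i40e_virtchnl_vf_resource structure */
	return i40e_aq_send_msg_to_pf(hw, I40E_VIRTCHNL_OP_GET_VF_RESOURCES,
				      I40E_SUCCESS, NULL, 0, NULL);
}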
- **/ -void i40e_vf_parse_hw_config(struct i40e_hw *hw, - struct i40e_virtchnl_vf_resource *msg) -{ - struct i40e_virtchnl_vsi_resource *vsi_res; - int i; - - vsi_res = &msg->vsi_res[0]; - - hw->dev_caps.num_vsis = msg->num_vsis; - hw->dev_caps.num_rx_qp = msg->num_queue_pairs; - hw->dev_caps.num_tx_qp = msg->num_queue_pairs; - hw->dev_caps.num_msix_vectors_vf = msg->max_vectors; - hw->dev_caps.dcb = msg->vf_offload_flags & - I40E_VIRTCHNL_VF_OFFLOAD_L2; - hw->dev_caps.fcoe = (msg->vf_offload_flags & - I40E_VIRTCHNL_VF_OFFLOAD_FCOE) ? 1 : 0; - hw->dev_caps.iwarp = (msg->vf_offload_flags & - I40E_VIRTCHNL_VF_OFFLOAD_IWARP) ? 1 : 0; - for (i = 0; i < msg->num_vsis; i++) { - if (vsi_res->vsi_type == I40E_VSI_SRIOV) { - i40e_memcpy(hw->mac.perm_addr, - vsi_res->default_mac_addr, - I40E_ETH_LENGTH_OF_ADDRESS, - I40E_NONDMA_TO_NONDMA); - i40e_memcpy(hw->mac.addr, vsi_res->default_mac_addr, - I40E_ETH_LENGTH_OF_ADDRESS, - I40E_NONDMA_TO_NONDMA); - } - vsi_res++; - } -} - -/** - * i40e_vf_reset - * @hw: pointer to the hardware structure - * - * Send a VF_RESET message to the PF. Does not wait for response from PF - * as none will be forthcoming. Immediately after calling this function, - * the admin queue should be shut down and (optionally) reinitialized. - **/ -enum i40e_status_code i40e_vf_reset(struct i40e_hw *hw) -{ - return i40e_aq_send_msg_to_pf(hw, I40E_VIRTCHNL_OP_RESET_VF, - I40E_SUCCESS, NULL, 0, NULL); -} -#endif /* VF_DRIVER */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_dcb.c b/lib/librte_pmd_i40e/i40e/i40e_dcb.c deleted file mode 100644 index d067028..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_dcb.c +++ /dev/null @@ -1,479 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
- -***************************************************************************/ - -#include "i40e_adminq.h" -#include "i40e_prototype.h" -#include "i40e_dcb.h" - -/** - * i40e_get_dcbx_status - * @hw: pointer to the hw struct - * @status: Embedded DCBX Engine Status - * - * Get the DCBX status from the Firmware - **/ -enum i40e_status_code i40e_get_dcbx_status(struct i40e_hw *hw, u16 *status) -{ - u32 reg; - - if (!status) - return I40E_ERR_PARAM; - - reg = rd32(hw, I40E_PRTDCB_GENS); - *status = (u16)((reg & I40E_PRTDCB_GENS_DCBX_STATUS_MASK) >> - I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT); - - return I40E_SUCCESS; -} - -/** - * i40e_parse_ieee_etscfg_tlv - * @tlv: IEEE 802.1Qaz ETS CFG TLV - * @dcbcfg: Local store to update ETS CFG data - * - * Parses IEEE 802.1Qaz ETS CFG TLV - **/ -static void i40e_parse_ieee_etscfg_tlv(struct i40e_lldp_org_tlv *tlv, - struct i40e_dcbx_config *dcbcfg) -{ - struct i40e_ieee_ets_config *etscfg; - u8 *buf = tlv->tlvinfo; - u16 offset = 0; - u8 priority; - int i; - - /* First Octet post subtype - * -------------------------- - * |will-|CBS | Re- | Max | - * |ing | |served| TCs | - * -------------------------- - * |1bit | 1bit|3 bits|3bits| - */ - etscfg = &dcbcfg->etscfg; - etscfg->willing = (u8)((buf[offset] & I40E_IEEE_ETS_WILLING_MASK) >> - I40E_IEEE_ETS_WILLING_SHIFT); - etscfg->cbs = (u8)((buf[offset] & I40E_IEEE_ETS_CBS_MASK) >> - I40E_IEEE_ETS_CBS_SHIFT); - etscfg->maxtcs = (u8)((buf[offset] & I40E_IEEE_ETS_MAXTC_MASK) >> - I40E_IEEE_ETS_MAXTC_SHIFT); - - /* Move offset to Priority Assignment Table */ - offset++; - - /* Priority Assignment Table (4 octets) - * Octets:| 1 | 2 | 3 | 4 | - * ----------------------------------------- - * |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7| - * ----------------------------------------- - * Bits:|7 4|3 0|7 4|3 0|7 4|3 0|7 4|3 0| - * ----------------------------------------- - */ - for (i = 0; i < 4; i++) { - priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_1_MASK) >> - I40E_IEEE_ETS_PRIO_1_SHIFT); - etscfg->prioritytable[i * 2] = priority; - priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_0_MASK) >> - I40E_IEEE_ETS_PRIO_0_SHIFT); - etscfg->prioritytable[i * 2 + 1] = priority; - offset++; - } - - /* TC Bandwidth Table (8 octets) - * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | - * --------------------------------- - * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| - * --------------------------------- - */ - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) - etscfg->tcbwtable[i] = buf[offset++]; - - /* TSA Assignment Table (8 octets) - * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | - * --------------------------------- - * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| - * --------------------------------- - */ - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) - etscfg->tsatable[i] = buf[offset++]; -} - -/** - * i40e_parse_ieee_etsrec_tlv - * @tlv: IEEE 802.1Qaz ETS REC TLV - * @dcbcfg: Local store to update ETS REC data - * - * Parses IEEE 802.1Qaz ETS REC TLV - **/ -static void i40e_parse_ieee_etsrec_tlv(struct i40e_lldp_org_tlv *tlv, - struct i40e_dcbx_config *dcbcfg) -{ - u8 *buf = tlv->tlvinfo; - u16 offset = 0; - u8 priority; - int i; - - /* Move offset to priority table */ - offset++; - - /* Priority Assignment Table (4 octets) - * Octets:| 1 | 2 | 3 | 4 | - * ----------------------------------------- - * |pri0|pri1|pri2|pri3|pri4|pri5|pri6|pri7| - * ----------------------------------------- - * Bits:|7 4|3 0|7 4|3 0|7 4|3 0|7 4|3 0| - * ----------------------------------------- - */ - for (i = 0; i < 4; i++) { - priority = (u8)((buf[offset] & 
I40E_IEEE_ETS_PRIO_1_MASK) >> - I40E_IEEE_ETS_PRIO_1_SHIFT); - dcbcfg->etsrec.prioritytable[i*2] = priority; - priority = (u8)((buf[offset] & I40E_IEEE_ETS_PRIO_0_MASK) >> - I40E_IEEE_ETS_PRIO_0_SHIFT); - dcbcfg->etsrec.prioritytable[i*2 + 1] = priority; - offset++; - } - - /* TC Bandwidth Table (8 octets) - * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | - * --------------------------------- - * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| - * --------------------------------- - */ - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) - dcbcfg->etsrec.tcbwtable[i] = buf[offset++]; - - /* TSA Assignment Table (8 octets) - * Octets:| 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | - * --------------------------------- - * |tc0|tc1|tc2|tc3|tc4|tc5|tc6|tc7| - * --------------------------------- - */ - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) - dcbcfg->etsrec.tsatable[i] = buf[offset++]; -} - -/** - * i40e_parse_ieee_pfccfg_tlv - * @tlv: IEEE 802.1Qaz PFC CFG TLV - * @dcbcfg: Local store to update PFC CFG data - * - * Parses IEEE 802.1Qaz PFC CFG TLV - **/ -static void i40e_parse_ieee_pfccfg_tlv(struct i40e_lldp_org_tlv *tlv, - struct i40e_dcbx_config *dcbcfg) -{ - u8 *buf = tlv->tlvinfo; - - /* ---------------------------------------- - * |will-|MBC | Re- | PFC | PFC Enable | - * |ing | |served| cap | | - * ----------------------------------------- - * |1bit | 1bit|2 bits|4bits| 1 octet | - */ - dcbcfg->pfc.willing = (u8)((buf[0] & I40E_IEEE_PFC_WILLING_MASK) >> - I40E_IEEE_PFC_WILLING_SHIFT); - dcbcfg->pfc.mbc = (u8)((buf[0] & I40E_IEEE_PFC_MBC_MASK) >> - I40E_IEEE_PFC_MBC_SHIFT); - dcbcfg->pfc.pfccap = (u8)((buf[0] & I40E_IEEE_PFC_CAP_MASK) >> - I40E_IEEE_PFC_CAP_SHIFT); - dcbcfg->pfc.pfcenable = buf[1]; -} - -/** - * i40e_parse_ieee_app_tlv - * @tlv: IEEE 802.1Qaz APP TLV - * @dcbcfg: Local store to update APP PRIO data - * - * Parses IEEE 802.1Qaz APP PRIO TLV - **/ -static void i40e_parse_ieee_app_tlv(struct i40e_lldp_org_tlv *tlv, - struct i40e_dcbx_config *dcbcfg) -{ - u16 typelength; - u16 offset = 0; - u16 length; - int i = 0; - u8 *buf; - - typelength = I40E_NTOHS(tlv->typelength); - length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); - buf = tlv->tlvinfo; - - /* The App priority table starts 5 octets after TLV header */ - length -= (sizeof(tlv->ouisubtype) + 1); - - /* Move offset to App Priority Table */ - offset++; - - /* Application Priority Table (3 octets) - * Octets:| 1 | 2 | 3 | - * ----------------------------------------- - * |Priority|Rsrvd| Sel | Protocol ID | - * ----------------------------------------- - * Bits:|23 21|20 19|18 16|15 0| - * ----------------------------------------- - */ - while (offset < length) { - dcbcfg->app[i].priority = (u8)((buf[offset] & - I40E_IEEE_APP_PRIO_MASK) >> - I40E_IEEE_APP_PRIO_SHIFT); - dcbcfg->app[i].selector = (u8)((buf[offset] & - I40E_IEEE_APP_SEL_MASK) >> - I40E_IEEE_APP_SEL_SHIFT); - dcbcfg->app[i].protocolid = (buf[offset + 1] << 0x8) | - buf[offset + 2]; - /* Move to next app */ - offset += 3; - i++; - if (i >= I40E_DCBX_MAX_APPS) - break; - } - - dcbcfg->numapps = i; -} - -/** - * i40e_parse_ieee_tlv - * @tlv: IEEE 802.1Qaz TLV - * @dcbcfg: Local store to update ETS REC data - * - * Get the TLV subtype and send it to parsing function - * based on the subtype value - **/ -static void i40e_parse_ieee_tlv(struct i40e_lldp_org_tlv *tlv, - struct i40e_dcbx_config *dcbcfg) -{ - u32 ouisubtype; - u8 subtype; - - ouisubtype = I40E_NTOHL(tlv->ouisubtype); - subtype = (u8)((ouisubtype & I40E_LLDP_TLV_SUBTYPE_MASK) >> - 
I40E_LLDP_TLV_SUBTYPE_SHIFT); - switch (subtype) { - case I40E_IEEE_SUBTYPE_ETS_CFG: - i40e_parse_ieee_etscfg_tlv(tlv, dcbcfg); - break; - case I40E_IEEE_SUBTYPE_ETS_REC: - i40e_parse_ieee_etsrec_tlv(tlv, dcbcfg); - break; - case I40E_IEEE_SUBTYPE_PFC_CFG: - i40e_parse_ieee_pfccfg_tlv(tlv, dcbcfg); - break; - case I40E_IEEE_SUBTYPE_APP_PRI: - i40e_parse_ieee_app_tlv(tlv, dcbcfg); - break; - default: - break; - } -} - -/** - * i40e_parse_org_tlv - * @tlv: Organization specific TLV - * @dcbcfg: Local store to update ETS REC data - * - * Currently only IEEE 802.1Qaz TLV is supported, all others - * will be returned - **/ -static void i40e_parse_org_tlv(struct i40e_lldp_org_tlv *tlv, - struct i40e_dcbx_config *dcbcfg) -{ - u32 ouisubtype; - u32 oui; - - ouisubtype = I40E_NTOHL(tlv->ouisubtype); - oui = (u32)((ouisubtype & I40E_LLDP_TLV_OUI_MASK) >> - I40E_LLDP_TLV_OUI_SHIFT); - switch (oui) { - case I40E_IEEE_8021QAZ_OUI: - i40e_parse_ieee_tlv(tlv, dcbcfg); - break; - default: - break; - } -} - -/** - * i40e_lldp_to_dcb_config - * @lldpmib: LLDPDU to be parsed - * @dcbcfg: store for LLDPDU data - * - * Parse DCB configuration from the LLDPDU - **/ -enum i40e_status_code i40e_lldp_to_dcb_config(u8 *lldpmib, - struct i40e_dcbx_config *dcbcfg) -{ - enum i40e_status_code ret = I40E_SUCCESS; - struct i40e_lldp_org_tlv *tlv; - u16 type; - u16 length; - u16 typelength; - u16 offset = 0; - - if (!lldpmib || !dcbcfg) - return I40E_ERR_PARAM; - - /* set to the start of LLDPDU */ - lldpmib += I40E_LLDP_MIB_HLEN; - tlv = (struct i40e_lldp_org_tlv *)lldpmib; - while (1) { - typelength = I40E_NTOHS(tlv->typelength); - type = (u16)((typelength & I40E_LLDP_TLV_TYPE_MASK) >> - I40E_LLDP_TLV_TYPE_SHIFT); - length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >> - I40E_LLDP_TLV_LEN_SHIFT); - offset += sizeof(typelength) + length; - - /* END TLV or beyond LLDPDU size */ - if ((type == I40E_TLV_TYPE_END) || (offset > I40E_LLDPDU_SIZE)) - break; - - switch (type) { - case I40E_TLV_TYPE_ORG: - i40e_parse_org_tlv(tlv, dcbcfg); - break; - default: - break; - } - - /* Move to next TLV */ - tlv = (struct i40e_lldp_org_tlv *)((char *)tlv + - sizeof(tlv->typelength) + - length); - } - - return ret; -} - -/** - * i40e_aq_get_dcb_config - * @hw: pointer to the hw struct - * @mib_type: mib type for the query - * @bridgetype: bridge type for the query (remote) - * @dcbcfg: store for LLDPDU data - * - * Query DCB configuration from the Firmware - **/ -enum i40e_status_code i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, - u8 bridgetype, - struct i40e_dcbx_config *dcbcfg) -{ - enum i40e_status_code ret = I40E_SUCCESS; - struct i40e_virt_mem mem; - u8 *lldpmib; - - /* Allocate the LLDPDU */ - ret = i40e_allocate_virt_mem(hw, &mem, I40E_LLDPDU_SIZE); - if (ret) - return ret; - - lldpmib = (u8 *)mem.va; - ret = i40e_aq_get_lldp_mib(hw, bridgetype, mib_type, - (void *)lldpmib, I40E_LLDPDU_SIZE, - NULL, NULL, NULL); - if (ret) - goto free_mem; - - /* Parse LLDP MIB to get dcb configuration */ - ret = i40e_lldp_to_dcb_config(lldpmib, dcbcfg); - -free_mem: - i40e_free_virt_mem(hw, &mem); - return ret; -} - -/** - * i40e_get_dcb_config - * @hw: pointer to the hw struct - * - * Get DCB configuration from the Firmware - **/ -enum i40e_status_code i40e_get_dcb_config(struct i40e_hw *hw) -{ - enum i40e_status_code ret = I40E_SUCCESS; - - /* Get Local DCB Config */ - ret = i40e_aq_get_dcb_config(hw, I40E_AQ_LLDP_MIB_LOCAL, 0, - &hw->local_dcbx_config); - if (ret) - goto out; - - /* Get Remote DCB Config */ - ret = 
i40e_aq_get_dcb_config(hw, I40E_AQ_LLDP_MIB_REMOTE, - I40E_AQ_LLDP_BRIDGE_TYPE_NEAREST_BRIDGE, - &hw->remote_dcbx_config); -out: - return ret; -} - -/** - * i40e_init_dcb - * @hw: pointer to the hw struct - * - * Update DCB configuration from the Firmware - **/ -enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw) -{ - enum i40e_status_code ret = I40E_SUCCESS; - - if (!hw->func_caps.dcb) - return ret; - - /* Get DCBX status */ - ret = i40e_get_dcbx_status(hw, &hw->dcbx_status); - if (ret) - return ret; - - /* Check the DCBX Status */ - switch (hw->dcbx_status) { - case I40E_DCBX_STATUS_DONE: - case I40E_DCBX_STATUS_IN_PROGRESS: - /* Get current DCBX configuration */ - ret = i40e_get_dcb_config(hw); - break; - case I40E_DCBX_STATUS_DISABLED: - return ret; - case I40E_DCBX_STATUS_NOT_STARTED: - case I40E_DCBX_STATUS_MULTIPLE_PEERS: - default: - break; - } - - /* Configure the LLDP MIB change event */ - ret = i40e_aq_cfg_lldp_mib_change_event(hw, true, NULL); - if (ret) - return ret; - - return ret; -} diff --git a/lib/librte_pmd_i40e/i40e/i40e_dcb.h b/lib/librte_pmd_i40e/i40e/i40e_dcb.h deleted file mode 100644 index 2261e08..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_dcb.h +++ /dev/null @@ -1,161 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
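/* Sketch: PF-side DCB bring-up using i40e_init_dcb() above.  On success
 * the parsed local configuration lands in hw->local_dcbx_config, filled by
 * the TLV parsers earlier in this file.  Only the wrapper name is
 * invented. */
static u8 example_dcb_ets_willing(struct i40e_hw *hw)
{
	/* reads the DCBX status and, when the engine has results,
	 * retrieves and parses the local and remote LLDP MIBs */
	if (i40e_init_dcb(hw) != I40E_SUCCESS)
		return 0;

	/* the willing bit parsed from the IEEE ETS CFG TLV */
	return hw->local_dcbx_config.etscfg.willing;
}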
- -***************************************************************************/ - -#ifndef _I40E_DCB_H_ -#define _I40E_DCB_H_ - -#include "i40e_type.h" - -#define I40E_DCBX_OFFLOAD_DISABLED 0 -#define I40E_DCBX_OFFLOAD_ENABLED 1 - -#define I40E_DCBX_STATUS_NOT_STARTED 0 -#define I40E_DCBX_STATUS_IN_PROGRESS 1 -#define I40E_DCBX_STATUS_DONE 2 -#define I40E_DCBX_STATUS_MULTIPLE_PEERS 3 -#define I40E_DCBX_STATUS_DISABLED 7 - -#define I40E_TLV_TYPE_END 0 -#define I40E_TLV_TYPE_ORG 127 - -#define I40E_IEEE_8021QAZ_OUI 0x0080C2 -#define I40E_IEEE_SUBTYPE_ETS_CFG 9 -#define I40E_IEEE_SUBTYPE_ETS_REC 10 -#define I40E_IEEE_SUBTYPE_PFC_CFG 11 -#define I40E_IEEE_SUBTYPE_APP_PRI 12 - -#define I40E_LLDP_ADMINSTATUS_DISABLED 0 -#define I40E_LLDP_ADMINSTATUS_ENABLED_RX 1 -#define I40E_LLDP_ADMINSTATUS_ENABLED_TX 2 -#define I40E_LLDP_ADMINSTATUS_ENABLED_RXTX 3 - -/* Defines for LLDP TLV header */ -#define I40E_LLDP_MIB_HLEN 14 -#define I40E_LLDP_TLV_LEN_SHIFT 0 -#define I40E_LLDP_TLV_LEN_MASK (0x01FF << I40E_LLDP_TLV_LEN_SHIFT) -#define I40E_LLDP_TLV_TYPE_SHIFT 9 -#define I40E_LLDP_TLV_TYPE_MASK (0x7F << I40E_LLDP_TLV_TYPE_SHIFT) -#define I40E_LLDP_TLV_SUBTYPE_SHIFT 0 -#define I40E_LLDP_TLV_SUBTYPE_MASK (0xFF << I40E_LLDP_TLV_SUBTYPE_SHIFT) -#define I40E_LLDP_TLV_OUI_SHIFT 8 -#define I40E_LLDP_TLV_OUI_MASK (0xFFFFFF << I40E_LLDP_TLV_OUI_SHIFT) - -/* Defines for IEEE ETS TLV */ -#define I40E_IEEE_ETS_MAXTC_SHIFT 0 -#define I40E_IEEE_ETS_MAXTC_MASK (0x7 << I40E_IEEE_ETS_MAXTC_SHIFT) -#define I40E_IEEE_ETS_CBS_SHIFT 6 -#define I40E_IEEE_ETS_CBS_MASK (0x1 << I40E_IEEE_ETS_CBS_SHIFT) -#define I40E_IEEE_ETS_WILLING_SHIFT 7 -#define I40E_IEEE_ETS_WILLING_MASK (0x1 << I40E_IEEE_ETS_WILLING_SHIFT) -#define I40E_IEEE_ETS_PRIO_0_SHIFT 0 -#define I40E_IEEE_ETS_PRIO_0_MASK (0x7 << I40E_IEEE_ETS_PRIO_0_SHIFT) -#define I40E_IEEE_ETS_PRIO_1_SHIFT 4 -#define I40E_IEEE_ETS_PRIO_1_MASK (0x7 << I40E_IEEE_ETS_PRIO_1_SHIFT) - -/* Defines for IEEE TSA types */ -#define I40E_IEEE_TSA_STRICT 0 -#define I40E_IEEE_TSA_CBS 1 -#define I40E_IEEE_TSA_ETS 2 -#define I40E_IEEE_TSA_VENDOR 255 - -/* Defines for IEEE PFC TLV */ -#define I40E_IEEE_PFC_CAP_SHIFT 0 -#define I40E_IEEE_PFC_CAP_MASK (0xF << I40E_IEEE_PFC_CAP_SHIFT) -#define I40E_IEEE_PFC_MBC_SHIFT 6 -#define I40E_IEEE_PFC_MBC_MASK (0x1 << I40E_IEEE_PFC_MBC_SHIFT) -#define I40E_IEEE_PFC_WILLING_SHIFT 7 -#define I40E_IEEE_PFC_WILLING_MASK (0x1 << I40E_IEEE_PFC_WILLING_SHIFT) - -/* Defines for IEEE APP TLV */ -#define I40E_IEEE_APP_SEL_SHIFT 0 -#define I40E_IEEE_APP_SEL_MASK (0x7 << I40E_IEEE_APP_SEL_SHIFT) -#define I40E_IEEE_APP_PRIO_SHIFT 5 -#define I40E_IEEE_APP_PRIO_MASK (0x7 << I40E_IEEE_APP_PRIO_SHIFT) - - -#pragma pack(1) - -/* IEEE 802.1AB LLDP TLV structure */ -struct i40e_lldp_generic_tlv { - __be16 typelength; - u8 tlvinfo[1]; -}; - -/* IEEE 802.1AB LLDP Organization specific TLV */ -struct i40e_lldp_org_tlv { - __be16 typelength; - __be32 ouisubtype; - u8 tlvinfo[1]; -}; -#pragma pack() - -/* - * TODO: The below structures related LLDP/DCBX variables - * and statistics are defined but need to find how to get - * the required information from the Firmware to use them - */ - -/* IEEE 802.1AB LLDP Agent Statistics */ -struct i40e_lldp_stats { - u64 remtablelastchangetime; - u64 remtableinserts; - u64 remtabledeletes; - u64 remtabledrops; - u64 remtableageouts; - u64 txframestotal; - u64 rxframesdiscarded; - u64 rxportframeerrors; - u64 rxportframestotal; - u64 rxporttlvsdiscardedtotal; - u64 rxporttlvsunrecognizedtotal; - u64 remtoomanyneighbors; -}; - -/* IEEE 802.1Qaz 
DCBX variables */ -struct i40e_dcbx_variables { - u32 defmaxtrafficclasses; - u32 defprioritytcmapping; - u32 deftcbandwidth; - u32 deftsaassignment; -}; - -enum i40e_status_code i40e_get_dcbx_status(struct i40e_hw *hw, - u16 *status); -enum i40e_status_code i40e_lldp_to_dcb_config(u8 *lldpmib, - struct i40e_dcbx_config *dcbcfg); -enum i40e_status_code i40e_aq_get_dcb_config(struct i40e_hw *hw, u8 mib_type, - u8 bridgetype, - struct i40e_dcbx_config *dcbcfg); -enum i40e_status_code i40e_get_dcb_config(struct i40e_hw *hw); -enum i40e_status_code i40e_init_dcb(struct i40e_hw *hw); -#endif /* _I40E_DCB_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_diag.c b/lib/librte_pmd_i40e/i40e/i40e_diag.c deleted file mode 100644 index 167fcf8..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_diag.c +++ /dev/null @@ -1,178 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
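/* Sketch: decoding an LLDP TLV header with the shift/mask pairs defined in
 * i40e_dcb.h above.  The 16-bit typelength packs a 7-bit type and a 9-bit
 * length, which is exactly what i40e_lldp_to_dcb_config() relies on when
 * it walks the LLDPDU.  Only the helper name is invented. */
static void example_decode_tlv_header(struct i40e_lldp_org_tlv *tlv,
				      u16 *type, u16 *length)
{
	u16 typelength = I40E_NTOHS(tlv->typelength);

	*type = (u16)((typelength & I40E_LLDP_TLV_TYPE_MASK) >>
		      I40E_LLDP_TLV_TYPE_SHIFT);
	*length = (u16)((typelength & I40E_LLDP_TLV_LEN_MASK) >>
			I40E_LLDP_TLV_LEN_SHIFT);
}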
- -***************************************************************************/ - -#include "i40e_diag.h" -#include "i40e_prototype.h" - -/** - * i40e_diag_set_loopback - * @hw: pointer to the hw struct - * @mode: loopback mode - * - * Set chosen loopback mode - **/ -enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw, - enum i40e_lb_mode mode) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (i40e_aq_set_lb_modes(hw, mode, NULL)) - ret_code = I40E_ERR_DIAG_TEST_FAILED; - - return ret_code; -} - -/** - * i40e_diag_reg_pattern_test - * @hw: pointer to the hw struct - * @reg: reg to be tested - * @mask: bits to be touched - **/ -static enum i40e_status_code i40e_diag_reg_pattern_test(struct i40e_hw *hw, - u32 reg, u32 mask) -{ - const u32 patterns[] = {0x5A5A5A5A, 0xA5A5A5A5, 0x00000000, 0xFFFFFFFF}; - u32 pat, val, orig_val; - int i; - - orig_val = rd32(hw, reg); - for (i = 0; i < ARRAY_SIZE(patterns); i++) { - pat = patterns[i]; - wr32(hw, reg, (pat & mask)); - val = rd32(hw, reg); - if ((val & mask) != (pat & mask)) { - return I40E_ERR_DIAG_TEST_FAILED; - } - } - - wr32(hw, reg, orig_val); - val = rd32(hw, reg); - if (val != orig_val) { - return I40E_ERR_DIAG_TEST_FAILED; - } - - return I40E_SUCCESS; -} - -struct i40e_diag_reg_test_info i40e_reg_list[] = { - /* offset mask elements stride */ - {I40E_QTX_CTL(0), 0x0000FFBF, 1, I40E_QTX_CTL(1) - I40E_QTX_CTL(0)}, - {I40E_PFINT_ITR0(0), 0x00000FFF, 3, I40E_PFINT_ITR0(1) - I40E_PFINT_ITR0(0)}, - {I40E_PFINT_ITRN(0, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(0, 1) - I40E_PFINT_ITRN(0, 0)}, - {I40E_PFINT_ITRN(1, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(1, 1) - I40E_PFINT_ITRN(1, 0)}, - {I40E_PFINT_ITRN(2, 0), 0x00000FFF, 1, I40E_PFINT_ITRN(2, 1) - I40E_PFINT_ITRN(2, 0)}, - {I40E_PFINT_STAT_CTL0, 0x0000000C, 1, 0}, - {I40E_PFINT_LNKLST0, 0x00001FFF, 1, 0}, - {I40E_PFINT_LNKLSTN(0), 0x000007FF, 1, I40E_PFINT_LNKLSTN(1) - I40E_PFINT_LNKLSTN(0)}, - {I40E_QINT_TQCTL(0), 0x000000FF, 1, I40E_QINT_TQCTL(1) - I40E_QINT_TQCTL(0)}, - {I40E_QINT_RQCTL(0), 0x000000FF, 1, I40E_QINT_RQCTL(1) - I40E_QINT_RQCTL(0)}, - {I40E_PFINT_ICR0_ENA, 0xF7F20000, 1, 0}, - { 0 } -}; - -/** - * i40e_diag_reg_test - * @hw: pointer to the hw struct - * - * Perform registers diagnostic test - **/ -enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u32 reg, mask; - u32 i, j; - - for (i = 0; i40e_reg_list[i].offset != 0 && - ret_code == I40E_SUCCESS; i++) { - - /* set actual reg range for dynamically allocated resources */ - if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) && - hw->func_caps.num_tx_qp != 0) - i40e_reg_list[i].elements = hw->func_caps.num_tx_qp; - if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) || - i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) || - i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) || - i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) || - i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) && - hw->func_caps.num_msix_vectors != 0) - i40e_reg_list[i].elements = - hw->func_caps.num_msix_vectors - 1; - - /* test register access */ - mask = i40e_reg_list[i].mask; - for (j = 0; j < i40e_reg_list[i].elements && - ret_code == I40E_SUCCESS; j++) { - reg = i40e_reg_list[i].offset - + (j * i40e_reg_list[i].stride); - ret_code = i40e_diag_reg_pattern_test(hw, reg, mask); - } - } - - return ret_code; -} - -/** - * i40e_diag_eeprom_test - * @hw: pointer to the hw struct - * - * Perform EEPROM diagnostic test - **/ -enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw) -{ - enum 
i40e_status_code ret_code; - u16 reg_val; - - /* read NVM control word and if NVM valid, validate EEPROM checksum*/ - ret_code = i40e_read_nvm_word(hw, I40E_SR_NVM_CONTROL_WORD, &reg_val); - if ((ret_code == I40E_SUCCESS) && - ((reg_val & I40E_SR_CONTROL_WORD_1_MASK) == - (0x01 << I40E_SR_CONTROL_WORD_1_SHIFT))) { - ret_code = i40e_validate_nvm_checksum(hw, NULL); - } else { - ret_code = I40E_ERR_DIAG_TEST_FAILED; - } - - return ret_code; -} - -/** - * i40e_diag_fw_alive_test - * @hw: pointer to the hw struct - * - * Perform FW alive diagnostic test - **/ -enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw) -{ - UNREFERENCED_1PARAMETER(hw); - return I40E_SUCCESS; -} diff --git a/lib/librte_pmd_i40e/i40e/i40e_diag.h b/lib/librte_pmd_i40e/i40e/i40e_diag.h deleted file mode 100644 index feb4d4b..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_diag.h +++ /dev/null @@ -1,61 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
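/* Sketch: chaining the three diagnostic entry points defined above into a
 * single self-test, roughly the order an ethtool-style test would use.
 * Only the wrapper name is invented. */
static enum i40e_status_code example_self_test(struct i40e_hw *hw)
{
	enum i40e_status_code ret;

	ret = i40e_diag_fw_alive_test(hw);       /* trivially succeeds today */
	if (ret == I40E_SUCCESS)
		ret = i40e_diag_reg_test(hw);    /* pattern-writes safe registers */
	if (ret == I40E_SUCCESS)
		ret = i40e_diag_eeprom_test(hw); /* NVM checksum validation */
	return ret;
}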
- -***************************************************************************/ - -#ifndef _I40E_DIAG_H_ -#define _I40E_DIAG_H_ - -#include "i40e_type.h" - -enum i40e_lb_mode { - I40E_LB_MODE_NONE = 0x0, - I40E_LB_MODE_PHY_LOCAL = I40E_AQ_LB_PHY_LOCAL, - I40E_LB_MODE_PHY_REMOTE = I40E_AQ_LB_PHY_REMOTE, - I40E_LB_MODE_MAC_LOCAL = I40E_AQ_LB_MAC_LOCAL, -}; - -struct i40e_diag_reg_test_info { - u32 offset; /* the base register */ - u32 mask; /* bits that can be tested */ - u32 elements; /* number of elements if array */ - u32 stride; /* bytes between each element */ -}; - -extern struct i40e_diag_reg_test_info i40e_reg_list[]; - -enum i40e_status_code i40e_diag_set_loopback(struct i40e_hw *hw, - enum i40e_lb_mode mode); -enum i40e_status_code i40e_diag_fw_alive_test(struct i40e_hw *hw); -enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw); -enum i40e_status_code i40e_diag_eeprom_test(struct i40e_hw *hw); - -#endif /* _I40E_DIAG_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_hmc.c b/lib/librte_pmd_i40e/i40e/i40e_hmc.c deleted file mode 100644 index ae6896a..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_hmc.c +++ /dev/null @@ -1,373 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
- -***************************************************************************/ - -#include "i40e_osdep.h" -#include "i40e_register.h" -#include "i40e_status.h" -#include "i40e_alloc.h" -#include "i40e_hmc.h" -#include "i40e_type.h" - -/** - * i40e_add_sd_table_entry - Adds a segment descriptor to the table - * @hw: pointer to our hw struct - * @hmc_info: pointer to the HMC configuration information struct - * @sd_index: segment descriptor index to manipulate - * @type: what type of segment descriptor we're manipulating - * @direct_mode_sz: size to alloc in direct mode - **/ -enum i40e_status_code i40e_add_sd_table_entry(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 sd_index, - enum i40e_sd_entry_type type, - u64 direct_mode_sz) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_sd_entry *sd_entry; - enum i40e_memory_type mem_type; - bool dma_mem_alloc_done = false; - struct i40e_dma_mem mem; - u64 alloc_len; - - if (NULL == hmc_info->sd_table.sd_entry) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_add_sd_table_entry: bad sd_entry\n"); - goto exit; - } - - if (sd_index >= hmc_info->sd_table.sd_cnt) { - ret_code = I40E_ERR_INVALID_SD_INDEX; - DEBUGOUT("i40e_add_sd_table_entry: bad sd_index\n"); - goto exit; - } - - sd_entry = &hmc_info->sd_table.sd_entry[sd_index]; - if (!sd_entry->valid) { - if (I40E_SD_TYPE_PAGED == type) { - mem_type = i40e_mem_pd; - alloc_len = I40E_HMC_PAGED_BP_SIZE; - } else { - mem_type = i40e_mem_bp_jumbo; - alloc_len = direct_mode_sz; - } - - /* allocate a 4K pd page or 2M backing page */ - ret_code = i40e_allocate_dma_mem(hw, &mem, mem_type, alloc_len, - I40E_HMC_PD_BP_BUF_ALIGNMENT); - if (ret_code) - goto exit; - dma_mem_alloc_done = true; - if (I40E_SD_TYPE_PAGED == type) { - ret_code = i40e_allocate_virt_mem(hw, - &sd_entry->u.pd_table.pd_entry_virt_mem, - sizeof(struct i40e_hmc_pd_entry) * 512); - if (ret_code) - goto exit; - sd_entry->u.pd_table.pd_entry = - (struct i40e_hmc_pd_entry *) - sd_entry->u.pd_table.pd_entry_virt_mem.va; - i40e_memcpy(&sd_entry->u.pd_table.pd_page_addr, - &mem, sizeof(struct i40e_dma_mem), - I40E_NONDMA_TO_NONDMA); - } else { - i40e_memcpy(&sd_entry->u.bp.addr, - &mem, sizeof(struct i40e_dma_mem), - I40E_NONDMA_TO_NONDMA); - sd_entry->u.bp.sd_pd_index = sd_index; - } - /* initialize the sd entry */ - hmc_info->sd_table.sd_entry[sd_index].entry_type = type; - - /* increment the ref count */ - I40E_INC_SD_REFCNT(&hmc_info->sd_table); - } - /* Increment backing page reference count */ - if (I40E_SD_TYPE_DIRECT == sd_entry->entry_type) - I40E_INC_BP_REFCNT(&sd_entry->u.bp); -exit: - if (I40E_SUCCESS != ret_code) - if (dma_mem_alloc_done) - i40e_free_dma_mem(hw, &mem); - - return ret_code; -} - -/** - * i40e_add_pd_table_entry - Adds page descriptor to the specified table - * @hw: pointer to our HW structure - * @hmc_info: pointer to the HMC configuration information structure - * @pd_index: which page descriptor index to manipulate - * - * This function: - * 1. Initializes the pd entry - * 2. Adds pd_entry in the pd_table - * 3. Mark the entry valid in i40e_hmc_pd_entry structure - * 4. Initializes the pd_entry's ref count to 1 - * assumptions: - * 1. The memory for pd should be pinned down, physically contiguous and - * aligned on 4K boundary and zeroed memory. - * 2. It should be 4K in size. 
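/* Sketch of the page-descriptor encoding used by i40e_add_pd_table_entry()
 * below: the 4K-aligned physical address of the backing page is OR'd with
 * bit 0, the valid bit, before being written into the PD table
 * (page_desc = mem.pa | 0x1).  Only the helper name is invented. */
static u64 example_encode_page_desc(u64 pa_4k_aligned)
{
	/* a 4K-aligned address has its low 12 bits clear, so bit 0 is free */
	return pa_4k_aligned | 0x1;
}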
- **/ -enum i40e_status_code i40e_add_pd_table_entry(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 pd_index) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_pd_table *pd_table; - struct i40e_hmc_pd_entry *pd_entry; - struct i40e_dma_mem mem; - u32 sd_idx, rel_pd_idx; - u64 *pd_addr; - u64 page_desc; - - if (pd_index / I40E_HMC_PD_CNT_IN_SD >= hmc_info->sd_table.sd_cnt) { - ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX; - DEBUGOUT("i40e_add_pd_table_entry: bad pd_index\n"); - goto exit; - } - - /* find corresponding sd */ - sd_idx = (pd_index / I40E_HMC_PD_CNT_IN_SD); - if (I40E_SD_TYPE_PAGED != - hmc_info->sd_table.sd_entry[sd_idx].entry_type) - goto exit; - - rel_pd_idx = (pd_index % I40E_HMC_PD_CNT_IN_SD); - pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table; - pd_entry = &pd_table->pd_entry[rel_pd_idx]; - if (!pd_entry->valid) { - /* allocate a 4K backing page */ - ret_code = i40e_allocate_dma_mem(hw, &mem, i40e_mem_bp, - I40E_HMC_PAGED_BP_SIZE, - I40E_HMC_PD_BP_BUF_ALIGNMENT); - if (ret_code) - goto exit; - - i40e_memcpy(&pd_entry->bp.addr, &mem, - sizeof(struct i40e_dma_mem), I40E_NONDMA_TO_NONDMA); - pd_entry->bp.sd_pd_index = pd_index; - pd_entry->bp.entry_type = I40E_SD_TYPE_PAGED; - /* Set page address and valid bit */ - page_desc = mem.pa | 0x1; - - pd_addr = (u64 *)pd_table->pd_page_addr.va; - pd_addr += rel_pd_idx; - - /* Add the backing page physical address in the pd entry */ - i40e_memcpy(pd_addr, &page_desc, sizeof(u64), - I40E_NONDMA_TO_DMA); - - pd_entry->sd_index = sd_idx; - pd_entry->valid = true; - I40E_INC_PD_REFCNT(pd_table); - } - I40E_INC_BP_REFCNT(&pd_entry->bp); -exit: - return ret_code; -} - -/** - * i40e_remove_pd_bp - remove a backing page from a page descriptor - * @hw: pointer to our HW structure - * @hmc_info: pointer to the HMC configuration information structure - * @idx: the page index - * @is_pf: distinguishes a VF from a PF - * - * This function: - * 1. Marks the entry in pd tabe (for paged address mode) or in sd table - * (for direct address mode) invalid. - * 2. Write to register PMPDINV to invalidate the backing page in FV cache - * 3. Decrement the ref count for the pd _entry - * assumptions: - * 1. Caller can deallocate the memory used by backing storage after this - * function returns. 
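 *
 * Illustrative teardown for the hypothetical pd_index used above:
 *
 *	ret = i40e_remove_pd_bp(hw, hmc_info, pd_index);
 *
 * The backing page is only freed, and the PD slot zeroed and invalidated
 * via I40E_INVALIDATE_PF_HMC_PD, once its reference count drops to zero.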
- **/ -enum i40e_status_code i40e_remove_pd_bp(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_pd_entry *pd_entry; - struct i40e_hmc_pd_table *pd_table; - struct i40e_hmc_sd_entry *sd_entry; - u32 sd_idx, rel_pd_idx; - u64 *pd_addr; - - /* calculate index */ - sd_idx = idx / I40E_HMC_PD_CNT_IN_SD; - rel_pd_idx = idx % I40E_HMC_PD_CNT_IN_SD; - if (sd_idx >= hmc_info->sd_table.sd_cnt) { - ret_code = I40E_ERR_INVALID_PAGE_DESC_INDEX; - DEBUGOUT("i40e_remove_pd_bp: bad idx\n"); - goto exit; - } - sd_entry = &hmc_info->sd_table.sd_entry[sd_idx]; - if (I40E_SD_TYPE_PAGED != sd_entry->entry_type) { - ret_code = I40E_ERR_INVALID_SD_TYPE; - DEBUGOUT("i40e_remove_pd_bp: wrong sd_entry type\n"); - goto exit; - } - /* get the entry and decrease its ref counter */ - pd_table = &hmc_info->sd_table.sd_entry[sd_idx].u.pd_table; - pd_entry = &pd_table->pd_entry[rel_pd_idx]; - I40E_DEC_BP_REFCNT(&pd_entry->bp); - if (pd_entry->bp.ref_cnt) - goto exit; - - /* mark the entry invalid */ - pd_entry->valid = false; - I40E_DEC_PD_REFCNT(pd_table); - pd_addr = (u64 *)pd_table->pd_page_addr.va; - pd_addr += rel_pd_idx; - i40e_memset(pd_addr, 0, sizeof(u64), I40E_DMA_MEM); - I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, idx); - - /* free memory here */ - ret_code = i40e_free_dma_mem(hw, &(pd_entry->bp.addr)); - if (I40E_SUCCESS != ret_code) - goto exit; - if (!pd_table->ref_cnt) - i40e_free_virt_mem(hw, &pd_table->pd_entry_virt_mem); -exit: - return ret_code; -} - -/** - * i40e_prep_remove_sd_bp - Prepares to remove a backing page from a sd entry - * @hmc_info: pointer to the HMC configuration information structure - * @idx: the page index - **/ -enum i40e_status_code i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info, - u32 idx) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_sd_entry *sd_entry; - - /* get the entry and decrease its ref counter */ - sd_entry = &hmc_info->sd_table.sd_entry[idx]; - I40E_DEC_BP_REFCNT(&sd_entry->u.bp); - if (sd_entry->u.bp.ref_cnt) { - ret_code = I40E_ERR_NOT_READY; - goto exit; - } - I40E_DEC_SD_REFCNT(&hmc_info->sd_table); - - /* mark the entry invalid */ - sd_entry->valid = false; -exit: - return ret_code; -} - -/** - * i40e_remove_sd_bp_new - Removes a backing page from a segment descriptor - * @hw: pointer to our hw struct - * @hmc_info: pointer to the HMC configuration information structure - * @idx: the page index - * @is_pf: used to distinguish between VF and PF - **/ -enum i40e_status_code i40e_remove_sd_bp_new(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx, bool is_pf) -{ - struct i40e_hmc_sd_entry *sd_entry; - enum i40e_status_code ret_code = I40E_SUCCESS; - - /* get the entry and decrease its ref counter */ - sd_entry = &hmc_info->sd_table.sd_entry[idx]; - if (is_pf) { - I40E_CLEAR_PF_SD_ENTRY(hw, idx, I40E_SD_TYPE_DIRECT); - } else { - ret_code = I40E_NOT_SUPPORTED; - goto exit; - } - ret_code = i40e_free_dma_mem(hw, &(sd_entry->u.bp.addr)); - if (I40E_SUCCESS != ret_code) - goto exit; -exit: - return ret_code; -} - -/** - * i40e_prep_remove_pd_page - Prepares to remove a PD page from sd entry. 
- * @hmc_info: pointer to the HMC configuration information structure - * @idx: segment descriptor index to find the relevant page descriptor - **/ -enum i40e_status_code i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info, - u32 idx) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_sd_entry *sd_entry; - - sd_entry = &hmc_info->sd_table.sd_entry[idx]; - - if (sd_entry->u.pd_table.ref_cnt) { - ret_code = I40E_ERR_NOT_READY; - goto exit; - } - - /* mark the entry invalid */ - sd_entry->valid = false; - - I40E_DEC_SD_REFCNT(&hmc_info->sd_table); -exit: - return ret_code; -} - -/** - * i40e_remove_pd_page_new - Removes a PD page from sd entry. - * @hw: pointer to our hw struct - * @hmc_info: pointer to the HMC configuration information structure - * @idx: segment descriptor index to find the relevant page descriptor - * @is_pf: used to distinguish between VF and PF - **/ -enum i40e_status_code i40e_remove_pd_page_new(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx, bool is_pf) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_sd_entry *sd_entry; - - sd_entry = &hmc_info->sd_table.sd_entry[idx]; - if (is_pf) { - I40E_CLEAR_PF_SD_ENTRY(hw, idx, I40E_SD_TYPE_PAGED); - } else { - ret_code = I40E_NOT_SUPPORTED; - goto exit; - } - /* free memory here */ - ret_code = i40e_free_dma_mem(hw, &(sd_entry->u.pd_table.pd_page_addr)); - if (I40E_SUCCESS != ret_code) - goto exit; -exit: - return ret_code; -} diff --git a/lib/librte_pmd_i40e/i40e/i40e_hmc.h b/lib/librte_pmd_i40e/i40e/i40e_hmc.h deleted file mode 100644 index eb629fc..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_hmc.h +++ /dev/null @@ -1,243 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
- -***************************************************************************/ - -#ifndef _I40E_HMC_H_ -#define _I40E_HMC_H_ - -#define I40E_HMC_MAX_BP_COUNT 512 - -/* forward-declare the HW struct for the compiler */ -struct i40e_hw; - -#define I40E_HMC_INFO_SIGNATURE 0x484D5347 /* HMSG */ -#define I40E_HMC_PD_CNT_IN_SD 512 -#define I40E_HMC_DIRECT_BP_SIZE 0x200000 /* 2M */ -#define I40E_HMC_PAGED_BP_SIZE 4096 -#define I40E_HMC_PD_BP_BUF_ALIGNMENT 4096 -#define I40E_FIRST_VF_FPM_ID 16 - -struct i40e_hmc_obj_info { - u64 base; /* base addr in FPM */ - u32 max_cnt; /* max count available for this hmc func */ - u32 cnt; /* count of objects driver actually wants to create */ - u64 size; /* size in bytes of one object */ -}; - -enum i40e_sd_entry_type { - I40E_SD_TYPE_INVALID = 0, - I40E_SD_TYPE_PAGED = 1, - I40E_SD_TYPE_DIRECT = 2 -}; - -struct i40e_hmc_bp { - enum i40e_sd_entry_type entry_type; - struct i40e_dma_mem addr; /* populate to be used by hw */ - u32 sd_pd_index; - u32 ref_cnt; -}; - -struct i40e_hmc_pd_entry { - struct i40e_hmc_bp bp; - u32 sd_index; - bool valid; -}; - -struct i40e_hmc_pd_table { - struct i40e_dma_mem pd_page_addr; /* populate to be used by hw */ - struct i40e_hmc_pd_entry *pd_entry; /* [512] for sw book keeping */ - struct i40e_virt_mem pd_entry_virt_mem; /* virt mem for pd_entry */ - - u32 ref_cnt; - u32 sd_index; -}; - -struct i40e_hmc_sd_entry { - enum i40e_sd_entry_type entry_type; - bool valid; - - union { - struct i40e_hmc_pd_table pd_table; - struct i40e_hmc_bp bp; - } u; -}; - -struct i40e_hmc_sd_table { - struct i40e_virt_mem addr; /* used to track sd_entry allocations */ - u32 sd_cnt; - u32 ref_cnt; - struct i40e_hmc_sd_entry *sd_entry; /* (sd_cnt*512) entries max */ -}; - -struct i40e_hmc_info { - u32 signature; - /* equals to pci func num for PF and dynamically allocated for VFs */ - u8 hmc_fn_id; - u16 first_sd_index; /* index of the first available SD */ - - /* hmc objects */ - struct i40e_hmc_obj_info *hmc_obj; - struct i40e_virt_mem hmc_obj_virt_mem; - struct i40e_hmc_sd_table sd_table; -}; - -#define I40E_INC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt++) -#define I40E_INC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt++) -#define I40E_INC_BP_REFCNT(bp) ((bp)->ref_cnt++) - -#define I40E_DEC_SD_REFCNT(sd_table) ((sd_table)->ref_cnt--) -#define I40E_DEC_PD_REFCNT(pd_table) ((pd_table)->ref_cnt--) -#define I40E_DEC_BP_REFCNT(bp) ((bp)->ref_cnt--) - -/** - * I40E_SET_PF_SD_ENTRY - marks the sd entry as valid in the hardware - * @hw: pointer to our hw struct - * @pa: pointer to physical address - * @sd_index: segment descriptor index - * @type: if sd entry is direct or paged - **/ -#define I40E_SET_PF_SD_ENTRY(hw, pa, sd_index, type) \ -{ \ - u32 val1, val2, val3; \ - val1 = (u32)(I40E_HI_DWORD(pa)); \ - val2 = (u32)(pa) | (I40E_HMC_MAX_BP_COUNT << \ - I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ - ((((type) == I40E_SD_TYPE_PAGED) ? 
0 : 1) << \ - I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT) | \ - (1 << I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT); \ - val3 = (sd_index) | (1u << I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \ - wr32((hw), I40E_PFHMC_SDDATAHIGH, val1); \ - wr32((hw), I40E_PFHMC_SDDATALOW, val2); \ - wr32((hw), I40E_PFHMC_SDCMD, val3); \ -} - -/** - * I40E_CLEAR_PF_SD_ENTRY - marks the sd entry as invalid in the hardware - * @hw: pointer to our hw struct - * @sd_index: segment descriptor index - * @type: if sd entry is direct or paged - **/ -#define I40E_CLEAR_PF_SD_ENTRY(hw, sd_index, type) \ -{ \ - u32 val2, val3; \ - val2 = (I40E_HMC_MAX_BP_COUNT << \ - I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT) | \ - ((((type) == I40E_SD_TYPE_PAGED) ? 0 : 1) << \ - I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT); \ - val3 = (sd_index) | (1u << I40E_PFHMC_SDCMD_PMSDWR_SHIFT); \ - wr32((hw), I40E_PFHMC_SDDATAHIGH, 0); \ - wr32((hw), I40E_PFHMC_SDDATALOW, val2); \ - wr32((hw), I40E_PFHMC_SDCMD, val3); \ -} - -/** - * I40E_INVALIDATE_PF_HMC_PD - Invalidates the pd cache in the hardware - * @hw: pointer to our hw struct - * @sd_idx: segment descriptor index - * @pd_idx: page descriptor index - **/ -#define I40E_INVALIDATE_PF_HMC_PD(hw, sd_idx, pd_idx) \ - wr32((hw), I40E_PFHMC_PDINV, \ - (((sd_idx) << I40E_PFHMC_PDINV_PMSDIDX_SHIFT) | \ - ((pd_idx) << I40E_PFHMC_PDINV_PMPDIDX_SHIFT))) - -/** - * I40E_FIND_SD_INDEX_LIMIT - finds segment descriptor index limit - * @hmc_info: pointer to the HMC configuration information structure - * @type: type of HMC resources we're searching - * @index: starting index for the object - * @cnt: number of objects we're trying to create - * @sd_idx: pointer to return index of the segment descriptor in question - * @sd_limit: pointer to return the maximum number of segment descriptors - * - * This function calculates the segment descriptor index and index limit - * for the resource defined by i40e_hmc_rsrc_type. - **/ -#define I40E_FIND_SD_INDEX_LIMIT(hmc_info, type, index, cnt, sd_idx, sd_limit)\ -{ \ - u64 fpm_addr, fpm_limit; \ - fpm_addr = (hmc_info)->hmc_obj[(type)].base + \ - (hmc_info)->hmc_obj[(type)].size * (index); \ - fpm_limit = fpm_addr + (hmc_info)->hmc_obj[(type)].size * (cnt);\ - *(sd_idx) = (u32)(fpm_addr / I40E_HMC_DIRECT_BP_SIZE); \ - *(sd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_DIRECT_BP_SIZE); \ - /* add one more to the limit to correct our range */ \ - *(sd_limit) += 1; \ -} - -/** - * I40E_FIND_PD_INDEX_LIMIT - finds page descriptor index limit - * @hmc_info: pointer to the HMC configuration information struct - * @type: HMC resource type we're examining - * @idx: starting index for the object - * @cnt: number of objects we're trying to create - * @pd_index: pointer to return page descriptor index - * @pd_limit: pointer to return page descriptor index limit - * - * Calculates the page descriptor index and index limit for the resource - * defined by i40e_hmc_rsrc_type. 
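 *
 * Worked example (values chosen purely for illustration): with a
 * 128-byte object size, base 0, idx 100 and cnt 32, the objects occupy
 * FPM bytes [12800, 16896), so with 4K pages
 *	*(pd_index) = 12800 / 4096     = 3
 *	*(pd_limit) = 16895 / 4096 + 1 = 5
 * i.e. the half-open page range [3, 5).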
- **/ -#define I40E_FIND_PD_INDEX_LIMIT(hmc_info, type, idx, cnt, pd_index, pd_limit)\ -{ \ - u64 fpm_adr, fpm_limit; \ - fpm_adr = (hmc_info)->hmc_obj[(type)].base + \ - (hmc_info)->hmc_obj[(type)].size * (idx); \ - fpm_limit = fpm_adr + (hmc_info)->hmc_obj[(type)].size * (cnt); \ - *(pd_index) = (u32)(fpm_adr / I40E_HMC_PAGED_BP_SIZE); \ - *(pd_limit) = (u32)((fpm_limit - 1) / I40E_HMC_PAGED_BP_SIZE); \ - /* add one more to the limit to correct our range */ \ - *(pd_limit) += 1; \ -} -enum i40e_status_code i40e_add_sd_table_entry(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 sd_index, - enum i40e_sd_entry_type type, - u64 direct_mode_sz); - -enum i40e_status_code i40e_add_pd_table_entry(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 pd_index); -enum i40e_status_code i40e_remove_pd_bp(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx); -enum i40e_status_code i40e_prep_remove_sd_bp(struct i40e_hmc_info *hmc_info, - u32 idx); -enum i40e_status_code i40e_remove_sd_bp_new(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx, bool is_pf); -enum i40e_status_code i40e_prep_remove_pd_page(struct i40e_hmc_info *hmc_info, - u32 idx); -enum i40e_status_code i40e_remove_pd_page_new(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx, bool is_pf); - -#endif /* _I40E_HMC_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_lan_hmc.c b/lib/librte_pmd_i40e/i40e/i40e_lan_hmc.c deleted file mode 100644 index b08534b..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_lan_hmc.c +++ /dev/null @@ -1,1417 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
- -***************************************************************************/ - -#include "i40e_osdep.h" -#include "i40e_register.h" -#include "i40e_type.h" -#include "i40e_hmc.h" -#include "i40e_lan_hmc.h" -#include "i40e_prototype.h" - -/* lan specific interface functions */ - -/** - * i40e_align_l2obj_base - aligns base object pointer to 512 bytes - * @offset: base address offset needing alignment - * - * Aligns the layer 2 function private memory so it's 512-byte aligned. - **/ -STATIC u64 i40e_align_l2obj_base(u64 offset) -{ - u64 aligned_offset = offset; - - if ((offset % I40E_HMC_L2OBJ_BASE_ALIGNMENT) > 0) - aligned_offset += (I40E_HMC_L2OBJ_BASE_ALIGNMENT - - (offset % I40E_HMC_L2OBJ_BASE_ALIGNMENT)); - - return aligned_offset; -} - -/** - * i40e_calculate_l2fpm_size - calculates layer 2 FPM memory size - * @txq_num: number of Tx queues needing backing context - * @rxq_num: number of Rx queues needing backing context - * @fcoe_cntx_num: amount of FCoE statefull contexts needing backing context - * @fcoe_filt_num: number of FCoE filters needing backing context - * - * Calculates the maximum amount of memory for the function required, based - * on the number of resources it must provide context for. - **/ -u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num, - u32 fcoe_cntx_num, u32 fcoe_filt_num) -{ - u64 fpm_size = 0; - - fpm_size = txq_num * I40E_HMC_OBJ_SIZE_TXQ; - fpm_size = i40e_align_l2obj_base(fpm_size); - - fpm_size += (rxq_num * I40E_HMC_OBJ_SIZE_RXQ); - fpm_size = i40e_align_l2obj_base(fpm_size); - - fpm_size += (fcoe_cntx_num * I40E_HMC_OBJ_SIZE_FCOE_CNTX); - fpm_size = i40e_align_l2obj_base(fpm_size); - - fpm_size += (fcoe_filt_num * I40E_HMC_OBJ_SIZE_FCOE_FILT); - fpm_size = i40e_align_l2obj_base(fpm_size); - - return fpm_size; -} - -/** - * i40e_init_lan_hmc - initialize i40e_hmc_info struct - * @hw: pointer to the HW structure - * @txq_num: number of Tx queues needing backing context - * @rxq_num: number of Rx queues needing backing context - * @fcoe_cntx_num: amount of FCoE statefull contexts needing backing context - * @fcoe_filt_num: number of FCoE filters needing backing context - * - * This function will be called once per physical function initialization. - * It will fill out the i40e_hmc_obj_info structure for LAN objects based on - * the driver's provided input, as well as information from the HMC itself - * loaded from NVRAM. - * - * Assumptions: - * - HMC Resource Profile has been selected before calling this function. 
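 *
 * Sketch of the expected bring-up order (queue counts are placeholders;
 * FCoE resources are left at zero for illustration):
 *
 *	ret = i40e_init_lan_hmc(hw, num_txq, num_rxq, 0, 0);
 *	if (ret == I40E_SUCCESS)
 *		ret = i40e_configure_lan_hmc(hw,
 *					     I40E_HMC_MODEL_DIRECT_PREFERRED);
 *
 * Only after both succeed can queue contexts be programmed with
 * i40e_set_lan_tx_queue_context()/i40e_set_lan_rx_queue_context().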
- **/ -enum i40e_status_code i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num, - u32 rxq_num, u32 fcoe_cntx_num, - u32 fcoe_filt_num) -{ - struct i40e_hmc_obj_info *obj, *full_obj; - enum i40e_status_code ret_code = I40E_SUCCESS; - u64 l2fpm_size; - u32 size_exp; - - hw->hmc.signature = I40E_HMC_INFO_SIGNATURE; - hw->hmc.hmc_fn_id = hw->pf_id; - - /* allocate memory for hmc_obj */ - ret_code = i40e_allocate_virt_mem(hw, &hw->hmc.hmc_obj_virt_mem, - sizeof(struct i40e_hmc_obj_info) * I40E_HMC_LAN_MAX); - if (ret_code) - goto init_lan_hmc_out; - hw->hmc.hmc_obj = (struct i40e_hmc_obj_info *) - hw->hmc.hmc_obj_virt_mem.va; - - /* The full object will be used to create the LAN HMC SD */ - full_obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_FULL]; - full_obj->max_cnt = 0; - full_obj->cnt = 0; - full_obj->base = 0; - full_obj->size = 0; - - /* Tx queue context information */ - obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_TX]; - obj->max_cnt = rd32(hw, I40E_GLHMC_LANQMAX); - obj->cnt = txq_num; - obj->base = 0; - size_exp = rd32(hw, I40E_GLHMC_LANTXOBJSZ); - obj->size = (u64)1 << size_exp; - - /* validate values requested by driver don't exceed HMC capacity */ - if (txq_num > obj->max_cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; - DEBUGOUT3("i40e_init_lan_hmc: Tx context: asks for 0x%x but max allowed is 0x%x, returns error %d\n", - txq_num, obj->max_cnt, ret_code); - goto init_lan_hmc_out; - } - - /* aggregate values into the full LAN object for later */ - full_obj->max_cnt += obj->max_cnt; - full_obj->cnt += obj->cnt; - - /* Rx queue context information */ - obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_RX]; - obj->max_cnt = rd32(hw, I40E_GLHMC_LANQMAX); - obj->cnt = rxq_num; - obj->base = hw->hmc.hmc_obj[I40E_HMC_LAN_TX].base + - (hw->hmc.hmc_obj[I40E_HMC_LAN_TX].cnt * - hw->hmc.hmc_obj[I40E_HMC_LAN_TX].size); - obj->base = i40e_align_l2obj_base(obj->base); - size_exp = rd32(hw, I40E_GLHMC_LANRXOBJSZ); - obj->size = (u64)1 << size_exp; - - /* validate values requested by driver don't exceed HMC capacity */ - if (rxq_num > obj->max_cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; - DEBUGOUT3("i40e_init_lan_hmc: Rx context: asks for 0x%x but max allowed is 0x%x, returns error %d\n", - rxq_num, obj->max_cnt, ret_code); - goto init_lan_hmc_out; - } - - /* aggregate values into the full LAN object for later */ - full_obj->max_cnt += obj->max_cnt; - full_obj->cnt += obj->cnt; - - /* FCoE context information */ - obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX]; - obj->max_cnt = rd32(hw, I40E_GLHMC_FCOEMAX); - obj->cnt = fcoe_cntx_num; - obj->base = hw->hmc.hmc_obj[I40E_HMC_LAN_RX].base + - (hw->hmc.hmc_obj[I40E_HMC_LAN_RX].cnt * - hw->hmc.hmc_obj[I40E_HMC_LAN_RX].size); - obj->base = i40e_align_l2obj_base(obj->base); - size_exp = rd32(hw, I40E_GLHMC_FCOEDDPOBJSZ); - obj->size = (u64)1 << size_exp; - - /* validate values requested by driver don't exceed HMC capacity */ - if (fcoe_cntx_num > obj->max_cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; - DEBUGOUT3("i40e_init_lan_hmc: FCoE context: asks for 0x%x but max allowed is 0x%x, returns error %d\n", - fcoe_cntx_num, obj->max_cnt, ret_code); - goto init_lan_hmc_out; - } - - /* aggregate values into the full LAN object for later */ - full_obj->max_cnt += obj->max_cnt; - full_obj->cnt += obj->cnt; - - /* FCoE filter information */ - obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_FILT]; - obj->max_cnt = rd32(hw, I40E_GLHMC_FCOEFMAX); - obj->cnt = fcoe_filt_num; - obj->base = hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX].base + - (hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX].cnt * - 
hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX].size); - obj->base = i40e_align_l2obj_base(obj->base); - size_exp = rd32(hw, I40E_GLHMC_FCOEFOBJSZ); - obj->size = (u64)1 << size_exp; - - /* validate values requested by driver don't exceed HMC capacity */ - if (fcoe_filt_num > obj->max_cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; - DEBUGOUT3("i40e_init_lan_hmc: FCoE filter: asks for 0x%x but max allowed is 0x%x, returns error %d\n", - fcoe_filt_num, obj->max_cnt, ret_code); - goto init_lan_hmc_out; - } - - /* aggregate values into the full LAN object for later */ - full_obj->max_cnt += obj->max_cnt; - full_obj->cnt += obj->cnt; - - hw->hmc.first_sd_index = 0; - hw->hmc.sd_table.ref_cnt = 0; - l2fpm_size = i40e_calculate_l2fpm_size(txq_num, rxq_num, fcoe_cntx_num, - fcoe_filt_num); - if (NULL == hw->hmc.sd_table.sd_entry) { - hw->hmc.sd_table.sd_cnt = (u32) - (l2fpm_size + I40E_HMC_DIRECT_BP_SIZE - 1) / - I40E_HMC_DIRECT_BP_SIZE; - - /* allocate the sd_entry members in the sd_table */ - ret_code = i40e_allocate_virt_mem(hw, &hw->hmc.sd_table.addr, - (sizeof(struct i40e_hmc_sd_entry) * - hw->hmc.sd_table.sd_cnt)); - if (ret_code) - goto init_lan_hmc_out; - hw->hmc.sd_table.sd_entry = - (struct i40e_hmc_sd_entry *)hw->hmc.sd_table.addr.va; - } - /* store in the LAN full object for later */ - full_obj->size = l2fpm_size; - -init_lan_hmc_out: - return ret_code; -} - -/** - * i40e_remove_pd_page - Remove a page from the page descriptor table - * @hw: pointer to the HW structure - * @hmc_info: pointer to the HMC configuration information structure - * @idx: segment descriptor index to find the relevant page descriptor - * - * This function: - * 1. Marks the entry in pd table (for paged address mode) invalid - * 2. write to register PMPDINV to invalidate the backing page in FV cache - * 3. Decrement the ref count for pd_entry - * assumptions: - * 1. caller can deallocate the memory used by pd after this function - * returns. - **/ -STATIC enum i40e_status_code i40e_remove_pd_page(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (i40e_prep_remove_pd_page(hmc_info, idx) == I40E_SUCCESS) - ret_code = i40e_remove_pd_page_new(hw, hmc_info, idx, true); - - return ret_code; -} - -/** - * i40e_remove_sd_bp - remove a backing page from a segment descriptor - * @hw: pointer to our HW structure - * @hmc_info: pointer to the HMC configuration information structure - * @idx: the page index - * - * This function: - * 1. Marks the entry in sd table (for direct address mode) invalid - * 2. write to register PMSDCMD, PMSDDATALOW(PMSDDATALOW.PMSDVALID set - * to 0) and PMSDDATAHIGH to invalidate the sd page - * 3. Decrement the ref count for the sd_entry - * assumptions: - * 1. caller can deallocate the memory used by backing storage after this - * function returns. - **/ -STATIC enum i40e_status_code i40e_remove_sd_bp(struct i40e_hw *hw, - struct i40e_hmc_info *hmc_info, - u32 idx) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - - if (i40e_prep_remove_sd_bp(hmc_info, idx) == I40E_SUCCESS) - ret_code = i40e_remove_sd_bp_new(hw, hmc_info, idx, true); - - return ret_code; -} - -/** - * i40e_create_lan_hmc_object - allocate backing store for hmc objects - * @hw: pointer to the HW structure - * @info: pointer to i40e_hmc_create_obj_info struct - * - * This will allocate memory for PDs and backing pages and populate - * the sd and pd entries. 
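 *
 * Minimal sketch of a create request, mirroring the use in
 * i40e_configure_lan_hmc() below:
 *
 *	struct i40e_hmc_lan_create_obj_info info;
 *
 *	info.hmc_info = &hw->hmc;
 *	info.rsrc_type = I40E_HMC_LAN_FULL;
 *	info.start_idx = 0;
 *	info.count = 1;
 *	info.entry_type = I40E_SD_TYPE_DIRECT;
 *	info.direct_mode_sz = hw->hmc.hmc_obj[I40E_HMC_LAN_FULL].size;
 *	ret = i40e_create_lan_hmc_object(hw, &info);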
- **/ -enum i40e_status_code i40e_create_lan_hmc_object(struct i40e_hw *hw, - struct i40e_hmc_lan_create_obj_info *info) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_sd_entry *sd_entry; - u32 pd_idx1 = 0, pd_lmt1 = 0; - u32 pd_idx = 0, pd_lmt = 0; - bool pd_error = false; - u32 sd_idx, sd_lmt; - u64 sd_size; - u32 i, j; - - if (NULL == info) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_create_lan_hmc_object: bad info ptr\n"); - goto exit; - } - if (NULL == info->hmc_info) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_create_lan_hmc_object: bad hmc_info ptr\n"); - goto exit; - } - if (I40E_HMC_INFO_SIGNATURE != info->hmc_info->signature) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_create_lan_hmc_object: bad signature\n"); - goto exit; - } - - if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_INDEX; - DEBUGOUT1("i40e_create_lan_hmc_object: returns error %d\n", - ret_code); - goto exit; - } - if ((info->start_idx + info->count) > - info->hmc_info->hmc_obj[info->rsrc_type].cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; - DEBUGOUT1("i40e_create_lan_hmc_object: returns error %d\n", - ret_code); - goto exit; - } - - /* find sd index and limit */ - I40E_FIND_SD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, - info->start_idx, info->count, - &sd_idx, &sd_lmt); - if (sd_idx >= info->hmc_info->sd_table.sd_cnt || - sd_lmt > info->hmc_info->sd_table.sd_cnt) { - ret_code = I40E_ERR_INVALID_SD_INDEX; - goto exit; - } - /* find pd index */ - I40E_FIND_PD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, - info->start_idx, info->count, &pd_idx, - &pd_lmt); - - /* This is to cover for cases where you may not want to have an SD with - * the full 2M memory but something smaller. By not filling out any - * size, the function will default the SD size to be 2M. - */ - if (info->direct_mode_sz == 0) - sd_size = I40E_HMC_DIRECT_BP_SIZE; - else - sd_size = info->direct_mode_sz; - - /* check if all the sds are valid. If not, allocate a page and - * initialize it. - */ - for (j = sd_idx; j < sd_lmt; j++) { - /* update the sd table entry */ - ret_code = i40e_add_sd_table_entry(hw, info->hmc_info, j, - info->entry_type, - sd_size); - if (I40E_SUCCESS != ret_code) - goto exit_sd_error; - sd_entry = &info->hmc_info->sd_table.sd_entry[j]; - if (I40E_SD_TYPE_PAGED == sd_entry->entry_type) { - /* check if all the pds in this sd are valid. If not, - * allocate a page and initialize it. 
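	 * Each paged SD owns exactly I40E_HMC_MAX_BP_COUNT (512) PDs,
	 * i.e. the global PD range [j * 512, (j + 1) * 512); the
	 * max()/min() below clamp the object's overall
	 * [pd_idx, pd_lmt) range to that window.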
- */ - - /* find pd_idx and pd_lmt in this sd */ - pd_idx1 = max(pd_idx, (j * I40E_HMC_MAX_BP_COUNT)); - pd_lmt1 = min(pd_lmt, - ((j + 1) * I40E_HMC_MAX_BP_COUNT)); - for (i = pd_idx1; i < pd_lmt1; i++) { - /* update the pd table entry */ - ret_code = i40e_add_pd_table_entry(hw, - info->hmc_info, - i); - if (I40E_SUCCESS != ret_code) { - pd_error = true; - break; - } - } - if (pd_error) { - /* remove the backing pages from pd_idx1 to i */ - while (i && (i > pd_idx1)) { - i40e_remove_pd_bp(hw, info->hmc_info, - (i - 1)); - i--; - } - } - } - if (!sd_entry->valid) { - sd_entry->valid = true; - switch (sd_entry->entry_type) { - case I40E_SD_TYPE_PAGED: - I40E_SET_PF_SD_ENTRY(hw, - sd_entry->u.pd_table.pd_page_addr.pa, - j, sd_entry->entry_type); - break; - case I40E_SD_TYPE_DIRECT: - I40E_SET_PF_SD_ENTRY(hw, sd_entry->u.bp.addr.pa, - j, sd_entry->entry_type); - break; - default: - ret_code = I40E_ERR_INVALID_SD_TYPE; - goto exit; - } - } - } - goto exit; - -exit_sd_error: - /* cleanup for sd entries from j to sd_idx */ - while (j && (j > sd_idx)) { - sd_entry = &info->hmc_info->sd_table.sd_entry[j - 1]; - switch (sd_entry->entry_type) { - case I40E_SD_TYPE_PAGED: - pd_idx1 = max(pd_idx, - ((j - 1) * I40E_HMC_MAX_BP_COUNT)); - pd_lmt1 = min(pd_lmt, (j * I40E_HMC_MAX_BP_COUNT)); - for (i = pd_idx1; i < pd_lmt1; i++) { - i40e_remove_pd_bp(hw, info->hmc_info, i); - } - i40e_remove_pd_page(hw, info->hmc_info, (j - 1)); - break; - case I40E_SD_TYPE_DIRECT: - i40e_remove_sd_bp(hw, info->hmc_info, (j - 1)); - break; - default: - ret_code = I40E_ERR_INVALID_SD_TYPE; - break; - } - j--; - } -exit: - return ret_code; -} - -/** - * i40e_configure_lan_hmc - prepare the HMC backing store - * @hw: pointer to the hw structure - * @model: the model for the layout of the SD/PD tables - * - * - This function will be called once per physical function initialization. - * - This function will be called after i40e_init_lan_hmc() and before - * any LAN/FCoE HMC objects can be created. 
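 *
 * Illustrative call; with I40E_HMC_MODEL_DIRECT_PREFERRED the function
 * falls back to a paged layout if the single 2M direct SD cannot be
 * allocated:
 *
 *	ret = i40e_configure_lan_hmc(hw, I40E_HMC_MODEL_DIRECT_PREFERRED);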
- **/ -enum i40e_status_code i40e_configure_lan_hmc(struct i40e_hw *hw, - enum i40e_hmc_model model) -{ - struct i40e_hmc_lan_create_obj_info info; - u8 hmc_fn_id = hw->hmc.hmc_fn_id; - struct i40e_hmc_obj_info *obj; - enum i40e_status_code ret_code = I40E_SUCCESS; - - /* Initialize part of the create object info struct */ - info.hmc_info = &hw->hmc; - info.rsrc_type = I40E_HMC_LAN_FULL; - info.start_idx = 0; - info.direct_mode_sz = hw->hmc.hmc_obj[I40E_HMC_LAN_FULL].size; - - /* Build the SD entry for the LAN objects */ - switch (model) { - case I40E_HMC_MODEL_DIRECT_PREFERRED: - case I40E_HMC_MODEL_DIRECT_ONLY: - info.entry_type = I40E_SD_TYPE_DIRECT; - /* Make one big object, a single SD */ - info.count = 1; - ret_code = i40e_create_lan_hmc_object(hw, &info); - if ((ret_code != I40E_SUCCESS) && (model == I40E_HMC_MODEL_DIRECT_PREFERRED)) - goto try_type_paged; - else if (ret_code != I40E_SUCCESS) - goto configure_lan_hmc_out; - /* else clause falls through the break */ - break; - case I40E_HMC_MODEL_PAGED_ONLY: -try_type_paged: - info.entry_type = I40E_SD_TYPE_PAGED; - /* Make one big object in the PD table */ - info.count = 1; - ret_code = i40e_create_lan_hmc_object(hw, &info); - if (ret_code != I40E_SUCCESS) - goto configure_lan_hmc_out; - break; - default: - /* unsupported type */ - ret_code = I40E_ERR_INVALID_SD_TYPE; - DEBUGOUT1("i40e_configure_lan_hmc: Unknown SD type: %d\n", - ret_code); - goto configure_lan_hmc_out; - } - - /* Configure and program the FPM registers so objects can be created */ - - /* Tx contexts */ - obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_TX]; - wr32(hw, I40E_GLHMC_LANTXBASE(hmc_fn_id), - (u32)((obj->base & I40E_GLHMC_LANTXBASE_FPMLANTXBASE_MASK) / 512)); - wr32(hw, I40E_GLHMC_LANTXCNT(hmc_fn_id), obj->cnt); - - /* Rx contexts */ - obj = &hw->hmc.hmc_obj[I40E_HMC_LAN_RX]; - wr32(hw, I40E_GLHMC_LANRXBASE(hmc_fn_id), - (u32)((obj->base & I40E_GLHMC_LANRXBASE_FPMLANRXBASE_MASK) / 512)); - wr32(hw, I40E_GLHMC_LANRXCNT(hmc_fn_id), obj->cnt); - - /* FCoE contexts */ - obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_CTX]; - wr32(hw, I40E_GLHMC_FCOEDDPBASE(hmc_fn_id), - (u32)((obj->base & I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_MASK) / 512)); - wr32(hw, I40E_GLHMC_FCOEDDPCNT(hmc_fn_id), obj->cnt); - - /* FCoE filters */ - obj = &hw->hmc.hmc_obj[I40E_HMC_FCOE_FILT]; - wr32(hw, I40E_GLHMC_FCOEFBASE(hmc_fn_id), - (u32)((obj->base & I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_MASK) / 512)); - wr32(hw, I40E_GLHMC_FCOEFCNT(hmc_fn_id), obj->cnt); - -configure_lan_hmc_out: - return ret_code; -} - -/** - * i40e_delete_hmc_object - remove hmc objects - * @hw: pointer to the HW structure - * @info: pointer to i40e_hmc_delete_obj_info struct - * - * This will de-populate the SDs and PDs. It frees - * the memory for PDS and backing storage. After this function is returned, - * caller should deallocate memory allocated previously for - * book-keeping information about PDs and backing storage. 
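 *
 * Sketch of a full teardown request; this is exactly what
 * i40e_shutdown_lan_hmc() below issues:
 *
 *	struct i40e_hmc_lan_delete_obj_info info;
 *
 *	info.hmc_info = &hw->hmc;
 *	info.rsrc_type = I40E_HMC_LAN_FULL;
 *	info.start_idx = 0;
 *	info.count = 1;
 *	ret = i40e_delete_lan_hmc_object(hw, &info);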
- **/ -enum i40e_status_code i40e_delete_lan_hmc_object(struct i40e_hw *hw, - struct i40e_hmc_lan_delete_obj_info *info) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - struct i40e_hmc_pd_table *pd_table; - u32 pd_idx, pd_lmt, rel_pd_idx; - u32 sd_idx, sd_lmt; - u32 i, j; - - if (NULL == info) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_delete_hmc_object: bad info ptr\n"); - goto exit; - } - if (NULL == info->hmc_info) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_delete_hmc_object: bad info->hmc_info ptr\n"); - goto exit; - } - if (I40E_HMC_INFO_SIGNATURE != info->hmc_info->signature) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_delete_hmc_object: bad hmc_info->signature\n"); - goto exit; - } - - if (NULL == info->hmc_info->sd_table.sd_entry) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_delete_hmc_object: bad sd_entry\n"); - goto exit; - } - - if (NULL == info->hmc_info->hmc_obj) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_delete_hmc_object: bad hmc_info->hmc_obj\n"); - goto exit; - } - if (info->start_idx >= info->hmc_info->hmc_obj[info->rsrc_type].cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_INDEX; - DEBUGOUT1("i40e_delete_hmc_object: returns error %d\n", - ret_code); - goto exit; - } - - if ((info->start_idx + info->count) > - info->hmc_info->hmc_obj[info->rsrc_type].cnt) { - ret_code = I40E_ERR_INVALID_HMC_OBJ_COUNT; - DEBUGOUT1("i40e_delete_hmc_object: returns error %d\n", - ret_code); - goto exit; - } - - I40E_FIND_PD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, - info->start_idx, info->count, &pd_idx, - &pd_lmt); - - for (j = pd_idx; j < pd_lmt; j++) { - sd_idx = j / I40E_HMC_PD_CNT_IN_SD; - - if (I40E_SD_TYPE_PAGED != - info->hmc_info->sd_table.sd_entry[sd_idx].entry_type) - continue; - - rel_pd_idx = j % I40E_HMC_PD_CNT_IN_SD; - - pd_table = - &info->hmc_info->sd_table.sd_entry[sd_idx].u.pd_table; - if (pd_table->pd_entry[rel_pd_idx].valid) { - ret_code = i40e_remove_pd_bp(hw, info->hmc_info, j); - if (I40E_SUCCESS != ret_code) - goto exit; - } - } - - /* find sd index and limit */ - I40E_FIND_SD_INDEX_LIMIT(info->hmc_info, info->rsrc_type, - info->start_idx, info->count, - &sd_idx, &sd_lmt); - if (sd_idx >= info->hmc_info->sd_table.sd_cnt || - sd_lmt > info->hmc_info->sd_table.sd_cnt) { - ret_code = I40E_ERR_INVALID_SD_INDEX; - goto exit; - } - - for (i = sd_idx; i < sd_lmt; i++) { - if (!info->hmc_info->sd_table.sd_entry[i].valid) - continue; - switch (info->hmc_info->sd_table.sd_entry[i].entry_type) { - case I40E_SD_TYPE_DIRECT: - ret_code = i40e_remove_sd_bp(hw, info->hmc_info, i); - if (I40E_SUCCESS != ret_code) - goto exit; - break; - case I40E_SD_TYPE_PAGED: - ret_code = i40e_remove_pd_page(hw, info->hmc_info, i); - if (I40E_SUCCESS != ret_code) - goto exit; - break; - default: - break; - } - } -exit: - return ret_code; -} - -/** - * i40e_shutdown_lan_hmc - Remove HMC backing store, free allocated memory - * @hw: pointer to the hw structure - * - * This must be called by drivers as they are shutting down and being - * removed from the OS. 
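 *
 * Typical use is a single call while the port is being closed
 * (illustrative; the exact call site is driver specific):
 *
 *	i40e_shutdown_lan_hmc(hw);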
- **/ -enum i40e_status_code i40e_shutdown_lan_hmc(struct i40e_hw *hw) -{ - struct i40e_hmc_lan_delete_obj_info info; - enum i40e_status_code ret_code; - - info.hmc_info = &hw->hmc; - info.rsrc_type = I40E_HMC_LAN_FULL; - info.start_idx = 0; - info.count = 1; - - /* delete the object */ - ret_code = i40e_delete_lan_hmc_object(hw, &info); - - /* free the SD table entry for LAN */ - i40e_free_virt_mem(hw, &hw->hmc.sd_table.addr); - hw->hmc.sd_table.sd_cnt = 0; - hw->hmc.sd_table.sd_entry = NULL; - - /* free memory used for hmc_obj */ - i40e_free_virt_mem(hw, &hw->hmc.hmc_obj_virt_mem); - hw->hmc.hmc_obj = NULL; - - return ret_code; -} - -#define I40E_HMC_STORE(_struct, _ele) \ - offsetof(struct _struct, _ele), \ - FIELD_SIZEOF(struct _struct, _ele) - -struct i40e_context_ele { - u16 offset; - u16 size_of; - u16 width; - u16 lsb; -}; - -/* LAN Tx Queue Context */ -static struct i40e_context_ele i40e_hmc_txq_ce_info[] = { - /* Field Width LSB */ - {I40E_HMC_STORE(i40e_hmc_obj_txq, head), 13, 0 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, new_context), 1, 30 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, base), 57, 32 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, fc_ena), 1, 89 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, timesync_ena), 1, 90 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, fd_ena), 1, 91 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, alt_vlan_ena), 1, 92 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, cpuid), 8, 96 }, -/* line 1 */ - {I40E_HMC_STORE(i40e_hmc_obj_txq, thead_wb), 13, 0 + 128 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, head_wb_ena), 1, 32 + 128 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, qlen), 13, 33 + 128 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, tphrdesc_ena), 1, 46 + 128 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, tphrpacket_ena), 1, 47 + 128 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, tphwdesc_ena), 1, 48 + 128 }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, head_wb_addr), 64, 64 + 128 }, -/* line 7 */ - {I40E_HMC_STORE(i40e_hmc_obj_txq, crc), 32, 0 + (7 * 128) }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, rdylist), 10, 84 + (7 * 128) }, - {I40E_HMC_STORE(i40e_hmc_obj_txq, rdylist_act), 1, 94 + (7 * 128) }, - { 0 } -}; - -/* LAN Rx Queue Context */ -static struct i40e_context_ele i40e_hmc_rxq_ce_info[] = { - /* Field Width LSB */ - { I40E_HMC_STORE(i40e_hmc_obj_rxq, head), 13, 0 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, cpuid), 8, 13 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, base), 57, 32 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, qlen), 13, 89 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, dbuff), 7, 102 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, hbuff), 5, 109 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, dtype), 2, 114 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, dsize), 1, 116 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, crcstrip), 1, 117 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, fc_ena), 1, 118 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, l2tsel), 1, 119 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, hsplit_0), 4, 120 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, hsplit_1), 2, 124 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, showiv), 1, 127 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, rxmax), 14, 174 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphrdesc_ena), 1, 193 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphwdesc_ena), 1, 194 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphdata_ena), 1, 195 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, tphhead_ena), 1, 196 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, lrxqthresh), 3, 198 }, - { I40E_HMC_STORE(i40e_hmc_obj_rxq, prefena), 1, 201 }, - { 0 } -}; - -/** - * i40e_write_byte - replace HMC context byte - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct 
to be read from - * @src: the struct to be read from - **/ -static void i40e_write_byte(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *src) -{ - u8 src_byte, dest_byte, mask; - u8 *from, *dest; - u16 shift_width; - - /* copy from the next struct field */ - from = src + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = ((u8)1 << ce_info->width) - 1; - - src_byte = *from; - src_byte &= mask; - - /* shift to correct alignment */ - mask <<= shift_width; - src_byte <<= shift_width; - - /* get the current bits from the target bit string */ - dest = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&dest_byte, dest, sizeof(dest_byte), I40E_DMA_TO_NONDMA); - - dest_byte &= ~mask; /* get the bits not changing */ - dest_byte |= src_byte; /* add in the new bits */ - - /* put it all back */ - i40e_memcpy(dest, &dest_byte, sizeof(dest_byte), I40E_NONDMA_TO_DMA); -} - -/** - * i40e_write_word - replace HMC context word - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct to be read from - * @src: the struct to be read from - **/ -static void i40e_write_word(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *src) -{ - u16 src_word, mask; - u8 *from, *dest; - u16 shift_width; - __le16 dest_word; - - /* copy from the next struct field */ - from = src + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = ((u16)1 << ce_info->width) - 1; - - /* don't swizzle the bits until after the mask because the mask bits - * will be in a different bit position on big endian machines - */ - src_word = *(u16 *)from; - src_word &= mask; - - /* shift to correct alignment */ - mask <<= shift_width; - src_word <<= shift_width; - - /* get the current bits from the target bit string */ - dest = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&dest_word, dest, sizeof(dest_word), I40E_DMA_TO_NONDMA); - - dest_word &= ~(CPU_TO_LE16(mask)); /* get the bits not changing */ - dest_word |= CPU_TO_LE16(src_word); /* add in the new bits */ - - /* put it all back */ - i40e_memcpy(dest, &dest_word, sizeof(dest_word), I40E_NONDMA_TO_DMA); -} - -/** - * i40e_write_dword - replace HMC context dword - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct to be read from - * @src: the struct to be read from - **/ -static void i40e_write_dword(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *src) -{ - u32 src_dword, mask; - u8 *from, *dest; - u16 shift_width; - __le32 dest_dword; - - /* copy from the next struct field */ - from = src + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - - /* if the field width is exactly 32 on an x86 machine, then the shift - * operation will not work because the SHL instructions count is masked - * to 5 bits so the shift will do nothing - */ - if (ce_info->width < 32) - mask = ((u32)1 << ce_info->width) - 1; - else - mask = 0xFFFFFFFF; - - /* don't swizzle the bits until after the mask because the mask bits - * will be in a different bit position on big endian machines - */ - src_dword = *(u32 *)from; - src_dword &= mask; - - /* shift to correct alignment */ - mask <<= shift_width; - src_dword <<= shift_width; - - /* get the current bits from the target bit string */ - dest = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&dest_dword, dest, sizeof(dest_dword), I40E_DMA_TO_NONDMA); - - dest_dword &= ~(CPU_TO_LE32(mask)); /* get the bits not changing */ - dest_dword |= CPU_TO_LE32(src_dword); /* add in the new bits 
*/ - - /* put it all back */ - i40e_memcpy(dest, &dest_dword, sizeof(dest_dword), I40E_NONDMA_TO_DMA); -} - -/** - * i40e_write_qword - replace HMC context qword - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct to be read from - * @src: the struct to be read from - **/ -static void i40e_write_qword(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *src) -{ - u64 src_qword, mask; - u8 *from, *dest; - u16 shift_width; - __le64 dest_qword; - - /* copy from the next struct field */ - from = src + ce_info->offset; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - - /* if the field width is exactly 64 on an x86 machine, then the shift - * operation will not work because the SHL instructions count is masked - * to 6 bits so the shift will do nothing - */ - if (ce_info->width < 64) - mask = ((u64)1 << ce_info->width) - 1; - else - mask = 0xFFFFFFFFFFFFFFFFUL; - - /* don't swizzle the bits until after the mask because the mask bits - * will be in a different bit position on big endian machines - */ - src_qword = *(u64 *)from; - src_qword &= mask; - - /* shift to correct alignment */ - mask <<= shift_width; - src_qword <<= shift_width; - - /* get the current bits from the target bit string */ - dest = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&dest_qword, dest, sizeof(dest_qword), I40E_DMA_TO_NONDMA); - - dest_qword &= ~(CPU_TO_LE64(mask)); /* get the bits not changing */ - dest_qword |= CPU_TO_LE64(src_qword); /* add in the new bits */ - - /* put it all back */ - i40e_memcpy(dest, &dest_qword, sizeof(dest_qword), I40E_NONDMA_TO_DMA); -} - -/** - * i40e_read_byte - read HMC context byte into struct - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct to be filled - * @dest: the struct to be filled - **/ -static void i40e_read_byte(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *dest) -{ - u8 dest_byte, mask; - u8 *src, *target; - u16 shift_width; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = ((u8)1 << ce_info->width) - 1; - - /* shift to correct alignment */ - mask <<= shift_width; - - /* get the current bits from the src bit string */ - src = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&dest_byte, src, sizeof(dest_byte), I40E_DMA_TO_NONDMA); - - dest_byte &= ~(mask); - - dest_byte >>= shift_width; - - /* get the address from the struct field */ - target = dest + ce_info->offset; - - /* put it back in the struct */ - i40e_memcpy(target, &dest_byte, sizeof(dest_byte), I40E_NONDMA_TO_DMA); -} - -/** - * i40e_read_word - read HMC context word into struct - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct to be filled - * @dest: the struct to be filled - **/ -static void i40e_read_word(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *dest) -{ - u16 dest_word, mask; - u8 *src, *target; - u16 shift_width; - __le16 src_word; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - mask = ((u16)1 << ce_info->width) - 1; - - /* shift to correct alignment */ - mask <<= shift_width; - - /* get the current bits from the src bit string */ - src = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&src_word, src, sizeof(src_word), I40E_DMA_TO_NONDMA); - - /* the data in the memory is stored as little endian so mask it - * correctly - */ - src_word &= ~(CPU_TO_LE16(mask)); - - /* get the data back into host order before shifting */ - dest_word = LE16_TO_CPU(src_word); - - dest_word >>= shift_width; - - /* get the address from 
the struct field */ - target = dest + ce_info->offset; - - /* put it back in the struct */ - i40e_memcpy(target, &dest_word, sizeof(dest_word), I40E_NONDMA_TO_DMA); -} - -/** - * i40e_read_dword - read HMC context dword into struct - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct to be filled - * @dest: the struct to be filled - **/ -static void i40e_read_dword(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *dest) -{ - u32 dest_dword, mask; - u8 *src, *target; - u16 shift_width; - __le32 src_dword; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - - /* if the field width is exactly 32 on an x86 machine, then the shift - * operation will not work because the SHL instructions count is masked - * to 5 bits so the shift will do nothing - */ - if (ce_info->width < 32) - mask = ((u32)1 << ce_info->width) - 1; - else - mask = 0xFFFFFFFF; - - /* shift to correct alignment */ - mask <<= shift_width; - - /* get the current bits from the src bit string */ - src = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&src_dword, src, sizeof(src_dword), I40E_DMA_TO_NONDMA); - - /* the data in the memory is stored as little endian so mask it - * correctly - */ - src_dword &= ~(CPU_TO_LE32(mask)); - - /* get the data back into host order before shifting */ - dest_dword = LE32_TO_CPU(src_dword); - - dest_dword >>= shift_width; - - /* get the address from the struct field */ - target = dest + ce_info->offset; - - /* put it back in the struct */ - i40e_memcpy(target, &dest_dword, sizeof(dest_dword), - I40E_NONDMA_TO_DMA); -} - -/** - * i40e_read_qword - read HMC context qword into struct - * @hmc_bits: pointer to the HMC memory - * @ce_info: a description of the struct to be filled - * @dest: the struct to be filled - **/ -static void i40e_read_qword(u8 *hmc_bits, - struct i40e_context_ele *ce_info, - u8 *dest) -{ - u64 dest_qword, mask; - u8 *src, *target; - u16 shift_width; - __le64 src_qword; - - /* prepare the bits and mask */ - shift_width = ce_info->lsb % 8; - - /* if the field width is exactly 64 on an x86 machine, then the shift - * operation will not work because the SHL instructions count is masked - * to 6 bits so the shift will do nothing - */ - if (ce_info->width < 64) - mask = ((u64)1 << ce_info->width) - 1; - else - mask = 0xFFFFFFFFFFFFFFFFUL; - - /* shift to correct alignment */ - mask <<= shift_width; - - /* get the current bits from the src bit string */ - src = hmc_bits + (ce_info->lsb / 8); - - i40e_memcpy(&src_qword, src, sizeof(src_qword), I40E_DMA_TO_NONDMA); - - /* the data in the memory is stored as little endian so mask it - * correctly - */ - src_qword &= ~(CPU_TO_LE64(mask)); - - /* get the data back into host order before shifting */ - dest_qword = LE64_TO_CPU(src_qword); - - dest_qword >>= shift_width; - - /* get the address from the struct field */ - target = dest + ce_info->offset; - - /* put it back in the struct */ - i40e_memcpy(target, &dest_qword, sizeof(dest_qword), - I40E_NONDMA_TO_DMA); -} - -/** - * i40e_get_hmc_context - extract HMC context bits - * @context_bytes: pointer to the context bit array - * @ce_info: a description of the struct to be filled - * @dest: the struct to be filled - **/ -static enum i40e_status_code i40e_get_hmc_context(u8 *context_bytes, - struct i40e_context_ele *ce_info, - u8 *dest) -{ - int f; - - for (f = 0; ce_info[f].width != 0; f++) { - switch (ce_info[f].size_of) { - case 1: - i40e_read_byte(context_bytes, &ce_info[f], dest); - break; - case 2: - i40e_read_word(context_bytes, 
&ce_info[f], dest); - break; - case 4: - i40e_read_dword(context_bytes, &ce_info[f], dest); - break; - case 8: - i40e_read_qword(context_bytes, &ce_info[f], dest); - break; - default: - /* nothing to do, just keep going */ - break; - } - } - - return I40E_SUCCESS; -} - -/** - * i40e_clear_hmc_context - zero out the HMC context bits - * @hw: the hardware struct - * @context_bytes: pointer to the context bit array (DMA memory) - * @hmc_type: the type of HMC resource - **/ -static enum i40e_status_code i40e_clear_hmc_context(struct i40e_hw *hw, - u8 *context_bytes, - enum i40e_hmc_lan_rsrc_type hmc_type) -{ - /* clean the bit array */ - i40e_memset(context_bytes, 0, (u32)hw->hmc.hmc_obj[hmc_type].size, - I40E_DMA_MEM); - - return I40E_SUCCESS; -} - -/** - * i40e_set_hmc_context - replace HMC context bits - * @context_bytes: pointer to the context bit array - * @ce_info: a description of the struct to be filled - * @dest: the struct to be filled - **/ -static enum i40e_status_code i40e_set_hmc_context(u8 *context_bytes, - struct i40e_context_ele *ce_info, - u8 *dest) -{ - int f; - - for (f = 0; ce_info[f].width != 0; f++) { - - /* we have to deal with each element of the HMC using the - * correct size so that we are correct regardless of the - * endianness of the machine - */ - switch (ce_info[f].size_of) { - case 1: - i40e_write_byte(context_bytes, &ce_info[f], dest); - break; - case 2: - i40e_write_word(context_bytes, &ce_info[f], dest); - break; - case 4: - i40e_write_dword(context_bytes, &ce_info[f], dest); - break; - case 8: - i40e_write_qword(context_bytes, &ce_info[f], dest); - break; - } - } - - return I40E_SUCCESS; -} - -/** - * i40e_hmc_get_object_va - retrieves an object's virtual address - * @hmc_info: pointer to i40e_hmc_info struct - * @object_base: pointer to u64 to get the va - * @rsrc_type: the hmc resource type - * @obj_idx: hmc object index - * - * This function retrieves the object's virtual address from the object - * base pointer. This function is used for LAN Queue contexts. 
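 *
 * Worked example (values are illustrative only): with base = 0x1000,
 * size = 32 and obj_idx = 10 the object sits at FPM offset
 *	0x1000 + 32 * 10 = 0x1140
 * For a direct SD the returned va is the 2M backing page's va plus
 * (0x1140 % I40E_HMC_DIRECT_BP_SIZE); for a paged SD it is the owning
 * 4K page's va plus (0x1140 % I40E_HMC_PAGED_BP_SIZE).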
- **/ -STATIC -enum i40e_status_code i40e_hmc_get_object_va(struct i40e_hmc_info *hmc_info, - u8 **object_base, - enum i40e_hmc_lan_rsrc_type rsrc_type, - u32 obj_idx) -{ - u32 obj_offset_in_sd, obj_offset_in_pd; - struct i40e_hmc_sd_entry *sd_entry; - struct i40e_hmc_pd_entry *pd_entry; - u32 pd_idx, pd_lmt, rel_pd_idx; - enum i40e_status_code ret_code = I40E_SUCCESS; - u64 obj_offset_in_fpm; - u32 sd_idx, sd_lmt; - - if (NULL == hmc_info) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_hmc_get_object_va: bad hmc_info ptr\n"); - goto exit; - } - if (NULL == hmc_info->hmc_obj) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_hmc_get_object_va: bad hmc_info->hmc_obj ptr\n"); - goto exit; - } - if (NULL == object_base) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_hmc_get_object_va: bad object_base ptr\n"); - goto exit; - } - if (I40E_HMC_INFO_SIGNATURE != hmc_info->signature) { - ret_code = I40E_ERR_BAD_PTR; - DEBUGOUT("i40e_hmc_get_object_va: bad hmc_info->signature\n"); - goto exit; - } - if (obj_idx >= hmc_info->hmc_obj[rsrc_type].cnt) { - DEBUGOUT1("i40e_hmc_get_object_va: returns error %d\n", - ret_code); - ret_code = I40E_ERR_INVALID_HMC_OBJ_INDEX; - goto exit; - } - /* find sd index and limit */ - I40E_FIND_SD_INDEX_LIMIT(hmc_info, rsrc_type, obj_idx, 1, - &sd_idx, &sd_lmt); - - sd_entry = &hmc_info->sd_table.sd_entry[sd_idx]; - obj_offset_in_fpm = hmc_info->hmc_obj[rsrc_type].base + - hmc_info->hmc_obj[rsrc_type].size * obj_idx; - - if (I40E_SD_TYPE_PAGED == sd_entry->entry_type) { - I40E_FIND_PD_INDEX_LIMIT(hmc_info, rsrc_type, obj_idx, 1, - &pd_idx, &pd_lmt); - rel_pd_idx = pd_idx % I40E_HMC_PD_CNT_IN_SD; - pd_entry = &sd_entry->u.pd_table.pd_entry[rel_pd_idx]; - obj_offset_in_pd = (u32)(obj_offset_in_fpm % - I40E_HMC_PAGED_BP_SIZE); - *object_base = (u8 *)pd_entry->bp.addr.va + obj_offset_in_pd; - } else { - obj_offset_in_sd = (u32)(obj_offset_in_fpm % - I40E_HMC_DIRECT_BP_SIZE); - *object_base = (u8 *)sd_entry->u.bp.addr.va + obj_offset_in_sd; - } -exit: - return ret_code; -} - -/** - * i40e_get_lan_tx_queue_context - return the HMC context for the queue - * @hw: the hardware struct - * @queue: the queue we care about - * @s: the struct to be filled - **/ -enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_txq *s) -{ - enum i40e_status_code err; - u8 *context_bytes; - - err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, - I40E_HMC_LAN_TX, queue); - if (err < 0) - return err; - - return i40e_get_hmc_context(context_bytes, - i40e_hmc_txq_ce_info, (u8 *)s); -} - -/** - * i40e_clear_lan_tx_queue_context - clear the HMC context for the queue - * @hw: the hardware struct - * @queue: the queue we care about - **/ -enum i40e_status_code i40e_clear_lan_tx_queue_context(struct i40e_hw *hw, - u16 queue) -{ - enum i40e_status_code err; - u8 *context_bytes; - - err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, - I40E_HMC_LAN_TX, queue); - if (err < 0) - return err; - - return i40e_clear_hmc_context(hw, context_bytes, I40E_HMC_LAN_TX); -} - -/** - * i40e_set_lan_tx_queue_context - set the HMC context for the queue - * @hw: the hardware struct - * @queue: the queue we care about - * @s: the struct to be filled - **/ -enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_txq *s) -{ - enum i40e_status_code err; - u8 *context_bytes; - - err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, - I40E_HMC_LAN_TX, queue); - if (err < 0) - return err; - - return 
i40e_set_hmc_context(context_bytes, - i40e_hmc_txq_ce_info, (u8 *)s); -} - -/** - * i40e_get_lan_rx_queue_context - return the HMC context for the queue - * @hw: the hardware struct - * @queue: the queue we care about - * @s: the struct to be filled - **/ -enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_rxq *s) -{ - enum i40e_status_code err; - u8 *context_bytes; - - err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, - I40E_HMC_LAN_RX, queue); - if (err < 0) - return err; - - return i40e_get_hmc_context(context_bytes, - i40e_hmc_rxq_ce_info, (u8 *)s); -} - -/** - * i40e_clear_lan_rx_queue_context - clear the HMC context for the queue - * @hw: the hardware struct - * @queue: the queue we care about - **/ -enum i40e_status_code i40e_clear_lan_rx_queue_context(struct i40e_hw *hw, - u16 queue) -{ - enum i40e_status_code err; - u8 *context_bytes; - - err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, - I40E_HMC_LAN_RX, queue); - if (err < 0) - return err; - - return i40e_clear_hmc_context(hw, context_bytes, I40E_HMC_LAN_RX); -} - -/** - * i40e_set_lan_rx_queue_context - set the HMC context for the queue - * @hw: the hardware struct - * @queue: the queue we care about - * @s: the struct to be filled - **/ -enum i40e_status_code i40e_set_lan_rx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_rxq *s) -{ - enum i40e_status_code err; - u8 *context_bytes; - - err = i40e_hmc_get_object_va(&hw->hmc, &context_bytes, - I40E_HMC_LAN_RX, queue); - if (err < 0) - return err; - - return i40e_set_hmc_context(context_bytes, - i40e_hmc_rxq_ce_info, (u8 *)s); -} diff --git a/lib/librte_pmd_i40e/i40e/i40e_lan_hmc.h b/lib/librte_pmd_i40e/i40e/i40e_lan_hmc.h deleted file mode 100644 index 70ef65c..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_lan_hmc.h +++ /dev/null @@ -1,200 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
- -***************************************************************************/ - -#ifndef _I40E_LAN_HMC_H_ -#define _I40E_LAN_HMC_H_ - -/* forward-declare the HW struct for the compiler */ -struct i40e_hw; - -/* HMC element context information */ - -/* Rx queue context data - * - * The sizes of the variables may be larger than needed due to crossing byte - * boundaries. If we do not have the width of the variable set to the correct - * size then we could end up shifting bits off the top of the variable when the - * variable is at the top of a byte and crosses over into the next byte. - */ -struct i40e_hmc_obj_rxq { - u16 head; - u16 cpuid; /* bigger than needed, see above for reason */ - u64 base; - u16 qlen; -#define I40E_RXQ_CTX_DBUFF_SHIFT 7 - u16 dbuff; /* bigger than needed, see above for reason */ -#define I40E_RXQ_CTX_HBUFF_SHIFT 6 - u16 hbuff; /* bigger than needed, see above for reason */ - u8 dtype; - u8 dsize; - u8 crcstrip; - u8 fc_ena; - u8 l2tsel; - u8 hsplit_0; - u8 hsplit_1; - u8 showiv; - u32 rxmax; /* bigger than needed, see above for reason */ - u8 tphrdesc_ena; - u8 tphwdesc_ena; - u8 tphdata_ena; - u8 tphhead_ena; - u16 lrxqthresh; /* bigger than needed, see above for reason */ - u8 prefena; /* NOTE: normally must be set to 1 at init */ -}; - -/* Tx queue context data -* -* The sizes of the variables may be larger than needed due to crossing byte -* boundaries. If we do not have the width of the variable set to the correct -* size then we could end up shifting bits off the top of the variable when the -* variable is at the top of a byte and crosses over into the next byte. -*/ -struct i40e_hmc_obj_txq { - u16 head; - u8 new_context; - u64 base; - u8 fc_ena; - u8 timesync_ena; - u8 fd_ena; - u8 alt_vlan_ena; - u16 thead_wb; - u8 cpuid; - u8 head_wb_ena; - u16 qlen; - u8 tphrdesc_ena; - u8 tphrpacket_ena; - u8 tphwdesc_ena; - u64 head_wb_addr; - u32 crc; - u16 rdylist; - u8 rdylist_act; -}; - -/* for hsplit_0 field of Rx HMC context */ -enum i40e_hmc_obj_rx_hsplit_0 { - I40E_HMC_OBJ_RX_HSPLIT_0_NO_SPLIT = 0, - I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_L2 = 1, - I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_IP = 2, - I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_TCP_UDP = 4, - I40E_HMC_OBJ_RX_HSPLIT_0_SPLIT_SCTP = 8, -}; - -/* fcoe_cntx and fcoe_filt are for debugging purpose only */ -struct i40e_hmc_obj_fcoe_cntx { - u32 rsv[32]; -}; - -struct i40e_hmc_obj_fcoe_filt { - u32 rsv[8]; -}; - -/* Context sizes for LAN objects */ -enum i40e_hmc_lan_object_size { - I40E_HMC_LAN_OBJ_SZ_8 = 0x3, - I40E_HMC_LAN_OBJ_SZ_16 = 0x4, - I40E_HMC_LAN_OBJ_SZ_32 = 0x5, - I40E_HMC_LAN_OBJ_SZ_64 = 0x6, - I40E_HMC_LAN_OBJ_SZ_128 = 0x7, - I40E_HMC_LAN_OBJ_SZ_256 = 0x8, - I40E_HMC_LAN_OBJ_SZ_512 = 0x9, -}; - -#define I40E_HMC_L2OBJ_BASE_ALIGNMENT 512 -#define I40E_HMC_OBJ_SIZE_TXQ 128 -#define I40E_HMC_OBJ_SIZE_RXQ 32 -#define I40E_HMC_OBJ_SIZE_FCOE_CNTX 64 -#define I40E_HMC_OBJ_SIZE_FCOE_FILT 64 - -enum i40e_hmc_lan_rsrc_type { - I40E_HMC_LAN_FULL = 0, - I40E_HMC_LAN_TX = 1, - I40E_HMC_LAN_RX = 2, - I40E_HMC_FCOE_CTX = 3, - I40E_HMC_FCOE_FILT = 4, - I40E_HMC_LAN_MAX = 5 -}; - -enum i40e_hmc_model { - I40E_HMC_MODEL_DIRECT_PREFERRED = 0, - I40E_HMC_MODEL_DIRECT_ONLY = 1, - I40E_HMC_MODEL_PAGED_ONLY = 2, - I40E_HMC_MODEL_UNKNOWN, -}; - -struct i40e_hmc_lan_create_obj_info { - struct i40e_hmc_info *hmc_info; - u32 rsrc_type; - u32 start_idx; - u32 count; - enum i40e_sd_entry_type entry_type; - u64 direct_mode_sz; -}; - -struct i40e_hmc_lan_delete_obj_info { - struct i40e_hmc_info *hmc_info; - u32 rsrc_type; - u32 start_idx; - u32 count; 
-}; - -enum i40e_status_code i40e_init_lan_hmc(struct i40e_hw *hw, u32 txq_num, - u32 rxq_num, u32 fcoe_cntx_num, - u32 fcoe_filt_num); -enum i40e_status_code i40e_configure_lan_hmc(struct i40e_hw *hw, - enum i40e_hmc_model model); -enum i40e_status_code i40e_shutdown_lan_hmc(struct i40e_hw *hw); - -u64 i40e_calculate_l2fpm_size(u32 txq_num, u32 rxq_num, - u32 fcoe_cntx_num, u32 fcoe_filt_num); -enum i40e_status_code i40e_get_lan_tx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_txq *s); -enum i40e_status_code i40e_clear_lan_tx_queue_context(struct i40e_hw *hw, - u16 queue); -enum i40e_status_code i40e_set_lan_tx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_txq *s); -enum i40e_status_code i40e_get_lan_rx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_rxq *s); -enum i40e_status_code i40e_clear_lan_rx_queue_context(struct i40e_hw *hw, - u16 queue); -enum i40e_status_code i40e_set_lan_rx_queue_context(struct i40e_hw *hw, - u16 queue, - struct i40e_hmc_obj_rxq *s); -enum i40e_status_code i40e_create_lan_hmc_object(struct i40e_hw *hw, - struct i40e_hmc_lan_create_obj_info *info); -enum i40e_status_code i40e_delete_lan_hmc_object(struct i40e_hw *hw, - struct i40e_hmc_lan_delete_obj_info *info); - -#endif /* _I40E_LAN_HMC_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_nvm.c b/lib/librte_pmd_i40e/i40e/i40e_nvm.c deleted file mode 100644 index c62f5eb..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_nvm.c +++ /dev/null @@ -1,940 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#include "i40e_prototype.h" - -/** - * i40e_init_nvm_ops - Initialize NVM function pointers - * @hw: pointer to the HW structure - * - * Setup the function pointers and the NVM info structure. Should be called - * once per NVM initialization, e.g. inside the i40e_init_shared_code(). 
- * Please notice that the NVM term is used here (& in all methods covered - * in this file) as an equivalent of the FLASH part mapped into the SR. - * We are accessing FLASH always thru the Shadow RAM. - **/ -enum i40e_status_code i40e_init_nvm(struct i40e_hw *hw) -{ - struct i40e_nvm_info *nvm = &hw->nvm; - enum i40e_status_code ret_code = I40E_SUCCESS; - u32 fla, gens; - u8 sr_size; - - DEBUGFUNC("i40e_init_nvm"); - - /* The SR size is stored regardless of the nvm programming mode - * as the blank mode may be used in the factory line. - */ - gens = rd32(hw, I40E_GLNVM_GENS); - sr_size = ((gens & I40E_GLNVM_GENS_SR_SIZE_MASK) >> - I40E_GLNVM_GENS_SR_SIZE_SHIFT); - /* Switching to words (sr_size contains power of 2KB) */ - nvm->sr_size = (1 << sr_size) * I40E_SR_WORDS_IN_1KB; - - /* Check if we are in the normal or blank NVM programming mode */ - fla = rd32(hw, I40E_GLNVM_FLA); - if (fla & I40E_GLNVM_FLA_LOCKED_MASK) { /* Normal programming mode */ - /* Max NVM timeout */ - nvm->timeout = I40E_MAX_NVM_TIMEOUT; - nvm->blank_nvm_mode = false; - } else { /* Blank programming mode */ - nvm->blank_nvm_mode = true; - ret_code = I40E_ERR_NVM_BLANK_MODE; - DEBUGOUT("NVM init error: unsupported blank mode.\n"); - } - - return ret_code; -} - -/** - * i40e_acquire_nvm - Generic request for acquiring the NVM ownership - * @hw: pointer to the HW structure - * @access: NVM access type (read or write) - * - * This function will request NVM ownership for reading - * via the proper Admin Command. - **/ -enum i40e_status_code i40e_acquire_nvm(struct i40e_hw *hw, - enum i40e_aq_resource_access_type access) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u64 gtime, timeout; - u64 time = 0; - - DEBUGFUNC("i40e_acquire_nvm"); - - if (hw->nvm.blank_nvm_mode) - goto i40e_i40e_acquire_nvm_exit; - - ret_code = i40e_aq_request_resource(hw, I40E_NVM_RESOURCE_ID, access, - 0, &time, NULL); - /* Reading the Global Device Timer */ - gtime = rd32(hw, I40E_GLVFGEN_TIMER); - - /* Store the timeout */ - hw->nvm.hw_semaphore_timeout = I40E_MS_TO_GTIME(time) + gtime; - - if (ret_code != I40E_SUCCESS) { - /* Set the polling timeout */ - if (time > I40E_MAX_NVM_TIMEOUT) - timeout = I40E_MS_TO_GTIME(I40E_MAX_NVM_TIMEOUT) - + gtime; - else - timeout = hw->nvm.hw_semaphore_timeout; - /* Poll until the current NVM owner timeouts */ - while (gtime < timeout) { - i40e_msec_delay(10); - ret_code = i40e_aq_request_resource(hw, - I40E_NVM_RESOURCE_ID, - access, 0, &time, - NULL); - if (ret_code == I40E_SUCCESS) { - hw->nvm.hw_semaphore_timeout = - I40E_MS_TO_GTIME(time) + gtime; - break; - } - gtime = rd32(hw, I40E_GLVFGEN_TIMER); - } - if (ret_code != I40E_SUCCESS) { - hw->nvm.hw_semaphore_timeout = 0; - hw->nvm.hw_semaphore_wait = - I40E_MS_TO_GTIME(time) + gtime; - DEBUGOUT1("NVM acquire timed out, wait %llu ms before trying again.\n", - time); - } - } - -i40e_i40e_acquire_nvm_exit: - return ret_code; -} - -/** - * i40e_release_nvm - Generic request for releasing the NVM ownership - * @hw: pointer to the HW structure - * - * This function will release NVM resource via the proper Admin Command. - **/ -void i40e_release_nvm(struct i40e_hw *hw) -{ - DEBUGFUNC("i40e_release_nvm"); - - if (!hw->nvm.blank_nvm_mode) - i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0, NULL); -} - -/** - * i40e_poll_sr_srctl_done_bit - Polls the GLNVM_SRCTL done bit - * @hw: pointer to the HW structure - * - * Polls the SRCTL Shadow RAM register done bit. 
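/* Illustrative aside (standalone sketch): i40e_init_nvm above decodes
 * GLNVM_GENS.SR_SIZE, which holds the Shadow RAM size as a power of two
 * in KB, into a count of 16-bit words; 1KB is 512 such words.
 */
#include <stdint.h>

static uint32_t sr_size_in_words(uint32_t sr_size_field)
{
        const uint32_t words_in_1kb = 512;      /* 1024 bytes / 2 */

        return (1u << sr_size_field) * words_in_1kb;
        /* e.g. sr_size_field == 4 -> 16 KB -> 8192 words */
}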
- **/ -static enum i40e_status_code i40e_poll_sr_srctl_done_bit(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_ERR_TIMEOUT; - u32 srctl, wait_cnt; - - DEBUGFUNC("i40e_poll_sr_srctl_done_bit"); - - /* Poll the I40E_GLNVM_SRCTL until the done bit is set */ - for (wait_cnt = 0; wait_cnt < I40E_SRRD_SRCTL_ATTEMPTS; wait_cnt++) { - srctl = rd32(hw, I40E_GLNVM_SRCTL); - if (srctl & I40E_GLNVM_SRCTL_DONE_MASK) { - ret_code = I40E_SUCCESS; - break; - } - i40e_usec_delay(5); - } - if (ret_code == I40E_ERR_TIMEOUT) - DEBUGOUT("Done bit in GLNVM_SRCTL not set"); - return ret_code; -} - -/** - * i40e_read_nvm_word - Reads Shadow RAM - * @hw: pointer to the HW structure - * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF) - * @data: word read from the Shadow RAM - * - * Reads one 16 bit word from the Shadow RAM using the GLNVM_SRCTL register. - **/ -enum i40e_status_code i40e_read_nvm_word(struct i40e_hw *hw, u16 offset, - u16 *data) -{ - enum i40e_status_code ret_code = I40E_ERR_TIMEOUT; - u32 sr_reg; - - DEBUGFUNC("i40e_read_nvm_srctl"); - - if (offset >= hw->nvm.sr_size) { - DEBUGOUT("NVM read error: Offset beyond Shadow RAM limit.\n"); - ret_code = I40E_ERR_PARAM; - goto read_nvm_exit; - } - - /* Poll the done bit first */ - ret_code = i40e_poll_sr_srctl_done_bit(hw); - if (ret_code == I40E_SUCCESS) { - /* Write the address and start reading */ - sr_reg = (u32)(offset << I40E_GLNVM_SRCTL_ADDR_SHIFT) | - (1 << I40E_GLNVM_SRCTL_START_SHIFT); - wr32(hw, I40E_GLNVM_SRCTL, sr_reg); - - /* Poll I40E_GLNVM_SRCTL until the done bit is set */ - ret_code = i40e_poll_sr_srctl_done_bit(hw); - if (ret_code == I40E_SUCCESS) { - sr_reg = rd32(hw, I40E_GLNVM_SRDATA); - *data = (u16)((sr_reg & - I40E_GLNVM_SRDATA_RDDATA_MASK) - >> I40E_GLNVM_SRDATA_RDDATA_SHIFT); - } - } - if (ret_code != I40E_SUCCESS) - DEBUGOUT1("NVM read error: Couldn't access Shadow RAM address: 0x%x\n", - offset); - -read_nvm_exit: - return ret_code; -} - -/** - * i40e_read_nvm_buffer - Reads Shadow RAM buffer - * @hw: pointer to the HW structure - * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF). - * @words: (in) number of words to read; (out) number of words actually read - * @data: words read from the Shadow RAM - * - * Reads 16 bit words (data buffer) from the SR using the i40e_read_nvm_srrd() - * method. The buffer read is preceded by the NVM ownership take - * and followed by the release. - **/ -enum i40e_status_code i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset, - u16 *words, u16 *data) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u16 index, word; - - DEBUGFUNC("i40e_read_nvm_buffer"); - - /* Loop thru the selected region */ - for (word = 0; word < *words; word++) { - index = offset + word; - ret_code = i40e_read_nvm_word(hw, index, &data[word]); - if (ret_code != I40E_SUCCESS) - break; - } - - /* Update the number of words read from the Shadow RAM */ - *words = word; - - return ret_code; -} -/** - * i40e_write_nvm_aq - Writes Shadow RAM. - * @hw: pointer to the HW structure. - * @module_pointer: module pointer location in words from the NVM beginning - * @offset: offset in words from module start - * @words: number of words to write - * @data: buffer with words to write to the Shadow RAM - * @last_command: tells the AdminQ that this is the last command - * - * Writes a 16 bit words buffer to the Shadow RAM using the admin command. 
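/* Illustrative aside (standalone sketch): i40e_write_nvm_aq below allows
 * at most one 4KB sector (2048 words) per AQ write and rejects a write
 * that straddles a sector boundary. The span check in isolation:
 */
#include <stdbool.h>
#include <stdint.h>

static bool nvm_write_span_ok(uint32_t offset_words, uint16_t words)
{
        const uint32_t sector_words = 2048;     /* 4096 bytes / 2 */

        if (words == 0 || words > sector_words)
                return false;
        /* first and last written word must land in the same sector */
        return (offset_words / sector_words) ==
               ((offset_words + words - 1) / sector_words);
}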
- **/ -enum i40e_status_code i40e_write_nvm_aq(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 words, void *data, - bool last_command) -{ - enum i40e_status_code ret_code = I40E_ERR_NVM; - - DEBUGFUNC("i40e_write_nvm_aq"); - - /* Here we are checking the SR limit only for the flat memory model. - * We cannot do it for the module-based model, as we did not acquire - * the NVM resource yet (we cannot get the module pointer value). - * Firmware will check the module-based model. - */ - if ((offset + words) > hw->nvm.sr_size) - DEBUGOUT("NVM write error: offset beyond Shadow RAM limit.\n"); - else if (words > I40E_SR_SECTOR_SIZE_IN_WORDS) - /* We can write only up to 4KB (one sector), in one AQ write */ - DEBUGOUT("NVM write fail error: cannot write more than 4KB in a single write.\n"); - else if (((offset + (words - 1)) / I40E_SR_SECTOR_SIZE_IN_WORDS) - != (offset / I40E_SR_SECTOR_SIZE_IN_WORDS)) - /* A single write cannot spread over two sectors */ - DEBUGOUT("NVM write error: cannot spread over two sectors in a single write.\n"); - else - ret_code = i40e_aq_update_nvm(hw, module_pointer, - 2 * offset, /*bytes*/ - 2 * words, /*bytes*/ - data, last_command, NULL); - - return ret_code; -} - -/** - * i40e_write_nvm_word - Writes Shadow RAM word - * @hw: pointer to the HW structure - * @offset: offset of the Shadow RAM word to write - * @data: word to write to the Shadow RAM - * - * Writes a 16 bit word to the SR using the i40e_write_nvm_aq() method. - * NVM ownership have to be acquired and released (on ARQ completion event - * reception) by caller. To commit SR to NVM update checksum function - * should be called. - **/ -enum i40e_status_code i40e_write_nvm_word(struct i40e_hw *hw, u32 offset, - void *data) -{ - DEBUGFUNC("i40e_write_nvm_word"); - - /* Value 0x00 below means that we treat SR as a flat mem */ - return i40e_write_nvm_aq(hw, 0x00, offset, 1, data, false); -} - -/** - * i40e_write_nvm_buffer - Writes Shadow RAM buffer - * @hw: pointer to the HW structure - * @module_pointer: module pointer location in words from the NVM beginning - * @offset: offset of the Shadow RAM buffer to write - * @words: number of words to write - * @data: words to write to the Shadow RAM - * - * Writes a 16 bit words buffer to the Shadow RAM using the admin command. - * NVM ownership must be acquired before calling this function and released - * on ARQ completion event reception by caller. To commit SR to NVM update - * checksum function should be called. - **/ -enum i40e_status_code i40e_write_nvm_buffer(struct i40e_hw *hw, - u8 module_pointer, u32 offset, - u16 words, void *data) -{ - DEBUGFUNC("i40e_write_nvm_buffer"); - - /* Here we will only write one buffer as the size of the modules - * mirrored in the Shadow RAM is always less than 4K. - */ - return i40e_write_nvm_aq(hw, module_pointer, offset, words, - data, false); -} - -/** - * i40e_calc_nvm_checksum - Calculates and returns the checksum - * @hw: pointer to hardware structure - * @checksum: pointer to the checksum - * - * This function calculates SW Checksum that covers the whole 64kB shadow RAM - * except the VPD and PCIe ALT Auto-load modules. The structure and size of VPD - * is customer specific and unknown. Therefore, this function skips all maximum - * possible size of VPD (1kB). 
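/* Illustrative aside (standalone sketch): i40e_calc_nvm_checksum below
 * sums every covered word and stores I40E_SR_SW_CHECKSUM_BASE minus that
 * sum, so summing the covered words plus the stored checksum reproduces
 * the base. Simplified here to skip only the checksum slot; the driver
 * additionally skips the VPD and PCIe ALT Auto-load modules.
 */
#include <stdint.h>
#include <stddef.h>

#define SW_CHECKSUM_BASE 0xBABAu        /* the driver's base value */

static uint16_t calc_checksum(const uint16_t *sr, size_t nwords,
                              size_t checksum_slot)
{
        uint16_t sum = 0;
        size_t i;

        for (i = 0; i < nwords; i++) {
                if (i == checksum_slot)
                        continue;       /* the stored checksum is not covered */
                sum = (uint16_t)(sum + sr[i]);
        }
        return (uint16_t)(SW_CHECKSUM_BASE - sum);
}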
- **/ -enum i40e_status_code i40e_calc_nvm_checksum(struct i40e_hw *hw, u16 *checksum) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u16 pcie_alt_module = 0; - u16 checksum_local = 0; - u16 vpd_module = 0; - u16 word = 0; - u32 i = 0; - - DEBUGFUNC("i40e_calc_nvm_checksum"); - - /* read pointer to VPD area */ - ret_code = i40e_read_nvm_word(hw, I40E_SR_VPD_PTR, &vpd_module); - if (ret_code != I40E_SUCCESS) { - ret_code = I40E_ERR_NVM_CHECKSUM; - goto i40e_calc_nvm_checksum_exit; - } - - /* read pointer to PCIe Alt Auto-load module */ - ret_code = i40e_read_nvm_word(hw, I40E_SR_PCIE_ALT_AUTO_LOAD_PTR, - &pcie_alt_module); - if (ret_code != I40E_SUCCESS) { - ret_code = I40E_ERR_NVM_CHECKSUM; - goto i40e_calc_nvm_checksum_exit; - } - - /* Calculate SW checksum that covers the whole 64kB shadow RAM - * except the VPD and PCIe ALT Auto-load modules - */ - for (i = 0; i < hw->nvm.sr_size; i++) { - /* Skip Checksum word */ - if (i == I40E_SR_SW_CHECKSUM_WORD) - i++; - /* Skip VPD module (convert byte size to word count) */ - if (i == (u32)vpd_module) { - i += (I40E_SR_VPD_MODULE_MAX_SIZE / 2); - if (i >= hw->nvm.sr_size) - break; - } - /* Skip PCIe ALT module (convert byte size to word count) */ - if (i == (u32)pcie_alt_module) { - i += (I40E_SR_PCIE_ALT_MODULE_MAX_SIZE / 2); - if (i >= hw->nvm.sr_size) - break; - } - - ret_code = i40e_read_nvm_word(hw, (u16)i, &word); - if (ret_code != I40E_SUCCESS) { - ret_code = I40E_ERR_NVM_CHECKSUM; - goto i40e_calc_nvm_checksum_exit; - } - checksum_local += word; - } - - *checksum = (u16)I40E_SR_SW_CHECKSUM_BASE - checksum_local; - -i40e_calc_nvm_checksum_exit: - return ret_code; -} - -/** - * i40e_update_nvm_checksum - Updates the NVM checksum - * @hw: pointer to hardware structure - * - * NVM ownership must be acquired before calling this function and released - * on ARQ completion event reception by caller. - * This function will commit SR to NVM. - **/ -enum i40e_status_code i40e_update_nvm_checksum(struct i40e_hw *hw) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u16 checksum; - - DEBUGFUNC("i40e_update_nvm_checksum"); - - ret_code = i40e_calc_nvm_checksum(hw, &checksum); - if (ret_code == I40E_SUCCESS) - ret_code = i40e_write_nvm_aq(hw, 0x00, I40E_SR_SW_CHECKSUM_WORD, - 1, &checksum, true); - - return ret_code; -} - -/** - * i40e_validate_nvm_checksum - Validate EEPROM checksum - * @hw: pointer to hardware structure - * @checksum: calculated checksum - * - * Performs checksum calculation and validates the NVM SW checksum. If the - * caller does not need checksum, the value can be NULL. - **/ -enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw, - u16 *checksum) -{ - enum i40e_status_code ret_code = I40E_SUCCESS; - u16 checksum_sr = 0; - u16 checksum_local = 0; - - DEBUGFUNC("i40e_validate_nvm_checksum"); - - ret_code = i40e_calc_nvm_checksum(hw, &checksum_local); - if (ret_code != I40E_SUCCESS) - goto i40e_validate_nvm_checksum_exit; - - /* Do not use i40e_read_nvm_word() because we do not want to take - * the synchronization semaphores twice here. 
- */ - i40e_read_nvm_word(hw, I40E_SR_SW_CHECKSUM_WORD, &checksum_sr); - - /* Verify read checksum from EEPROM is the same as - * calculated checksum - */ - if (checksum_local != checksum_sr) - ret_code = I40E_ERR_NVM_CHECKSUM; - - /* If the user cares, return the calculated checksum */ - if (checksum) - *checksum = checksum_local; - -i40e_validate_nvm_checksum_exit: - return ret_code; -} - -STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err); -STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err); -STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err); -STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - int *err); -STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - int *err); -STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err); -STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err); -STATIC inline u8 i40e_nvmupd_get_module(u32 val) -{ - return (u8)(val & I40E_NVM_MOD_PNT_MASK); -} -STATIC inline u8 i40e_nvmupd_get_transaction(u32 val) -{ - return (u8)((val & I40E_NVM_TRANS_MASK) >> I40E_NVM_TRANS_SHIFT); -} - -/** - * i40e_nvmupd_command - Process an NVM update command - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command - * @bytes: pointer to the data buffer - * @err: pointer to return error code - * - * Dispatches command depending on what update state is current - **/ -enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err) -{ - enum i40e_status_code status; - - DEBUGFUNC("i40e_nvmupd_command"); - - /* assume success */ - *err = 0; - - switch (hw->nvmupd_state) { - case I40E_NVMUPD_STATE_INIT: - status = i40e_nvmupd_state_init(hw, cmd, bytes, err); - break; - - case I40E_NVMUPD_STATE_READING: - status = i40e_nvmupd_state_reading(hw, cmd, bytes, err); - break; - - case I40E_NVMUPD_STATE_WRITING: - status = i40e_nvmupd_state_writing(hw, cmd, bytes, err); - break; - - default: - /* invalid state, should never happen */ - status = I40E_NOT_SUPPORTED; - *err = -ESRCH; - break; - } - return status; -} - -/** - * i40e_nvmupd_state_init - Handle NVM update state Init - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command buffer - * @bytes: pointer to the data buffer - * @err: pointer to return error code - * - * Process legitimate commands of the Init state and conditionally set next - * state. Reject all other commands. 
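/* Illustrative aside (standalone sketch, hypothetical names): the nvmupd
 * handlers form a small state machine. SA transactions do acquire +
 * operation + release in a single call and stay in INIT; SNT acquires
 * ownership and enters READING/WRITING; CON continues the session; LCB
 * finishes it and returns to INIT. The read-side transitions:
 */
enum upd_state { UPD_INIT, UPD_READING, UPD_WRITING };
enum upd_txn { TXN_SA, TXN_SNT, TXN_CON, TXN_LCB };

static enum upd_state next_read_state(enum upd_state st, enum upd_txn t)
{
        if (st == UPD_INIT && t == TXN_SNT)
                return UPD_READING;     /* session opened, ownership held */
        if (st == UPD_READING && t == TXN_LCB)
                return UPD_INIT;        /* last command: ownership released */
        return st;                      /* SA and CON leave the state alone */
}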
- **/ -STATIC enum i40e_status_code i40e_nvmupd_state_init(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err) -{ - enum i40e_status_code status = I40E_SUCCESS; - enum i40e_nvmupd_cmd upd_cmd; - - DEBUGFUNC("i40e_nvmupd_state_init"); - - upd_cmd = i40e_nvmupd_validate_command(hw, cmd, err); - - switch (upd_cmd) { - case I40E_NVMUPD_READ_SA: - status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ); - if (status) { - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - } else { - status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); - i40e_release_nvm(hw); - } - break; - - case I40E_NVMUPD_READ_SNT: - status = i40e_acquire_nvm(hw, I40E_RESOURCE_READ); - if (status) { - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - } else { - status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); - hw->nvmupd_state = I40E_NVMUPD_STATE_READING; - } - break; - - case I40E_NVMUPD_WRITE_ERA: - status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); - if (status) { - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - } else { - status = i40e_nvmupd_nvm_erase(hw, cmd, err); - if (status) - i40e_release_nvm(hw); - else - hw->aq.nvm_release_on_done = true; - } - break; - - case I40E_NVMUPD_WRITE_SA: - status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); - if (status) { - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - } else { - status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); - if (status) - i40e_release_nvm(hw); - else - hw->aq.nvm_release_on_done = true; - } - break; - - case I40E_NVMUPD_WRITE_SNT: - status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); - if (status) { - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - } else { - status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); - hw->nvmupd_state = I40E_NVMUPD_STATE_WRITING; - } - break; - - case I40E_NVMUPD_CSUM_SA: - status = i40e_acquire_nvm(hw, I40E_RESOURCE_WRITE); - if (status) { - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - } else { - status = i40e_update_nvm_checksum(hw); - if (status) { - *err = hw->aq.asq_last_status ? - i40e_aq_rc_to_posix(hw->aq.asq_last_status) : - -EIO; - i40e_release_nvm(hw); - } else { - hw->aq.nvm_release_on_done = true; - } - } - break; - - default: - status = I40E_ERR_NVM; - *err = -ESRCH; - break; - } - return status; -} - -/** - * i40e_nvmupd_state_reading - Handle NVM update state Reading - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command buffer - * @bytes: pointer to the data buffer - * @err: pointer to return error code - * - * NVM ownership is already held. Process legitimate commands and set any - * change in state; reject all other commands. 
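/* Illustrative aside (standalone sketch): the successful erase/write/
 * checksum paths above do not release NVM ownership inline; they set
 * nvm_release_on_done so the release happens only when the AdminQ
 * completion for that last command arrives. The shape of that hand-off:
 */
#include <stdbool.h>

struct aq_ctx { bool nvm_release_on_done; };    /* stand-in for hw->aq */

static void on_aq_completion(struct aq_ctx *aq)
{
        if (aq->nvm_release_on_done) {
                /* i40e_release_nvm(hw) would be called here */
                aq->nvm_release_on_done = false;
        }
}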
- **/ -STATIC enum i40e_status_code i40e_nvmupd_state_reading(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err) -{ - enum i40e_status_code status; - enum i40e_nvmupd_cmd upd_cmd; - - DEBUGFUNC("i40e_nvmupd_state_reading"); - - upd_cmd = i40e_nvmupd_validate_command(hw, cmd, err); - - switch (upd_cmd) { - case I40E_NVMUPD_READ_SA: - case I40E_NVMUPD_READ_CON: - status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); - break; - - case I40E_NVMUPD_READ_LCB: - status = i40e_nvmupd_nvm_read(hw, cmd, bytes, err); - i40e_release_nvm(hw); - hw->nvmupd_state = I40E_NVMUPD_STATE_INIT; - break; - - default: - status = I40E_NOT_SUPPORTED; - *err = -ESRCH; - break; - } - return status; -} - -/** - * i40e_nvmupd_state_writing - Handle NVM update state Writing - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command buffer - * @bytes: pointer to the data buffer - * @err: pointer to return error code - * - * NVM ownership is already held. Process legitimate commands and set any - * change in state; reject all other commands - **/ -STATIC enum i40e_status_code i40e_nvmupd_state_writing(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err) -{ - enum i40e_status_code status; - enum i40e_nvmupd_cmd upd_cmd; - - DEBUGFUNC("i40e_nvmupd_state_writing"); - - upd_cmd = i40e_nvmupd_validate_command(hw, cmd, err); - - switch (upd_cmd) { - case I40E_NVMUPD_WRITE_CON: - status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); - break; - - case I40E_NVMUPD_WRITE_LCB: - status = i40e_nvmupd_nvm_write(hw, cmd, bytes, err); - if (!status) { - hw->aq.nvm_release_on_done = true; - hw->nvmupd_state = I40E_NVMUPD_STATE_INIT; - } - break; - - case I40E_NVMUPD_CSUM_CON: - status = i40e_update_nvm_checksum(hw); - if (status) - *err = hw->aq.asq_last_status ? - i40e_aq_rc_to_posix(hw->aq.asq_last_status) : - -EIO; - break; - - case I40E_NVMUPD_CSUM_LCB: - status = i40e_update_nvm_checksum(hw); - if (status) { - *err = hw->aq.asq_last_status ? 
- i40e_aq_rc_to_posix(hw->aq.asq_last_status) : - -EIO; - } else { - hw->aq.nvm_release_on_done = true; - hw->nvmupd_state = I40E_NVMUPD_STATE_INIT; - } - break; - - default: - status = I40E_NOT_SUPPORTED; - *err = -ESRCH; - break; - } - return status; -} - -/** - * i40e_nvmupd_validate_command - Validate given command - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command buffer - * @err: pointer to return error code - * - * Return one of the valid command types or I40E_NVMUPD_INVALID - **/ -STATIC enum i40e_nvmupd_cmd i40e_nvmupd_validate_command(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - int *err) -{ - enum i40e_nvmupd_cmd upd_cmd; - u8 transaction, module; - - DEBUGFUNC("i40e_nvmupd_validate_command\n"); - - /* anything that doesn't match a recognized case is an error */ - upd_cmd = I40E_NVMUPD_INVALID; - - transaction = i40e_nvmupd_get_transaction(cmd->config); - module = i40e_nvmupd_get_module(cmd->config); - - /* limits on data size */ - if ((cmd->data_size < 1) || - (cmd->data_size > I40E_NVMUPD_MAX_DATA)) { - DEBUGOUT1("i40e_nvmupd_validate_command data_size %d\n", - cmd->data_size); - *err = -EFAULT; - return I40E_NVMUPD_INVALID; - } - - switch (cmd->command) { - case I40E_NVM_READ: - switch (transaction) { - case I40E_NVM_CON: - upd_cmd = I40E_NVMUPD_READ_CON; - break; - case I40E_NVM_SNT: - upd_cmd = I40E_NVMUPD_READ_SNT; - break; - case I40E_NVM_LCB: - upd_cmd = I40E_NVMUPD_READ_LCB; - break; - case I40E_NVM_SA: - upd_cmd = I40E_NVMUPD_READ_SA; - break; - } - break; - - case I40E_NVM_WRITE: - switch (transaction) { - case I40E_NVM_CON: - upd_cmd = I40E_NVMUPD_WRITE_CON; - break; - case I40E_NVM_SNT: - upd_cmd = I40E_NVMUPD_WRITE_SNT; - break; - case I40E_NVM_LCB: - upd_cmd = I40E_NVMUPD_WRITE_LCB; - break; - case I40E_NVM_SA: - upd_cmd = I40E_NVMUPD_WRITE_SA; - break; - case I40E_NVM_ERA: - upd_cmd = I40E_NVMUPD_WRITE_ERA; - break; - case I40E_NVM_CSUM: - upd_cmd = I40E_NVMUPD_CSUM_CON; - break; - case (I40E_NVM_CSUM|I40E_NVM_SA): - upd_cmd = I40E_NVMUPD_CSUM_SA; - break; - case (I40E_NVM_CSUM|I40E_NVM_LCB): - upd_cmd = I40E_NVMUPD_CSUM_LCB; - break; - } - break; - } - - if (upd_cmd == I40E_NVMUPD_INVALID) { - *err = -EFAULT; - DEBUGOUT2( - "i40e_nvmupd_validate_command returns %d err: %d\n", - upd_cmd, *err); - } - return upd_cmd; -} - -/** - * i40e_nvmupd_nvm_read - Read NVM - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command buffer - * @bytes: pointer to the data buffer - * @err: pointer to return error code - * - * cmd structure contains identifiers and data buffer - **/ -STATIC enum i40e_status_code i40e_nvmupd_nvm_read(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err) -{ - enum i40e_status_code status; - u8 module, transaction; - bool last; - - transaction = i40e_nvmupd_get_transaction(cmd->config); - module = i40e_nvmupd_get_module(cmd->config); - last = (transaction == I40E_NVM_LCB) || (transaction == I40E_NVM_SA); - DEBUGOUT3("i40e_nvmupd_nvm_read mod 0x%x off 0x%x len 0x%x\n", - module, cmd->offset, cmd->data_size); - - status = i40e_aq_read_nvm(hw, module, cmd->offset, (u16)cmd->data_size, - bytes, last, NULL); - DEBUGOUT1("i40e_nvmupd_nvm_read status %d\n", status); - if (status != I40E_SUCCESS) - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - - return status; -} - -/** - * i40e_nvmupd_nvm_erase - Erase an NVM module - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command buffer - * @err: pointer to return error code - * - * module, offset, data_size 
and data are in cmd structure - **/ -STATIC enum i40e_status_code i40e_nvmupd_nvm_erase(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - int *err) -{ - enum i40e_status_code status = I40E_SUCCESS; - u8 module, transaction; - bool last; - - transaction = i40e_nvmupd_get_transaction(cmd->config); - module = i40e_nvmupd_get_module(cmd->config); - last = (transaction & I40E_NVM_LCB); - DEBUGOUT3("i40e_nvmupd_nvm_erase mod 0x%x off 0x%x len 0x%x\n", - module, cmd->offset, cmd->data_size); - status = i40e_aq_erase_nvm(hw, module, cmd->offset, (u16)cmd->data_size, - last, NULL); - DEBUGOUT1("i40e_nvmupd_nvm_erase status %d\n", status); - if (status != I40E_SUCCESS) - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - - return status; -} - -/** - * i40e_nvmupd_nvm_write - Write NVM - * @hw: pointer to hardware structure - * @cmd: pointer to nvm update command buffer - * @bytes: pointer to the data buffer - * @err: pointer to return error code - * - * module, offset, data_size and data are in cmd structure - **/ -STATIC enum i40e_status_code i40e_nvmupd_nvm_write(struct i40e_hw *hw, - struct i40e_nvm_access *cmd, - u8 *bytes, int *err) -{ - enum i40e_status_code status = I40E_SUCCESS; - u8 module, transaction; - bool last; - - transaction = i40e_nvmupd_get_transaction(cmd->config); - module = i40e_nvmupd_get_module(cmd->config); - last = (transaction & I40E_NVM_LCB); - DEBUGOUT3("i40e_nvmupd_nvm_write mod 0x%x off 0x%x len 0x%x\n", - module, cmd->offset, cmd->data_size); - status = i40e_aq_update_nvm(hw, module, cmd->offset, - (u16)cmd->data_size, bytes, last, NULL); - DEBUGOUT1("i40e_nvmupd_nvm_write status %d\n", status); - if (status != I40E_SUCCESS) - *err = i40e_aq_rc_to_posix(hw->aq.asq_last_status); - - return status; -} diff --git a/lib/librte_pmd_i40e/i40e/i40e_osdep.h b/lib/librte_pmd_i40e/i40e/i40e_osdep.h deleted file mode 100644 index de71b0d..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_osdep.h +++ /dev/null @@ -1,197 +0,0 @@ -/****************************************************************************** - - Copyright (c) 2001-2014, Intel Corporation - All rights reserved. - - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE - LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR - CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF - SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS - INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN - CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) - ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE - POSSIBILITY OF SUCH DAMAGE. -******************************************************************************/ - -#ifndef _I40E_OSDEP_H_ -#define _I40E_OSDEP_H_ - -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include - -#include "../i40e_logs.h" - -#define INLINE inline -#define STATIC static - -typedef uint8_t u8; -typedef int8_t s8; -typedef uint16_t u16; -typedef uint32_t u32; -typedef int32_t s32; -typedef uint64_t u64; -typedef int bool; - -typedef enum i40e_status_code i40e_status; -#define __iomem -#define hw_dbg(hw, S, A...) do {} while (0) -#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) -#define lower_32_bits(n) ((u32)(n)) -#define low_16_bits(x) ((x) & 0xFFFF) -#define high_16_bits(x) (((x) & 0xFFFF0000) >> 16) - -#ifndef ETH_ADDR_LEN -#define ETH_ADDR_LEN 6 -#endif - -#ifndef __le16 -#define __le16 uint16_t -#endif -#ifndef __le32 -#define __le32 uint32_t -#endif -#ifndef __le64 -#define __le64 uint64_t -#endif -#ifndef __be16 -#define __be16 uint16_t -#endif -#ifndef __be32 -#define __be32 uint32_t -#endif -#ifndef __be64 -#define __be64 uint64_t -#endif - -#define FALSE 0 -#define TRUE 1 -#define false 0 -#define true 1 - -#define min(a,b) RTE_MIN(a,b) -#define max(a,b) RTE_MAX(a,b) - -#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f)) -#define ASSERT(x) if(!(x)) rte_panic("IXGBE: x") - -#define DEBUGOUT(S) PMD_DRV_LOG_RAW(DEBUG, S) -#define DEBUGOUT1(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A) - -#define DEBUGFUNC(F) DEBUGOUT(F "\n") -#define DEBUGOUT2 DEBUGOUT1 -#define DEBUGOUT3 DEBUGOUT2 -#define DEBUGOUT6 DEBUGOUT3 -#define DEBUGOUT7 DEBUGOUT6 - -#define i40e_debug(h, m, s, ...) 
\ -do { \ - if (((m) & (h)->debug_mask)) \ - PMD_DRV_LOG_RAW(DEBUG, "i40e %02x.%x " s, \ - (h)->bus.device, (h)->bus.func, \ - ##__VA_ARGS__); \ -} while (0) - -#define I40E_PCI_REG(reg) (*((volatile uint32_t *)(reg))) -#define I40E_PCI_REG_ADDR(a, reg) \ - ((volatile uint32_t *)((char *)(a)->hw_addr + (reg))) -static inline uint32_t i40e_read_addr(volatile void *addr) -{ - return I40E_PCI_REG(addr); -} -#define I40E_PCI_REG_WRITE(reg, value) \ - do {I40E_PCI_REG((reg)) = (value);} while(0) - -#define I40E_WRITE_FLUSH(a) I40E_READ_REG(a, I40E_GLGEN_STAT) -#define I40EVF_WRITE_FLUSH(a) I40E_READ_REG(a, I40E_VFGEN_RSTAT) - -#define I40E_READ_REG(hw, reg) i40e_read_addr(I40E_PCI_REG_ADDR((hw), (reg))) -#define I40E_WRITE_REG(hw, reg, value) \ - I40E_PCI_REG_WRITE(I40E_PCI_REG_ADDR((hw), (reg)), (value)) - -#define rd32(a, reg) i40e_read_addr(I40E_PCI_REG_ADDR((a), (reg))) -#define wr32(a, reg, value) \ - I40E_PCI_REG_WRITE(I40E_PCI_REG_ADDR((a), (reg)), (value)) -#define flush(a) i40e_read_addr(I40E_PCI_REG_ADDR((a), (I40E_GLGEN_STAT))) - -#define ARRAY_SIZE(arr) (sizeof(arr)/sizeof(arr[0])) - -/* memory allocation tracking */ -struct i40e_dma_mem { - void *va; - u64 pa; - u32 size; - u64 id; -} __attribute__((packed)); - -#define i40e_allocate_dma_mem(h, m, unused, s, a) \ - i40e_allocate_dma_mem_d(h, m, s, a) -#define i40e_free_dma_mem(h, m) i40e_free_dma_mem_d(h, m) - -struct i40e_virt_mem { - void *va; - u32 size; -} __attribute__((packed)); - -#define i40e_allocate_virt_mem(h, m, s) i40e_allocate_virt_mem_d(h, m, s) -#define i40e_free_virt_mem(h, m) i40e_free_virt_mem_d(h, m) - -#define CPU_TO_LE16(o) rte_cpu_to_le_16(o) -#define CPU_TO_LE32(s) rte_cpu_to_le_32(s) -#define CPU_TO_LE64(h) rte_cpu_to_le_64(h) -#define LE16_TO_CPU(a) rte_le_to_cpu_16(a) -#define LE32_TO_CPU(c) rte_le_to_cpu_32(c) -#define LE64_TO_CPU(k) rte_le_to_cpu_64(k) - -/* SW spinlock */ -struct i40e_spinlock { - rte_spinlock_t spinlock; -}; - -#define i40e_init_spinlock(_sp) i40e_init_spinlock_d(_sp) -#define i40e_acquire_spinlock(_sp) i40e_acquire_spinlock_d(_sp) -#define i40e_release_spinlock(_sp) i40e_release_spinlock_d(_sp) -#define i40e_destroy_spinlock(_sp) i40e_destroy_spinlock_d(_sp) - -#define I40E_NTOHS(a) rte_be_to_cpu_16(a) -#define I40E_NTOHL(a) rte_be_to_cpu_32(a) -#define I40E_HTONS(a) rte_cpu_to_be_16(a) -#define I40E_HTONL(a) rte_cpu_to_be_32(a) - -#define i40e_memset(a, b, c, d) memset((a), (b), (c)) -#define i40e_memcpy(a, b, c, d) rte_memcpy((a), (b), (c)) - -#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d)) -#define DELAY(x) rte_delay_us(x) -#define i40e_usec_delay(x) rte_delay_us(x) -#define i40e_msec_delay(x) rte_delay_us(1000*(x)) -#define udelay(x) DELAY(x) -#define msleep(x) DELAY(1000*(x)) -#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000)) - -#endif /* _I40E_OSDEP_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_prototype.h b/lib/librte_pmd_i40e/i40e/i40e_prototype.h deleted file mode 100644 index f3215cf..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_prototype.h +++ /dev/null @@ -1,430 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. 
Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#ifndef _I40E_PROTOTYPE_H_ -#define _I40E_PROTOTYPE_H_ - -#include "i40e_type.h" -#include "i40e_alloc.h" -#include "i40e_virtchnl.h" - -/* Prototypes for shared code functions that are not in - * the standard function pointer structures. These are - * mostly because they are needed even before the init - * has happened and will assist in the early SW and FW - * setup. - */ - -/* adminq functions */ -enum i40e_status_code i40e_init_adminq(struct i40e_hw *hw); -enum i40e_status_code i40e_shutdown_adminq(struct i40e_hw *hw); -enum i40e_status_code i40e_init_asq(struct i40e_hw *hw); -enum i40e_status_code i40e_init_arq(struct i40e_hw *hw); -enum i40e_status_code i40e_alloc_adminq_asq_ring(struct i40e_hw *hw); -enum i40e_status_code i40e_alloc_adminq_arq_ring(struct i40e_hw *hw); -enum i40e_status_code i40e_shutdown_asq(struct i40e_hw *hw); -enum i40e_status_code i40e_shutdown_arq(struct i40e_hw *hw); -u16 i40e_clean_asq(struct i40e_hw *hw); -void i40e_free_adminq_asq(struct i40e_hw *hw); -void i40e_free_adminq_arq(struct i40e_hw *hw); -void i40e_adminq_init_ring_data(struct i40e_hw *hw); -enum i40e_status_code i40e_clean_arq_element(struct i40e_hw *hw, - struct i40e_arq_event_info *e, - u16 *events_pending); -enum i40e_status_code i40e_asq_send_command(struct i40e_hw *hw, - struct i40e_aq_desc *desc, - void *buff, /* can be NULL */ - u16 buff_size, - struct i40e_asq_cmd_details *cmd_details); -bool i40e_asq_done(struct i40e_hw *hw); - -/* debug function for adminq */ -void i40e_debug_aq(struct i40e_hw *hw, enum i40e_debug_mask mask, - void *desc, void *buffer, u16 buf_len); - -void i40e_idle_aq(struct i40e_hw *hw); -void i40e_resume_aq(struct i40e_hw *hw); -bool i40e_check_asq_alive(struct i40e_hw *hw); -enum i40e_status_code i40e_aq_queue_shutdown(struct i40e_hw *hw, bool unloading); - -#ifndef VF_DRIVER - -u32 i40e_led_get(struct i40e_hw *hw); -void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink); - -/* admin send queue commands */ - -enum i40e_status_code i40e_aq_get_firmware_version(struct i40e_hw *hw, - u16 *fw_major_version, u16 *fw_minor_version, - u16 *api_major_version, u16 *api_minor_version, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_debug_write_register(struct i40e_hw *hw, - u32 reg_addr, u64 reg_val, - 
struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_phy_capabilities(struct i40e_hw *hw, - bool qualified_modules, bool report_init, - struct i40e_aq_get_phy_abilities_resp *abilities, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_phy_config(struct i40e_hw *hw, - struct i40e_aq_set_phy_config *config, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_set_fc(struct i40e_hw *hw, u8 *aq_failures, - bool atomic_reset); -enum i40e_status_code i40e_aq_set_phy_int_mask(struct i40e_hw *hw, u16 mask, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_mac_config(struct i40e_hw *hw, - u16 max_frame_size, bool crc_en, u16 pacing, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_local_advt_reg(struct i40e_hw *hw, - u64 *advt_reg, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_partner_advt(struct i40e_hw *hw, - u64 *advt_reg, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_lb_modes(struct i40e_hw *hw, u16 lb_modes, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_clear_pxe_mode(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_link_restart_an(struct i40e_hw *hw, - bool enable_link, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_link_info(struct i40e_hw *hw, - bool enable_lse, struct i40e_link_status *link, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_update_link_info(struct i40e_hw *hw, - bool enable_lse); -enum i40e_status_code i40e_aq_set_local_advt_reg(struct i40e_hw *hw, - u64 advt_reg, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_send_driver_version(struct i40e_hw *hw, - struct i40e_driver_version *dv, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_add_vsi(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_vsi_broadcast(struct i40e_hw *hw, - u16 vsi_id, bool set_filter, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw, - u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw, - u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_update_vsi_params(struct i40e_hw *hw, - struct i40e_vsi_context *vsi_ctx, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_add_veb(struct i40e_hw *hw, u16 uplink_seid, - u16 downlink_seid, u8 enabled_tc, - bool default_port, bool enable_l2_filtering, - u16 *pveb_seid, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_veb_parameters(struct i40e_hw *hw, - u16 veb_seid, u16 *switch_id, bool *floating, - u16 *statistic_index, u16 *vebs_used, - u16 *vebs_free, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code 
i40e_aq_add_macvlan(struct i40e_hw *hw, u16 vsi_id, - struct i40e_aqc_add_macvlan_element_data *mv_list, - u16 count, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_remove_macvlan(struct i40e_hw *hw, u16 vsi_id, - struct i40e_aqc_remove_macvlan_element_data *mv_list, - u16 count, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_add_vlan(struct i40e_hw *hw, u16 vsi_id, - struct i40e_aqc_add_remove_vlan_element_data *v_list, - u8 count, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_remove_vlan(struct i40e_hw *hw, u16 vsi_id, - struct i40e_aqc_add_remove_vlan_element_data *v_list, - u8 count, struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_send_msg_to_vf(struct i40e_hw *hw, u16 vfid, - u32 v_opcode, u32 v_retval, u8 *msg, u16 msglen, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_switch_config(struct i40e_hw *hw, - struct i40e_aqc_get_switch_config_resp *buf, - u16 buf_size, u16 *start_seid, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_request_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - enum i40e_aq_resource_access_type access, - u8 sdp_number, u64 *timeout, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_release_resource(struct i40e_hw *hw, - enum i40e_aq_resources_ids resource, - u8 sdp_number, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_read_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_erase_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, bool last_command, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_discover_capabilities(struct i40e_hw *hw, - void *buff, u16 buff_size, u16 *data_size, - enum i40e_admin_queue_opc list_type_opc, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_update_nvm(struct i40e_hw *hw, u8 module_pointer, - u32 offset, u16 length, void *data, - bool last_command, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_get_lldp_mib(struct i40e_hw *hw, u8 bridge_type, - u8 mib_type, void *buff, u16 buff_size, - u16 *local_len, u16 *remote_len, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_cfg_lldp_mib_change_event(struct i40e_hw *hw, - bool enable_update, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_add_lldp_tlv(struct i40e_hw *hw, u8 bridge_type, - void *buff, u16 buff_size, u16 tlv_len, - u16 *mib_len, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_update_lldp_tlv(struct i40e_hw *hw, - u8 bridge_type, void *buff, u16 buff_size, - u16 old_len, u16 new_len, u16 offset, - u16 *mib_len, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_delete_lldp_tlv(struct i40e_hw *hw, - u8 bridge_type, void *buff, u16 buff_size, - u16 tlv_len, u16 *mib_len, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_stop_lldp(struct i40e_hw *hw, bool shutdown_agent, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_start_lldp(struct i40e_hw *hw, - struct i40e_asq_cmd_details *cmd_details); -enum i40e_status_code i40e_aq_add_udp_tunnel(struct i40e_hw *hw, - u16 udp_port, u8 protocol_index, - u8 *filter_index, - struct i40e_asq_cmd_details *cmd_details); 
-enum i40e_status_code i40e_aq_del_udp_tunnel(struct i40e_hw *hw, u8 index,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_switch_resource_alloc(struct i40e_hw *hw,
-        u8 *num_entries,
-        struct i40e_aqc_switch_resource_alloc_element_resp *buf,
-        u16 count,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_pvirt(struct i40e_hw *hw, u16 flags,
-        u16 mac_seid, u16 vsi_seid,
-        u16 *ret_seid);
-enum i40e_status_code i40e_aq_add_tag(struct i40e_hw *hw, bool direct_to_queue,
-        u16 vsi_seid, u16 tag, u16 queue_num,
-        u16 *tags_used, u16 *tags_free,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_tag(struct i40e_hw *hw, u16 vsi_seid,
-        u16 tag, u16 *tags_used, u16 *tags_free,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_mcast_etag(struct i40e_hw *hw, u16 pe_seid,
-        u16 etag, u8 num_tags_in_buf, void *buf,
-        u16 *tags_used, u16 *tags_free,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_mcast_etag(struct i40e_hw *hw, u16 pe_seid,
-        u16 etag, u16 *tags_used, u16 *tags_free,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_update_tag(struct i40e_hw *hw, u16 vsi_seid,
-        u16 old_tag, u16 new_tag, u16 *tags_used,
-        u16 *tags_free,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_statistics(struct i40e_hw *hw, u16 seid,
-        u16 vlan_id, u16 *stat_index,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_remove_statistics(struct i40e_hw *hw, u16 seid,
-        u16 vlan_id, u16 stat_index,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_port_parameters(struct i40e_hw *hw,
-        u16 bad_frame_vsi, bool save_bad_pac,
-        bool pad_short_pac, bool double_vlan,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_delete_element(struct i40e_hw *hw, u16 seid,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_mac_address_write(struct i40e_hw *hw,
-        u16 flags, u8 *mac_addr,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
-        u16 seid, u16 credit, u8 max_credit,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_dcb_ignore_pfc(struct i40e_hw *hw,
-        u8 tcmap, bool request, u8 *tcmap_ret,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_get_hmc_resource_profile(struct i40e_hw *hw,
-        enum i40e_aq_hmc_profile *profile,
-        u8 *pe_vf_enabled_count,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_switch_comp_ets_bw_limit(
-        struct i40e_hw *hw, u16 seid,
-        struct i40e_aqc_configure_switching_comp_ets_bw_limit_data *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_vsi_ets_sla_bw_limit(struct i40e_hw *hw,
-        u16 seid,
-        struct i40e_aqc_configure_vsi_ets_sla_bw_data *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_dcb_updated(struct i40e_hw *hw,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw,
-        enum i40e_aq_hmc_profile profile,
-        u8 pe_vf_enabled_count,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
-        u16 seid, u16 credit, u8 max_bw,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_config_vsi_tc_bw(struct i40e_hw *hw,
-        u16 seid,
-        struct i40e_aqc_configure_vsi_tc_bw_data *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_query_vsi_bw_config(struct i40e_hw *hw,
-        u16 seid,
-        struct i40e_aqc_query_vsi_bw_config_resp *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_query_vsi_ets_sla_config(struct i40e_hw *hw,
-        u16 seid,
-        struct i40e_aqc_query_vsi_ets_sla_config_resp *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_query_switch_comp_ets_config(struct i40e_hw *hw,
-        u16 seid,
-        struct i40e_aqc_query_switching_comp_ets_config_resp *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_query_port_ets_config(struct i40e_hw *hw,
-        u16 seid,
-        struct i40e_aqc_query_port_ets_config_resp *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_query_switch_comp_bw_config(struct i40e_hw *hw,
-        u16 seid,
-        struct i40e_aqc_query_switching_comp_bw_config_resp *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_resume_port_tx(struct i40e_hw *hw,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_aq_add_cloud_filters(struct i40e_hw *hw,
-        u16 vsi,
-        struct i40e_aqc_add_remove_cloud_filters_element_data *filters,
-        u8 filter_count);
-
-enum i40e_status_code i40e_aq_remove_cloud_filters(struct i40e_hw *hw,
-        u16 vsi,
-        struct i40e_aqc_add_remove_cloud_filters_element_data *filters,
-        u8 filter_count);
-
-enum i40e_status_code i40e_aq_alternate_read(struct i40e_hw *hw,
-        u32 reg_addr0, u32 *reg_val0,
-        u32 reg_addr1, u32 *reg_val1);
-enum i40e_status_code i40e_aq_alternate_read_indirect(struct i40e_hw *hw,
-        u32 addr, u32 dw_count, void *buffer);
-enum i40e_status_code i40e_aq_alternate_write(struct i40e_hw *hw,
-        u32 reg_addr0, u32 reg_val0,
-        u32 reg_addr1, u32 reg_val1);
-enum i40e_status_code i40e_aq_alternate_write_indirect(struct i40e_hw *hw,
-        u32 addr, u32 dw_count, void *buffer);
-enum i40e_status_code i40e_aq_alternate_clear(struct i40e_hw *hw);
-enum i40e_status_code i40e_aq_alternate_write_done(struct i40e_hw *hw,
-        u8 bios_mode, bool *reset_needed);
-enum i40e_status_code i40e_aq_set_oem_mode(struct i40e_hw *hw,
-        u8 oem_mode);
-
-/* i40e_common */
-enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw);
-enum i40e_status_code i40e_pf_reset(struct i40e_hw *hw);
-void i40e_clear_hw(struct i40e_hw *hw);
-void i40e_clear_pxe_mode(struct i40e_hw *hw);
-bool i40e_get_link_status(struct i40e_hw *hw);
-enum i40e_status_code i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-enum i40e_status_code i40e_read_bw_from_alt_ram(struct i40e_hw *hw,
-        u32 *max_bw, u32 *min_bw, bool *min_valid, bool *max_valid);
-enum i40e_status_code i40e_aq_configure_partition_bw(struct i40e_hw *hw,
-        struct i40e_aqc_configure_partition_bw_data *bw_data,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable);
-enum i40e_status_code i40e_validate_mac_addr(u8 *mac_addr);
-enum i40e_aq_link_speed i40e_get_link_speed(struct i40e_hw *hw);
-/* prototype for functions used for NVM access */
-enum i40e_status_code i40e_init_nvm(struct i40e_hw *hw);
-enum i40e_status_code i40e_acquire_nvm(struct i40e_hw *hw,
-        enum i40e_aq_resource_access_type access);
-void i40e_release_nvm(struct i40e_hw *hw);
-enum i40e_status_code i40e_read_nvm_srrd(struct i40e_hw *hw, u16 offset,
-        u16 *data);
-enum i40e_status_code i40e_read_nvm_word(struct i40e_hw *hw, u16 offset,
-        u16 *data);
-enum i40e_status_code i40e_read_nvm_buffer(struct i40e_hw *hw, u16 offset,
-        u16 *words, u16 *data);
-enum i40e_status_code i40e_write_nvm_aq(struct i40e_hw *hw, u8 module,
-        u32 offset, u16 words, void *data,
-        bool last_command);
-enum i40e_status_code i40e_write_nvm_word(struct i40e_hw *hw, u32 offset,
-        void *data);
-enum i40e_status_code i40e_write_nvm_buffer(struct i40e_hw *hw, u8 module,
-        u32 offset, u16 words, void *data);
-enum i40e_status_code i40e_calc_nvm_checksum(struct i40e_hw *hw, u16 *checksum);
-enum i40e_status_code i40e_update_nvm_checksum(struct i40e_hw *hw);
-enum i40e_status_code i40e_validate_nvm_checksum(struct i40e_hw *hw,
-        u16 *checksum);
-enum i40e_status_code i40e_nvmupd_command(struct i40e_hw *hw,
-        struct i40e_nvm_access *cmd,
-        u8 *bytes, int *);
-void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status);
-#endif /* VF_DRIVER */
-
-#if defined(I40E_QV) || defined(VF_DRIVER)
-enum i40e_status_code i40e_set_mac_type(struct i40e_hw *hw);
-
-#endif
-extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[];
-
-STATIC INLINE struct i40e_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
-{
-        return i40e_ptype_lookup[ptype];
-}
-
-/* prototype for functions used for SW spinlocks */
-void i40e_init_spinlock(struct i40e_spinlock *sp);
-void i40e_acquire_spinlock(struct i40e_spinlock *sp);
-void i40e_release_spinlock(struct i40e_spinlock *sp);
-void i40e_destroy_spinlock(struct i40e_spinlock *sp);
-
-/* i40e_common for VF drivers*/
-void i40e_vf_parse_hw_config(struct i40e_hw *hw,
-        struct i40e_virtchnl_vf_resource *msg);
-enum i40e_status_code i40e_vf_reset(struct i40e_hw *hw);
-enum i40e_status_code i40e_aq_send_msg_to_pf(struct i40e_hw *hw,
-        enum i40e_virtchnl_ops v_opcode,
-        enum i40e_status_code v_retval,
-        u8 *msg, u16 msglen,
-        struct i40e_asq_cmd_details *cmd_details);
-enum i40e_status_code i40e_set_filter_control(struct i40e_hw *hw,
-        struct i40e_filter_control_settings *settings);
-enum i40e_status_code i40e_aq_add_rem_control_packet_filter(struct i40e_hw *hw,
-        u8 *mac_addr, u16 ethtype, u16 flags,
-        u16 vsi_seid, u16 queue, bool is_add,
-        struct i40e_control_filter_stats *stats,
-        struct i40e_asq_cmd_details *cmd_details);
-#endif /* _I40E_PROTOTYPE_H_ */
diff --git a/lib/librte_pmd_i40e/i40e/i40e_register.h b/lib/librte_pmd_i40e/i40e/i40e_register.h
deleted file mode 100644
index f236c39..0000000
--- a/lib/librte_pmd_i40e/i40e/i40e_register.h
+++ /dev/null
@@ -1,3377 +0,0 @@
-/*******************************************************************************
-
-Copyright (c) 2013 - 2014, Intel Corporation
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
- 1. Redistributions of source code must retain the above copyright notice,
-    this list of conditions and the following disclaimer.
-
- 2. Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in the
-    documentation and/or other materials provided with the distribution.
-
- 3. Neither the name of the Intel Corporation nor the names of its
-    contributors may be used to endorse or promote products derived from
-    this software without specific prior written permission.
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#ifndef _I40E_REGISTER_H_ -#define _I40E_REGISTER_H_ - - -#define I40E_GL_ARQBAH 0x000801C0 /* Reset: EMPR */ -#define I40E_GL_ARQBAH_ARQBAH_SHIFT 0 -#define I40E_GL_ARQBAH_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ARQBAH_ARQBAH_SHIFT) -#define I40E_GL_ARQBAL 0x000800C0 /* Reset: EMPR */ -#define I40E_GL_ARQBAL_ARQBAL_SHIFT 0 -#define I40E_GL_ARQBAL_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ARQBAL_ARQBAL_SHIFT) -#define I40E_GL_ARQH 0x000803C0 /* Reset: EMPR */ -#define I40E_GL_ARQH_ARQH_SHIFT 0 -#define I40E_GL_ARQH_ARQH_MASK I40E_MASK(0x3FF, I40E_GL_ARQH_ARQH_SHIFT) -#define I40E_GL_ARQT 0x000804C0 /* Reset: EMPR */ -#define I40E_GL_ARQT_ARQT_SHIFT 0 -#define I40E_GL_ARQT_ARQT_MASK I40E_MASK(0x3FF, I40E_GL_ARQT_ARQT_SHIFT) -#define I40E_GL_ATQBAH 0x00080140 /* Reset: EMPR */ -#define I40E_GL_ATQBAH_ATQBAH_SHIFT 0 -#define I40E_GL_ATQBAH_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ATQBAH_ATQBAH_SHIFT) -#define I40E_GL_ATQBAL 0x00080040 /* Reset: EMPR */ -#define I40E_GL_ATQBAL_ATQBAL_SHIFT 0 -#define I40E_GL_ATQBAL_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_ATQBAL_ATQBAL_SHIFT) -#define I40E_GL_ATQH 0x00080340 /* Reset: EMPR */ -#define I40E_GL_ATQH_ATQH_SHIFT 0 -#define I40E_GL_ATQH_ATQH_MASK I40E_MASK(0x3FF, I40E_GL_ATQH_ATQH_SHIFT) -#define I40E_GL_ATQLEN 0x00080240 /* Reset: EMPR */ -#define I40E_GL_ATQLEN_ATQLEN_SHIFT 0 -#define I40E_GL_ATQLEN_ATQLEN_MASK I40E_MASK(0x3FF, I40E_GL_ATQLEN_ATQLEN_SHIFT) -#define I40E_GL_ATQLEN_ATQVFE_SHIFT 28 -#define I40E_GL_ATQLEN_ATQVFE_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQVFE_SHIFT) -#define I40E_GL_ATQLEN_ATQOVFL_SHIFT 29 -#define I40E_GL_ATQLEN_ATQOVFL_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQOVFL_SHIFT) -#define I40E_GL_ATQLEN_ATQCRIT_SHIFT 30 -#define I40E_GL_ATQLEN_ATQCRIT_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQCRIT_SHIFT) -#define I40E_GL_ATQLEN_ATQENABLE_SHIFT 31 -#define I40E_GL_ATQLEN_ATQENABLE_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQENABLE_SHIFT) -#define I40E_GL_ATQT 0x00080440 /* Reset: EMPR */ -#define I40E_GL_ATQT_ATQT_SHIFT 0 -#define I40E_GL_ATQT_ATQT_MASK I40E_MASK(0x3FF, I40E_GL_ATQT_ATQT_SHIFT) -#define I40E_PF_ARQBAH 0x00080180 /* Reset: EMPR */ -#define I40E_PF_ARQBAH_ARQBAH_SHIFT 0 -#define I40E_PF_ARQBAH_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ARQBAH_ARQBAH_SHIFT) -#define I40E_PF_ARQBAL 0x00080080 /* Reset: EMPR */ -#define I40E_PF_ARQBAL_ARQBAL_SHIFT 0 -#define I40E_PF_ARQBAL_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ARQBAL_ARQBAL_SHIFT) -#define I40E_PF_ARQH 0x00080380 /* Reset: EMPR */ -#define I40E_PF_ARQH_ARQH_SHIFT 0 -#define I40E_PF_ARQH_ARQH_MASK I40E_MASK(0x3FF, I40E_PF_ARQH_ARQH_SHIFT) -#define I40E_PF_ARQLEN 0x00080280 /* Reset: EMPR */ -#define I40E_PF_ARQLEN_ARQLEN_SHIFT 0 -#define 
I40E_PF_ARQLEN_ARQLEN_MASK I40E_MASK(0x3FF, I40E_PF_ARQLEN_ARQLEN_SHIFT) -#define I40E_PF_ARQLEN_ARQVFE_SHIFT 28 -#define I40E_PF_ARQLEN_ARQVFE_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQVFE_SHIFT) -#define I40E_PF_ARQLEN_ARQOVFL_SHIFT 29 -#define I40E_PF_ARQLEN_ARQOVFL_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQOVFL_SHIFT) -#define I40E_PF_ARQLEN_ARQCRIT_SHIFT 30 -#define I40E_PF_ARQLEN_ARQCRIT_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQCRIT_SHIFT) -#define I40E_PF_ARQLEN_ARQENABLE_SHIFT 31 -#define I40E_PF_ARQLEN_ARQENABLE_MASK I40E_MASK(0x1, I40E_PF_ARQLEN_ARQENABLE_SHIFT) -#define I40E_PF_ARQT 0x00080480 /* Reset: EMPR */ -#define I40E_PF_ARQT_ARQT_SHIFT 0 -#define I40E_PF_ARQT_ARQT_MASK I40E_MASK(0x3FF, I40E_PF_ARQT_ARQT_SHIFT) -#define I40E_PF_ATQBAH 0x00080100 /* Reset: EMPR */ -#define I40E_PF_ATQBAH_ATQBAH_SHIFT 0 -#define I40E_PF_ATQBAH_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ATQBAH_ATQBAH_SHIFT) -#define I40E_PF_ATQBAL 0x00080000 /* Reset: EMPR */ -#define I40E_PF_ATQBAL_ATQBAL_SHIFT 0 -#define I40E_PF_ATQBAL_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_ATQBAL_ATQBAL_SHIFT) -#define I40E_PF_ATQH 0x00080300 /* Reset: EMPR */ -#define I40E_PF_ATQH_ATQH_SHIFT 0 -#define I40E_PF_ATQH_ATQH_MASK I40E_MASK(0x3FF, I40E_PF_ATQH_ATQH_SHIFT) -#define I40E_PF_ATQLEN 0x00080200 /* Reset: EMPR */ -#define I40E_PF_ATQLEN_ATQLEN_SHIFT 0 -#define I40E_PF_ATQLEN_ATQLEN_MASK I40E_MASK(0x3FF, I40E_PF_ATQLEN_ATQLEN_SHIFT) -#define I40E_PF_ATQLEN_ATQVFE_SHIFT 28 -#define I40E_PF_ATQLEN_ATQVFE_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQVFE_SHIFT) -#define I40E_PF_ATQLEN_ATQOVFL_SHIFT 29 -#define I40E_PF_ATQLEN_ATQOVFL_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQOVFL_SHIFT) -#define I40E_PF_ATQLEN_ATQCRIT_SHIFT 30 -#define I40E_PF_ATQLEN_ATQCRIT_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQCRIT_SHIFT) -#define I40E_PF_ATQLEN_ATQENABLE_SHIFT 31 -#define I40E_PF_ATQLEN_ATQENABLE_MASK I40E_MASK(0x1, I40E_PF_ATQLEN_ATQENABLE_SHIFT) -#define I40E_PF_ATQT 0x00080400 /* Reset: EMPR */ -#define I40E_PF_ATQT_ATQT_SHIFT 0 -#define I40E_PF_ATQT_ATQT_MASK I40E_MASK(0x3FF, I40E_PF_ATQT_ATQT_SHIFT) -#define I40E_VF_ARQBAH(_VF) (0x00081400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ARQBAH_MAX_INDEX 127 -#define I40E_VF_ARQBAH_ARQBAH_SHIFT 0 -#define I40E_VF_ARQBAH_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAH_ARQBAH_SHIFT) -#define I40E_VF_ARQBAL(_VF) (0x00080C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ARQBAL_MAX_INDEX 127 -#define I40E_VF_ARQBAL_ARQBAL_SHIFT 0 -#define I40E_VF_ARQBAL_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAL_ARQBAL_SHIFT) -#define I40E_VF_ARQH(_VF) (0x00082400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ARQH_MAX_INDEX 127 -#define I40E_VF_ARQH_ARQH_SHIFT 0 -#define I40E_VF_ARQH_ARQH_MASK I40E_MASK(0x3FF, I40E_VF_ARQH_ARQH_SHIFT) -#define I40E_VF_ARQLEN(_VF) (0x00081C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ARQLEN_MAX_INDEX 127 -#define I40E_VF_ARQLEN_ARQLEN_SHIFT 0 -#define I40E_VF_ARQLEN_ARQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ARQLEN_ARQLEN_SHIFT) -#define I40E_VF_ARQLEN_ARQVFE_SHIFT 28 -#define I40E_VF_ARQLEN_ARQVFE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN_ARQVFE_SHIFT) -#define I40E_VF_ARQLEN_ARQOVFL_SHIFT 29 -#define I40E_VF_ARQLEN_ARQOVFL_MASK I40E_MASK(0x1, I40E_VF_ARQLEN_ARQOVFL_SHIFT) -#define I40E_VF_ARQLEN_ARQCRIT_SHIFT 30 -#define I40E_VF_ARQLEN_ARQCRIT_MASK I40E_MASK(0x1, I40E_VF_ARQLEN_ARQCRIT_SHIFT) -#define I40E_VF_ARQLEN_ARQENABLE_SHIFT 31 -#define I40E_VF_ARQLEN_ARQENABLE_MASK I40E_MASK(0x1, 
I40E_VF_ARQLEN_ARQENABLE_SHIFT) -#define I40E_VF_ARQT(_VF) (0x00082C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ARQT_MAX_INDEX 127 -#define I40E_VF_ARQT_ARQT_SHIFT 0 -#define I40E_VF_ARQT_ARQT_MASK I40E_MASK(0x3FF, I40E_VF_ARQT_ARQT_SHIFT) -#define I40E_VF_ATQBAH(_VF) (0x00081000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ATQBAH_MAX_INDEX 127 -#define I40E_VF_ATQBAH_ATQBAH_SHIFT 0 -#define I40E_VF_ATQBAH_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAH_ATQBAH_SHIFT) -#define I40E_VF_ATQBAL(_VF) (0x00080800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ATQBAL_MAX_INDEX 127 -#define I40E_VF_ATQBAL_ATQBAL_SHIFT 0 -#define I40E_VF_ATQBAL_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAL_ATQBAL_SHIFT) -#define I40E_VF_ATQH(_VF) (0x00082000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ATQH_MAX_INDEX 127 -#define I40E_VF_ATQH_ATQH_SHIFT 0 -#define I40E_VF_ATQH_ATQH_MASK I40E_MASK(0x3FF, I40E_VF_ATQH_ATQH_SHIFT) -#define I40E_VF_ATQLEN(_VF) (0x00081800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ATQLEN_MAX_INDEX 127 -#define I40E_VF_ATQLEN_ATQLEN_SHIFT 0 -#define I40E_VF_ATQLEN_ATQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ATQLEN_ATQLEN_SHIFT) -#define I40E_VF_ATQLEN_ATQVFE_SHIFT 28 -#define I40E_VF_ATQLEN_ATQVFE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQVFE_SHIFT) -#define I40E_VF_ATQLEN_ATQOVFL_SHIFT 29 -#define I40E_VF_ATQLEN_ATQOVFL_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQOVFL_SHIFT) -#define I40E_VF_ATQLEN_ATQCRIT_SHIFT 30 -#define I40E_VF_ATQLEN_ATQCRIT_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQCRIT_SHIFT) -#define I40E_VF_ATQLEN_ATQENABLE_SHIFT 31 -#define I40E_VF_ATQLEN_ATQENABLE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN_ATQENABLE_SHIFT) -#define I40E_VF_ATQT(_VF) (0x00082800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: EMPR */ -#define I40E_VF_ATQT_MAX_INDEX 127 -#define I40E_VF_ATQT_ATQT_SHIFT 0 -#define I40E_VF_ATQT_ATQT_MASK I40E_MASK(0x3FF, I40E_VF_ATQT_ATQT_SHIFT) -#define I40E_PRT_L2TAGSEN 0x001C0B20 /* Reset: CORER */ -#define I40E_PRT_L2TAGSEN_ENABLE_SHIFT 0 -#define I40E_PRT_L2TAGSEN_ENABLE_MASK I40E_MASK(0xFF, I40E_PRT_L2TAGSEN_ENABLE_SHIFT) -#define I40E_PFCM_LAN_ERRDATA 0x0010C080 /* Reset: PFR */ -#define I40E_PFCM_LAN_ERRDATA_ERROR_CODE_SHIFT 0 -#define I40E_PFCM_LAN_ERRDATA_ERROR_CODE_MASK I40E_MASK(0xF, I40E_PFCM_LAN_ERRDATA_ERROR_CODE_SHIFT) -#define I40E_PFCM_LAN_ERRDATA_Q_TYPE_SHIFT 4 -#define I40E_PFCM_LAN_ERRDATA_Q_TYPE_MASK I40E_MASK(0x7, I40E_PFCM_LAN_ERRDATA_Q_TYPE_SHIFT) -#define I40E_PFCM_LAN_ERRDATA_Q_NUM_SHIFT 8 -#define I40E_PFCM_LAN_ERRDATA_Q_NUM_MASK I40E_MASK(0xFFF, I40E_PFCM_LAN_ERRDATA_Q_NUM_SHIFT) -#define I40E_PFCM_LAN_ERRINFO 0x0010C000 /* Reset: PFR */ -#define I40E_PFCM_LAN_ERRINFO_ERROR_VALID_SHIFT 0 -#define I40E_PFCM_LAN_ERRINFO_ERROR_VALID_MASK I40E_MASK(0x1, I40E_PFCM_LAN_ERRINFO_ERROR_VALID_SHIFT) -#define I40E_PFCM_LAN_ERRINFO_ERROR_INST_SHIFT 4 -#define I40E_PFCM_LAN_ERRINFO_ERROR_INST_MASK I40E_MASK(0x7, I40E_PFCM_LAN_ERRINFO_ERROR_INST_SHIFT) -#define I40E_PFCM_LAN_ERRINFO_DBL_ERROR_CNT_SHIFT 8 -#define I40E_PFCM_LAN_ERRINFO_DBL_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_PFCM_LAN_ERRINFO_DBL_ERROR_CNT_SHIFT) -#define I40E_PFCM_LAN_ERRINFO_RLU_ERROR_CNT_SHIFT 16 -#define I40E_PFCM_LAN_ERRINFO_RLU_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_PFCM_LAN_ERRINFO_RLU_ERROR_CNT_SHIFT) -#define I40E_PFCM_LAN_ERRINFO_RLS_ERROR_CNT_SHIFT 24 -#define I40E_PFCM_LAN_ERRINFO_RLS_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_PFCM_LAN_ERRINFO_RLS_ERROR_CNT_SHIFT) -#define I40E_PFCM_LANCTXCTL 
0x0010C300 /* Reset: CORER */ -#define I40E_PFCM_LANCTXCTL_QUEUE_NUM_SHIFT 0 -#define I40E_PFCM_LANCTXCTL_QUEUE_NUM_MASK I40E_MASK(0xFFF, I40E_PFCM_LANCTXCTL_QUEUE_NUM_SHIFT) -#define I40E_PFCM_LANCTXCTL_SUB_LINE_SHIFT 12 -#define I40E_PFCM_LANCTXCTL_SUB_LINE_MASK I40E_MASK(0x7, I40E_PFCM_LANCTXCTL_SUB_LINE_SHIFT) -#define I40E_PFCM_LANCTXCTL_QUEUE_TYPE_SHIFT 15 -#define I40E_PFCM_LANCTXCTL_QUEUE_TYPE_MASK I40E_MASK(0x3, I40E_PFCM_LANCTXCTL_QUEUE_TYPE_SHIFT) -#define I40E_PFCM_LANCTXCTL_OP_CODE_SHIFT 17 -#define I40E_PFCM_LANCTXCTL_OP_CODE_MASK I40E_MASK(0x3, I40E_PFCM_LANCTXCTL_OP_CODE_SHIFT) -#define I40E_PFCM_LANCTXDATA(_i) (0x0010C100 + ((_i) * 128)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_PFCM_LANCTXDATA_MAX_INDEX 3 -#define I40E_PFCM_LANCTXDATA_DATA_SHIFT 0 -#define I40E_PFCM_LANCTXDATA_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_PFCM_LANCTXDATA_DATA_SHIFT) -#define I40E_PFCM_LANCTXSTAT 0x0010C380 /* Reset: CORER */ -#define I40E_PFCM_LANCTXSTAT_CTX_DONE_SHIFT 0 -#define I40E_PFCM_LANCTXSTAT_CTX_DONE_MASK I40E_MASK(0x1, I40E_PFCM_LANCTXSTAT_CTX_DONE_SHIFT) -#define I40E_PFCM_LANCTXSTAT_CTX_MISS_SHIFT 1 -#define I40E_PFCM_LANCTXSTAT_CTX_MISS_MASK I40E_MASK(0x1, I40E_PFCM_LANCTXSTAT_CTX_MISS_SHIFT) -#define I40E_VFCM_PE_ERRDATA1(_VF) (0x00138800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ -#define I40E_VFCM_PE_ERRDATA1_MAX_INDEX 127 -#define I40E_VFCM_PE_ERRDATA1_ERROR_CODE_SHIFT 0 -#define I40E_VFCM_PE_ERRDATA1_ERROR_CODE_MASK I40E_MASK(0xF, I40E_VFCM_PE_ERRDATA1_ERROR_CODE_SHIFT) -#define I40E_VFCM_PE_ERRDATA1_Q_TYPE_SHIFT 4 -#define I40E_VFCM_PE_ERRDATA1_Q_TYPE_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRDATA1_Q_TYPE_SHIFT) -#define I40E_VFCM_PE_ERRDATA1_Q_NUM_SHIFT 8 -#define I40E_VFCM_PE_ERRDATA1_Q_NUM_MASK I40E_MASK(0x3FFFF, I40E_VFCM_PE_ERRDATA1_Q_NUM_SHIFT) -#define I40E_VFCM_PE_ERRINFO1(_VF) (0x00138400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ -#define I40E_VFCM_PE_ERRINFO1_MAX_INDEX 127 -#define I40E_VFCM_PE_ERRINFO1_ERROR_VALID_SHIFT 0 -#define I40E_VFCM_PE_ERRINFO1_ERROR_VALID_MASK I40E_MASK(0x1, I40E_VFCM_PE_ERRINFO1_ERROR_VALID_SHIFT) -#define I40E_VFCM_PE_ERRINFO1_ERROR_INST_SHIFT 4 -#define I40E_VFCM_PE_ERRINFO1_ERROR_INST_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRINFO1_ERROR_INST_SHIFT) -#define I40E_VFCM_PE_ERRINFO1_DBL_ERROR_CNT_SHIFT 8 -#define I40E_VFCM_PE_ERRINFO1_DBL_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO1_DBL_ERROR_CNT_SHIFT) -#define I40E_VFCM_PE_ERRINFO1_RLU_ERROR_CNT_SHIFT 16 -#define I40E_VFCM_PE_ERRINFO1_RLU_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO1_RLU_ERROR_CNT_SHIFT) -#define I40E_VFCM_PE_ERRINFO1_RLS_ERROR_CNT_SHIFT 24 -#define I40E_VFCM_PE_ERRINFO1_RLS_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO1_RLS_ERROR_CNT_SHIFT) -#define I40E_GLDCB_GENC 0x00083044 /* Reset: CORER */ -#define I40E_GLDCB_GENC_PCIRTT_SHIFT 0 -#define I40E_GLDCB_GENC_PCIRTT_MASK I40E_MASK(0xFFFF, I40E_GLDCB_GENC_PCIRTT_SHIFT) -#define I40E_GLDCB_RUPTI 0x00122618 /* Reset: CORER */ -#define I40E_GLDCB_RUPTI_PFCTIMEOUT_UP_SHIFT 0 -#define I40E_GLDCB_RUPTI_PFCTIMEOUT_UP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLDCB_RUPTI_PFCTIMEOUT_UP_SHIFT) -#define I40E_PRTDCB_FCCFG 0x001E4640 /* Reset: GLOBR */ -#define I40E_PRTDCB_FCCFG_TFCE_SHIFT 3 -#define I40E_PRTDCB_FCCFG_TFCE_MASK I40E_MASK(0x3, I40E_PRTDCB_FCCFG_TFCE_SHIFT) -#define I40E_PRTDCB_FCRTV 0x001E4600 /* Reset: GLOBR */ -#define I40E_PRTDCB_FCRTV_FC_REFRESH_TH_SHIFT 0 -#define I40E_PRTDCB_FCRTV_FC_REFRESH_TH_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_FCRTV_FC_REFRESH_TH_SHIFT) -#define I40E_PRTDCB_FCTTVN(_i) 
(0x001E4580 + ((_i) * 32)) /* _i=0...3 */ /* Reset: GLOBR */ -#define I40E_PRTDCB_FCTTVN_MAX_INDEX 3 -#define I40E_PRTDCB_FCTTVN_TTV_2N_SHIFT 0 -#define I40E_PRTDCB_FCTTVN_TTV_2N_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_FCTTVN_TTV_2N_SHIFT) -#define I40E_PRTDCB_FCTTVN_TTV_2N_P1_SHIFT 16 -#define I40E_PRTDCB_FCTTVN_TTV_2N_P1_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_FCTTVN_TTV_2N_P1_SHIFT) -#define I40E_PRTDCB_GENC 0x00083000 /* Reset: CORER */ -#define I40E_PRTDCB_GENC_RESERVED_1_SHIFT 0 -#define I40E_PRTDCB_GENC_RESERVED_1_MASK I40E_MASK(0x3, I40E_PRTDCB_GENC_RESERVED_1_SHIFT) -#define I40E_PRTDCB_GENC_NUMTC_SHIFT 2 -#define I40E_PRTDCB_GENC_NUMTC_MASK I40E_MASK(0xF, I40E_PRTDCB_GENC_NUMTC_SHIFT) -#define I40E_PRTDCB_GENC_FCOEUP_SHIFT 6 -#define I40E_PRTDCB_GENC_FCOEUP_MASK I40E_MASK(0x7, I40E_PRTDCB_GENC_FCOEUP_SHIFT) -#define I40E_PRTDCB_GENC_FCOEUP_VALID_SHIFT 9 -#define I40E_PRTDCB_GENC_FCOEUP_VALID_MASK I40E_MASK(0x1, I40E_PRTDCB_GENC_FCOEUP_VALID_SHIFT) -#define I40E_PRTDCB_GENC_PFCLDA_SHIFT 16 -#define I40E_PRTDCB_GENC_PFCLDA_MASK I40E_MASK(0xFFFF, I40E_PRTDCB_GENC_PFCLDA_SHIFT) -#define I40E_PRTDCB_GENS 0x00083020 /* Reset: CORER */ -#define I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT 0 -#define I40E_PRTDCB_GENS_DCBX_STATUS_MASK I40E_MASK(0x7, I40E_PRTDCB_GENS_DCBX_STATUS_SHIFT) -#define I40E_PRTDCB_MFLCN 0x001E2400 /* Reset: GLOBR */ -#define I40E_PRTDCB_MFLCN_PMCF_SHIFT 0 -#define I40E_PRTDCB_MFLCN_PMCF_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_PMCF_SHIFT) -#define I40E_PRTDCB_MFLCN_DPF_SHIFT 1 -#define I40E_PRTDCB_MFLCN_DPF_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_DPF_SHIFT) -#define I40E_PRTDCB_MFLCN_RPFCM_SHIFT 2 -#define I40E_PRTDCB_MFLCN_RPFCM_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_RPFCM_SHIFT) -#define I40E_PRTDCB_MFLCN_RFCE_SHIFT 3 -#define I40E_PRTDCB_MFLCN_RFCE_MASK I40E_MASK(0x1, I40E_PRTDCB_MFLCN_RFCE_SHIFT) -#define I40E_PRTDCB_MFLCN_RPFCE_SHIFT 4 -#define I40E_PRTDCB_MFLCN_RPFCE_MASK I40E_MASK(0xFF, I40E_PRTDCB_MFLCN_RPFCE_SHIFT) -#define I40E_PRTDCB_RETSC 0x001223E0 /* Reset: CORER */ -#define I40E_PRTDCB_RETSC_ETS_MODE_SHIFT 0 -#define I40E_PRTDCB_RETSC_ETS_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSC_ETS_MODE_SHIFT) -#define I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT 1 -#define I40E_PRTDCB_RETSC_NON_ETS_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSC_NON_ETS_MODE_SHIFT) -#define I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT 2 -#define I40E_PRTDCB_RETSC_ETS_MAX_EXP_MASK I40E_MASK(0xF, I40E_PRTDCB_RETSC_ETS_MAX_EXP_SHIFT) -#define I40E_PRTDCB_RETSC_LLTC_SHIFT 8 -#define I40E_PRTDCB_RETSC_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_RETSC_LLTC_SHIFT) -#define I40E_PRTDCB_RETSTCC(_i) (0x00122180 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ -#define I40E_PRTDCB_RETSTCC_MAX_INDEX 7 -#define I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT 0 -#define I40E_PRTDCB_RETSTCC_BWSHARE_MASK I40E_MASK(0x7F, I40E_PRTDCB_RETSTCC_BWSHARE_SHIFT) -#define I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT 30 -#define I40E_PRTDCB_RETSTCC_UPINTC_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSTCC_UPINTC_MODE_SHIFT) -#define I40E_PRTDCB_RETSTCC_ETSTC_SHIFT 31 -#define I40E_PRTDCB_RETSTCC_ETSTC_MASK I40E_MASK(0x1, I40E_PRTDCB_RETSTCC_ETSTC_SHIFT) -#define I40E_PRTDCB_RPPMC 0x001223A0 /* Reset: CORER */ -#define I40E_PRTDCB_RPPMC_LANRPPM_SHIFT 0 -#define I40E_PRTDCB_RPPMC_LANRPPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_RPPMC_LANRPPM_SHIFT) -#define I40E_PRTDCB_RPPMC_RDMARPPM_SHIFT 8 -#define I40E_PRTDCB_RPPMC_RDMARPPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_RPPMC_RDMARPPM_SHIFT) -#define I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT 16 -#define I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_MASK I40E_MASK(0xFF, 
I40E_PRTDCB_RPPMC_RX_FIFO_SIZE_SHIFT) -#define I40E_PRTDCB_RUP 0x001C0B00 /* Reset: CORER */ -#define I40E_PRTDCB_RUP_NOVLANUP_SHIFT 0 -#define I40E_PRTDCB_RUP_NOVLANUP_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP_NOVLANUP_SHIFT) -#define I40E_PRTDCB_RUP2TC 0x001C09A0 /* Reset: CORER */ -#define I40E_PRTDCB_RUP2TC_UP0TC_SHIFT 0 -#define I40E_PRTDCB_RUP2TC_UP0TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP0TC_SHIFT) -#define I40E_PRTDCB_RUP2TC_UP1TC_SHIFT 3 -#define I40E_PRTDCB_RUP2TC_UP1TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP1TC_SHIFT) -#define I40E_PRTDCB_RUP2TC_UP2TC_SHIFT 6 -#define I40E_PRTDCB_RUP2TC_UP2TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP2TC_SHIFT) -#define I40E_PRTDCB_RUP2TC_UP3TC_SHIFT 9 -#define I40E_PRTDCB_RUP2TC_UP3TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP3TC_SHIFT) -#define I40E_PRTDCB_RUP2TC_UP4TC_SHIFT 12 -#define I40E_PRTDCB_RUP2TC_UP4TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP4TC_SHIFT) -#define I40E_PRTDCB_RUP2TC_UP5TC_SHIFT 15 -#define I40E_PRTDCB_RUP2TC_UP5TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP5TC_SHIFT) -#define I40E_PRTDCB_RUP2TC_UP6TC_SHIFT 18 -#define I40E_PRTDCB_RUP2TC_UP6TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP6TC_SHIFT) -#define I40E_PRTDCB_RUP2TC_UP7TC_SHIFT 21 -#define I40E_PRTDCB_RUP2TC_UP7TC_MASK I40E_MASK(0x7, I40E_PRTDCB_RUP2TC_UP7TC_SHIFT) -#define I40E_PRTDCB_TC2PFC 0x001C0980 /* Reset: CORER */ -#define I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT 0 -#define I40E_PRTDCB_TC2PFC_TC2PFC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TC2PFC_TC2PFC_SHIFT) -#define I40E_PRTDCB_TCMSTC(_i) (0x000A0040 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ -#define I40E_PRTDCB_TCMSTC_MAX_INDEX 7 -#define I40E_PRTDCB_TCMSTC_MSTC_SHIFT 0 -#define I40E_PRTDCB_TCMSTC_MSTC_MASK I40E_MASK(0xFFFFF, I40E_PRTDCB_TCMSTC_MSTC_SHIFT) -#define I40E_PRTDCB_TCPMC 0x000A21A0 /* Reset: CORER */ -#define I40E_PRTDCB_TCPMC_CPM_SHIFT 0 -#define I40E_PRTDCB_TCPMC_CPM_MASK I40E_MASK(0x1FFF, I40E_PRTDCB_TCPMC_CPM_SHIFT) -#define I40E_PRTDCB_TCPMC_LLTC_SHIFT 13 -#define I40E_PRTDCB_TCPMC_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TCPMC_LLTC_SHIFT) -#define I40E_PRTDCB_TCPMC_TCPM_MODE_SHIFT 30 -#define I40E_PRTDCB_TCPMC_TCPM_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_TCPMC_TCPM_MODE_SHIFT) -#define I40E_PRTDCB_TCWSTC(_i) (0x000A2040 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */ -#define I40E_PRTDCB_TCWSTC_MAX_INDEX 7 -#define I40E_PRTDCB_TCWSTC_MSTC_SHIFT 0 -#define I40E_PRTDCB_TCWSTC_MSTC_MASK I40E_MASK(0xFFFFF, I40E_PRTDCB_TCWSTC_MSTC_SHIFT) -#define I40E_PRTDCB_TDPMC 0x000A0180 /* Reset: CORER */ -#define I40E_PRTDCB_TDPMC_DPM_SHIFT 0 -#define I40E_PRTDCB_TDPMC_DPM_MASK I40E_MASK(0xFF, I40E_PRTDCB_TDPMC_DPM_SHIFT) -#define I40E_PRTDCB_TDPMC_TCPM_MODE_SHIFT 30 -#define I40E_PRTDCB_TDPMC_TCPM_MODE_MASK I40E_MASK(0x1, I40E_PRTDCB_TDPMC_TCPM_MODE_SHIFT) -#define I40E_PRTDCB_TETSC_TCB 0x000AE060 /* Reset: CORER */ -#define I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_SHIFT 0 -#define I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_MASK I40E_MASK(0x1, I40E_PRTDCB_TETSC_TCB_EN_LL_STRICT_PRIORITY_SHIFT) -#define I40E_PRTDCB_TETSC_TCB_LLTC_SHIFT 8 -#define I40E_PRTDCB_TETSC_TCB_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TETSC_TCB_LLTC_SHIFT) -#define I40E_PRTDCB_TETSC_TPB 0x00098060 /* Reset: CORER */ -#define I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_SHIFT 0 -#define I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_MASK I40E_MASK(0x1, I40E_PRTDCB_TETSC_TPB_EN_LL_STRICT_PRIORITY_SHIFT) -#define I40E_PRTDCB_TETSC_TPB_LLTC_SHIFT 8 -#define I40E_PRTDCB_TETSC_TPB_LLTC_MASK I40E_MASK(0xFF, I40E_PRTDCB_TETSC_TPB_LLTC_SHIFT) -#define 
I40E_PRTDCB_TFCS 0x001E4560 /* Reset: GLOBR */ -#define I40E_PRTDCB_TFCS_TXOFF_SHIFT 0 -#define I40E_PRTDCB_TFCS_TXOFF_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF0_SHIFT 8 -#define I40E_PRTDCB_TFCS_TXOFF0_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF0_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF1_SHIFT 9 -#define I40E_PRTDCB_TFCS_TXOFF1_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF1_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF2_SHIFT 10 -#define I40E_PRTDCB_TFCS_TXOFF2_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF2_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF3_SHIFT 11 -#define I40E_PRTDCB_TFCS_TXOFF3_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF3_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF4_SHIFT 12 -#define I40E_PRTDCB_TFCS_TXOFF4_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF4_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF5_SHIFT 13 -#define I40E_PRTDCB_TFCS_TXOFF5_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF5_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF6_SHIFT 14 -#define I40E_PRTDCB_TFCS_TXOFF6_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF6_SHIFT) -#define I40E_PRTDCB_TFCS_TXOFF7_SHIFT 15 -#define I40E_PRTDCB_TFCS_TXOFF7_MASK I40E_MASK(0x1, I40E_PRTDCB_TFCS_TXOFF7_SHIFT) -#define I40E_PRTDCB_TPFCTS(_i) (0x001E4660 + ((_i) * 32)) /* _i=0...7 */ /* Reset: GLOBR */ -#define I40E_PRTDCB_TPFCTS_MAX_INDEX 7 -#define I40E_PRTDCB_TPFCTS_PFCTIMER_SHIFT 0 -#define I40E_PRTDCB_TPFCTS_PFCTIMER_MASK I40E_MASK(0x3FFF, I40E_PRTDCB_TPFCTS_PFCTIMER_SHIFT) -#define I40E_GLFCOE_RCTL 0x00269B94 /* Reset: CORER */ -#define I40E_GLFCOE_RCTL_FCOEVER_SHIFT 0 -#define I40E_GLFCOE_RCTL_FCOEVER_MASK I40E_MASK(0xF, I40E_GLFCOE_RCTL_FCOEVER_SHIFT) -#define I40E_GLFCOE_RCTL_SAVBAD_SHIFT 4 -#define I40E_GLFCOE_RCTL_SAVBAD_MASK I40E_MASK(0x1, I40E_GLFCOE_RCTL_SAVBAD_SHIFT) -#define I40E_GLFCOE_RCTL_ICRC_SHIFT 5 -#define I40E_GLFCOE_RCTL_ICRC_MASK I40E_MASK(0x1, I40E_GLFCOE_RCTL_ICRC_SHIFT) -#define I40E_GLFCOE_RCTL_MAX_SIZE_SHIFT 16 -#define I40E_GLFCOE_RCTL_MAX_SIZE_MASK I40E_MASK(0x3FFF, I40E_GLFCOE_RCTL_MAX_SIZE_SHIFT) -#define I40E_GL_FWSTS 0x00083048 /* Reset: POR */ -#define I40E_GL_FWSTS_FWS0B_SHIFT 0 -#define I40E_GL_FWSTS_FWS0B_MASK I40E_MASK(0xFF, I40E_GL_FWSTS_FWS0B_SHIFT) -#define I40E_GL_FWSTS_FWRI_SHIFT 9 -#define I40E_GL_FWSTS_FWRI_MASK I40E_MASK(0x1, I40E_GL_FWSTS_FWRI_SHIFT) -#define I40E_GL_FWSTS_FWS1B_SHIFT 16 -#define I40E_GL_FWSTS_FWS1B_MASK I40E_MASK(0xFF, I40E_GL_FWSTS_FWS1B_SHIFT) -#define I40E_GLGEN_CLKSTAT 0x000B8184 /* Reset: POR */ -#define I40E_GLGEN_CLKSTAT_CLKMODE_SHIFT 0 -#define I40E_GLGEN_CLKSTAT_CLKMODE_MASK I40E_MASK(0x1, I40E_GLGEN_CLKSTAT_CLKMODE_SHIFT) -#define I40E_GLGEN_CLKSTAT_U_CLK_SPEED_SHIFT 4 -#define I40E_GLGEN_CLKSTAT_U_CLK_SPEED_MASK I40E_MASK(0x3, I40E_GLGEN_CLKSTAT_U_CLK_SPEED_SHIFT) -#define I40E_GLGEN_CLKSTAT_P0_CLK_SPEED_SHIFT 8 -#define I40E_GLGEN_CLKSTAT_P0_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P0_CLK_SPEED_SHIFT) -#define I40E_GLGEN_CLKSTAT_P1_CLK_SPEED_SHIFT 12 -#define I40E_GLGEN_CLKSTAT_P1_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P1_CLK_SPEED_SHIFT) -#define I40E_GLGEN_CLKSTAT_P2_CLK_SPEED_SHIFT 16 -#define I40E_GLGEN_CLKSTAT_P2_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P2_CLK_SPEED_SHIFT) -#define I40E_GLGEN_CLKSTAT_P3_CLK_SPEED_SHIFT 20 -#define I40E_GLGEN_CLKSTAT_P3_CLK_SPEED_MASK I40E_MASK(0x7, I40E_GLGEN_CLKSTAT_P3_CLK_SPEED_SHIFT) -#define I40E_GLGEN_GPIO_CTL(_i) (0x00088100 + ((_i) * 4)) /* _i=0...29 */ /* Reset: POR */ -#define I40E_GLGEN_GPIO_CTL_MAX_INDEX 29 -#define I40E_GLGEN_GPIO_CTL_PRT_NUM_SHIFT 0 -#define 
I40E_GLGEN_GPIO_CTL_PRT_NUM_MASK I40E_MASK(0x3, I40E_GLGEN_GPIO_CTL_PRT_NUM_SHIFT) -#define I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_SHIFT 3 -#define I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_PRT_NUM_NA_SHIFT) -#define I40E_GLGEN_GPIO_CTL_PIN_DIR_SHIFT 4 -#define I40E_GLGEN_GPIO_CTL_PIN_DIR_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_PIN_DIR_SHIFT) -#define I40E_GLGEN_GPIO_CTL_TRI_CTL_SHIFT 5 -#define I40E_GLGEN_GPIO_CTL_TRI_CTL_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_TRI_CTL_SHIFT) -#define I40E_GLGEN_GPIO_CTL_OUT_CTL_SHIFT 6 -#define I40E_GLGEN_GPIO_CTL_OUT_CTL_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_OUT_CTL_SHIFT) -#define I40E_GLGEN_GPIO_CTL_PIN_FUNC_SHIFT 7 -#define I40E_GLGEN_GPIO_CTL_PIN_FUNC_MASK I40E_MASK(0x7, I40E_GLGEN_GPIO_CTL_PIN_FUNC_SHIFT) -#define I40E_GLGEN_GPIO_CTL_LED_INVRT_SHIFT 10 -#define I40E_GLGEN_GPIO_CTL_LED_INVRT_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_LED_INVRT_SHIFT) -#define I40E_GLGEN_GPIO_CTL_LED_BLINK_SHIFT 11 -#define I40E_GLGEN_GPIO_CTL_LED_BLINK_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_LED_BLINK_SHIFT) -#define I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT 12 -#define I40E_GLGEN_GPIO_CTL_LED_MODE_MASK I40E_MASK(0x1F, I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT) -#define I40E_GLGEN_GPIO_CTL_INT_MODE_SHIFT 17 -#define I40E_GLGEN_GPIO_CTL_INT_MODE_MASK I40E_MASK(0x3, I40E_GLGEN_GPIO_CTL_INT_MODE_SHIFT) -#define I40E_GLGEN_GPIO_CTL_OUT_DEFAULT_SHIFT 19 -#define I40E_GLGEN_GPIO_CTL_OUT_DEFAULT_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_CTL_OUT_DEFAULT_SHIFT) -#define I40E_GLGEN_GPIO_CTL_PHY_PIN_NAME_SHIFT 20 -#define I40E_GLGEN_GPIO_CTL_PHY_PIN_NAME_MASK I40E_MASK(0x3F, I40E_GLGEN_GPIO_CTL_PHY_PIN_NAME_SHIFT) -#define I40E_GLGEN_GPIO_SET 0x00088184 /* Reset: POR */ -#define I40E_GLGEN_GPIO_SET_GPIO_INDX_SHIFT 0 -#define I40E_GLGEN_GPIO_SET_GPIO_INDX_MASK I40E_MASK(0x1F, I40E_GLGEN_GPIO_SET_GPIO_INDX_SHIFT) -#define I40E_GLGEN_GPIO_SET_SDP_DATA_SHIFT 5 -#define I40E_GLGEN_GPIO_SET_SDP_DATA_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_SET_SDP_DATA_SHIFT) -#define I40E_GLGEN_GPIO_SET_DRIVE_SDP_SHIFT 6 -#define I40E_GLGEN_GPIO_SET_DRIVE_SDP_MASK I40E_MASK(0x1, I40E_GLGEN_GPIO_SET_DRIVE_SDP_SHIFT) -#define I40E_GLGEN_GPIO_STAT 0x0008817C /* Reset: POR */ -#define I40E_GLGEN_GPIO_STAT_GPIO_VALUE_SHIFT 0 -#define I40E_GLGEN_GPIO_STAT_GPIO_VALUE_MASK I40E_MASK(0x3FFFFFFF, I40E_GLGEN_GPIO_STAT_GPIO_VALUE_SHIFT) -#define I40E_GLGEN_GPIO_TRANSIT 0x00088180 /* Reset: POR */ -#define I40E_GLGEN_GPIO_TRANSIT_GPIO_TRANSITION_SHIFT 0 -#define I40E_GLGEN_GPIO_TRANSIT_GPIO_TRANSITION_MASK I40E_MASK(0x3FFFFFFF, I40E_GLGEN_GPIO_TRANSIT_GPIO_TRANSITION_SHIFT) -#define I40E_GLGEN_I2CCMD(_i) (0x000881E0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_GLGEN_I2CCMD_MAX_INDEX 3 -#define I40E_GLGEN_I2CCMD_DATA_SHIFT 0 -#define I40E_GLGEN_I2CCMD_DATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_I2CCMD_DATA_SHIFT) -#define I40E_GLGEN_I2CCMD_REGADD_SHIFT 16 -#define I40E_GLGEN_I2CCMD_REGADD_MASK I40E_MASK(0xFF, I40E_GLGEN_I2CCMD_REGADD_SHIFT) -#define I40E_GLGEN_I2CCMD_PHYADD_SHIFT 24 -#define I40E_GLGEN_I2CCMD_PHYADD_MASK I40E_MASK(0x7, I40E_GLGEN_I2CCMD_PHYADD_SHIFT) -#define I40E_GLGEN_I2CCMD_OP_SHIFT 27 -#define I40E_GLGEN_I2CCMD_OP_MASK I40E_MASK(0x1, I40E_GLGEN_I2CCMD_OP_SHIFT) -#define I40E_GLGEN_I2CCMD_RESET_SHIFT 28 -#define I40E_GLGEN_I2CCMD_RESET_MASK I40E_MASK(0x1, I40E_GLGEN_I2CCMD_RESET_SHIFT) -#define I40E_GLGEN_I2CCMD_R_SHIFT 29 -#define I40E_GLGEN_I2CCMD_R_MASK I40E_MASK(0x1, I40E_GLGEN_I2CCMD_R_SHIFT) -#define I40E_GLGEN_I2CCMD_E_SHIFT 31 -#define I40E_GLGEN_I2CCMD_E_MASK I40E_MASK(0x1, 
I40E_GLGEN_I2CCMD_E_SHIFT) -#define I40E_GLGEN_I2CPARAMS(_i) (0x000881AC + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_GLGEN_I2CPARAMS_MAX_INDEX 3 -#define I40E_GLGEN_I2CPARAMS_WRITE_TIME_SHIFT 0 -#define I40E_GLGEN_I2CPARAMS_WRITE_TIME_MASK I40E_MASK(0x1F, I40E_GLGEN_I2CPARAMS_WRITE_TIME_SHIFT) -#define I40E_GLGEN_I2CPARAMS_READ_TIME_SHIFT 5 -#define I40E_GLGEN_I2CPARAMS_READ_TIME_MASK I40E_MASK(0x7, I40E_GLGEN_I2CPARAMS_READ_TIME_SHIFT) -#define I40E_GLGEN_I2CPARAMS_I2CBB_EN_SHIFT 8 -#define I40E_GLGEN_I2CPARAMS_I2CBB_EN_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_I2CBB_EN_SHIFT) -#define I40E_GLGEN_I2CPARAMS_CLK_SHIFT 9 -#define I40E_GLGEN_I2CPARAMS_CLK_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_SHIFT) -#define I40E_GLGEN_I2CPARAMS_DATA_OUT_SHIFT 10 -#define I40E_GLGEN_I2CPARAMS_DATA_OUT_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_DATA_OUT_SHIFT) -#define I40E_GLGEN_I2CPARAMS_DATA_OE_N_SHIFT 11 -#define I40E_GLGEN_I2CPARAMS_DATA_OE_N_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_DATA_OE_N_SHIFT) -#define I40E_GLGEN_I2CPARAMS_DATA_IN_SHIFT 12 -#define I40E_GLGEN_I2CPARAMS_DATA_IN_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_DATA_IN_SHIFT) -#define I40E_GLGEN_I2CPARAMS_CLK_OE_N_SHIFT 13 -#define I40E_GLGEN_I2CPARAMS_CLK_OE_N_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_OE_N_SHIFT) -#define I40E_GLGEN_I2CPARAMS_CLK_IN_SHIFT 14 -#define I40E_GLGEN_I2CPARAMS_CLK_IN_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_IN_SHIFT) -#define I40E_GLGEN_I2CPARAMS_CLK_STRETCH_DIS_SHIFT 15 -#define I40E_GLGEN_I2CPARAMS_CLK_STRETCH_DIS_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_CLK_STRETCH_DIS_SHIFT) -#define I40E_GLGEN_I2CPARAMS_I2C_DATA_ORDER_SHIFT 31 -#define I40E_GLGEN_I2CPARAMS_I2C_DATA_ORDER_MASK I40E_MASK(0x1, I40E_GLGEN_I2CPARAMS_I2C_DATA_ORDER_SHIFT) -#define I40E_GLGEN_LED_CTL 0x00088178 /* Reset: POR */ -#define I40E_GLGEN_LED_CTL_GLOBAL_BLINK_MODE_SHIFT 0 -#define I40E_GLGEN_LED_CTL_GLOBAL_BLINK_MODE_MASK I40E_MASK(0x1, I40E_GLGEN_LED_CTL_GLOBAL_BLINK_MODE_SHIFT) -#define I40E_GLGEN_MDIO_CTRL(_i) (0x000881D0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_GLGEN_MDIO_CTRL_MAX_INDEX 3 -#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD2_SHIFT 0 -#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD2_MASK I40E_MASK(0x1FFFF, I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD2_SHIFT) -#define I40E_GLGEN_MDIO_CTRL_CONTMDC_SHIFT 17 -#define I40E_GLGEN_MDIO_CTRL_CONTMDC_MASK I40E_MASK(0x1, I40E_GLGEN_MDIO_CTRL_CONTMDC_SHIFT) -#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD1_SHIFT 18 -#define I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD1_MASK I40E_MASK(0x3FFF, I40E_GLGEN_MDIO_CTRL_LEGACY_RSVD1_SHIFT) -#define I40E_GLGEN_MDIO_I2C_SEL(_i) (0x000881C0 + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_GLGEN_MDIO_I2C_SEL_MAX_INDEX 3 -#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_I2C_SEL_SHIFT 0 -#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_I2C_SEL_MASK I40E_MASK(0x1, I40E_GLGEN_MDIO_I2C_SEL_MDIO_I2C_SEL_SHIFT) -#define I40E_GLGEN_MDIO_I2C_SEL_PHY_PORT_NUM_SHIFT 1 -#define I40E_GLGEN_MDIO_I2C_SEL_PHY_PORT_NUM_MASK I40E_MASK(0xF, I40E_GLGEN_MDIO_I2C_SEL_PHY_PORT_NUM_SHIFT) -#define I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_SHIFT 5 -#define I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_SHIFT) -#define I40E_GLGEN_MDIO_I2C_SEL_PHY1_ADDRESS_SHIFT 10 -#define I40E_GLGEN_MDIO_I2C_SEL_PHY1_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY1_ADDRESS_SHIFT) -#define I40E_GLGEN_MDIO_I2C_SEL_PHY2_ADDRESS_SHIFT 15 -#define I40E_GLGEN_MDIO_I2C_SEL_PHY2_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY2_ADDRESS_SHIFT) 
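
A quick aside for readers new to this base-driver header: every field above follows the same <REG>_<FIELD>_SHIFT / <REG>_<FIELD>_MASK pairing, and I40E_MASK(mask, shift) (defined elsewhere in the base-driver headers) expands to the mask already shifted into the field's bit position. A minimal sketch of how such a field is typically extracted and updated, value-level only (get_field()/set_field() are hypothetical helpers for illustration, not part of this patch; the real register accessors come from i40e_osdep.h):

#include <stdint.h>

/* Illustration only: <FIELD>_MASK is pre-shifted by I40E_MASK(), so
 * extracting a field is mask first, then shift down. */
static inline uint32_t get_field(uint32_t reg, uint32_t mask, uint32_t shift)
{
        return (reg & mask) >> shift;
}

/* Read-modify-write of one field inside a register value. */
static inline uint32_t set_field(uint32_t reg, uint32_t mask, uint32_t shift,
                                 uint32_t val)
{
        return (reg & ~mask) | ((val << shift) & mask);
}

/* e.g. pulling PHY0's address out of a GLGEN_MDIO_I2C_SEL value v:
 * get_field(v, I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_MASK,
 *              I40E_GLGEN_MDIO_I2C_SEL_PHY0_ADDRESS_SHIFT)
 */
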
-#define I40E_GLGEN_MDIO_I2C_SEL_PHY3_ADDRESS_SHIFT 20 -#define I40E_GLGEN_MDIO_I2C_SEL_PHY3_ADDRESS_MASK I40E_MASK(0x1F, I40E_GLGEN_MDIO_I2C_SEL_PHY3_ADDRESS_SHIFT) -#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_IF_MODE_SHIFT 25 -#define I40E_GLGEN_MDIO_I2C_SEL_MDIO_IF_MODE_MASK I40E_MASK(0xF, I40E_GLGEN_MDIO_I2C_SEL_MDIO_IF_MODE_SHIFT) -#define I40E_GLGEN_MDIO_I2C_SEL_EN_FAST_MODE_SHIFT 31 -#define I40E_GLGEN_MDIO_I2C_SEL_EN_FAST_MODE_MASK I40E_MASK(0x1, I40E_GLGEN_MDIO_I2C_SEL_EN_FAST_MODE_SHIFT) -#define I40E_GLGEN_MSCA(_i) (0x0008818C + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_GLGEN_MSCA_MAX_INDEX 3 -#define I40E_GLGEN_MSCA_MDIADD_SHIFT 0 -#define I40E_GLGEN_MSCA_MDIADD_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSCA_MDIADD_SHIFT) -#define I40E_GLGEN_MSCA_DEVADD_SHIFT 16 -#define I40E_GLGEN_MSCA_DEVADD_MASK I40E_MASK(0x1F, I40E_GLGEN_MSCA_DEVADD_SHIFT) -#define I40E_GLGEN_MSCA_PHYADD_SHIFT 21 -#define I40E_GLGEN_MSCA_PHYADD_MASK I40E_MASK(0x1F, I40E_GLGEN_MSCA_PHYADD_SHIFT) -#define I40E_GLGEN_MSCA_OPCODE_SHIFT 26 -#define I40E_GLGEN_MSCA_OPCODE_MASK I40E_MASK(0x3, I40E_GLGEN_MSCA_OPCODE_SHIFT) -#define I40E_GLGEN_MSCA_STCODE_SHIFT 28 -#define I40E_GLGEN_MSCA_STCODE_MASK I40E_MASK(0x3, I40E_GLGEN_MSCA_STCODE_SHIFT) -#define I40E_GLGEN_MSCA_MDICMD_SHIFT 30 -#define I40E_GLGEN_MSCA_MDICMD_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_MDICMD_SHIFT) -#define I40E_GLGEN_MSCA_MDIINPROGEN_SHIFT 31 -#define I40E_GLGEN_MSCA_MDIINPROGEN_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_MDIINPROGEN_SHIFT) -#define I40E_GLGEN_MSRWD(_i) (0x0008819C + ((_i) * 4)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_GLGEN_MSRWD_MAX_INDEX 3 -#define I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT 0 -#define I40E_GLGEN_MSRWD_MDIWRDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT) -#define I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT 16 -#define I40E_GLGEN_MSRWD_MDIRDDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT) -#define I40E_GLGEN_PCIFCNCNT 0x001C0AB4 /* Reset: PCIR */ -#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT 0 -#define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT) -#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT 16 -#define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_MASK I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT) -#define I40E_GLGEN_RSTAT 0x000B8188 /* Reset: POR */ -#define I40E_GLGEN_RSTAT_DEVSTATE_SHIFT 0 -#define I40E_GLGEN_RSTAT_DEVSTATE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_DEVSTATE_SHIFT) -#define I40E_GLGEN_RSTAT_RESET_TYPE_SHIFT 2 -#define I40E_GLGEN_RSTAT_RESET_TYPE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_RESET_TYPE_SHIFT) -#define I40E_GLGEN_RSTAT_CORERCNT_SHIFT 4 -#define I40E_GLGEN_RSTAT_CORERCNT_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_CORERCNT_SHIFT) -#define I40E_GLGEN_RSTAT_GLOBRCNT_SHIFT 6 -#define I40E_GLGEN_RSTAT_GLOBRCNT_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_GLOBRCNT_SHIFT) -#define I40E_GLGEN_RSTAT_EMPRCNT_SHIFT 8 -#define I40E_GLGEN_RSTAT_EMPRCNT_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_EMPRCNT_SHIFT) -#define I40E_GLGEN_RSTAT_TIME_TO_RST_SHIFT 10 -#define I40E_GLGEN_RSTAT_TIME_TO_RST_MASK I40E_MASK(0x3F, I40E_GLGEN_RSTAT_TIME_TO_RST_SHIFT) -#define I40E_GLGEN_RSTCTL 0x000B8180 /* Reset: POR */ -#define I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT 0 -#define I40E_GLGEN_RSTCTL_GRSTDEL_MASK I40E_MASK(0x3F, I40E_GLGEN_RSTCTL_GRSTDEL_SHIFT) -#define I40E_GLGEN_RSTCTL_ECC_RST_ENA_SHIFT 8 -#define I40E_GLGEN_RSTCTL_ECC_RST_ENA_MASK I40E_MASK(0x1, I40E_GLGEN_RSTCTL_ECC_RST_ENA_SHIFT) -#define I40E_GLGEN_RSTENA_EMP 0x000B818C /* Reset: POR */ -#define I40E_GLGEN_RSTENA_EMP_EMP_RST_ENA_SHIFT 0 -#define 
I40E_GLGEN_RSTENA_EMP_EMP_RST_ENA_MASK I40E_MASK(0x1, I40E_GLGEN_RSTENA_EMP_EMP_RST_ENA_SHIFT) -#define I40E_GLGEN_RTRIG 0x000B8190 /* Reset: CORER */ -#define I40E_GLGEN_RTRIG_CORER_SHIFT 0 -#define I40E_GLGEN_RTRIG_CORER_MASK I40E_MASK(0x1, I40E_GLGEN_RTRIG_CORER_SHIFT) -#define I40E_GLGEN_RTRIG_GLOBR_SHIFT 1 -#define I40E_GLGEN_RTRIG_GLOBR_MASK I40E_MASK(0x1, I40E_GLGEN_RTRIG_GLOBR_SHIFT) -#define I40E_GLGEN_RTRIG_EMPFWR_SHIFT 2 -#define I40E_GLGEN_RTRIG_EMPFWR_MASK I40E_MASK(0x1, I40E_GLGEN_RTRIG_EMPFWR_SHIFT) -#define I40E_GLGEN_STAT 0x000B612C /* Reset: POR */ -#define I40E_GLGEN_STAT_HWRSVD0_SHIFT 0 -#define I40E_GLGEN_STAT_HWRSVD0_MASK I40E_MASK(0x3, I40E_GLGEN_STAT_HWRSVD0_SHIFT) -#define I40E_GLGEN_STAT_DCBEN_SHIFT 2 -#define I40E_GLGEN_STAT_DCBEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_DCBEN_SHIFT) -#define I40E_GLGEN_STAT_VTEN_SHIFT 3 -#define I40E_GLGEN_STAT_VTEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_VTEN_SHIFT) -#define I40E_GLGEN_STAT_FCOEN_SHIFT 4 -#define I40E_GLGEN_STAT_FCOEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_FCOEN_SHIFT) -#define I40E_GLGEN_STAT_EVBEN_SHIFT 5 -#define I40E_GLGEN_STAT_EVBEN_MASK I40E_MASK(0x1, I40E_GLGEN_STAT_EVBEN_SHIFT) -#define I40E_GLGEN_STAT_HWRSVD1_SHIFT 6 -#define I40E_GLGEN_STAT_HWRSVD1_MASK I40E_MASK(0x3, I40E_GLGEN_STAT_HWRSVD1_SHIFT) -#define I40E_GLGEN_VFLRSTAT(_i) (0x00092600 + ((_i) * 4)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLGEN_VFLRSTAT_MAX_INDEX 3 -#define I40E_GLGEN_VFLRSTAT_VFLRE_SHIFT 0 -#define I40E_GLGEN_VFLRSTAT_VFLRE_MASK I40E_MASK(0xFFFFFFFF, I40E_GLGEN_VFLRSTAT_VFLRE_SHIFT) -#define I40E_GLVFGEN_TIMER 0x000881BC /* Reset: CORER */ -#define I40E_GLVFGEN_TIMER_GTIME_SHIFT 0 -#define I40E_GLVFGEN_TIMER_GTIME_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVFGEN_TIMER_GTIME_SHIFT) -#define I40E_PFGEN_CTRL 0x00092400 /* Reset: PFR */ -#define I40E_PFGEN_CTRL_PFSWR_SHIFT 0 -#define I40E_PFGEN_CTRL_PFSWR_MASK I40E_MASK(0x1, I40E_PFGEN_CTRL_PFSWR_SHIFT) -#define I40E_PFGEN_DRUN 0x00092500 /* Reset: CORER */ -#define I40E_PFGEN_DRUN_DRVUNLD_SHIFT 0 -#define I40E_PFGEN_DRUN_DRVUNLD_MASK I40E_MASK(0x1, I40E_PFGEN_DRUN_DRVUNLD_SHIFT) -#define I40E_PFGEN_PORTNUM 0x001C0480 /* Reset: CORER */ -#define I40E_PFGEN_PORTNUM_PORT_NUM_SHIFT 0 -#define I40E_PFGEN_PORTNUM_PORT_NUM_MASK I40E_MASK(0x3, I40E_PFGEN_PORTNUM_PORT_NUM_SHIFT) -#define I40E_PFGEN_STATE 0x00088000 /* Reset: CORER */ -#define I40E_PFGEN_STATE_RESERVED_0_SHIFT 0 -#define I40E_PFGEN_STATE_RESERVED_0_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_RESERVED_0_SHIFT) -#define I40E_PFGEN_STATE_PFFCEN_SHIFT 1 -#define I40E_PFGEN_STATE_PFFCEN_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_PFFCEN_SHIFT) -#define I40E_PFGEN_STATE_PFLINKEN_SHIFT 2 -#define I40E_PFGEN_STATE_PFLINKEN_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_PFLINKEN_SHIFT) -#define I40E_PFGEN_STATE_PFSCEN_SHIFT 3 -#define I40E_PFGEN_STATE_PFSCEN_MASK I40E_MASK(0x1, I40E_PFGEN_STATE_PFSCEN_SHIFT) -#define I40E_PRTGEN_CNF 0x000B8120 /* Reset: POR */ -#define I40E_PRTGEN_CNF_PORT_DIS_SHIFT 0 -#define I40E_PRTGEN_CNF_PORT_DIS_MASK I40E_MASK(0x1, I40E_PRTGEN_CNF_PORT_DIS_SHIFT) -#define I40E_PRTGEN_CNF_ALLOW_PORT_DIS_SHIFT 1 -#define I40E_PRTGEN_CNF_ALLOW_PORT_DIS_MASK I40E_MASK(0x1, I40E_PRTGEN_CNF_ALLOW_PORT_DIS_SHIFT) -#define I40E_PRTGEN_CNF_EMP_PORT_DIS_SHIFT 2 -#define I40E_PRTGEN_CNF_EMP_PORT_DIS_MASK I40E_MASK(0x1, I40E_PRTGEN_CNF_EMP_PORT_DIS_SHIFT) -#define I40E_PRTGEN_CNF2 0x000B8160 /* Reset: POR */ -#define I40E_PRTGEN_CNF2_ACTIVATE_PORT_LINK_SHIFT 0 -#define I40E_PRTGEN_CNF2_ACTIVATE_PORT_LINK_MASK I40E_MASK(0x1, 
I40E_PRTGEN_CNF2_ACTIVATE_PORT_LINK_SHIFT) -#define I40E_PRTGEN_STATUS 0x000B8100 /* Reset: POR */ -#define I40E_PRTGEN_STATUS_PORT_VALID_SHIFT 0 -#define I40E_PRTGEN_STATUS_PORT_VALID_MASK I40E_MASK(0x1, I40E_PRTGEN_STATUS_PORT_VALID_SHIFT) -#define I40E_PRTGEN_STATUS_PORT_ACTIVE_SHIFT 1 -#define I40E_PRTGEN_STATUS_PORT_ACTIVE_MASK I40E_MASK(0x1, I40E_PRTGEN_STATUS_PORT_ACTIVE_SHIFT) -#define I40E_VFGEN_RSTAT1(_VF) (0x00074400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ -#define I40E_VFGEN_RSTAT1_MAX_INDEX 127 -#define I40E_VFGEN_RSTAT1_VFR_STATE_SHIFT 0 -#define I40E_VFGEN_RSTAT1_VFR_STATE_MASK I40E_MASK(0x3, I40E_VFGEN_RSTAT1_VFR_STATE_SHIFT) -#define I40E_VPGEN_VFRSTAT(_VF) (0x00091C00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_VPGEN_VFRSTAT_MAX_INDEX 127 -#define I40E_VPGEN_VFRSTAT_VFRD_SHIFT 0 -#define I40E_VPGEN_VFRSTAT_VFRD_MASK I40E_MASK(0x1, I40E_VPGEN_VFRSTAT_VFRD_SHIFT) -#define I40E_VPGEN_VFRTRIG(_VF) (0x00091800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_VPGEN_VFRTRIG_MAX_INDEX 127 -#define I40E_VPGEN_VFRTRIG_VFSWR_SHIFT 0 -#define I40E_VPGEN_VFRTRIG_VFSWR_MASK I40E_MASK(0x1, I40E_VPGEN_VFRTRIG_VFSWR_SHIFT) -#define I40E_VSIGEN_RSTAT(_VSI) (0x00090800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_VSIGEN_RSTAT_MAX_INDEX 383 -#define I40E_VSIGEN_RSTAT_VMRD_SHIFT 0 -#define I40E_VSIGEN_RSTAT_VMRD_MASK I40E_MASK(0x1, I40E_VSIGEN_RSTAT_VMRD_SHIFT) -#define I40E_VSIGEN_RTRIG(_VSI) (0x00090000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_VSIGEN_RTRIG_MAX_INDEX 383 -#define I40E_VSIGEN_RTRIG_VMSWR_SHIFT 0 -#define I40E_VSIGEN_RTRIG_VMSWR_MASK I40E_MASK(0x1, I40E_VSIGEN_RTRIG_VMSWR_SHIFT) -#define I40E_GLHMC_FCOEDDPBASE(_i) (0x000C6600 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FCOEDDPBASE_MAX_INDEX 15 -#define I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_SHIFT 0 -#define I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FCOEDDPBASE_FPMFCOEDDPBASE_SHIFT) -#define I40E_GLHMC_FCOEDDPCNT(_i) (0x000C6700 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FCOEDDPCNT_MAX_INDEX 15 -#define I40E_GLHMC_FCOEDDPCNT_FPMFCOEDDPCNT_SHIFT 0 -#define I40E_GLHMC_FCOEDDPCNT_FPMFCOEDDPCNT_MASK I40E_MASK(0xFFFFF, I40E_GLHMC_FCOEDDPCNT_FPMFCOEDDPCNT_SHIFT) -#define I40E_GLHMC_FCOEDDPOBJSZ 0x000C2010 /* Reset: CORER */ -#define I40E_GLHMC_FCOEDDPOBJSZ_PMFCOEDDPOBJSZ_SHIFT 0 -#define I40E_GLHMC_FCOEDDPOBJSZ_PMFCOEDDPOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_FCOEDDPOBJSZ_PMFCOEDDPOBJSZ_SHIFT) -#define I40E_GLHMC_FCOEFBASE(_i) (0x000C6800 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FCOEFBASE_MAX_INDEX 15 -#define I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_SHIFT 0 -#define I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FCOEFBASE_FPMFCOEFBASE_SHIFT) -#define I40E_GLHMC_FCOEFCNT(_i) (0x000C6900 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FCOEFCNT_MAX_INDEX 15 -#define I40E_GLHMC_FCOEFCNT_FPMFCOEFCNT_SHIFT 0 -#define I40E_GLHMC_FCOEFCNT_FPMFCOEFCNT_MASK I40E_MASK(0x7FFFFF, I40E_GLHMC_FCOEFCNT_FPMFCOEFCNT_SHIFT) -#define I40E_GLHMC_FCOEFMAX 0x000C20D0 /* Reset: CORER */ -#define I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_SHIFT 0 -#define I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_MASK I40E_MASK(0xFFFF, I40E_GLHMC_FCOEFMAX_PMFCOEFMAX_SHIFT) -#define I40E_GLHMC_FCOEFOBJSZ 0x000C2018 /* Reset: CORER */ -#define I40E_GLHMC_FCOEFOBJSZ_PMFCOEFOBJSZ_SHIFT 0 -#define I40E_GLHMC_FCOEFOBJSZ_PMFCOEFOBJSZ_MASK I40E_MASK(0xF, 
I40E_GLHMC_FCOEFOBJSZ_PMFCOEFOBJSZ_SHIFT) -#define I40E_GLHMC_FCOEMAX 0x000C2014 /* Reset: CORER */ -#define I40E_GLHMC_FCOEMAX_PMFCOEMAX_SHIFT 0 -#define I40E_GLHMC_FCOEMAX_PMFCOEMAX_MASK I40E_MASK(0x1FFF, I40E_GLHMC_FCOEMAX_PMFCOEMAX_SHIFT) -#define I40E_GLHMC_FSIAVBASE(_i) (0x000C5600 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FSIAVBASE_MAX_INDEX 15 -#define I40E_GLHMC_FSIAVBASE_FPMFSIAVBASE_SHIFT 0 -#define I40E_GLHMC_FSIAVBASE_FPMFSIAVBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FSIAVBASE_FPMFSIAVBASE_SHIFT) -#define I40E_GLHMC_FSIAVCNT(_i) (0x000C5700 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FSIAVCNT_MAX_INDEX 15 -#define I40E_GLHMC_FSIAVCNT_FPMFSIAVCNT_SHIFT 0 -#define I40E_GLHMC_FSIAVCNT_FPMFSIAVCNT_MASK I40E_MASK(0x1FFFFFFF, I40E_GLHMC_FSIAVCNT_FPMFSIAVCNT_SHIFT) -#define I40E_GLHMC_FSIAVCNT_RSVD_SHIFT 29 -#define I40E_GLHMC_FSIAVCNT_RSVD_MASK I40E_MASK(0x7, I40E_GLHMC_FSIAVCNT_RSVD_SHIFT) -#define I40E_GLHMC_FSIAVMAX 0x000C2068 /* Reset: CORER */ -#define I40E_GLHMC_FSIAVMAX_PMFSIAVMAX_SHIFT 0 -#define I40E_GLHMC_FSIAVMAX_PMFSIAVMAX_MASK I40E_MASK(0x1FFFF, I40E_GLHMC_FSIAVMAX_PMFSIAVMAX_SHIFT) -#define I40E_GLHMC_FSIAVOBJSZ 0x000C2064 /* Reset: CORER */ -#define I40E_GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_SHIFT 0 -#define I40E_GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_FSIAVOBJSZ_PMFSIAVOBJSZ_SHIFT) -#define I40E_GLHMC_FSIMCBASE(_i) (0x000C6000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FSIMCBASE_MAX_INDEX 15 -#define I40E_GLHMC_FSIMCBASE_FPMFSIMCBASE_SHIFT 0 -#define I40E_GLHMC_FSIMCBASE_FPMFSIMCBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_FSIMCBASE_FPMFSIMCBASE_SHIFT) -#define I40E_GLHMC_FSIMCCNT(_i) (0x000C6100 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_FSIMCCNT_MAX_INDEX 15 -#define I40E_GLHMC_FSIMCCNT_FPMFSIMCSZ_SHIFT 0 -#define I40E_GLHMC_FSIMCCNT_FPMFSIMCSZ_MASK I40E_MASK(0x1FFFFFFF, I40E_GLHMC_FSIMCCNT_FPMFSIMCSZ_SHIFT) -#define I40E_GLHMC_FSIMCMAX 0x000C2060 /* Reset: CORER */ -#define I40E_GLHMC_FSIMCMAX_PMFSIMCMAX_SHIFT 0 -#define I40E_GLHMC_FSIMCMAX_PMFSIMCMAX_MASK I40E_MASK(0x3FFF, I40E_GLHMC_FSIMCMAX_PMFSIMCMAX_SHIFT) -#define I40E_GLHMC_FSIMCOBJSZ 0x000C205c /* Reset: CORER */ -#define I40E_GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_SHIFT 0 -#define I40E_GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_FSIMCOBJSZ_PMFSIMCOBJSZ_SHIFT) -#define I40E_GLHMC_LANQMAX 0x000C2008 /* Reset: CORER */ -#define I40E_GLHMC_LANQMAX_PMLANQMAX_SHIFT 0 -#define I40E_GLHMC_LANQMAX_PMLANQMAX_MASK I40E_MASK(0x7FF, I40E_GLHMC_LANQMAX_PMLANQMAX_SHIFT) -#define I40E_GLHMC_LANRXBASE(_i) (0x000C6400 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_LANRXBASE_MAX_INDEX 15 -#define I40E_GLHMC_LANRXBASE_FPMLANRXBASE_SHIFT 0 -#define I40E_GLHMC_LANRXBASE_FPMLANRXBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_LANRXBASE_FPMLANRXBASE_SHIFT) -#define I40E_GLHMC_LANRXCNT(_i) (0x000C6500 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_LANRXCNT_MAX_INDEX 15 -#define I40E_GLHMC_LANRXCNT_FPMLANRXCNT_SHIFT 0 -#define I40E_GLHMC_LANRXCNT_FPMLANRXCNT_MASK I40E_MASK(0x7FF, I40E_GLHMC_LANRXCNT_FPMLANRXCNT_SHIFT) -#define I40E_GLHMC_LANRXOBJSZ 0x000C200c /* Reset: CORER */ -#define I40E_GLHMC_LANRXOBJSZ_PMLANRXOBJSZ_SHIFT 0 -#define I40E_GLHMC_LANRXOBJSZ_PMLANRXOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_LANRXOBJSZ_PMLANRXOBJSZ_SHIFT) -#define I40E_GLHMC_LANTXBASE(_i) (0x000C6200 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLHMC_LANTXBASE_MAX_INDEX 15 -#define 
I40E_GLHMC_LANTXBASE_FPMLANTXBASE_SHIFT 0
-#define I40E_GLHMC_LANTXBASE_FPMLANTXBASE_MASK I40E_MASK(0xFFFFFF, I40E_GLHMC_LANTXBASE_FPMLANTXBASE_SHIFT)
-#define I40E_GLHMC_LANTXBASE_RSVD_SHIFT 24
-#define I40E_GLHMC_LANTXBASE_RSVD_MASK I40E_MASK(0xFF, I40E_GLHMC_LANTXBASE_RSVD_SHIFT)
-#define I40E_GLHMC_LANTXCNT(_i) (0x000C6300 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
-#define I40E_GLHMC_LANTXCNT_MAX_INDEX 15
-#define I40E_GLHMC_LANTXCNT_FPMLANTXCNT_SHIFT 0
-#define I40E_GLHMC_LANTXCNT_FPMLANTXCNT_MASK I40E_MASK(0x7FF, I40E_GLHMC_LANTXCNT_FPMLANTXCNT_SHIFT)
-#define I40E_GLHMC_LANTXOBJSZ 0x000C2004 /* Reset: CORER */
-#define I40E_GLHMC_LANTXOBJSZ_PMLANTXOBJSZ_SHIFT 0
-#define I40E_GLHMC_LANTXOBJSZ_PMLANTXOBJSZ_MASK I40E_MASK(0xF, I40E_GLHMC_LANTXOBJSZ_PMLANTXOBJSZ_SHIFT)
-#define I40E_GLHMC_PFASSIGN(_i) (0x000C0c00 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
-#define I40E_GLHMC_PFASSIGN_MAX_INDEX 15
-#define I40E_GLHMC_PFASSIGN_PMFCNPFASSIGN_SHIFT 0
-#define I40E_GLHMC_PFASSIGN_PMFCNPFASSIGN_MASK I40E_MASK(0xF, I40E_GLHMC_PFASSIGN_PMFCNPFASSIGN_SHIFT)
-#define I40E_GLHMC_SDPART(_i) (0x000C0800 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */
-#define I40E_GLHMC_SDPART_MAX_INDEX 15
-#define I40E_GLHMC_SDPART_PMSDBASE_SHIFT 0
-#define I40E_GLHMC_SDPART_PMSDBASE_MASK I40E_MASK(0xFFF, I40E_GLHMC_SDPART_PMSDBASE_SHIFT)
-#define I40E_GLHMC_SDPART_PMSDSIZE_SHIFT 16
-#define I40E_GLHMC_SDPART_PMSDSIZE_MASK I40E_MASK(0x1FFF, I40E_GLHMC_SDPART_PMSDSIZE_SHIFT)
-#define I40E_PFHMC_ERRORDATA 0x000C0500 /* Reset: PFR */
-#define I40E_PFHMC_ERRORDATA_HMC_ERROR_DATA_SHIFT 0
-#define I40E_PFHMC_ERRORDATA_HMC_ERROR_DATA_MASK I40E_MASK(0x3FFFFFFF, I40E_PFHMC_ERRORDATA_HMC_ERROR_DATA_SHIFT)
-#define I40E_PFHMC_ERRORINFO 0x000C0400 /* Reset: PFR */
-#define I40E_PFHMC_ERRORINFO_PMF_INDEX_SHIFT 0
-#define I40E_PFHMC_ERRORINFO_PMF_INDEX_MASK I40E_MASK(0x1F, I40E_PFHMC_ERRORINFO_PMF_INDEX_SHIFT)
-#define I40E_PFHMC_ERRORINFO_PMF_ISVF_SHIFT 7
-#define I40E_PFHMC_ERRORINFO_PMF_ISVF_MASK I40E_MASK(0x1, I40E_PFHMC_ERRORINFO_PMF_ISVF_SHIFT)
-#define I40E_PFHMC_ERRORINFO_HMC_ERROR_TYPE_SHIFT 8
-#define I40E_PFHMC_ERRORINFO_HMC_ERROR_TYPE_MASK I40E_MASK(0xF, I40E_PFHMC_ERRORINFO_HMC_ERROR_TYPE_SHIFT)
-#define I40E_PFHMC_ERRORINFO_HMC_OBJECT_TYPE_SHIFT 16
-#define I40E_PFHMC_ERRORINFO_HMC_OBJECT_TYPE_MASK I40E_MASK(0x1F, I40E_PFHMC_ERRORINFO_HMC_OBJECT_TYPE_SHIFT)
-#define I40E_PFHMC_ERRORINFO_ERROR_DETECTED_SHIFT 31
-#define I40E_PFHMC_ERRORINFO_ERROR_DETECTED_MASK I40E_MASK(0x1, I40E_PFHMC_ERRORINFO_ERROR_DETECTED_SHIFT)
-#define I40E_PFHMC_PDINV 0x000C0300 /* Reset: PFR */
-#define I40E_PFHMC_PDINV_PMSDIDX_SHIFT 0
-#define I40E_PFHMC_PDINV_PMSDIDX_MASK I40E_MASK(0xFFF, I40E_PFHMC_PDINV_PMSDIDX_SHIFT)
-#define I40E_PFHMC_PDINV_PMPDIDX_SHIFT 16
-#define I40E_PFHMC_PDINV_PMPDIDX_MASK I40E_MASK(0x1FF, I40E_PFHMC_PDINV_PMPDIDX_SHIFT)
-#define I40E_PFHMC_SDCMD 0x000C0000 /* Reset: PFR */
-#define I40E_PFHMC_SDCMD_PMSDIDX_SHIFT 0
-#define I40E_PFHMC_SDCMD_PMSDIDX_MASK I40E_MASK(0xFFF, I40E_PFHMC_SDCMD_PMSDIDX_SHIFT)
-#define I40E_PFHMC_SDCMD_PMSDWR_SHIFT 31
-#define I40E_PFHMC_SDCMD_PMSDWR_MASK I40E_MASK(0x1, I40E_PFHMC_SDCMD_PMSDWR_SHIFT)
-#define I40E_PFHMC_SDDATAHIGH 0x000C0200 /* Reset: PFR */
-#define I40E_PFHMC_SDDATAHIGH_PMSDDATAHIGH_SHIFT 0
-#define I40E_PFHMC_SDDATAHIGH_PMSDDATAHIGH_MASK I40E_MASK(0xFFFFFFFF, I40E_PFHMC_SDDATAHIGH_PMSDDATAHIGH_SHIFT)
-#define I40E_PFHMC_SDDATALOW 0x000C0100 /* Reset: PFR */
-#define I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT 0
-#define I40E_PFHMC_SDDATALOW_PMSDVALID_MASK I40E_MASK(0x1, I40E_PFHMC_SDDATALOW_PMSDVALID_SHIFT)
-#define I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT 1
-#define I40E_PFHMC_SDDATALOW_PMSDTYPE_MASK I40E_MASK(0x1, I40E_PFHMC_SDDATALOW_PMSDTYPE_SHIFT)
-#define I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT 2
-#define I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_MASK I40E_MASK(0x3FF, I40E_PFHMC_SDDATALOW_PMSDBPCOUNT_SHIFT)
-#define I40E_PFHMC_SDDATALOW_PMSDDATALOW_SHIFT 12
-#define I40E_PFHMC_SDDATALOW_PMSDDATALOW_MASK I40E_MASK(0xFFFFF, I40E_PFHMC_SDDATALOW_PMSDDATALOW_SHIFT)
-#define I40E_GL_GP_FUSE(_i) (0x0009400C + ((_i) * 4)) /* _i=0...28 */ /* Reset: POR */
-#define I40E_GL_GP_FUSE_MAX_INDEX 28
-#define I40E_GL_GP_FUSE_GL_GP_FUSE_SHIFT 0
-#define I40E_GL_GP_FUSE_GL_GP_FUSE_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_GP_FUSE_GL_GP_FUSE_SHIFT)
-#define I40E_GL_UFUSE 0x00094008 /* Reset: POR */
-#define I40E_GL_UFUSE_FOUR_PORT_ENABLE_SHIFT 1
-#define I40E_GL_UFUSE_FOUR_PORT_ENABLE_MASK I40E_MASK(0x1, I40E_GL_UFUSE_FOUR_PORT_ENABLE_SHIFT)
-#define I40E_GL_UFUSE_NIC_ID_SHIFT 2
-#define I40E_GL_UFUSE_NIC_ID_MASK I40E_MASK(0x1, I40E_GL_UFUSE_NIC_ID_SHIFT)
-#define I40E_GL_UFUSE_ULT_LOCKOUT_SHIFT 10
-#define I40E_GL_UFUSE_ULT_LOCKOUT_MASK I40E_MASK(0x1, I40E_GL_UFUSE_ULT_LOCKOUT_SHIFT)
-#define I40E_GL_UFUSE_CLS_LOCKOUT_SHIFT 11
-#define I40E_GL_UFUSE_CLS_LOCKOUT_MASK I40E_MASK(0x1, I40E_GL_UFUSE_CLS_LOCKOUT_SHIFT)
-#define I40E_EMPINT_GPIO_ENA 0x00088188 /* Reset: POR */
-#define I40E_EMPINT_GPIO_ENA_GPIO0_ENA_SHIFT 0
-#define I40E_EMPINT_GPIO_ENA_GPIO0_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO0_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO1_ENA_SHIFT 1
-#define I40E_EMPINT_GPIO_ENA_GPIO1_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO1_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO2_ENA_SHIFT 2
-#define I40E_EMPINT_GPIO_ENA_GPIO2_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO2_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO3_ENA_SHIFT 3
-#define I40E_EMPINT_GPIO_ENA_GPIO3_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO3_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO4_ENA_SHIFT 4
-#define I40E_EMPINT_GPIO_ENA_GPIO4_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO4_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO5_ENA_SHIFT 5
-#define I40E_EMPINT_GPIO_ENA_GPIO5_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO5_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO6_ENA_SHIFT 6
-#define I40E_EMPINT_GPIO_ENA_GPIO6_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO6_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO7_ENA_SHIFT 7
-#define I40E_EMPINT_GPIO_ENA_GPIO7_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO7_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO8_ENA_SHIFT 8
-#define I40E_EMPINT_GPIO_ENA_GPIO8_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO8_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO9_ENA_SHIFT 9
-#define I40E_EMPINT_GPIO_ENA_GPIO9_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO9_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO10_ENA_SHIFT 10
-#define I40E_EMPINT_GPIO_ENA_GPIO10_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO10_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO11_ENA_SHIFT 11
-#define I40E_EMPINT_GPIO_ENA_GPIO11_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO11_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO12_ENA_SHIFT 12
-#define I40E_EMPINT_GPIO_ENA_GPIO12_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO12_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO13_ENA_SHIFT 13
-#define I40E_EMPINT_GPIO_ENA_GPIO13_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO13_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO14_ENA_SHIFT 14
-#define I40E_EMPINT_GPIO_ENA_GPIO14_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO14_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO15_ENA_SHIFT 15
-#define I40E_EMPINT_GPIO_ENA_GPIO15_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO15_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO16_ENA_SHIFT 16
-#define I40E_EMPINT_GPIO_ENA_GPIO16_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO16_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO17_ENA_SHIFT 17
-#define I40E_EMPINT_GPIO_ENA_GPIO17_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO17_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO18_ENA_SHIFT 18
-#define I40E_EMPINT_GPIO_ENA_GPIO18_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO18_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO19_ENA_SHIFT 19
-#define I40E_EMPINT_GPIO_ENA_GPIO19_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO19_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO20_ENA_SHIFT 20
-#define I40E_EMPINT_GPIO_ENA_GPIO20_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO20_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO21_ENA_SHIFT 21
-#define I40E_EMPINT_GPIO_ENA_GPIO21_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO21_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO22_ENA_SHIFT 22
-#define I40E_EMPINT_GPIO_ENA_GPIO22_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO22_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO23_ENA_SHIFT 23
-#define I40E_EMPINT_GPIO_ENA_GPIO23_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO23_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO24_ENA_SHIFT 24
-#define I40E_EMPINT_GPIO_ENA_GPIO24_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO24_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO25_ENA_SHIFT 25
-#define I40E_EMPINT_GPIO_ENA_GPIO25_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO25_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO26_ENA_SHIFT 26
-#define I40E_EMPINT_GPIO_ENA_GPIO26_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO26_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO27_ENA_SHIFT 27
-#define I40E_EMPINT_GPIO_ENA_GPIO27_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO27_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO28_ENA_SHIFT 28
-#define I40E_EMPINT_GPIO_ENA_GPIO28_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO28_ENA_SHIFT)
-#define I40E_EMPINT_GPIO_ENA_GPIO29_ENA_SHIFT 29
-#define I40E_EMPINT_GPIO_ENA_GPIO29_ENA_MASK I40E_MASK(0x1, I40E_EMPINT_GPIO_ENA_GPIO29_ENA_SHIFT)
-#define I40E_PFGEN_PORTMDIO_NUM 0x0003F100 /* Reset: CORER */
-#define I40E_PFGEN_PORTMDIO_NUM_PORT_NUM_SHIFT 0
-#define I40E_PFGEN_PORTMDIO_NUM_PORT_NUM_MASK I40E_MASK(0x3, I40E_PFGEN_PORTMDIO_NUM_PORT_NUM_SHIFT)
-#define I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_SHIFT 4
-#define I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_MASK I40E_MASK(0x1, I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_SHIFT)
-#define I40E_PFINT_AEQCTL 0x00038700 /* Reset: CORER */
-#define I40E_PFINT_AEQCTL_MSIX_INDX_SHIFT 0
-#define I40E_PFINT_AEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_PFINT_AEQCTL_MSIX_INDX_SHIFT)
-#define I40E_PFINT_AEQCTL_ITR_INDX_SHIFT 11
-#define I40E_PFINT_AEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_AEQCTL_ITR_INDX_SHIFT)
-#define I40E_PFINT_AEQCTL_MSIX0_INDX_SHIFT 13
-#define I40E_PFINT_AEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_PFINT_AEQCTL_MSIX0_INDX_SHIFT)
-#define I40E_PFINT_AEQCTL_CAUSE_ENA_SHIFT 30
-#define I40E_PFINT_AEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_PFINT_AEQCTL_CAUSE_ENA_SHIFT)
-#define I40E_PFINT_AEQCTL_INTEVENT_SHIFT 31
-#define I40E_PFINT_AEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_PFINT_AEQCTL_INTEVENT_SHIFT)
-#define I40E_PFINT_CEQCTL(_INTPF) (0x00036800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: CORER */
-#define I40E_PFINT_CEQCTL_MAX_INDEX 511
-#define I40E_PFINT_CEQCTL_MSIX_INDX_SHIFT 0
-#define I40E_PFINT_CEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_PFINT_CEQCTL_MSIX_INDX_SHIFT)
-#define I40E_PFINT_CEQCTL_ITR_INDX_SHIFT 11
-#define I40E_PFINT_CEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_CEQCTL_ITR_INDX_SHIFT)
-#define I40E_PFINT_CEQCTL_MSIX0_INDX_SHIFT 13
-#define I40E_PFINT_CEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_PFINT_CEQCTL_MSIX0_INDX_SHIFT)
-#define I40E_PFINT_CEQCTL_NEXTQ_INDX_SHIFT 16
-#define I40E_PFINT_CEQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_PFINT_CEQCTL_NEXTQ_INDX_SHIFT)
-#define I40E_PFINT_CEQCTL_NEXTQ_TYPE_SHIFT 27
-#define I40E_PFINT_CEQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_PFINT_CEQCTL_NEXTQ_TYPE_SHIFT)
-#define I40E_PFINT_CEQCTL_CAUSE_ENA_SHIFT 30
-#define I40E_PFINT_CEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_PFINT_CEQCTL_CAUSE_ENA_SHIFT)
-#define I40E_PFINT_CEQCTL_INTEVENT_SHIFT 31
-#define I40E_PFINT_CEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_PFINT_CEQCTL_INTEVENT_SHIFT)
-#define I40E_PFINT_DYN_CTL0 0x00038480 /* Reset: PFR */
-#define I40E_PFINT_DYN_CTL0_INTENA_SHIFT 0
-#define I40E_PFINT_DYN_CTL0_INTENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_INTENA_SHIFT)
-#define I40E_PFINT_DYN_CTL0_CLEARPBA_SHIFT 1
-#define I40E_PFINT_DYN_CTL0_CLEARPBA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_CLEARPBA_SHIFT)
-#define I40E_PFINT_DYN_CTL0_SWINT_TRIG_SHIFT 2
-#define I40E_PFINT_DYN_CTL0_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_SWINT_TRIG_SHIFT)
-#define I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT 3
-#define I40E_PFINT_DYN_CTL0_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTL0_ITR_INDX_SHIFT)
-#define I40E_PFINT_DYN_CTL0_INTERVAL_SHIFT 5
-#define I40E_PFINT_DYN_CTL0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_DYN_CTL0_INTERVAL_SHIFT)
-#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT 24
-#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT)
-#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_SHIFT 25
-#define I40E_PFINT_DYN_CTL0_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTL0_SW_ITR_INDX_SHIFT)
-#define I40E_PFINT_DYN_CTL0_INTENA_MSK_SHIFT 31
-#define I40E_PFINT_DYN_CTL0_INTENA_MSK_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTL0_INTENA_MSK_SHIFT)
-#define I40E_PFINT_DYN_CTLN(_INTPF) (0x00034800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */
-#define I40E_PFINT_DYN_CTLN_MAX_INDEX 511
-#define I40E_PFINT_DYN_CTLN_INTENA_SHIFT 0
-#define I40E_PFINT_DYN_CTLN_INTENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_INTENA_SHIFT)
-#define I40E_PFINT_DYN_CTLN_CLEARPBA_SHIFT 1
-#define I40E_PFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_CLEARPBA_SHIFT)
-#define I40E_PFINT_DYN_CTLN_SWINT_TRIG_SHIFT 2
-#define I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_SWINT_TRIG_SHIFT)
-#define I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT 3
-#define I40E_PFINT_DYN_CTLN_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT)
-#define I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT 5
-#define I40E_PFINT_DYN_CTLN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT)
-#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT 24
-#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT)
-#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_SHIFT 25
-#define I40E_PFINT_DYN_CTLN_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_DYN_CTLN_SW_ITR_INDX_SHIFT)
-#define I40E_PFINT_DYN_CTLN_INTENA_MSK_SHIFT 31
-#define I40E_PFINT_DYN_CTLN_INTENA_MSK_MASK I40E_MASK(0x1, I40E_PFINT_DYN_CTLN_INTENA_MSK_SHIFT)
-#define I40E_PFINT_GPIO_ENA 0x00088080 /* Reset: CORER */
-#define I40E_PFINT_GPIO_ENA_GPIO0_ENA_SHIFT 0
-#define I40E_PFINT_GPIO_ENA_GPIO0_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO0_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO1_ENA_SHIFT 1
-#define I40E_PFINT_GPIO_ENA_GPIO1_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO1_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO2_ENA_SHIFT 2
-#define I40E_PFINT_GPIO_ENA_GPIO2_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO2_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO3_ENA_SHIFT 3
-#define I40E_PFINT_GPIO_ENA_GPIO3_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO3_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO4_ENA_SHIFT 4
-#define I40E_PFINT_GPIO_ENA_GPIO4_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO4_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO5_ENA_SHIFT 5
-#define I40E_PFINT_GPIO_ENA_GPIO5_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO5_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO6_ENA_SHIFT 6
-#define I40E_PFINT_GPIO_ENA_GPIO6_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO6_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO7_ENA_SHIFT 7
-#define I40E_PFINT_GPIO_ENA_GPIO7_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO7_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO8_ENA_SHIFT 8
-#define I40E_PFINT_GPIO_ENA_GPIO8_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO8_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO9_ENA_SHIFT 9
-#define I40E_PFINT_GPIO_ENA_GPIO9_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO9_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO10_ENA_SHIFT 10
-#define I40E_PFINT_GPIO_ENA_GPIO10_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO10_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO11_ENA_SHIFT 11
-#define I40E_PFINT_GPIO_ENA_GPIO11_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO11_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO12_ENA_SHIFT 12
-#define I40E_PFINT_GPIO_ENA_GPIO12_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO12_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO13_ENA_SHIFT 13
-#define I40E_PFINT_GPIO_ENA_GPIO13_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO13_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO14_ENA_SHIFT 14
-#define I40E_PFINT_GPIO_ENA_GPIO14_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO14_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO15_ENA_SHIFT 15
-#define I40E_PFINT_GPIO_ENA_GPIO15_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO15_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO16_ENA_SHIFT 16
-#define I40E_PFINT_GPIO_ENA_GPIO16_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO16_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO17_ENA_SHIFT 17
-#define I40E_PFINT_GPIO_ENA_GPIO17_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO17_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO18_ENA_SHIFT 18
-#define I40E_PFINT_GPIO_ENA_GPIO18_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO18_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO19_ENA_SHIFT 19
-#define I40E_PFINT_GPIO_ENA_GPIO19_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO19_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO20_ENA_SHIFT 20
-#define I40E_PFINT_GPIO_ENA_GPIO20_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO20_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO21_ENA_SHIFT 21
-#define I40E_PFINT_GPIO_ENA_GPIO21_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO21_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO22_ENA_SHIFT 22
-#define I40E_PFINT_GPIO_ENA_GPIO22_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO22_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO23_ENA_SHIFT 23
-#define I40E_PFINT_GPIO_ENA_GPIO23_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO23_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO24_ENA_SHIFT 24
-#define I40E_PFINT_GPIO_ENA_GPIO24_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO24_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO25_ENA_SHIFT 25
-#define I40E_PFINT_GPIO_ENA_GPIO25_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO25_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO26_ENA_SHIFT 26
-#define I40E_PFINT_GPIO_ENA_GPIO26_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO26_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO27_ENA_SHIFT 27
-#define I40E_PFINT_GPIO_ENA_GPIO27_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO27_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO28_ENA_SHIFT 28
-#define I40E_PFINT_GPIO_ENA_GPIO28_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO28_ENA_SHIFT)
-#define I40E_PFINT_GPIO_ENA_GPIO29_ENA_SHIFT 29
-#define I40E_PFINT_GPIO_ENA_GPIO29_ENA_MASK I40E_MASK(0x1, I40E_PFINT_GPIO_ENA_GPIO29_ENA_SHIFT)
-#define I40E_PFINT_ICR0 0x00038780 /* Reset: CORER */
-#define I40E_PFINT_ICR0_INTEVENT_SHIFT 0
-#define I40E_PFINT_ICR0_INTEVENT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_INTEVENT_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_0_SHIFT 1
-#define I40E_PFINT_ICR0_QUEUE_0_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_0_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_1_SHIFT 2
-#define I40E_PFINT_ICR0_QUEUE_1_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_1_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_2_SHIFT 3
-#define I40E_PFINT_ICR0_QUEUE_2_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_2_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_3_SHIFT 4
-#define I40E_PFINT_ICR0_QUEUE_3_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_3_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_4_SHIFT 5
-#define I40E_PFINT_ICR0_QUEUE_4_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_4_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_5_SHIFT 6
-#define I40E_PFINT_ICR0_QUEUE_5_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_5_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_6_SHIFT 7
-#define I40E_PFINT_ICR0_QUEUE_6_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_6_SHIFT)
-#define I40E_PFINT_ICR0_QUEUE_7_SHIFT 8
-#define I40E_PFINT_ICR0_QUEUE_7_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_QUEUE_7_SHIFT)
-#define I40E_PFINT_ICR0_ECC_ERR_SHIFT 16
-#define I40E_PFINT_ICR0_ECC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ECC_ERR_SHIFT)
-#define I40E_PFINT_ICR0_MAL_DETECT_SHIFT 19
-#define I40E_PFINT_ICR0_MAL_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_MAL_DETECT_SHIFT)
-#define I40E_PFINT_ICR0_GRST_SHIFT 20
-#define I40E_PFINT_ICR0_GRST_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_GRST_SHIFT)
-#define I40E_PFINT_ICR0_PCI_EXCEPTION_SHIFT 21
-#define I40E_PFINT_ICR0_PCI_EXCEPTION_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_PCI_EXCEPTION_SHIFT)
-#define I40E_PFINT_ICR0_GPIO_SHIFT 22
-#define I40E_PFINT_ICR0_GPIO_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_GPIO_SHIFT)
-#define I40E_PFINT_ICR0_TIMESYNC_SHIFT 23
-#define I40E_PFINT_ICR0_TIMESYNC_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_TIMESYNC_SHIFT)
-#define I40E_PFINT_ICR0_STORM_DETECT_SHIFT 24
-#define I40E_PFINT_ICR0_STORM_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_STORM_DETECT_SHIFT)
-#define I40E_PFINT_ICR0_LINK_STAT_CHANGE_SHIFT 25
-#define I40E_PFINT_ICR0_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_LINK_STAT_CHANGE_SHIFT)
-#define I40E_PFINT_ICR0_HMC_ERR_SHIFT 26
-#define I40E_PFINT_ICR0_HMC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_HMC_ERR_SHIFT)
-#define I40E_PFINT_ICR0_PE_CRITERR_SHIFT 28
-#define I40E_PFINT_ICR0_PE_CRITERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_PE_CRITERR_SHIFT)
-#define I40E_PFINT_ICR0_VFLR_SHIFT 29
-#define I40E_PFINT_ICR0_VFLR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_VFLR_SHIFT)
-#define I40E_PFINT_ICR0_ADMINQ_SHIFT 30
-#define I40E_PFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ADMINQ_SHIFT)
-#define I40E_PFINT_ICR0_SWINT_SHIFT 31
-#define I40E_PFINT_ICR0_SWINT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_SWINT_SHIFT)
-#define I40E_PFINT_ICR0_ENA 0x00038800 /* Reset: CORER */
-#define I40E_PFINT_ICR0_ENA_ECC_ERR_SHIFT 16
-#define I40E_PFINT_ICR0_ENA_ECC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_ECC_ERR_SHIFT)
-#define I40E_PFINT_ICR0_ENA_MAL_DETECT_SHIFT 19
-#define I40E_PFINT_ICR0_ENA_MAL_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_MAL_DETECT_SHIFT)
-#define I40E_PFINT_ICR0_ENA_GRST_SHIFT 20
-#define I40E_PFINT_ICR0_ENA_GRST_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_GRST_SHIFT)
-#define I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_SHIFT 21
-#define I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_SHIFT)
-#define I40E_PFINT_ICR0_ENA_GPIO_SHIFT 22
-#define I40E_PFINT_ICR0_ENA_GPIO_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_GPIO_SHIFT)
-#define I40E_PFINT_ICR0_ENA_TIMESYNC_SHIFT 23
-#define I40E_PFINT_ICR0_ENA_TIMESYNC_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_TIMESYNC_SHIFT)
-#define I40E_PFINT_ICR0_ENA_STORM_DETECT_SHIFT 24
-#define I40E_PFINT_ICR0_ENA_STORM_DETECT_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_STORM_DETECT_SHIFT)
-#define I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT 25
-#define I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT)
-#define I40E_PFINT_ICR0_ENA_HMC_ERR_SHIFT 26
-#define I40E_PFINT_ICR0_ENA_HMC_ERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_HMC_ERR_SHIFT)
-#define I40E_PFINT_ICR0_ENA_PE_CRITERR_SHIFT 28
-#define I40E_PFINT_ICR0_ENA_PE_CRITERR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_PE_CRITERR_SHIFT)
-#define I40E_PFINT_ICR0_ENA_VFLR_SHIFT 29
-#define I40E_PFINT_ICR0_ENA_VFLR_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_VFLR_SHIFT)
-#define I40E_PFINT_ICR0_ENA_ADMINQ_SHIFT 30
-#define I40E_PFINT_ICR0_ENA_ADMINQ_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_ADMINQ_SHIFT)
-#define I40E_PFINT_ICR0_ENA_RSVD_SHIFT 31
-#define I40E_PFINT_ICR0_ENA_RSVD_MASK I40E_MASK(0x1, I40E_PFINT_ICR0_ENA_RSVD_SHIFT)
-#define I40E_PFINT_ITR0(_i) (0x00038000 + ((_i) * 128)) /* _i=0...2 */ /* Reset: PFR */
-#define I40E_PFINT_ITR0_MAX_INDEX 2
-#define I40E_PFINT_ITR0_INTERVAL_SHIFT 0
-#define I40E_PFINT_ITR0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_ITR0_INTERVAL_SHIFT)
-#define I40E_PFINT_ITRN(_i, _INTPF) (0x00030000 + ((_i) * 2048 + (_INTPF) * 4)) /* _i=0...2, _INTPF=0...511 */ /* Reset: PFR */
-#define I40E_PFINT_ITRN_MAX_INDEX 2
-#define I40E_PFINT_ITRN_INTERVAL_SHIFT 0
-#define I40E_PFINT_ITRN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_PFINT_ITRN_INTERVAL_SHIFT)
-#define I40E_PFINT_LNKLST0 0x00038500 /* Reset: PFR */
-#define I40E_PFINT_LNKLST0_FIRSTQ_INDX_SHIFT 0
-#define I40E_PFINT_LNKLST0_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_PFINT_LNKLST0_FIRSTQ_INDX_SHIFT)
-#define I40E_PFINT_LNKLST0_FIRSTQ_TYPE_SHIFT 11
-#define I40E_PFINT_LNKLST0_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_PFINT_LNKLST0_FIRSTQ_TYPE_SHIFT)
-#define I40E_PFINT_LNKLSTN(_INTPF) (0x00035000 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */
-#define I40E_PFINT_LNKLSTN_MAX_INDEX 511
-#define I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT 0
-#define I40E_PFINT_LNKLSTN_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT)
-#define I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT 11
-#define I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)
-#define I40E_PFINT_RATE0 0x00038580 /* Reset: PFR */
-#define I40E_PFINT_RATE0_INTERVAL_SHIFT 0
-#define I40E_PFINT_RATE0_INTERVAL_MASK I40E_MASK(0x3F, I40E_PFINT_RATE0_INTERVAL_SHIFT)
-#define I40E_PFINT_RATE0_INTRL_ENA_SHIFT 6
-#define I40E_PFINT_RATE0_INTRL_ENA_MASK I40E_MASK(0x1, I40E_PFINT_RATE0_INTRL_ENA_SHIFT)
-#define I40E_PFINT_RATEN(_INTPF) (0x00035800 + ((_INTPF) * 4)) /* _i=0...511 */ /* Reset: PFR */
-#define I40E_PFINT_RATEN_MAX_INDEX 511
-#define I40E_PFINT_RATEN_INTERVAL_SHIFT 0
-#define I40E_PFINT_RATEN_INTERVAL_MASK I40E_MASK(0x3F, I40E_PFINT_RATEN_INTERVAL_SHIFT)
-#define I40E_PFINT_RATEN_INTRL_ENA_SHIFT 6
-#define I40E_PFINT_RATEN_INTRL_ENA_MASK I40E_MASK(0x1, I40E_PFINT_RATEN_INTRL_ENA_SHIFT)
-#define I40E_PFINT_STAT_CTL0 0x00038400 /* Reset: PFR */
-#define I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT 2
-#define I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_MASK I40E_MASK(0x3, I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT)
-#define I40E_QINT_RQCTL(_Q) (0x0003A000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */
-#define I40E_QINT_RQCTL_MAX_INDEX 1535
-#define I40E_QINT_RQCTL_MSIX_INDX_SHIFT 0
-#define I40E_QINT_RQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_QINT_RQCTL_MSIX_INDX_SHIFT)
-#define I40E_QINT_RQCTL_ITR_INDX_SHIFT 11
-#define I40E_QINT_RQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_QINT_RQCTL_ITR_INDX_SHIFT)
-#define I40E_QINT_RQCTL_MSIX0_INDX_SHIFT 13
-#define I40E_QINT_RQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_QINT_RQCTL_MSIX0_INDX_SHIFT)
-#define I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT 16
-#define I40E_QINT_RQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT)
-#define I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT 27
-#define I40E_QINT_RQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT)
-#define I40E_QINT_RQCTL_CAUSE_ENA_SHIFT 30
-#define I40E_QINT_RQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_QINT_RQCTL_CAUSE_ENA_SHIFT)
-#define I40E_QINT_RQCTL_INTEVENT_SHIFT 31
-#define I40E_QINT_RQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_QINT_RQCTL_INTEVENT_SHIFT)
-#define I40E_QINT_TQCTL(_Q) (0x0003C000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */
-#define I40E_QINT_TQCTL_MAX_INDEX 1535
-#define I40E_QINT_TQCTL_MSIX_INDX_SHIFT 0
-#define I40E_QINT_TQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_QINT_TQCTL_MSIX_INDX_SHIFT)
-#define I40E_QINT_TQCTL_ITR_INDX_SHIFT 11
-#define I40E_QINT_TQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_QINT_TQCTL_ITR_INDX_SHIFT)
-#define I40E_QINT_TQCTL_MSIX0_INDX_SHIFT 13
-#define I40E_QINT_TQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_QINT_TQCTL_MSIX0_INDX_SHIFT)
-#define I40E_QINT_TQCTL_NEXTQ_INDX_SHIFT 16
-#define I40E_QINT_TQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_QINT_TQCTL_NEXTQ_INDX_SHIFT)
-#define I40E_QINT_TQCTL_NEXTQ_TYPE_SHIFT 27
-#define I40E_QINT_TQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_QINT_TQCTL_NEXTQ_TYPE_SHIFT)
-#define I40E_QINT_TQCTL_CAUSE_ENA_SHIFT 30
-#define I40E_QINT_TQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_QINT_TQCTL_CAUSE_ENA_SHIFT)
-#define I40E_QINT_TQCTL_INTEVENT_SHIFT 31
-#define I40E_QINT_TQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_QINT_TQCTL_INTEVENT_SHIFT)
-#define I40E_VFINT_DYN_CTL0(_VF) (0x0002A400 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
-#define I40E_VFINT_DYN_CTL0_MAX_INDEX 127
-#define I40E_VFINT_DYN_CTL0_INTENA_SHIFT 0
-#define I40E_VFINT_DYN_CTL0_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_INTENA_SHIFT)
-#define I40E_VFINT_DYN_CTL0_CLEARPBA_SHIFT 1
-#define I40E_VFINT_DYN_CTL0_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_CLEARPBA_SHIFT)
-#define I40E_VFINT_DYN_CTL0_SWINT_TRIG_SHIFT 2
-#define I40E_VFINT_DYN_CTL0_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_SWINT_TRIG_SHIFT)
-#define I40E_VFINT_DYN_CTL0_ITR_INDX_SHIFT 3
-#define I40E_VFINT_DYN_CTL0_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL0_ITR_INDX_SHIFT)
-#define I40E_VFINT_DYN_CTL0_INTERVAL_SHIFT 5
-#define I40E_VFINT_DYN_CTL0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTL0_INTERVAL_SHIFT)
-#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT 24
-#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_SW_ITR_INDX_ENA_SHIFT)
-#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_SHIFT 25
-#define I40E_VFINT_DYN_CTL0_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL0_SW_ITR_INDX_SHIFT)
-#define I40E_VFINT_DYN_CTL0_INTENA_MSK_SHIFT 31
-#define I40E_VFINT_DYN_CTL0_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL0_INTENA_MSK_SHIFT)
-#define I40E_VFINT_DYN_CTLN(_INTVF) (0x00024800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
-#define I40E_VFINT_DYN_CTLN_MAX_INDEX 511
-#define I40E_VFINT_DYN_CTLN_INTENA_SHIFT 0
-#define I40E_VFINT_DYN_CTLN_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_INTENA_SHIFT)
-#define I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT 1
-#define I40E_VFINT_DYN_CTLN_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_CLEARPBA_SHIFT)
-#define I40E_VFINT_DYN_CTLN_SWINT_TRIG_SHIFT 2
-#define I40E_VFINT_DYN_CTLN_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_SWINT_TRIG_SHIFT)
-#define I40E_VFINT_DYN_CTLN_ITR_INDX_SHIFT 3
-#define I40E_VFINT_DYN_CTLN_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN_ITR_INDX_SHIFT)
-#define I40E_VFINT_DYN_CTLN_INTERVAL_SHIFT 5
-#define I40E_VFINT_DYN_CTLN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTLN_INTERVAL_SHIFT)
-#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT 24
-#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_SW_ITR_INDX_ENA_SHIFT)
-#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_SHIFT 25
-#define I40E_VFINT_DYN_CTLN_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN_SW_ITR_INDX_SHIFT)
-#define I40E_VFINT_DYN_CTLN_INTENA_MSK_SHIFT 31
-#define I40E_VFINT_DYN_CTLN_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN_INTENA_MSK_SHIFT)
-#define I40E_VFINT_ICR0(_VF) (0x0002BC00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
-#define I40E_VFINT_ICR0_MAX_INDEX 127
-#define I40E_VFINT_ICR0_INTEVENT_SHIFT 0
-#define I40E_VFINT_ICR0_INTEVENT_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_INTEVENT_SHIFT)
-#define I40E_VFINT_ICR0_QUEUE_0_SHIFT 1
-#define I40E_VFINT_ICR0_QUEUE_0_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_0_SHIFT)
-#define I40E_VFINT_ICR0_QUEUE_1_SHIFT 2
-#define I40E_VFINT_ICR0_QUEUE_1_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_1_SHIFT)
-#define I40E_VFINT_ICR0_QUEUE_2_SHIFT 3
-#define I40E_VFINT_ICR0_QUEUE_2_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_2_SHIFT)
-#define I40E_VFINT_ICR0_QUEUE_3_SHIFT 4
-#define I40E_VFINT_ICR0_QUEUE_3_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_QUEUE_3_SHIFT)
-#define I40E_VFINT_ICR0_LINK_STAT_CHANGE_SHIFT 25
-#define I40E_VFINT_ICR0_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_LINK_STAT_CHANGE_SHIFT)
-#define I40E_VFINT_ICR0_ADMINQ_SHIFT 30
-#define I40E_VFINT_ICR0_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ADMINQ_SHIFT)
-#define I40E_VFINT_ICR0_SWINT_SHIFT 31
-#define I40E_VFINT_ICR0_SWINT_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_SWINT_SHIFT)
-#define I40E_VFINT_ICR0_ENA(_VF) (0x0002C000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
-#define I40E_VFINT_ICR0_ENA_MAX_INDEX 127
-#define I40E_VFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT 25
-#define I40E_VFINT_ICR0_ENA_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA_LINK_STAT_CHANGE_SHIFT)
-#define I40E_VFINT_ICR0_ENA_ADMINQ_SHIFT 30
-#define I40E_VFINT_ICR0_ENA_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA_ADMINQ_SHIFT)
-#define I40E_VFINT_ICR0_ENA_RSVD_SHIFT 31
-#define I40E_VFINT_ICR0_ENA_RSVD_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA_RSVD_SHIFT)
-#define I40E_VFINT_ITR0(_i, _VF) (0x00028000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...2, _VF=0...127 */ /* Reset: VFR */
-#define I40E_VFINT_ITR0_MAX_INDEX 2
-#define I40E_VFINT_ITR0_INTERVAL_SHIFT 0
-#define I40E_VFINT_ITR0_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITR0_INTERVAL_SHIFT)
-#define I40E_VFINT_ITRN(_i, _INTVF) (0x00020000 + ((_i) * 2048 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...511 */ /* Reset: VFR */
-#define I40E_VFINT_ITRN_MAX_INDEX 2
-#define I40E_VFINT_ITRN_INTERVAL_SHIFT 0
-#define I40E_VFINT_ITRN_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITRN_INTERVAL_SHIFT)
-#define I40E_VFINT_STAT_CTL0(_VF) (0x0002A000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
-#define I40E_VFINT_STAT_CTL0_MAX_INDEX 127
-#define I40E_VFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT 2
-#define I40E_VFINT_STAT_CTL0_OTHER_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_STAT_CTL0_OTHER_ITR_INDX_SHIFT)
-#define I40E_VPINT_AEQCTL(_VF) (0x0002B800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */
-#define I40E_VPINT_AEQCTL_MAX_INDEX 127
-#define I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT 0
-#define I40E_VPINT_AEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_VPINT_AEQCTL_MSIX_INDX_SHIFT)
-#define I40E_VPINT_AEQCTL_ITR_INDX_SHIFT 11
-#define I40E_VPINT_AEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_VPINT_AEQCTL_ITR_INDX_SHIFT)
-#define I40E_VPINT_AEQCTL_MSIX0_INDX_SHIFT 13
-#define I40E_VPINT_AEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_VPINT_AEQCTL_MSIX0_INDX_SHIFT)
-#define I40E_VPINT_AEQCTL_CAUSE_ENA_SHIFT 30
-#define I40E_VPINT_AEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_VPINT_AEQCTL_CAUSE_ENA_SHIFT)
-#define I40E_VPINT_AEQCTL_INTEVENT_SHIFT 31
-#define I40E_VPINT_AEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_VPINT_AEQCTL_INTEVENT_SHIFT)
-#define I40E_VPINT_CEQCTL(_INTVF) (0x00026800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: CORER */
-#define I40E_VPINT_CEQCTL_MAX_INDEX 511
-#define I40E_VPINT_CEQCTL_MSIX_INDX_SHIFT 0
-#define I40E_VPINT_CEQCTL_MSIX_INDX_MASK I40E_MASK(0xFF, I40E_VPINT_CEQCTL_MSIX_INDX_SHIFT)
-#define I40E_VPINT_CEQCTL_ITR_INDX_SHIFT 11
-#define I40E_VPINT_CEQCTL_ITR_INDX_MASK I40E_MASK(0x3, I40E_VPINT_CEQCTL_ITR_INDX_SHIFT)
-#define I40E_VPINT_CEQCTL_MSIX0_INDX_SHIFT 13
-#define I40E_VPINT_CEQCTL_MSIX0_INDX_MASK I40E_MASK(0x7, I40E_VPINT_CEQCTL_MSIX0_INDX_SHIFT)
-#define I40E_VPINT_CEQCTL_NEXTQ_INDX_SHIFT 16
-#define I40E_VPINT_CEQCTL_NEXTQ_INDX_MASK I40E_MASK(0x7FF, I40E_VPINT_CEQCTL_NEXTQ_INDX_SHIFT)
-#define I40E_VPINT_CEQCTL_NEXTQ_TYPE_SHIFT 27
-#define I40E_VPINT_CEQCTL_NEXTQ_TYPE_MASK I40E_MASK(0x3, I40E_VPINT_CEQCTL_NEXTQ_TYPE_SHIFT)
-#define I40E_VPINT_CEQCTL_CAUSE_ENA_SHIFT 30
-#define I40E_VPINT_CEQCTL_CAUSE_ENA_MASK I40E_MASK(0x1, I40E_VPINT_CEQCTL_CAUSE_ENA_SHIFT)
-#define I40E_VPINT_CEQCTL_INTEVENT_SHIFT 31
-#define I40E_VPINT_CEQCTL_INTEVENT_MASK I40E_MASK(0x1, I40E_VPINT_CEQCTL_INTEVENT_SHIFT)
-#define I40E_VPINT_LNKLST0(_VF) (0x0002A800 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
-#define I40E_VPINT_LNKLST0_MAX_INDEX 127
-#define I40E_VPINT_LNKLST0_FIRSTQ_INDX_SHIFT 0
-#define I40E_VPINT_LNKLST0_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_VPINT_LNKLST0_FIRSTQ_INDX_SHIFT)
-#define I40E_VPINT_LNKLST0_FIRSTQ_TYPE_SHIFT 11
-#define I40E_VPINT_LNKLST0_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_VPINT_LNKLST0_FIRSTQ_TYPE_SHIFT)
-#define I40E_VPINT_LNKLSTN(_INTVF) (0x00025000 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
-#define I40E_VPINT_LNKLSTN_MAX_INDEX 511
-#define I40E_VPINT_LNKLSTN_FIRSTQ_INDX_SHIFT 0
-#define I40E_VPINT_LNKLSTN_FIRSTQ_INDX_MASK I40E_MASK(0x7FF, I40E_VPINT_LNKLSTN_FIRSTQ_INDX_SHIFT)
-#define I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_SHIFT 11
-#define I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_MASK I40E_MASK(0x3, I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)
-#define I40E_VPINT_RATE0(_VF) (0x0002AC00 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
-#define I40E_VPINT_RATE0_MAX_INDEX 127
-#define I40E_VPINT_RATE0_INTERVAL_SHIFT 0
-#define I40E_VPINT_RATE0_INTERVAL_MASK I40E_MASK(0x3F, I40E_VPINT_RATE0_INTERVAL_SHIFT)
-#define I40E_VPINT_RATE0_INTRL_ENA_SHIFT 6
-#define I40E_VPINT_RATE0_INTRL_ENA_MASK I40E_MASK(0x1, I40E_VPINT_RATE0_INTRL_ENA_SHIFT)
-#define I40E_VPINT_RATEN(_INTVF) (0x00025800 + ((_INTVF) * 4)) /* _i=0...511 */ /* Reset: VFR */
-#define I40E_VPINT_RATEN_MAX_INDEX 511
-#define I40E_VPINT_RATEN_INTERVAL_SHIFT 0
-#define I40E_VPINT_RATEN_INTERVAL_MASK I40E_MASK(0x3F, I40E_VPINT_RATEN_INTERVAL_SHIFT)
-#define I40E_VPINT_RATEN_INTRL_ENA_SHIFT 6
-#define I40E_VPINT_RATEN_INTRL_ENA_MASK I40E_MASK(0x1, I40E_VPINT_RATEN_INTRL_ENA_SHIFT)
-#define I40E_GL_RDPU_CNTRL 0x00051060 /* Reset: CORER */
-#define I40E_GL_RDPU_CNTRL_RX_PAD_EN_SHIFT 0
-#define I40E_GL_RDPU_CNTRL_RX_PAD_EN_MASK I40E_MASK(0x1, I40E_GL_RDPU_CNTRL_RX_PAD_EN_SHIFT)
-#define I40E_GL_RDPU_CNTRL_ECO_SHIFT 1
-#define I40E_GL_RDPU_CNTRL_ECO_MASK I40E_MASK(0x7FFFFFFF, I40E_GL_RDPU_CNTRL_ECO_SHIFT)
-#define I40E_GLLAN_RCTL_0 0x0012A500 /* Reset: CORER */
-#define I40E_GLLAN_RCTL_0_PXE_MODE_SHIFT 0
-#define I40E_GLLAN_RCTL_0_PXE_MODE_MASK I40E_MASK(0x1, I40E_GLLAN_RCTL_0_PXE_MODE_SHIFT)
-#define I40E_GLLAN_TSOMSK_F 0x000442D8 /* Reset: CORER */
-#define I40E_GLLAN_TSOMSK_F_TCPMSKF_SHIFT 0
-#define I40E_GLLAN_TSOMSK_F_TCPMSKF_MASK I40E_MASK(0xFFF, I40E_GLLAN_TSOMSK_F_TCPMSKF_SHIFT)
-#define I40E_GLLAN_TSOMSK_L 0x000442E0 /* Reset: CORER */
-#define I40E_GLLAN_TSOMSK_L_TCPMSKL_SHIFT 0
-#define I40E_GLLAN_TSOMSK_L_TCPMSKL_MASK I40E_MASK(0xFFF, I40E_GLLAN_TSOMSK_L_TCPMSKL_SHIFT)
-#define I40E_GLLAN_TSOMSK_M 0x000442DC /* Reset: CORER */
-#define I40E_GLLAN_TSOMSK_M_TCPMSKM_SHIFT 0
-#define I40E_GLLAN_TSOMSK_M_TCPMSKM_MASK I40E_MASK(0xFFF, I40E_GLLAN_TSOMSK_M_TCPMSKM_SHIFT)
-#define I40E_GLLAN_TXPRE_QDIS(_i) (0x000e6500 + ((_i) * 4)) /* _i=0...11 */ /* Reset: CORER */
-#define I40E_GLLAN_TXPRE_QDIS_MAX_INDEX 11
-#define I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT 0
-#define I40E_GLLAN_TXPRE_QDIS_QINDX_MASK I40E_MASK(0x7FF, I40E_GLLAN_TXPRE_QDIS_QINDX_SHIFT)
-#define I40E_GLLAN_TXPRE_QDIS_QDIS_STAT_SHIFT 16
-#define I40E_GLLAN_TXPRE_QDIS_QDIS_STAT_MASK I40E_MASK(0x1, I40E_GLLAN_TXPRE_QDIS_QDIS_STAT_SHIFT)
-#define I40E_GLLAN_TXPRE_QDIS_SET_QDIS_SHIFT 30
-#define I40E_GLLAN_TXPRE_QDIS_SET_QDIS_MASK I40E_MASK(0x1, I40E_GLLAN_TXPRE_QDIS_SET_QDIS_SHIFT)
-#define I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_SHIFT 31
-#define I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_MASK I40E_MASK(0x1, I40E_GLLAN_TXPRE_QDIS_CLEAR_QDIS_SHIFT)
-#define I40E_PFLAN_QALLOC 0x001C0400 /* Reset: CORER */
-#define I40E_PFLAN_QALLOC_FIRSTQ_SHIFT 0
-#define I40E_PFLAN_QALLOC_FIRSTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_FIRSTQ_SHIFT)
-#define I40E_PFLAN_QALLOC_LASTQ_SHIFT 16
-#define I40E_PFLAN_QALLOC_LASTQ_MASK I40E_MASK(0x7FF, I40E_PFLAN_QALLOC_LASTQ_SHIFT)
-#define I40E_PFLAN_QALLOC_VALID_SHIFT 31
-#define I40E_PFLAN_QALLOC_VALID_MASK I40E_MASK(0x1, I40E_PFLAN_QALLOC_VALID_SHIFT)
-#define I40E_QRX_ENA(_Q) (0x00120000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: PFR */
-#define I40E_QRX_ENA_MAX_INDEX 1535
-#define I40E_QRX_ENA_QENA_REQ_SHIFT 0 -#define I40E_QRX_ENA_QENA_REQ_MASK I40E_MASK(0x1, I40E_QRX_ENA_QENA_REQ_SHIFT) -#define I40E_QRX_ENA_FAST_QDIS_SHIFT 1 -#define I40E_QRX_ENA_FAST_QDIS_MASK I40E_MASK(0x1, I40E_QRX_ENA_FAST_QDIS_SHIFT) -#define I40E_QRX_ENA_QENA_STAT_SHIFT 2 -#define I40E_QRX_ENA_QENA_STAT_MASK I40E_MASK(0x1, I40E_QRX_ENA_QENA_STAT_SHIFT) -#define I40E_QRX_TAIL(_Q) (0x00128000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */ -#define I40E_QRX_TAIL_MAX_INDEX 1535 -#define I40E_QRX_TAIL_TAIL_SHIFT 0 -#define I40E_QRX_TAIL_TAIL_MASK I40E_MASK(0x1FFF, I40E_QRX_TAIL_TAIL_SHIFT) -#define I40E_QTX_CTL(_Q) (0x00104000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */ -#define I40E_QTX_CTL_MAX_INDEX 1535 -#define I40E_QTX_CTL_PFVF_Q_SHIFT 0 -#define I40E_QTX_CTL_PFVF_Q_MASK I40E_MASK(0x3, I40E_QTX_CTL_PFVF_Q_SHIFT) -#define I40E_QTX_CTL_PF_INDX_SHIFT 2 -#define I40E_QTX_CTL_PF_INDX_MASK I40E_MASK(0xF, I40E_QTX_CTL_PF_INDX_SHIFT) -#define I40E_QTX_CTL_VFVM_INDX_SHIFT 7 -#define I40E_QTX_CTL_VFVM_INDX_MASK I40E_MASK(0x1FF, I40E_QTX_CTL_VFVM_INDX_SHIFT) -#define I40E_QTX_ENA(_Q) (0x00100000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: PFR */ -#define I40E_QTX_ENA_MAX_INDEX 1535 -#define I40E_QTX_ENA_QENA_REQ_SHIFT 0 -#define I40E_QTX_ENA_QENA_REQ_MASK I40E_MASK(0x1, I40E_QTX_ENA_QENA_REQ_SHIFT) -#define I40E_QTX_ENA_FAST_QDIS_SHIFT 1 -#define I40E_QTX_ENA_FAST_QDIS_MASK I40E_MASK(0x1, I40E_QTX_ENA_FAST_QDIS_SHIFT) -#define I40E_QTX_ENA_QENA_STAT_SHIFT 2 -#define I40E_QTX_ENA_QENA_STAT_MASK I40E_MASK(0x1, I40E_QTX_ENA_QENA_STAT_SHIFT) -#define I40E_QTX_HEAD(_Q) (0x000E4000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: CORER */ -#define I40E_QTX_HEAD_MAX_INDEX 1535 -#define I40E_QTX_HEAD_HEAD_SHIFT 0 -#define I40E_QTX_HEAD_HEAD_MASK I40E_MASK(0x1FFF, I40E_QTX_HEAD_HEAD_SHIFT) -#define I40E_QTX_HEAD_RS_PENDING_SHIFT 16 -#define I40E_QTX_HEAD_RS_PENDING_MASK I40E_MASK(0x1, I40E_QTX_HEAD_RS_PENDING_SHIFT) -#define I40E_QTX_TAIL(_Q) (0x00108000 + ((_Q) * 4)) /* _i=0...1535 */ /* Reset: PFR */ -#define I40E_QTX_TAIL_MAX_INDEX 1535 -#define I40E_QTX_TAIL_TAIL_SHIFT 0 -#define I40E_QTX_TAIL_TAIL_MASK I40E_MASK(0x1FFF, I40E_QTX_TAIL_TAIL_SHIFT) -#define I40E_VPLAN_MAPENA(_VF) (0x00074000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */ -#define I40E_VPLAN_MAPENA_MAX_INDEX 127 -#define I40E_VPLAN_MAPENA_TXRX_ENA_SHIFT 0 -#define I40E_VPLAN_MAPENA_TXRX_ENA_MASK I40E_MASK(0x1, I40E_VPLAN_MAPENA_TXRX_ENA_SHIFT) -#define I40E_VPLAN_QTABLE(_i, _VF) (0x00070000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: VFR */ -#define I40E_VPLAN_QTABLE_MAX_INDEX 15 -#define I40E_VPLAN_QTABLE_QINDEX_SHIFT 0 -#define I40E_VPLAN_QTABLE_QINDEX_MASK I40E_MASK(0x7FF, I40E_VPLAN_QTABLE_QINDEX_SHIFT) -#define I40E_VSILAN_QBASE(_VSI) (0x0020C800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */ -#define I40E_VSILAN_QBASE_MAX_INDEX 383 -#define I40E_VSILAN_QBASE_VSIBASE_SHIFT 0 -#define I40E_VSILAN_QBASE_VSIBASE_MASK I40E_MASK(0x7FF, I40E_VSILAN_QBASE_VSIBASE_SHIFT) -#define I40E_VSILAN_QBASE_VSIQTABLE_ENA_SHIFT 11 -#define I40E_VSILAN_QBASE_VSIQTABLE_ENA_MASK I40E_MASK(0x1, I40E_VSILAN_QBASE_VSIQTABLE_ENA_SHIFT) -#define I40E_VSILAN_QTABLE(_i, _VSI) (0x00200000 + ((_i) * 2048 + (_VSI) * 4)) /* _i=0...7, _VSI=0...383 */ /* Reset: PFR */ -#define I40E_VSILAN_QTABLE_MAX_INDEX 7 -#define I40E_VSILAN_QTABLE_QINDEX_0_SHIFT 0 -#define I40E_VSILAN_QTABLE_QINDEX_0_MASK I40E_MASK(0x7FF, I40E_VSILAN_QTABLE_QINDEX_0_SHIFT) -#define I40E_VSILAN_QTABLE_QINDEX_1_SHIFT 16 -#define I40E_VSILAN_QTABLE_QINDEX_1_MASK 
I40E_MASK(0x7FF, I40E_VSILAN_QTABLE_QINDEX_1_SHIFT) -#define I40E_PRTGL_SAH 0x001E2140 /* Reset: GLOBR */ -#define I40E_PRTGL_SAH_FC_SAH_SHIFT 0 -#define I40E_PRTGL_SAH_FC_SAH_MASK I40E_MASK(0xFFFF, I40E_PRTGL_SAH_FC_SAH_SHIFT) -#define I40E_PRTGL_SAH_MFS_SHIFT 16 -#define I40E_PRTGL_SAH_MFS_MASK I40E_MASK(0xFFFF, I40E_PRTGL_SAH_MFS_SHIFT) -#define I40E_PRTGL_SAL 0x001E2120 /* Reset: GLOBR */ -#define I40E_PRTGL_SAL_FC_SAL_SHIFT 0 -#define I40E_PRTGL_SAL_FC_SAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTGL_SAL_FC_SAL_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP 0x001E30E0 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GCP_HSEC_CTL_RX_ENABLE_GCP_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP 0x001E3260 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_GPP_HSEC_CTL_RX_ENABLE_GPP_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP 0x001E32E0 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_ENABLE_PPP_HSEC_CTL_RX_ENABLE_PPP_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL 0x001E3360 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_MASK I40E_MASK(0x1, I40E_PRTMAC_HSEC_CTL_RX_FORWARD_CONTROL_HSEC_CTL_RX_FORWARD_CONTROL_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1 0x001E3110 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_HSEC_CTL_RX_PAUSE_DA_UCAST_PART1_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2 0x001E3120 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_HSEC_CTL_RX_PAUSE_DA_UCAST_PART2_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE 0x001E30C0 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_MASK I40E_MASK(0x1FF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_ENABLE_HSEC_CTL_RX_PAUSE_ENABLE_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1 0x001E3140 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART1_HSEC_CTL_RX_PAUSE_SA_PART1_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2 0x001E3150 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_RX_PAUSE_SA_PART2_HSEC_CTL_RX_PAUSE_SA_PART2_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE 0x001E30D0 
/* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_MASK I40E_MASK(0x1FF, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_ENABLE_HSEC_CTL_TX_PAUSE_ENABLE_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i) (0x001E3370 + ((_i) * 16)) /* _i=0...8 */ /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8 -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3400 + ((_i) * 16)) /* _i=0...8 */ /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MAX_INDEX 8 -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART1 0x001E34B0 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTMAC_HSEC_CTL_TX_SA_PART1_HSEC_CTL_TX_SA_PART1_SHIFT) -#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART2 0x001E34C0 /* Reset: GLOBR */ -#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_SHIFT 0 -#define I40E_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_MASK I40E_MASK(0xFFFF, I40E_PRTMAC_HSEC_CTL_TX_SA_PART2_HSEC_CTL_TX_SA_PART2_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A 0x0008C480 /* Reset: GLOBR */ -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE3_SHIFT 0 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE3_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE2_SHIFT 2 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE2_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE1_SHIFT 4 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE1_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE0_SHIFT 6 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_TX_LANE0_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE3_SHIFT 8 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE3_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE2_SHIFT 10 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE2_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE1_SHIFT 12 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE1_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE0_SHIFT 14 -#define I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_A_SWAP_RX_LANE0_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B 0x0008C484 /* Reset: GLOBR */ -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE3_SHIFT 0 -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE3_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE2_SHIFT 2 -#define 
I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE2_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE1_SHIFT 4 -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE1_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE0_SHIFT 6 -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_TX_LANE0_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE3_SHIFT 8 -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE3_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE3_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE2_SHIFT 10 -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE2_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE2_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE1_SHIFT 12 -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE1_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE1_SHIFT) -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE0_SHIFT 14 -#define I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE0_MASK I40E_MASK(0x3, I40E_PRTMAC_PCS_XAUI_SWAP_B_SWAP_RX_LANE0_SHIFT) -#define I40E_GL_FWRESETCNT 0x00083100 /* Reset: POR */ -#define I40E_GL_FWRESETCNT_FWRESETCNT_SHIFT 0 -#define I40E_GL_FWRESETCNT_FWRESETCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FWRESETCNT_FWRESETCNT_SHIFT) -#define I40E_GL_MNG_FWSM 0x000B6134 /* Reset: POR */ -#define I40E_GL_MNG_FWSM_FW_MODES_SHIFT 0 -#define I40E_GL_MNG_FWSM_FW_MODES_MASK I40E_MASK(0x3, I40E_GL_MNG_FWSM_FW_MODES_SHIFT) -#define I40E_GL_MNG_FWSM_EEP_RELOAD_IND_SHIFT 10 -#define I40E_GL_MNG_FWSM_EEP_RELOAD_IND_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_EEP_RELOAD_IND_SHIFT) -#define I40E_GL_MNG_FWSM_CRC_ERROR_MODULE_SHIFT 11 -#define I40E_GL_MNG_FWSM_CRC_ERROR_MODULE_MASK I40E_MASK(0xF, I40E_GL_MNG_FWSM_CRC_ERROR_MODULE_SHIFT) -#define I40E_GL_MNG_FWSM_FW_STATUS_VALID_SHIFT 15 -#define I40E_GL_MNG_FWSM_FW_STATUS_VALID_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_FW_STATUS_VALID_SHIFT) -#define I40E_GL_MNG_FWSM_RESET_CNT_SHIFT 16 -#define I40E_GL_MNG_FWSM_RESET_CNT_MASK I40E_MASK(0x7, I40E_GL_MNG_FWSM_RESET_CNT_SHIFT) -#define I40E_GL_MNG_FWSM_EXT_ERR_IND_SHIFT 19 -#define I40E_GL_MNG_FWSM_EXT_ERR_IND_MASK I40E_MASK(0x3F, I40E_GL_MNG_FWSM_EXT_ERR_IND_SHIFT) -#define I40E_GL_MNG_FWSM_PHY_SERDES0_CONFIG_ERR_SHIFT 26 -#define I40E_GL_MNG_FWSM_PHY_SERDES0_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES0_CONFIG_ERR_SHIFT) -#define I40E_GL_MNG_FWSM_PHY_SERDES1_CONFIG_ERR_SHIFT 27 -#define I40E_GL_MNG_FWSM_PHY_SERDES1_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES1_CONFIG_ERR_SHIFT) -#define I40E_GL_MNG_FWSM_PHY_SERDES2_CONFIG_ERR_SHIFT 28 -#define I40E_GL_MNG_FWSM_PHY_SERDES2_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES2_CONFIG_ERR_SHIFT) -#define I40E_GL_MNG_FWSM_PHY_SERDES3_CONFIG_ERR_SHIFT 29 -#define I40E_GL_MNG_FWSM_PHY_SERDES3_CONFIG_ERR_MASK I40E_MASK(0x1, I40E_GL_MNG_FWSM_PHY_SERDES3_CONFIG_ERR_SHIFT) -#define I40E_GL_MNG_HWARB_CTRL 0x000B6130 /* Reset: POR */ -#define I40E_GL_MNG_HWARB_CTRL_NCSI_ARB_EN_SHIFT 0 -#define I40E_GL_MNG_HWARB_CTRL_NCSI_ARB_EN_MASK I40E_MASK(0x1, I40E_GL_MNG_HWARB_CTRL_NCSI_ARB_EN_SHIFT) -#define I40E_PRT_MNG_FTFT_DATA(_i) (0x000852A0 + ((_i) * 32)) /* _i=0...31 */ /* Reset: POR */ -#define I40E_PRT_MNG_FTFT_DATA_MAX_INDEX 31 -#define I40E_PRT_MNG_FTFT_DATA_DWORD_SHIFT 0 -#define I40E_PRT_MNG_FTFT_DATA_DWORD_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_FTFT_DATA_DWORD_SHIFT) -#define I40E_PRT_MNG_FTFT_LENGTH 
0x00085260 /* Reset: POR */ -#define I40E_PRT_MNG_FTFT_LENGTH_LENGTH_SHIFT 0 -#define I40E_PRT_MNG_FTFT_LENGTH_LENGTH_MASK I40E_MASK(0xFF, I40E_PRT_MNG_FTFT_LENGTH_LENGTH_SHIFT) -#define I40E_PRT_MNG_FTFT_MASK(_i) (0x00085160 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */ -#define I40E_PRT_MNG_FTFT_MASK_MAX_INDEX 7 -#define I40E_PRT_MNG_FTFT_MASK_MASK_SHIFT 0 -#define I40E_PRT_MNG_FTFT_MASK_MASK_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_FTFT_MASK_MASK_SHIFT) -#define I40E_PRT_MNG_MANC 0x00256A20 /* Reset: POR */ -#define I40E_PRT_MNG_MANC_FLOW_CONTROL_DISCARD_SHIFT 0 -#define I40E_PRT_MNG_MANC_FLOW_CONTROL_DISCARD_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_FLOW_CONTROL_DISCARD_SHIFT) -#define I40E_PRT_MNG_MANC_NCSI_DISCARD_SHIFT 1 -#define I40E_PRT_MNG_MANC_NCSI_DISCARD_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_NCSI_DISCARD_SHIFT) -#define I40E_PRT_MNG_MANC_RCV_TCO_EN_SHIFT 17 -#define I40E_PRT_MNG_MANC_RCV_TCO_EN_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_RCV_TCO_EN_SHIFT) -#define I40E_PRT_MNG_MANC_RCV_ALL_SHIFT 19 -#define I40E_PRT_MNG_MANC_RCV_ALL_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_RCV_ALL_SHIFT) -#define I40E_PRT_MNG_MANC_FIXED_NET_TYPE_SHIFT 25 -#define I40E_PRT_MNG_MANC_FIXED_NET_TYPE_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_FIXED_NET_TYPE_SHIFT) -#define I40E_PRT_MNG_MANC_NET_TYPE_SHIFT 26 -#define I40E_PRT_MNG_MANC_NET_TYPE_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_NET_TYPE_SHIFT) -#define I40E_PRT_MNG_MANC_EN_BMC2OS_SHIFT 28 -#define I40E_PRT_MNG_MANC_EN_BMC2OS_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_EN_BMC2OS_SHIFT) -#define I40E_PRT_MNG_MANC_EN_BMC2NET_SHIFT 29 -#define I40E_PRT_MNG_MANC_EN_BMC2NET_MASK I40E_MASK(0x1, I40E_PRT_MNG_MANC_EN_BMC2NET_SHIFT) -#define I40E_PRT_MNG_MAVTV(_i) (0x00255900 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */ -#define I40E_PRT_MNG_MAVTV_MAX_INDEX 7 -#define I40E_PRT_MNG_MAVTV_VID_SHIFT 0 -#define I40E_PRT_MNG_MAVTV_VID_MASK I40E_MASK(0xFFF, I40E_PRT_MNG_MAVTV_VID_SHIFT) -#define I40E_PRT_MNG_MDEF(_i) (0x00255D00 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */ -#define I40E_PRT_MNG_MDEF_MAX_INDEX 7 -#define I40E_PRT_MNG_MDEF_MAC_EXACT_AND_SHIFT 0 -#define I40E_PRT_MNG_MDEF_MAC_EXACT_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_MAC_EXACT_AND_SHIFT) -#define I40E_PRT_MNG_MDEF_BROADCAST_AND_SHIFT 4 -#define I40E_PRT_MNG_MDEF_BROADCAST_AND_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_BROADCAST_AND_SHIFT) -#define I40E_PRT_MNG_MDEF_VLAN_AND_SHIFT 5 -#define I40E_PRT_MNG_MDEF_VLAN_AND_MASK I40E_MASK(0xFF, I40E_PRT_MNG_MDEF_VLAN_AND_SHIFT) -#define I40E_PRT_MNG_MDEF_IPV4_ADDRESS_AND_SHIFT 13 -#define I40E_PRT_MNG_MDEF_IPV4_ADDRESS_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_IPV4_ADDRESS_AND_SHIFT) -#define I40E_PRT_MNG_MDEF_IPV6_ADDRESS_AND_SHIFT 17 -#define I40E_PRT_MNG_MDEF_IPV6_ADDRESS_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_IPV6_ADDRESS_AND_SHIFT) -#define I40E_PRT_MNG_MDEF_MAC_EXACT_OR_SHIFT 21 -#define I40E_PRT_MNG_MDEF_MAC_EXACT_OR_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_MAC_EXACT_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_BROADCAST_OR_SHIFT 25 -#define I40E_PRT_MNG_MDEF_BROADCAST_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_BROADCAST_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_MULTICAST_AND_SHIFT 26 -#define I40E_PRT_MNG_MDEF_MULTICAST_AND_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_MULTICAST_AND_SHIFT) -#define I40E_PRT_MNG_MDEF_ARP_REQUEST_OR_SHIFT 27 -#define I40E_PRT_MNG_MDEF_ARP_REQUEST_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_ARP_REQUEST_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_ARP_RESPONSE_OR_SHIFT 28 -#define I40E_PRT_MNG_MDEF_ARP_RESPONSE_OR_MASK I40E_MASK(0x1, 
I40E_PRT_MNG_MDEF_ARP_RESPONSE_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_SHIFT 29 -#define I40E_PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_NEIGHBOR_DISCOVERY_134_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_PORT_0X298_OR_SHIFT 30 -#define I40E_PRT_MNG_MDEF_PORT_0X298_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_PORT_0X298_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_PORT_0X26F_OR_SHIFT 31 -#define I40E_PRT_MNG_MDEF_PORT_0X26F_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_PORT_0X26F_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT(_i) (0x00255F00 + ((_i) * 32)) /* _i=0...7 */ /* Reset: POR */ -#define I40E_PRT_MNG_MDEF_EXT_MAX_INDEX 7 -#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_SHIFT 0 -#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_AND_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_SHIFT 4 -#define I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_MASK I40E_MASK(0xF, I40E_PRT_MNG_MDEF_EXT_L2_ETHERTYPE_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_FLEX_PORT_OR_SHIFT 8 -#define I40E_PRT_MNG_MDEF_EXT_FLEX_PORT_OR_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MDEF_EXT_FLEX_PORT_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_FLEX_TCO_SHIFT 24 -#define I40E_PRT_MNG_MDEF_EXT_FLEX_TCO_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_FLEX_TCO_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_SHIFT 25 -#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_135_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_SHIFT 26 -#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_136_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_SHIFT 27 -#define I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_NEIGHBOR_DISCOVERY_137_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_ICMP_OR_SHIFT 28 -#define I40E_PRT_MNG_MDEF_EXT_ICMP_OR_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_ICMP_OR_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_MLD_SHIFT 29 -#define I40E_PRT_MNG_MDEF_EXT_MLD_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_MLD_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_SHIFT 30 -#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_APPLY_TO_NETWORK_TRAFFIC_SHIFT) -#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_SHIFT 31 -#define I40E_PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_MASK I40E_MASK(0x1, I40E_PRT_MNG_MDEF_EXT_APPLY_TO_HOST_TRAFFIC_SHIFT) -#define I40E_PRT_MNG_MDEFVSI(_i) (0x00256580 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_PRT_MNG_MDEFVSI_MAX_INDEX 3 -#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2N_SHIFT 0 -#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2N_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MDEFVSI_MDEFVSI_2N_SHIFT) -#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2NP1_SHIFT 16 -#define I40E_PRT_MNG_MDEFVSI_MDEFVSI_2NP1_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MDEFVSI_MDEFVSI_2NP1_SHIFT) -#define I40E_PRT_MNG_METF(_i) (0x00256780 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_PRT_MNG_METF_MAX_INDEX 3 -#define I40E_PRT_MNG_METF_ETYPE_SHIFT 0 -#define I40E_PRT_MNG_METF_ETYPE_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_METF_ETYPE_SHIFT) -#define I40E_PRT_MNG_METF_POLARITY_SHIFT 30 -#define I40E_PRT_MNG_METF_POLARITY_MASK I40E_MASK(0x1, I40E_PRT_MNG_METF_POLARITY_SHIFT) -#define I40E_PRT_MNG_MFUTP(_i) (0x00254E00 + ((_i) * 32)) /* _i=0...15 */ /* Reset: POR */ -#define I40E_PRT_MNG_MFUTP_MAX_INDEX 15 -#define 
I40E_PRT_MNG_MFUTP_MFUTP_N_SHIFT 0 -#define I40E_PRT_MNG_MFUTP_MFUTP_N_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MFUTP_MFUTP_N_SHIFT) -#define I40E_PRT_MNG_MFUTP_UDP_SHIFT 16 -#define I40E_PRT_MNG_MFUTP_UDP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MFUTP_UDP_SHIFT) -#define I40E_PRT_MNG_MFUTP_TCP_SHIFT 17 -#define I40E_PRT_MNG_MFUTP_TCP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MFUTP_TCP_SHIFT) -#define I40E_PRT_MNG_MFUTP_SOURCE_DESTINATION_SHIFT 18 -#define I40E_PRT_MNG_MFUTP_SOURCE_DESTINATION_MASK I40E_MASK(0x1, I40E_PRT_MNG_MFUTP_SOURCE_DESTINATION_SHIFT) -#define I40E_PRT_MNG_MIPAF4(_i) (0x00256280 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_PRT_MNG_MIPAF4_MAX_INDEX 3 -#define I40E_PRT_MNG_MIPAF4_MIPAF_SHIFT 0 -#define I40E_PRT_MNG_MIPAF4_MIPAF_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_MIPAF4_MIPAF_SHIFT) -#define I40E_PRT_MNG_MIPAF6(_i) (0x00254200 + ((_i) * 32)) /* _i=0...15 */ /* Reset: POR */ -#define I40E_PRT_MNG_MIPAF6_MAX_INDEX 15 -#define I40E_PRT_MNG_MIPAF6_MIPAF_SHIFT 0 -#define I40E_PRT_MNG_MIPAF6_MIPAF_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_MIPAF6_MIPAF_SHIFT) -#define I40E_PRT_MNG_MMAH(_i) (0x00256380 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_PRT_MNG_MMAH_MAX_INDEX 3 -#define I40E_PRT_MNG_MMAH_MMAH_SHIFT 0 -#define I40E_PRT_MNG_MMAH_MMAH_MASK I40E_MASK(0xFFFF, I40E_PRT_MNG_MMAH_MMAH_SHIFT) -#define I40E_PRT_MNG_MMAL(_i) (0x00256480 + ((_i) * 32)) /* _i=0...3 */ /* Reset: POR */ -#define I40E_PRT_MNG_MMAL_MAX_INDEX 3 -#define I40E_PRT_MNG_MMAL_MMAL_SHIFT 0 -#define I40E_PRT_MNG_MMAL_MMAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PRT_MNG_MMAL_MMAL_SHIFT) -#define I40E_PRT_MNG_MNGONLY 0x00256A60 /* Reset: POR */ -#define I40E_PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_SHIFT 0 -#define I40E_PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_MASK I40E_MASK(0xFF, I40E_PRT_MNG_MNGONLY_EXCLUSIVE_TO_MANAGEABILITY_SHIFT) -#define I40E_PRT_MNG_MSFM 0x00256AA0 /* Reset: POR */ -#define I40E_PRT_MNG_MSFM_PORT_26F_UDP_SHIFT 0 -#define I40E_PRT_MNG_MSFM_PORT_26F_UDP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_26F_UDP_SHIFT) -#define I40E_PRT_MNG_MSFM_PORT_26F_TCP_SHIFT 1 -#define I40E_PRT_MNG_MSFM_PORT_26F_TCP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_26F_TCP_SHIFT) -#define I40E_PRT_MNG_MSFM_PORT_298_UDP_SHIFT 2 -#define I40E_PRT_MNG_MSFM_PORT_298_UDP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_298_UDP_SHIFT) -#define I40E_PRT_MNG_MSFM_PORT_298_TCP_SHIFT 3 -#define I40E_PRT_MNG_MSFM_PORT_298_TCP_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_PORT_298_TCP_SHIFT) -#define I40E_PRT_MNG_MSFM_IPV6_0_MASK_SHIFT 4 -#define I40E_PRT_MNG_MSFM_IPV6_0_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_0_MASK_SHIFT) -#define I40E_PRT_MNG_MSFM_IPV6_1_MASK_SHIFT 5 -#define I40E_PRT_MNG_MSFM_IPV6_1_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_1_MASK_SHIFT) -#define I40E_PRT_MNG_MSFM_IPV6_2_MASK_SHIFT 6 -#define I40E_PRT_MNG_MSFM_IPV6_2_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_2_MASK_SHIFT) -#define I40E_PRT_MNG_MSFM_IPV6_3_MASK_SHIFT 7 -#define I40E_PRT_MNG_MSFM_IPV6_3_MASK_MASK I40E_MASK(0x1, I40E_PRT_MNG_MSFM_IPV6_3_MASK_SHIFT) -#define I40E_MSIX_PBA(_i) (0x00001000 + ((_i) * 4)) /* _i=0...5 */ /* Reset: FLR */ -#define I40E_MSIX_PBA_MAX_INDEX 5 -#define I40E_MSIX_PBA_PENBIT_SHIFT 0 -#define I40E_MSIX_PBA_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_MSIX_PBA_PENBIT_SHIFT) -#define I40E_MSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */ -#define I40E_MSIX_TADD_MAX_INDEX 128 -#define I40E_MSIX_TADD_MSIXTADD10_SHIFT 0 -#define I40E_MSIX_TADD_MSIXTADD10_MASK I40E_MASK(0x3, 
I40E_MSIX_TADD_MSIXTADD10_SHIFT) -#define I40E_MSIX_TADD_MSIXTADD_SHIFT 2 -#define I40E_MSIX_TADD_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_MSIX_TADD_MSIXTADD_SHIFT) -#define I40E_MSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */ -#define I40E_MSIX_TMSG_MAX_INDEX 128 -#define I40E_MSIX_TMSG_MSIXTMSG_SHIFT 0 -#define I40E_MSIX_TMSG_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_MSIX_TMSG_MSIXTMSG_SHIFT) -#define I40E_MSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */ -#define I40E_MSIX_TUADD_MAX_INDEX 128 -#define I40E_MSIX_TUADD_MSIXTUADD_SHIFT 0 -#define I40E_MSIX_TUADD_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_MSIX_TUADD_MSIXTUADD_SHIFT) -#define I40E_MSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...128 */ /* Reset: FLR */ -#define I40E_MSIX_TVCTRL_MAX_INDEX 128 -#define I40E_MSIX_TVCTRL_MASK_SHIFT 0 -#define I40E_MSIX_TVCTRL_MASK_MASK I40E_MASK(0x1, I40E_MSIX_TVCTRL_MASK_SHIFT) -#define I40E_VFMSIX_PBA1(_i) (0x00002000 + ((_i) * 4)) /* _i=0...19 */ /* Reset: VFLR */ -#define I40E_VFMSIX_PBA1_MAX_INDEX 19 -#define I40E_VFMSIX_PBA1_PENBIT_SHIFT 0 -#define I40E_VFMSIX_PBA1_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_PBA1_PENBIT_SHIFT) -#define I40E_VFMSIX_TADD1(_i) (0x00002100 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TADD1_MAX_INDEX 639 -#define I40E_VFMSIX_TADD1_MSIXTADD10_SHIFT 0 -#define I40E_VFMSIX_TADD1_MSIXTADD10_MASK I40E_MASK(0x3, I40E_VFMSIX_TADD1_MSIXTADD10_SHIFT) -#define I40E_VFMSIX_TADD1_MSIXTADD_SHIFT 2 -#define I40E_VFMSIX_TADD1_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_VFMSIX_TADD1_MSIXTADD_SHIFT) -#define I40E_VFMSIX_TMSG1(_i) (0x00002108 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TMSG1_MAX_INDEX 639 -#define I40E_VFMSIX_TMSG1_MSIXTMSG_SHIFT 0 -#define I40E_VFMSIX_TMSG1_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TMSG1_MSIXTMSG_SHIFT) -#define I40E_VFMSIX_TUADD1(_i) (0x00002104 + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TUADD1_MAX_INDEX 639 -#define I40E_VFMSIX_TUADD1_MSIXTUADD_SHIFT 0 -#define I40E_VFMSIX_TUADD1_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TUADD1_MSIXTUADD_SHIFT) -#define I40E_VFMSIX_TVCTRL1(_i) (0x0000210C + ((_i) * 16)) /* _i=0...639 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TVCTRL1_MAX_INDEX 639 -#define I40E_VFMSIX_TVCTRL1_MASK_SHIFT 0 -#define I40E_VFMSIX_TVCTRL1_MASK_MASK I40E_MASK(0x1, I40E_VFMSIX_TVCTRL1_MASK_SHIFT) -#define I40E_GLNVM_FLA 0x000B6108 /* Reset: POR */ -#define I40E_GLNVM_FLA_FL_SCK_SHIFT 0 -#define I40E_GLNVM_FLA_FL_SCK_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_SCK_SHIFT) -#define I40E_GLNVM_FLA_FL_CE_SHIFT 1 -#define I40E_GLNVM_FLA_FL_CE_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_CE_SHIFT) -#define I40E_GLNVM_FLA_FL_SI_SHIFT 2 -#define I40E_GLNVM_FLA_FL_SI_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_SI_SHIFT) -#define I40E_GLNVM_FLA_FL_SO_SHIFT 3 -#define I40E_GLNVM_FLA_FL_SO_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_SO_SHIFT) -#define I40E_GLNVM_FLA_FL_REQ_SHIFT 4 -#define I40E_GLNVM_FLA_FL_REQ_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_REQ_SHIFT) -#define I40E_GLNVM_FLA_FL_GNT_SHIFT 5 -#define I40E_GLNVM_FLA_FL_GNT_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_GNT_SHIFT) -#define I40E_GLNVM_FLA_LOCKED_SHIFT 6 -#define I40E_GLNVM_FLA_LOCKED_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_LOCKED_SHIFT) -#define I40E_GLNVM_FLA_FL_SADDR_SHIFT 18 -#define I40E_GLNVM_FLA_FL_SADDR_MASK I40E_MASK(0x7FF, I40E_GLNVM_FLA_FL_SADDR_SHIFT) -#define I40E_GLNVM_FLA_FL_BUSY_SHIFT 30 -#define I40E_GLNVM_FLA_FL_BUSY_MASK I40E_MASK(0x1, 
-#define I40E_GLNVM_FLA_FL_DER_SHIFT 31
-#define I40E_GLNVM_FLA_FL_DER_MASK I40E_MASK(0x1, I40E_GLNVM_FLA_FL_DER_SHIFT)
-#define I40E_GLNVM_FLASHID 0x000B6104 /* Reset: POR */
-#define I40E_GLNVM_FLASHID_FLASHID_SHIFT 0
-#define I40E_GLNVM_FLASHID_FLASHID_MASK I40E_MASK(0xFFFFFF, I40E_GLNVM_FLASHID_FLASHID_SHIFT)
-#define I40E_GLNVM_FLASHID_FLEEP_PERF_SHIFT 31
-#define I40E_GLNVM_FLASHID_FLEEP_PERF_MASK I40E_MASK(0x1, I40E_GLNVM_FLASHID_FLEEP_PERF_SHIFT)
-#define I40E_GLNVM_GENS 0x000B6100 /* Reset: POR */
-#define I40E_GLNVM_GENS_NVM_PRES_SHIFT 0
-#define I40E_GLNVM_GENS_NVM_PRES_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_NVM_PRES_SHIFT)
-#define I40E_GLNVM_GENS_SR_SIZE_SHIFT 5
-#define I40E_GLNVM_GENS_SR_SIZE_MASK I40E_MASK(0x7, I40E_GLNVM_GENS_SR_SIZE_SHIFT)
-#define I40E_GLNVM_GENS_BANK1VAL_SHIFT 8
-#define I40E_GLNVM_GENS_BANK1VAL_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_BANK1VAL_SHIFT)
-#define I40E_GLNVM_GENS_ALT_PRST_SHIFT 23
-#define I40E_GLNVM_GENS_ALT_PRST_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_ALT_PRST_SHIFT)
-#define I40E_GLNVM_GENS_FL_AUTO_RD_SHIFT 25
-#define I40E_GLNVM_GENS_FL_AUTO_RD_MASK I40E_MASK(0x1, I40E_GLNVM_GENS_FL_AUTO_RD_SHIFT)
-#define I40E_GLNVM_PROTCSR(_i) (0x000B6010 + ((_i) * 4)) /* _i=0...59 */ /* Reset: POR */
-#define I40E_GLNVM_PROTCSR_MAX_INDEX 59
-#define I40E_GLNVM_PROTCSR_ADDR_BLOCK_SHIFT 0
-#define I40E_GLNVM_PROTCSR_ADDR_BLOCK_MASK I40E_MASK(0xFFFFFF, I40E_GLNVM_PROTCSR_ADDR_BLOCK_SHIFT)
-#define I40E_GLNVM_SRCTL 0x000B6110 /* Reset: POR */
-#define I40E_GLNVM_SRCTL_SRBUSY_SHIFT 0
-#define I40E_GLNVM_SRCTL_SRBUSY_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_SRBUSY_SHIFT)
-#define I40E_GLNVM_SRCTL_ADDR_SHIFT 14
-#define I40E_GLNVM_SRCTL_ADDR_MASK I40E_MASK(0x7FFF, I40E_GLNVM_SRCTL_ADDR_SHIFT)
-#define I40E_GLNVM_SRCTL_WRITE_SHIFT 29
-#define I40E_GLNVM_SRCTL_WRITE_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_WRITE_SHIFT)
-#define I40E_GLNVM_SRCTL_START_SHIFT 30
-#define I40E_GLNVM_SRCTL_START_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_START_SHIFT)
-#define I40E_GLNVM_SRCTL_DONE_SHIFT 31
-#define I40E_GLNVM_SRCTL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_SRCTL_DONE_SHIFT)
-#define I40E_GLNVM_SRDATA 0x000B6114 /* Reset: POR */
-#define I40E_GLNVM_SRDATA_WRDATA_SHIFT 0
-#define I40E_GLNVM_SRDATA_WRDATA_MASK I40E_MASK(0xFFFF, I40E_GLNVM_SRDATA_WRDATA_SHIFT)
-#define I40E_GLNVM_SRDATA_RDDATA_SHIFT 16
-#define I40E_GLNVM_SRDATA_RDDATA_MASK I40E_MASK(0xFFFF, I40E_GLNVM_SRDATA_RDDATA_SHIFT)
-#define I40E_GLNVM_ULD 0x000B6008 /* Reset: POR */
-#define I40E_GLNVM_ULD_CONF_PCIR_DONE_SHIFT 0
-#define I40E_GLNVM_ULD_CONF_PCIR_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIR_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_PCIRTL_DONE_SHIFT 1
-#define I40E_GLNVM_ULD_CONF_PCIRTL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIRTL_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_LCB_DONE_SHIFT 2
-#define I40E_GLNVM_ULD_CONF_LCB_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_LCB_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_CORE_DONE_SHIFT 3
-#define I40E_GLNVM_ULD_CONF_CORE_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_CORE_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_GLOBAL_DONE_SHIFT 4
-#define I40E_GLNVM_ULD_CONF_GLOBAL_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_GLOBAL_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_POR_DONE_SHIFT 5
-#define I40E_GLNVM_ULD_CONF_POR_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_POR_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_PCIE_ANA_DONE_SHIFT 6
-#define I40E_GLNVM_ULD_CONF_PCIE_ANA_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIE_ANA_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_PHY_ANA_DONE_SHIFT 7
-#define I40E_GLNVM_ULD_CONF_PHY_ANA_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PHY_ANA_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_EMP_DONE_SHIFT 8
-#define I40E_GLNVM_ULD_CONF_EMP_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_EMP_DONE_SHIFT)
-#define I40E_GLNVM_ULD_CONF_PCIALT_DONE_SHIFT 9
-#define I40E_GLNVM_ULD_CONF_PCIALT_DONE_MASK I40E_MASK(0x1, I40E_GLNVM_ULD_CONF_PCIALT_DONE_SHIFT)
-#define I40E_GLPCI_BYTCTH 0x0009C484 /* Reset: PCIR */
-#define I40E_GLPCI_BYTCTH_PCI_COUNT_BW_BCT_SHIFT 0
-#define I40E_GLPCI_BYTCTH_PCI_COUNT_BW_BCT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_BYTCTH_PCI_COUNT_BW_BCT_SHIFT)
-#define I40E_GLPCI_BYTCTL 0x0009C488 /* Reset: PCIR */
-#define I40E_GLPCI_BYTCTL_PCI_COUNT_BW_BCT_SHIFT 0
-#define I40E_GLPCI_BYTCTL_PCI_COUNT_BW_BCT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_BYTCTL_PCI_COUNT_BW_BCT_SHIFT)
-#define I40E_GLPCI_CAPCTRL 0x000BE4A4 /* Reset: PCIR */
-#define I40E_GLPCI_CAPCTRL_VPD_EN_SHIFT 0
-#define I40E_GLPCI_CAPCTRL_VPD_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPCTRL_VPD_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP 0x000BE4A8 /* Reset: PCIR */
-#define I40E_GLPCI_CAPSUP_PCIE_VER_SHIFT 0
-#define I40E_GLPCI_CAPSUP_PCIE_VER_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_PCIE_VER_SHIFT)
-#define I40E_GLPCI_CAPSUP_LTR_EN_SHIFT 2
-#define I40E_GLPCI_CAPSUP_LTR_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_LTR_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_TPH_EN_SHIFT 3
-#define I40E_GLPCI_CAPSUP_TPH_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_TPH_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_ARI_EN_SHIFT 4
-#define I40E_GLPCI_CAPSUP_ARI_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ARI_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_IOV_EN_SHIFT 5
-#define I40E_GLPCI_CAPSUP_IOV_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_IOV_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_ACS_EN_SHIFT 6
-#define I40E_GLPCI_CAPSUP_ACS_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ACS_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_SEC_EN_SHIFT 7
-#define I40E_GLPCI_CAPSUP_SEC_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_SEC_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_ECRC_GEN_EN_SHIFT 16
-#define I40E_GLPCI_CAPSUP_ECRC_GEN_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ECRC_GEN_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_ECRC_CHK_EN_SHIFT 17
-#define I40E_GLPCI_CAPSUP_ECRC_CHK_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_ECRC_CHK_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_IDO_EN_SHIFT 18
-#define I40E_GLPCI_CAPSUP_IDO_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_IDO_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_MSI_MASK_SHIFT 19
-#define I40E_GLPCI_CAPSUP_MSI_MASK_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_MSI_MASK_SHIFT)
-#define I40E_GLPCI_CAPSUP_CSR_CONF_EN_SHIFT 20
-#define I40E_GLPCI_CAPSUP_CSR_CONF_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_CSR_CONF_EN_SHIFT)
-#define I40E_GLPCI_CAPSUP_LOAD_SUBSYS_ID_SHIFT 30
-#define I40E_GLPCI_CAPSUP_LOAD_SUBSYS_ID_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_LOAD_SUBSYS_ID_SHIFT)
-#define I40E_GLPCI_CAPSUP_LOAD_DEV_ID_SHIFT 31
-#define I40E_GLPCI_CAPSUP_LOAD_DEV_ID_MASK I40E_MASK(0x1, I40E_GLPCI_CAPSUP_LOAD_DEV_ID_SHIFT)
-#define I40E_GLPCI_CNF 0x000BE4C0 /* Reset: POR */
-#define I40E_GLPCI_CNF_FLEX10_SHIFT 1
-#define I40E_GLPCI_CNF_FLEX10_MASK I40E_MASK(0x1, I40E_GLPCI_CNF_FLEX10_SHIFT)
-#define I40E_GLPCI_CNF_WAKE_PIN_EN_SHIFT 2
-#define I40E_GLPCI_CNF_WAKE_PIN_EN_MASK I40E_MASK(0x1, I40E_GLPCI_CNF_WAKE_PIN_EN_SHIFT)
-#define I40E_GLPCI_CNF2 0x000BE494 /* Reset: PCIR */
-#define I40E_GLPCI_CNF2_RO_DIS_SHIFT 0
-#define I40E_GLPCI_CNF2_RO_DIS_MASK I40E_MASK(0x1, I40E_GLPCI_CNF2_RO_DIS_SHIFT)
-#define I40E_GLPCI_CNF2_CACHELINE_SIZE_SHIFT 1
-#define I40E_GLPCI_CNF2_CACHELINE_SIZE_MASK I40E_MASK(0x1, I40E_GLPCI_CNF2_CACHELINE_SIZE_SHIFT)
-#define I40E_GLPCI_CNF2_MSI_X_PF_N_SHIFT 2
-#define I40E_GLPCI_CNF2_MSI_X_PF_N_MASK I40E_MASK(0x7FF, I40E_GLPCI_CNF2_MSI_X_PF_N_SHIFT)
-#define I40E_GLPCI_CNF2_MSI_X_VF_N_SHIFT 13
-#define I40E_GLPCI_CNF2_MSI_X_VF_N_MASK I40E_MASK(0x7FF, I40E_GLPCI_CNF2_MSI_X_VF_N_SHIFT)
-#define I40E_GLPCI_DREVID 0x0009C480 /* Reset: PCIR */
-#define I40E_GLPCI_DREVID_DEFAULT_REVID_SHIFT 0
-#define I40E_GLPCI_DREVID_DEFAULT_REVID_MASK I40E_MASK(0xFF, I40E_GLPCI_DREVID_DEFAULT_REVID_SHIFT)
-#define I40E_GLPCI_GSCL_1 0x0009C48C /* Reset: PCIR */
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_0_SHIFT 0
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_0_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_0_SHIFT)
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_1_SHIFT 1
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_1_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_1_SHIFT)
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_2_SHIFT 2
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_2_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_2_SHIFT)
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_3_SHIFT 3
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_EN_3_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_EN_3_SHIFT)
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_0_SHIFT 4
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_0_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_0_SHIFT)
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_1_SHIFT 5
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_1_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_1_SHIFT)
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_2_SHIFT 6
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_2_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_2_SHIFT)
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_3_SHIFT 7
-#define I40E_GLPCI_GSCL_1_LBC_ENABLE_3_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_LBC_ENABLE_3_SHIFT)
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EN_SHIFT 8
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EN_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EN_SHIFT)
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EV_SHIFT 9
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EV_MASK I40E_MASK(0x1F, I40E_GLPCI_GSCL_1_PCI_COUNT_LAT_EV_SHIFT)
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EN_SHIFT 14
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EN_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EN_SHIFT)
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EV_SHIFT 15
-#define I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EV_MASK I40E_MASK(0x1F, I40E_GLPCI_GSCL_1_PCI_COUNT_BW_EV_SHIFT)
-#define I40E_GLPCI_GSCL_1_GIO_64_BIT_EN_SHIFT 28
-#define I40E_GLPCI_GSCL_1_GIO_64_BIT_EN_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_64_BIT_EN_SHIFT)
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_RESET_SHIFT 29
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_RESET_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_RESET_SHIFT)
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_STOP_SHIFT 30
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_STOP_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_STOP_SHIFT)
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_START_SHIFT 31
-#define I40E_GLPCI_GSCL_1_GIO_COUNT_START_MASK I40E_MASK(0x1, I40E_GLPCI_GSCL_1_GIO_COUNT_START_SHIFT)
-#define I40E_GLPCI_GSCL_2 0x0009C490 /* Reset: PCIR */
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_0_SHIFT 0
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_0_MASK I40E_MASK(0xFF, I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_0_SHIFT)
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_1_SHIFT 8
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_1_MASK I40E_MASK(0xFF, I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_1_SHIFT)
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_2_SHIFT 16
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_2_MASK I40E_MASK(0xFF, I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_2_SHIFT)
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_3_SHIFT 24
-#define I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_3_MASK I40E_MASK(0xFF, I40E_GLPCI_GSCL_2_GIO_EVENT_NUM_3_SHIFT)
-#define I40E_GLPCI_GSCL_5_8(_i) (0x0009C494 + ((_i) * 4)) /* _i=0...3 */ /* Reset: PCIR */
-#define I40E_GLPCI_GSCL_5_8_MAX_INDEX 3
-#define I40E_GLPCI_GSCL_5_8_LBC_THRESHOLD_N_SHIFT 0
-#define I40E_GLPCI_GSCL_5_8_LBC_THRESHOLD_N_MASK I40E_MASK(0xFFFF, I40E_GLPCI_GSCL_5_8_LBC_THRESHOLD_N_SHIFT)
-#define I40E_GLPCI_GSCL_5_8_LBC_TIMER_N_SHIFT 16
-#define I40E_GLPCI_GSCL_5_8_LBC_TIMER_N_MASK I40E_MASK(0xFFFF, I40E_GLPCI_GSCL_5_8_LBC_TIMER_N_SHIFT)
-#define I40E_GLPCI_GSCN_0_3(_i) (0x0009C4A4 + ((_i) * 4)) /* _i=0...3 */ /* Reset: PCIR */
-#define I40E_GLPCI_GSCN_0_3_MAX_INDEX 3
-#define I40E_GLPCI_GSCN_0_3_EVENT_COUNTER_SHIFT 0
-#define I40E_GLPCI_GSCN_0_3_EVENT_COUNTER_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_GSCN_0_3_EVENT_COUNTER_SHIFT)
-#define I40E_GLPCI_LATCT 0x0009C4B4 /* Reset: PCIR */
-#define I40E_GLPCI_LATCT_PCI_COUNT_LAT_CT_SHIFT 0
-#define I40E_GLPCI_LATCT_PCI_COUNT_LAT_CT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_LATCT_PCI_COUNT_LAT_CT_SHIFT)
-#define I40E_GLPCI_LBARCTRL 0x000BE484 /* Reset: POR */
-#define I40E_GLPCI_LBARCTRL_PREFBAR_SHIFT 0
-#define I40E_GLPCI_LBARCTRL_PREFBAR_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_PREFBAR_SHIFT)
-#define I40E_GLPCI_LBARCTRL_BAR32_SHIFT 1
-#define I40E_GLPCI_LBARCTRL_BAR32_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_BAR32_SHIFT)
-#define I40E_GLPCI_LBARCTRL_FLASH_EXPOSE_SHIFT 3
-#define I40E_GLPCI_LBARCTRL_FLASH_EXPOSE_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_FLASH_EXPOSE_SHIFT)
-#define I40E_GLPCI_LBARCTRL_RSVD_4_SHIFT 4
-#define I40E_GLPCI_LBARCTRL_RSVD_4_MASK I40E_MASK(0x3, I40E_GLPCI_LBARCTRL_RSVD_4_SHIFT)
-#define I40E_GLPCI_LBARCTRL_FL_SIZE_SHIFT 6
-#define I40E_GLPCI_LBARCTRL_FL_SIZE_MASK I40E_MASK(0x7, I40E_GLPCI_LBARCTRL_FL_SIZE_SHIFT)
-#define I40E_GLPCI_LBARCTRL_RSVD_10_SHIFT 10
-#define I40E_GLPCI_LBARCTRL_RSVD_10_MASK I40E_MASK(0x1, I40E_GLPCI_LBARCTRL_RSVD_10_SHIFT)
-#define I40E_GLPCI_LBARCTRL_EXROM_SIZE_SHIFT 11
-#define I40E_GLPCI_LBARCTRL_EXROM_SIZE_MASK I40E_MASK(0x7, I40E_GLPCI_LBARCTRL_EXROM_SIZE_SHIFT)
-#define I40E_GLPCI_LINKCAP 0x000BE4AC /* Reset: PCIR */
-#define I40E_GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_SHIFT 0
-#define I40E_GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_MASK I40E_MASK(0x3F, I40E_GLPCI_LINKCAP_LINK_SPEEDS_VECTOR_SHIFT)
-#define I40E_GLPCI_LINKCAP_MAX_PAYLOAD_SHIFT 6
-#define I40E_GLPCI_LINKCAP_MAX_PAYLOAD_MASK I40E_MASK(0x7, I40E_GLPCI_LINKCAP_MAX_PAYLOAD_SHIFT)
-#define I40E_GLPCI_LINKCAP_MAX_LINK_WIDTH_SHIFT 9
-#define I40E_GLPCI_LINKCAP_MAX_LINK_WIDTH_MASK I40E_MASK(0xF, I40E_GLPCI_LINKCAP_MAX_LINK_WIDTH_SHIFT)
-#define I40E_GLPCI_PCIERR 0x000BE4FC /* Reset: PCIR */
-#define I40E_GLPCI_PCIERR_PCIE_ERR_REP_SHIFT 0
-#define I40E_GLPCI_PCIERR_PCIE_ERR_REP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_PCIERR_PCIE_ERR_REP_SHIFT)
-#define I40E_GLPCI_PKTCT 0x0009C4BC /* Reset: PCIR */
-#define I40E_GLPCI_PKTCT_PCI_COUNT_BW_PCT_SHIFT 0
-#define I40E_GLPCI_PKTCT_PCI_COUNT_BW_PCT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_PKTCT_PCI_COUNT_BW_PCT_SHIFT)
-#define I40E_GLPCI_PM_MUX_NPQ 0x0009C4F4 /* Reset: PCIR */
-#define I40E_GLPCI_PM_MUX_NPQ_NPQ_NUM_PORT_SEL_SHIFT 0
-#define I40E_GLPCI_PM_MUX_NPQ_NPQ_NUM_PORT_SEL_MASK I40E_MASK(0x7, I40E_GLPCI_PM_MUX_NPQ_NPQ_NUM_PORT_SEL_SHIFT)
-#define I40E_GLPCI_PM_MUX_NPQ_INNER_NPQ_SEL_SHIFT 16
-#define I40E_GLPCI_PM_MUX_NPQ_INNER_NPQ_SEL_MASK I40E_MASK(0x1F, I40E_GLPCI_PM_MUX_NPQ_INNER_NPQ_SEL_SHIFT)
-#define I40E_GLPCI_PM_MUX_PFB 0x0009C4F0 /* Reset: PCIR */
-#define I40E_GLPCI_PM_MUX_PFB_PFB_PORT_SEL_SHIFT 0
-#define I40E_GLPCI_PM_MUX_PFB_PFB_PORT_SEL_MASK I40E_MASK(0x1F, I40E_GLPCI_PM_MUX_PFB_PFB_PORT_SEL_SHIFT)
-#define I40E_GLPCI_PM_MUX_PFB_INNER_PORT_SEL_SHIFT 16
-#define I40E_GLPCI_PM_MUX_PFB_INNER_PORT_SEL_MASK I40E_MASK(0x7, I40E_GLPCI_PM_MUX_PFB_INNER_PORT_SEL_SHIFT)
-#define I40E_GLPCI_PMSUP 0x000BE4B0 /* Reset: PCIR */
-#define I40E_GLPCI_PMSUP_ASPM_SUP_SHIFT 0
-#define I40E_GLPCI_PMSUP_ASPM_SUP_MASK I40E_MASK(0x3, I40E_GLPCI_PMSUP_ASPM_SUP_SHIFT)
-#define I40E_GLPCI_PMSUP_L0S_EXIT_LAT_SHIFT 2
-#define I40E_GLPCI_PMSUP_L0S_EXIT_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L0S_EXIT_LAT_SHIFT)
-#define I40E_GLPCI_PMSUP_L1_EXIT_LAT_SHIFT 5
-#define I40E_GLPCI_PMSUP_L1_EXIT_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L1_EXIT_LAT_SHIFT)
-#define I40E_GLPCI_PMSUP_L0S_ACC_LAT_SHIFT 8
-#define I40E_GLPCI_PMSUP_L0S_ACC_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L0S_ACC_LAT_SHIFT)
-#define I40E_GLPCI_PMSUP_L1_ACC_LAT_SHIFT 11
-#define I40E_GLPCI_PMSUP_L1_ACC_LAT_MASK I40E_MASK(0x7, I40E_GLPCI_PMSUP_L1_ACC_LAT_SHIFT)
-#define I40E_GLPCI_PMSUP_SLOT_CLK_SHIFT 14
-#define I40E_GLPCI_PMSUP_SLOT_CLK_MASK I40E_MASK(0x1, I40E_GLPCI_PMSUP_SLOT_CLK_SHIFT)
-#define I40E_GLPCI_PMSUP_OBFF_SUP_SHIFT 15
-#define I40E_GLPCI_PMSUP_OBFF_SUP_MASK I40E_MASK(0x3, I40E_GLPCI_PMSUP_OBFF_SUP_SHIFT)
-#define I40E_GLPCI_PQ_MAX_USED_SPC 0x0009C4EC /* Reset: PCIR */
-#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_12_SHIFT 0
-#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_12_MASK I40E_MASK(0xFF, I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_12_SHIFT)
-#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_13_SHIFT 8
-#define I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_13_MASK I40E_MASK(0xFF, I40E_GLPCI_PQ_MAX_USED_SPC_GLPCI_PQ_MAX_USED_SPC_13_SHIFT)
-#define I40E_GLPCI_PWRDATA 0x000BE490 /* Reset: PCIR */
-#define I40E_GLPCI_PWRDATA_D0_POWER_SHIFT 0
-#define I40E_GLPCI_PWRDATA_D0_POWER_MASK I40E_MASK(0xFF, I40E_GLPCI_PWRDATA_D0_POWER_SHIFT)
-#define I40E_GLPCI_PWRDATA_COMM_POWER_SHIFT 8
-#define I40E_GLPCI_PWRDATA_COMM_POWER_MASK I40E_MASK(0xFF, I40E_GLPCI_PWRDATA_COMM_POWER_SHIFT)
-#define I40E_GLPCI_PWRDATA_D3_POWER_SHIFT 16
-#define I40E_GLPCI_PWRDATA_D3_POWER_MASK I40E_MASK(0xFF, I40E_GLPCI_PWRDATA_D3_POWER_SHIFT)
-#define I40E_GLPCI_PWRDATA_DATA_SCALE_SHIFT 24
-#define I40E_GLPCI_PWRDATA_DATA_SCALE_MASK I40E_MASK(0x3, I40E_GLPCI_PWRDATA_DATA_SCALE_SHIFT)
-#define I40E_GLPCI_REVID 0x000BE4B4 /* Reset: PCIR */
-#define I40E_GLPCI_REVID_NVM_REVID_SHIFT 0
-#define I40E_GLPCI_REVID_NVM_REVID_MASK I40E_MASK(0xFF, I40E_GLPCI_REVID_NVM_REVID_SHIFT)
-#define I40E_GLPCI_SERH 0x000BE49C /* Reset: PCIR */
-#define I40E_GLPCI_SERH_SER_NUM_H_SHIFT 0
-#define I40E_GLPCI_SERH_SER_NUM_H_MASK I40E_MASK(0xFFFF, I40E_GLPCI_SERH_SER_NUM_H_SHIFT)
-#define I40E_GLPCI_SERL 0x000BE498 /* Reset: PCIR */
-#define I40E_GLPCI_SERL_SER_NUM_L_SHIFT 0
-#define I40E_GLPCI_SERL_SER_NUM_L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_SERL_SER_NUM_L_SHIFT)
-#define I40E_GLPCI_SPARE_BITS_0 0x0009C4F8 /* Reset: PCIR */
-#define I40E_GLPCI_SPARE_BITS_0_SPARE_BITS_SHIFT 0
-#define I40E_GLPCI_SPARE_BITS_0_SPARE_BITS_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_SPARE_BITS_0_SPARE_BITS_SHIFT)
-#define I40E_GLPCI_SPARE_BITS_1 0x0009C4FC /* Reset: PCIR */
-#define I40E_GLPCI_SPARE_BITS_1_SPARE_BITS_SHIFT 0
-#define I40E_GLPCI_SPARE_BITS_1_SPARE_BITS_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPCI_SPARE_BITS_1_SPARE_BITS_SHIFT)
-#define I40E_GLPCI_SUBVENID 0x000BE48C /* Reset: PCIR */
-#define I40E_GLPCI_SUBVENID_SUB_VEN_ID_SHIFT 0
-#define I40E_GLPCI_SUBVENID_SUB_VEN_ID_MASK I40E_MASK(0xFFFF, I40E_GLPCI_SUBVENID_SUB_VEN_ID_SHIFT)
-#define I40E_GLPCI_UPADD 0x000BE4F8 /* Reset: PCIR */
-#define I40E_GLPCI_UPADD_ADDRESS_SHIFT 1
-#define I40E_GLPCI_UPADD_ADDRESS_MASK I40E_MASK(0x7FFFFFFF, I40E_GLPCI_UPADD_ADDRESS_SHIFT)
-#define I40E_GLPCI_VENDORID 0x000BE518 /* Reset: PCIR */
-#define I40E_GLPCI_VENDORID_VENDORID_SHIFT 0
-#define I40E_GLPCI_VENDORID_VENDORID_MASK I40E_MASK(0xFFFF, I40E_GLPCI_VENDORID_VENDORID_SHIFT)
-#define I40E_GLPCI_VFSUP 0x000BE4B8 /* Reset: PCIR */
-#define I40E_GLPCI_VFSUP_VF_PREFETCH_SHIFT 0
-#define I40E_GLPCI_VFSUP_VF_PREFETCH_MASK I40E_MASK(0x1, I40E_GLPCI_VFSUP_VF_PREFETCH_SHIFT)
-#define I40E_GLPCI_VFSUP_VR_BAR_TYPE_SHIFT 1
-#define I40E_GLPCI_VFSUP_VR_BAR_TYPE_MASK I40E_MASK(0x1, I40E_GLPCI_VFSUP_VR_BAR_TYPE_SHIFT)
-#define I40E_PF_FUNC_RID 0x0009C000 /* Reset: PCIR */
-#define I40E_PF_FUNC_RID_FUNCTION_NUMBER_SHIFT 0
-#define I40E_PF_FUNC_RID_FUNCTION_NUMBER_MASK I40E_MASK(0x7, I40E_PF_FUNC_RID_FUNCTION_NUMBER_SHIFT)
-#define I40E_PF_FUNC_RID_DEVICE_NUMBER_SHIFT 3
-#define I40E_PF_FUNC_RID_DEVICE_NUMBER_MASK I40E_MASK(0x1F, I40E_PF_FUNC_RID_DEVICE_NUMBER_SHIFT)
-#define I40E_PF_FUNC_RID_BUS_NUMBER_SHIFT 8
-#define I40E_PF_FUNC_RID_BUS_NUMBER_MASK I40E_MASK(0xFF, I40E_PF_FUNC_RID_BUS_NUMBER_SHIFT)
-#define I40E_PF_PCI_CIAA 0x0009C080 /* Reset: FLR */
-#define I40E_PF_PCI_CIAA_ADDRESS_SHIFT 0
-#define I40E_PF_PCI_CIAA_ADDRESS_MASK I40E_MASK(0xFFF, I40E_PF_PCI_CIAA_ADDRESS_SHIFT)
-#define I40E_PF_PCI_CIAA_VF_NUM_SHIFT 12
-#define I40E_PF_PCI_CIAA_VF_NUM_MASK I40E_MASK(0x7F, I40E_PF_PCI_CIAA_VF_NUM_SHIFT)
-#define I40E_PF_PCI_CIAD 0x0009C100 /* Reset: FLR */
-#define I40E_PF_PCI_CIAD_DATA_SHIFT 0
-#define I40E_PF_PCI_CIAD_DATA_MASK I40E_MASK(0xFFFFFFFF, I40E_PF_PCI_CIAD_DATA_SHIFT)
-#define I40E_PFPCI_CLASS 0x000BE400 /* Reset: PCIR */
-#define I40E_PFPCI_CLASS_STORAGE_CLASS_SHIFT 0
-#define I40E_PFPCI_CLASS_STORAGE_CLASS_MASK I40E_MASK(0x1, I40E_PFPCI_CLASS_STORAGE_CLASS_SHIFT)
-#define I40E_PFPCI_CLASS_RESERVED_1_SHIFT 1
-#define I40E_PFPCI_CLASS_RESERVED_1_MASK I40E_MASK(0x1, I40E_PFPCI_CLASS_RESERVED_1_SHIFT)
-#define I40E_PFPCI_CLASS_PF_IS_LAN_SHIFT 2
-#define I40E_PFPCI_CLASS_PF_IS_LAN_MASK I40E_MASK(0x1, I40E_PFPCI_CLASS_PF_IS_LAN_SHIFT)
-#define I40E_PFPCI_CNF 0x000BE000 /* Reset: PCIR */
-#define I40E_PFPCI_CNF_MSI_EN_SHIFT 2
-#define I40E_PFPCI_CNF_MSI_EN_MASK I40E_MASK(0x1, I40E_PFPCI_CNF_MSI_EN_SHIFT)
-#define I40E_PFPCI_CNF_EXROM_DIS_SHIFT 3
-#define I40E_PFPCI_CNF_EXROM_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_CNF_EXROM_DIS_SHIFT)
-#define I40E_PFPCI_CNF_IO_BAR_SHIFT 4
-#define I40E_PFPCI_CNF_IO_BAR_MASK I40E_MASK(0x1, I40E_PFPCI_CNF_IO_BAR_SHIFT)
-#define I40E_PFPCI_CNF_INT_PIN_SHIFT 5
-#define I40E_PFPCI_CNF_INT_PIN_MASK I40E_MASK(0x3, I40E_PFPCI_CNF_INT_PIN_SHIFT)
-#define I40E_PFPCI_DEVID 0x000BE080 /* Reset: PCIR */
-#define I40E_PFPCI_DEVID_PF_DEV_ID_SHIFT 0
-#define I40E_PFPCI_DEVID_PF_DEV_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_DEVID_PF_DEV_ID_SHIFT)
-#define I40E_PFPCI_DEVID_VF_DEV_ID_SHIFT 16
-#define I40E_PFPCI_DEVID_VF_DEV_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_DEVID_VF_DEV_ID_SHIFT)
-#define I40E_PFPCI_FACTPS 0x0009C180 /* Reset: FLR */
-#define I40E_PFPCI_FACTPS_FUNC_POWER_STATE_SHIFT 0
-#define I40E_PFPCI_FACTPS_FUNC_POWER_STATE_MASK I40E_MASK(0x3, I40E_PFPCI_FACTPS_FUNC_POWER_STATE_SHIFT)
-#define I40E_PFPCI_FACTPS_FUNC_AUX_EN_SHIFT 3
-#define I40E_PFPCI_FACTPS_FUNC_AUX_EN_MASK I40E_MASK(0x1, I40E_PFPCI_FACTPS_FUNC_AUX_EN_SHIFT)
-#define I40E_PFPCI_FUNC 0x000BE200 /* Reset: POR */
-#define I40E_PFPCI_FUNC_FUNC_DIS_SHIFT 0
-#define I40E_PFPCI_FUNC_FUNC_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC_FUNC_DIS_SHIFT)
-#define I40E_PFPCI_FUNC_ALLOW_FUNC_DIS_SHIFT 1
-#define I40E_PFPCI_FUNC_ALLOW_FUNC_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC_ALLOW_FUNC_DIS_SHIFT)
-#define I40E_PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_SHIFT 2
-#define I40E_PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC_DIS_FUNC_ON_PORT_DIS_SHIFT)
-#define I40E_PFPCI_FUNC2 0x000BE180 /* Reset: PCIR */
-#define I40E_PFPCI_FUNC2_EMP_FUNC_DIS_SHIFT 0
-#define I40E_PFPCI_FUNC2_EMP_FUNC_DIS_MASK I40E_MASK(0x1, I40E_PFPCI_FUNC2_EMP_FUNC_DIS_SHIFT)
-#define I40E_PFPCI_ICAUSE 0x0009C200 /* Reset: PFR */
-#define I40E_PFPCI_ICAUSE_PCIE_ERR_CAUSE_SHIFT 0
-#define I40E_PFPCI_ICAUSE_PCIE_ERR_CAUSE_MASK I40E_MASK(0xFFFFFFFF, I40E_PFPCI_ICAUSE_PCIE_ERR_CAUSE_SHIFT)
-#define I40E_PFPCI_IENA 0x0009C280 /* Reset: PFR */
-#define I40E_PFPCI_IENA_PCIE_ERR_EN_SHIFT 0
-#define I40E_PFPCI_IENA_PCIE_ERR_EN_MASK I40E_MASK(0xFFFFFFFF, I40E_PFPCI_IENA_PCIE_ERR_EN_SHIFT)
-#define I40E_PFPCI_PF_FLUSH_DONE 0x0009C800 /* Reset: PCIR */
-#define I40E_PFPCI_PF_FLUSH_DONE_FLUSH_DONE_SHIFT 0
-#define I40E_PFPCI_PF_FLUSH_DONE_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_PF_FLUSH_DONE_FLUSH_DONE_SHIFT)
-#define I40E_PFPCI_PM 0x000BE300 /* Reset: POR */
-#define I40E_PFPCI_PM_PME_EN_SHIFT 0
-#define I40E_PFPCI_PM_PME_EN_MASK I40E_MASK(0x1, I40E_PFPCI_PM_PME_EN_SHIFT)
-#define I40E_PFPCI_STATUS1 0x000BE280 /* Reset: POR */
-#define I40E_PFPCI_STATUS1_FUNC_VALID_SHIFT 0
-#define I40E_PFPCI_STATUS1_FUNC_VALID_MASK I40E_MASK(0x1, I40E_PFPCI_STATUS1_FUNC_VALID_SHIFT)
-#define I40E_PFPCI_SUBSYSID 0x000BE100 /* Reset: PCIR */
-#define I40E_PFPCI_SUBSYSID_PF_SUBSYS_ID_SHIFT 0
-#define I40E_PFPCI_SUBSYSID_PF_SUBSYS_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_SUBSYSID_PF_SUBSYS_ID_SHIFT)
-#define I40E_PFPCI_SUBSYSID_VF_SUBSYS_ID_SHIFT 16
-#define I40E_PFPCI_SUBSYSID_VF_SUBSYS_ID_MASK I40E_MASK(0xFFFF, I40E_PFPCI_SUBSYSID_VF_SUBSYS_ID_SHIFT)
-#define I40E_PFPCI_VF_FLUSH_DONE 0x0000E400 /* Reset: PCIR */
-#define I40E_PFPCI_VF_FLUSH_DONE_FLUSH_DONE_SHIFT 0
-#define I40E_PFPCI_VF_FLUSH_DONE_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_VF_FLUSH_DONE_FLUSH_DONE_SHIFT)
-#define I40E_PFPCI_VF_FLUSH_DONE1(_VF) (0x0009C600 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: PCIR */
-#define I40E_PFPCI_VF_FLUSH_DONE1_MAX_INDEX 127
-#define I40E_PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_SHIFT 0
-#define I40E_PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_VF_FLUSH_DONE1_FLUSH_DONE_SHIFT)
-#define I40E_PFPCI_VM_FLUSH_DONE 0x0009C880 /* Reset: PCIR */
-#define I40E_PFPCI_VM_FLUSH_DONE_FLUSH_DONE_SHIFT 0
-#define I40E_PFPCI_VM_FLUSH_DONE_FLUSH_DONE_MASK I40E_MASK(0x1, I40E_PFPCI_VM_FLUSH_DONE_FLUSH_DONE_SHIFT)
-#define I40E_PFPCI_VMINDEX 0x0009C300 /* Reset: PCIR */
-#define I40E_PFPCI_VMINDEX_VMINDEX_SHIFT 0
-#define I40E_PFPCI_VMINDEX_VMINDEX_MASK I40E_MASK(0x1FF, I40E_PFPCI_VMINDEX_VMINDEX_SHIFT)
-#define I40E_PFPCI_VMPEND 0x0009C380 /* Reset: PCIR */
-#define I40E_PFPCI_VMPEND_PENDING_SHIFT 0
-#define I40E_PFPCI_VMPEND_PENDING_MASK I40E_MASK(0x1, I40E_PFPCI_VMPEND_PENDING_SHIFT)
-#define I40E_PRTPM_EEE_STAT 0x001E4320 /* Reset: GLOBR */
-#define I40E_PRTPM_EEE_STAT_EEE_NEG_SHIFT 29
-#define I40E_PRTPM_EEE_STAT_EEE_NEG_MASK I40E_MASK(0x1, I40E_PRTPM_EEE_STAT_EEE_NEG_SHIFT)
-#define I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT 30
-#define I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_MASK I40E_MASK(0x1, I40E_PRTPM_EEE_STAT_RX_LPI_STATUS_SHIFT)
-#define I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT 31
-#define I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_MASK I40E_MASK(0x1, I40E_PRTPM_EEE_STAT_TX_LPI_STATUS_SHIFT)
-#define I40E_PRTPM_EEEC 0x001E4380 /* Reset: GLOBR */
-#define I40E_PRTPM_EEEC_TW_WAKE_MIN_SHIFT 16
-#define I40E_PRTPM_EEEC_TW_WAKE_MIN_MASK I40E_MASK(0x3F, I40E_PRTPM_EEEC_TW_WAKE_MIN_SHIFT)
-#define I40E_PRTPM_EEEC_TX_LU_LPI_DLY_SHIFT 24
-#define I40E_PRTPM_EEEC_TX_LU_LPI_DLY_MASK I40E_MASK(0x3, I40E_PRTPM_EEEC_TX_LU_LPI_DLY_SHIFT)
-#define I40E_PRTPM_EEEC_TEEE_DLY_SHIFT 26
-#define I40E_PRTPM_EEEC_TEEE_DLY_MASK I40E_MASK(0x3F, I40E_PRTPM_EEEC_TEEE_DLY_SHIFT)
-#define I40E_PRTPM_EEEFWD 0x001E4400 /* Reset: GLOBR */
-#define I40E_PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_SHIFT 31
-#define I40E_PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_MASK I40E_MASK(0x1, I40E_PRTPM_EEEFWD_EEE_FW_CONFIG_DONE_SHIFT)
-#define I40E_PRTPM_EEER 0x001E4360 /* Reset: GLOBR */
-#define I40E_PRTPM_EEER_TW_SYSTEM_SHIFT 0
-#define I40E_PRTPM_EEER_TW_SYSTEM_MASK I40E_MASK(0xFFFF, I40E_PRTPM_EEER_TW_SYSTEM_SHIFT)
-#define I40E_PRTPM_EEER_TX_LPI_EN_SHIFT 16
-#define I40E_PRTPM_EEER_TX_LPI_EN_MASK I40E_MASK(0x1, I40E_PRTPM_EEER_TX_LPI_EN_SHIFT)
-#define I40E_PRTPM_EEETXC 0x001E43E0 /* Reset: GLOBR */
-#define I40E_PRTPM_EEETXC_TW_PHY_SHIFT 0
-#define I40E_PRTPM_EEETXC_TW_PHY_MASK I40E_MASK(0xFFFF, I40E_PRTPM_EEETXC_TW_PHY_SHIFT)
-#define I40E_PRTPM_GC 0x000B8140 /* Reset: POR */
-#define I40E_PRTPM_GC_EMP_LINK_ON_SHIFT 0
-#define I40E_PRTPM_GC_EMP_LINK_ON_MASK I40E_MASK(0x1, I40E_PRTPM_GC_EMP_LINK_ON_SHIFT)
-#define I40E_PRTPM_GC_MNG_VETO_SHIFT 1
-#define I40E_PRTPM_GC_MNG_VETO_MASK I40E_MASK(0x1, I40E_PRTPM_GC_MNG_VETO_SHIFT)
-#define I40E_PRTPM_GC_RATD_SHIFT 2
-#define I40E_PRTPM_GC_RATD_MASK I40E_MASK(0x1, I40E_PRTPM_GC_RATD_SHIFT)
-#define I40E_PRTPM_GC_LCDMP_SHIFT 3
-#define I40E_PRTPM_GC_LCDMP_MASK I40E_MASK(0x1, I40E_PRTPM_GC_LCDMP_SHIFT)
-#define I40E_PRTPM_GC_LPLU_ASSERTED_SHIFT 31
-#define I40E_PRTPM_GC_LPLU_ASSERTED_MASK I40E_MASK(0x1, I40E_PRTPM_GC_LPLU_ASSERTED_SHIFT)
-#define I40E_PRTPM_RLPIC 0x001E43A0 /* Reset: GLOBR */
-#define I40E_PRTPM_RLPIC_ERLPIC_SHIFT 0
-#define I40E_PRTPM_RLPIC_ERLPIC_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTPM_RLPIC_ERLPIC_SHIFT)
-#define I40E_PRTPM_TLPIC 0x001E43C0 /* Reset: GLOBR */
-#define I40E_PRTPM_TLPIC_ETLPIC_SHIFT 0
-#define I40E_PRTPM_TLPIC_ETLPIC_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTPM_TLPIC_ETLPIC_SHIFT)
-#define I40E_GLRPB_DPSS 0x000AC828 /* Reset: CORER */
-#define I40E_GLRPB_DPSS_DPS_TCN_SHIFT 0
-#define I40E_GLRPB_DPSS_DPS_TCN_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_DPSS_DPS_TCN_SHIFT)
-#define I40E_GLRPB_GHW 0x000AC830 /* Reset: CORER */
-#define I40E_GLRPB_GHW_GHW_SHIFT 0
-#define I40E_GLRPB_GHW_GHW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_GHW_GHW_SHIFT)
-#define I40E_GLRPB_GLW 0x000AC834 /* Reset: CORER */
-#define I40E_GLRPB_GLW_GLW_SHIFT 0
-#define I40E_GLRPB_GLW_GLW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_GLW_GLW_SHIFT)
-#define I40E_GLRPB_PHW 0x000AC844 /* Reset: CORER */
-#define I40E_GLRPB_PHW_PHW_SHIFT 0
-#define I40E_GLRPB_PHW_PHW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_PHW_PHW_SHIFT)
-#define I40E_GLRPB_PLW 0x000AC848 /* Reset: CORER */
-#define I40E_GLRPB_PLW_PLW_SHIFT 0
-#define I40E_GLRPB_PLW_PLW_MASK I40E_MASK(0xFFFFF, I40E_GLRPB_PLW_PLW_SHIFT)
-#define I40E_PRTRPB_DHW(_i) (0x000AC100 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */
-#define I40E_PRTRPB_DHW_MAX_INDEX 7
-#define I40E_PRTRPB_DHW_DHW_TCN_SHIFT 0
-#define I40E_PRTRPB_DHW_DHW_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DHW_DHW_TCN_SHIFT)
-#define I40E_PRTRPB_DLW(_i) (0x000AC220 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */
-#define I40E_PRTRPB_DLW_MAX_INDEX 7
-#define I40E_PRTRPB_DLW_DLW_TCN_SHIFT 0
-#define I40E_PRTRPB_DLW_DLW_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DLW_DLW_TCN_SHIFT)
-#define I40E_PRTRPB_DPS(_i) (0x000AC320 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */
-#define I40E_PRTRPB_DPS_MAX_INDEX 7
-#define I40E_PRTRPB_DPS_DPS_TCN_SHIFT 0
-#define I40E_PRTRPB_DPS_DPS_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_DPS_DPS_TCN_SHIFT)
-#define I40E_PRTRPB_SHT(_i) (0x000AC480 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */
-#define I40E_PRTRPB_SHT_MAX_INDEX 7
-#define I40E_PRTRPB_SHT_SHT_TCN_SHIFT 0
-#define I40E_PRTRPB_SHT_SHT_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SHT_SHT_TCN_SHIFT)
-#define I40E_PRTRPB_SHW 0x000AC580 /* Reset: CORER */
-#define I40E_PRTRPB_SHW_SHW_SHIFT 0
-#define I40E_PRTRPB_SHW_SHW_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SHW_SHW_SHIFT)
-#define I40E_PRTRPB_SLT(_i) (0x000AC5A0 + ((_i) * 32)) /* _i=0...7 */ /* Reset: CORER */
-#define I40E_PRTRPB_SLT_MAX_INDEX 7
-#define I40E_PRTRPB_SLT_SLT_TCN_SHIFT 0
-#define I40E_PRTRPB_SLT_SLT_TCN_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SLT_SLT_TCN_SHIFT)
-#define I40E_PRTRPB_SLW 0x000AC6A0 /* Reset: CORER */
-#define I40E_PRTRPB_SLW_SLW_SHIFT 0
-#define I40E_PRTRPB_SLW_SLW_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SLW_SLW_SHIFT)
-#define I40E_PRTRPB_SPS 0x000AC7C0 /* Reset: CORER */
-#define I40E_PRTRPB_SPS_SPS_SHIFT 0
-#define I40E_PRTRPB_SPS_SPS_MASK I40E_MASK(0xFFFFF, I40E_PRTRPB_SPS_SPS_SHIFT)
-#define I40E_GLQF_CTL 0x00269BA4 /* Reset: CORER */
-#define I40E_GLQF_CTL_HTOEP_SHIFT 1
-#define I40E_GLQF_CTL_HTOEP_MASK I40E_MASK(0x1, I40E_GLQF_CTL_HTOEP_SHIFT)
-#define I40E_GLQF_CTL_HTOEP_FCOE_SHIFT 2
-#define I40E_GLQF_CTL_HTOEP_FCOE_MASK I40E_MASK(0x1, I40E_GLQF_CTL_HTOEP_FCOE_SHIFT)
-#define I40E_GLQF_CTL_PCNT_ALLOC_SHIFT 3
-#define I40E_GLQF_CTL_PCNT_ALLOC_MASK I40E_MASK(0x7, I40E_GLQF_CTL_PCNT_ALLOC_SHIFT)
-#define I40E_GLQF_CTL_FD_AUTO_PCTYPE_SHIFT 6
-#define I40E_GLQF_CTL_FD_AUTO_PCTYPE_MASK I40E_MASK(0x1, I40E_GLQF_CTL_FD_AUTO_PCTYPE_SHIFT)
-#define I40E_GLQF_CTL_RSVD_SHIFT 7
-#define I40E_GLQF_CTL_RSVD_MASK I40E_MASK(0x1, I40E_GLQF_CTL_RSVD_SHIFT)
-#define I40E_GLQF_CTL_MAXPEBLEN_SHIFT 8
-#define I40E_GLQF_CTL_MAXPEBLEN_MASK I40E_MASK(0x7, I40E_GLQF_CTL_MAXPEBLEN_SHIFT)
-#define I40E_GLQF_CTL_MAXFCBLEN_SHIFT 11
-#define I40E_GLQF_CTL_MAXFCBLEN_MASK I40E_MASK(0x7, I40E_GLQF_CTL_MAXFCBLEN_SHIFT)
-#define I40E_GLQF_CTL_MAXFDBLEN_SHIFT 14
-#define I40E_GLQF_CTL_MAXFDBLEN_MASK I40E_MASK(0x7, I40E_GLQF_CTL_MAXFDBLEN_SHIFT)
-#define I40E_GLQF_CTL_FDBEST_SHIFT 17
-#define I40E_GLQF_CTL_FDBEST_MASK I40E_MASK(0xFF, I40E_GLQF_CTL_FDBEST_SHIFT)
-#define I40E_GLQF_CTL_PROGPRIO_SHIFT 25
-#define I40E_GLQF_CTL_PROGPRIO_MASK I40E_MASK(0x1, I40E_GLQF_CTL_PROGPRIO_SHIFT)
-#define I40E_GLQF_CTL_INVALPRIO_SHIFT 26
-#define I40E_GLQF_CTL_INVALPRIO_MASK I40E_MASK(0x1, I40E_GLQF_CTL_INVALPRIO_SHIFT)
-#define I40E_GLQF_CTL_IGNORE_IP_SHIFT 27
-#define I40E_GLQF_CTL_IGNORE_IP_MASK I40E_MASK(0x1, I40E_GLQF_CTL_IGNORE_IP_SHIFT)
-#define I40E_GLQF_FDCNT_0 0x00269BAC /* Reset: CORER */
-#define I40E_GLQF_FDCNT_0_GUARANT_CNT_SHIFT 0
-#define I40E_GLQF_FDCNT_0_GUARANT_CNT_MASK I40E_MASK(0x1FFF, I40E_GLQF_FDCNT_0_GUARANT_CNT_SHIFT)
-#define I40E_GLQF_FDCNT_0_BESTCNT_SHIFT 13
-#define I40E_GLQF_FDCNT_0_BESTCNT_MASK I40E_MASK(0x1FFF, I40E_GLQF_FDCNT_0_BESTCNT_SHIFT)
-#define I40E_GLQF_HKEY(_i) (0x00270140 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */
-#define I40E_GLQF_HKEY_MAX_INDEX 12
-#define I40E_GLQF_HKEY_KEY_0_SHIFT 0
-#define I40E_GLQF_HKEY_KEY_0_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_0_SHIFT)
-#define I40E_GLQF_HKEY_KEY_1_SHIFT 8
-#define I40E_GLQF_HKEY_KEY_1_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_1_SHIFT)
-#define I40E_GLQF_HKEY_KEY_2_SHIFT 16
-#define I40E_GLQF_HKEY_KEY_2_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_2_SHIFT)
-#define I40E_GLQF_HKEY_KEY_3_SHIFT 24
-#define I40E_GLQF_HKEY_KEY_3_MASK I40E_MASK(0xFF, I40E_GLQF_HKEY_KEY_3_SHIFT)
-#define I40E_GLQF_HSYM(_i) (0x00269D00 + ((_i) * 4)) /* _i=0...63 */ /* Reset: CORER */
-#define I40E_GLQF_HSYM_MAX_INDEX 63
-#define I40E_GLQF_HSYM_SYMH_ENA_SHIFT 0
-#define I40E_GLQF_HSYM_SYMH_ENA_MASK I40E_MASK(0x1, I40E_GLQF_HSYM_SYMH_ENA_SHIFT)
-#define I40E_GLQF_PCNT(_i) (0x00266800 + ((_i) * 4)) /* _i=0...511 */ /* Reset: CORER */
-#define I40E_GLQF_PCNT_MAX_INDEX 511
-#define I40E_GLQF_PCNT_PCNT_SHIFT 0
-#define I40E_GLQF_PCNT_PCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLQF_PCNT_PCNT_SHIFT)
-#define I40E_GLQF_SWAP(_i, _j) (0x00267E00 + ((_i) * 4 + (_j) * 8)) /* _i=0...1, _j=0...63 */ /* Reset: CORER */
-#define I40E_GLQF_SWAP_MAX_INDEX 1
-#define I40E_GLQF_SWAP_OFF0_SRC0_SHIFT 0
-#define I40E_GLQF_SWAP_OFF0_SRC0_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF0_SRC0_SHIFT)
-#define I40E_GLQF_SWAP_OFF0_SRC1_SHIFT 6
-#define I40E_GLQF_SWAP_OFF0_SRC1_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF0_SRC1_SHIFT)
-#define I40E_GLQF_SWAP_FLEN0_SHIFT 12
-#define I40E_GLQF_SWAP_FLEN0_MASK I40E_MASK(0xF, I40E_GLQF_SWAP_FLEN0_SHIFT)
-#define I40E_GLQF_SWAP_OFF1_SRC0_SHIFT 16
-#define I40E_GLQF_SWAP_OFF1_SRC0_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF1_SRC0_SHIFT)
-#define I40E_GLQF_SWAP_OFF1_SRC1_SHIFT 22
-#define I40E_GLQF_SWAP_OFF1_SRC1_MASK I40E_MASK(0x3F, I40E_GLQF_SWAP_OFF1_SRC1_SHIFT)
-#define I40E_GLQF_SWAP_FLEN1_SHIFT 28
-#define I40E_GLQF_SWAP_FLEN1_MASK I40E_MASK(0xF, I40E_GLQF_SWAP_FLEN1_SHIFT)
-#define I40E_PFQF_CTL_0 0x001C0AC0 /* Reset: CORER */
-#define I40E_PFQF_CTL_0_PEHSIZE_SHIFT 0
-#define I40E_PFQF_CTL_0_PEHSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_PEHSIZE_SHIFT)
-#define I40E_PFQF_CTL_0_PEDSIZE_SHIFT 5
-#define I40E_PFQF_CTL_0_PEDSIZE_MASK I40E_MASK(0x1F, I40E_PFQF_CTL_0_PEDSIZE_SHIFT)
-#define I40E_PFQF_CTL_0_PFFCHSIZE_SHIFT 10
-#define I40E_PFQF_CTL_0_PFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_PFFCHSIZE_SHIFT)
-#define I40E_PFQF_CTL_0_PFFCDSIZE_SHIFT 14
-#define I40E_PFQF_CTL_0_PFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_PFFCDSIZE_SHIFT)
-#define I40E_PFQF_CTL_0_HASHLUTSIZE_SHIFT 16
-#define I40E_PFQF_CTL_0_HASHLUTSIZE_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_HASHLUTSIZE_SHIFT)
-#define I40E_PFQF_CTL_0_FD_ENA_SHIFT 17
-#define I40E_PFQF_CTL_0_FD_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_FD_ENA_SHIFT)
-#define I40E_PFQF_CTL_0_ETYPE_ENA_SHIFT 18
-#define I40E_PFQF_CTL_0_ETYPE_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_ETYPE_ENA_SHIFT)
-#define I40E_PFQF_CTL_0_MACVLAN_ENA_SHIFT 19
-#define I40E_PFQF_CTL_0_MACVLAN_ENA_MASK I40E_MASK(0x1, I40E_PFQF_CTL_0_MACVLAN_ENA_SHIFT)
-#define I40E_PFQF_CTL_0_VFFCHSIZE_SHIFT 20
-#define I40E_PFQF_CTL_0_VFFCHSIZE_MASK I40E_MASK(0xF, I40E_PFQF_CTL_0_VFFCHSIZE_SHIFT)
-#define I40E_PFQF_CTL_0_VFFCDSIZE_SHIFT 24
-#define I40E_PFQF_CTL_0_VFFCDSIZE_MASK I40E_MASK(0x3, I40E_PFQF_CTL_0_VFFCDSIZE_SHIFT)
-#define I40E_PFQF_CTL_1 0x00245D80 /* Reset: CORER */
-#define I40E_PFQF_CTL_1_CLEARFDTABLE_SHIFT 0
-#define I40E_PFQF_CTL_1_CLEARFDTABLE_MASK I40E_MASK(0x1, I40E_PFQF_CTL_1_CLEARFDTABLE_SHIFT)
-#define I40E_PFQF_FDALLOC 0x00246280 /* Reset: CORER */
-#define I40E_PFQF_FDALLOC_FDALLOC_SHIFT 0
-#define I40E_PFQF_FDALLOC_FDALLOC_MASK I40E_MASK(0xFF, I40E_PFQF_FDALLOC_FDALLOC_SHIFT)
-#define I40E_PFQF_FDALLOC_FDBEST_SHIFT 8
-#define I40E_PFQF_FDALLOC_FDBEST_MASK I40E_MASK(0xFF, I40E_PFQF_FDALLOC_FDBEST_SHIFT)
-#define I40E_PFQF_FDSTAT 0x00246380 /* Reset: CORER */
-#define I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT 0
-#define I40E_PFQF_FDSTAT_GUARANT_CNT_MASK I40E_MASK(0x1FFF, I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT)
-#define I40E_PFQF_FDSTAT_BEST_CNT_SHIFT 16
-#define I40E_PFQF_FDSTAT_BEST_CNT_MASK I40E_MASK(0x1FFF, I40E_PFQF_FDSTAT_BEST_CNT_SHIFT)
-#define I40E_PFQF_HENA(_i) (0x00245900 + ((_i) * 128)) /* _i=0...1 */ /* Reset: CORER */
-#define I40E_PFQF_HENA_MAX_INDEX 1
-#define I40E_PFQF_HENA_PTYPE_ENA_SHIFT 0
-#define I40E_PFQF_HENA_PTYPE_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_PFQF_HENA_PTYPE_ENA_SHIFT)
-#define I40E_PFQF_HKEY(_i) (0x00244800 + ((_i) * 128)) /* _i=0...12 */ /* Reset: CORER */
-#define I40E_PFQF_HKEY_MAX_INDEX 12
-#define I40E_PFQF_HKEY_KEY_0_SHIFT 0
-#define I40E_PFQF_HKEY_KEY_0_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_0_SHIFT)
-#define I40E_PFQF_HKEY_KEY_1_SHIFT 8
-#define I40E_PFQF_HKEY_KEY_1_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_1_SHIFT)
-#define I40E_PFQF_HKEY_KEY_2_SHIFT 16
-#define I40E_PFQF_HKEY_KEY_2_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_2_SHIFT)
-#define I40E_PFQF_HKEY_KEY_3_SHIFT 24
-#define I40E_PFQF_HKEY_KEY_3_MASK I40E_MASK(0xFF, I40E_PFQF_HKEY_KEY_3_SHIFT)
-#define I40E_PFQF_HLUT(_i) (0x00240000 + ((_i) * 128)) /* _i=0...127 */ /* Reset: CORER */
-#define I40E_PFQF_HLUT_MAX_INDEX 127
-#define I40E_PFQF_HLUT_LUT0_SHIFT 0
-#define I40E_PFQF_HLUT_LUT0_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT0_SHIFT)
-#define I40E_PFQF_HLUT_LUT1_SHIFT 8
-#define I40E_PFQF_HLUT_LUT1_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT1_SHIFT)
-#define I40E_PFQF_HLUT_LUT2_SHIFT 16
-#define I40E_PFQF_HLUT_LUT2_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT2_SHIFT)
-#define I40E_PFQF_HLUT_LUT3_SHIFT 24
-#define I40E_PFQF_HLUT_LUT3_MASK I40E_MASK(0x3F, I40E_PFQF_HLUT_LUT3_SHIFT)
-#define I40E_PRTQF_CTL_0 0x00256E60 /* Reset: CORER */
-#define I40E_PRTQF_CTL_0_HSYM_ENA_SHIFT 0
-#define I40E_PRTQF_CTL_0_HSYM_ENA_MASK I40E_MASK(0x1, I40E_PRTQF_CTL_0_HSYM_ENA_SHIFT)
-#define I40E_PRTQF_FD_FLXINSET(_i) (0x00253800 + ((_i) * 32)) /* _i=0...63 */ /* Reset: CORER */
-#define I40E_PRTQF_FD_FLXINSET_MAX_INDEX 63
-#define I40E_PRTQF_FD_FLXINSET_INSET_SHIFT 0
-#define I40E_PRTQF_FD_FLXINSET_INSET_MASK I40E_MASK(0xFF, I40E_PRTQF_FD_FLXINSET_INSET_SHIFT)
-#define I40E_PRTQF_FD_MSK(_i, _j) (0x00252000 + ((_i) * 64 + (_j) * 32)) /* _i=0...63, _j=0...1 */ /* Reset: CORER */
-#define I40E_PRTQF_FD_MSK_MAX_INDEX 63
-#define I40E_PRTQF_FD_MSK_MASK_SHIFT 0
-#define I40E_PRTQF_FD_MSK_MASK_MASK I40E_MASK(0xFFFF, I40E_PRTQF_FD_MSK_MASK_SHIFT)
-#define I40E_PRTQF_FD_MSK_OFFSET_SHIFT 16
-#define I40E_PRTQF_FD_MSK_OFFSET_MASK I40E_MASK(0x3F, I40E_PRTQF_FD_MSK_OFFSET_SHIFT)
-#define I40E_PRTQF_FLX_PIT(_i) (0x00255200 + ((_i) * 32)) /* _i=0...8 */ /* Reset: CORER */
-#define I40E_PRTQF_FLX_PIT_MAX_INDEX 8
-#define I40E_PRTQF_FLX_PIT_SOURCE_OFF_SHIFT 0
-#define I40E_PRTQF_FLX_PIT_SOURCE_OFF_MASK I40E_MASK(0x1F, I40E_PRTQF_FLX_PIT_SOURCE_OFF_SHIFT)
-#define I40E_PRTQF_FLX_PIT_FSIZE_SHIFT 5
-#define I40E_PRTQF_FLX_PIT_FSIZE_MASK I40E_MASK(0x1F, I40E_PRTQF_FLX_PIT_FSIZE_SHIFT)
-#define I40E_PRTQF_FLX_PIT_DEST_OFF_SHIFT 10
-#define I40E_PRTQF_FLX_PIT_DEST_OFF_MASK I40E_MASK(0x3F, I40E_PRTQF_FLX_PIT_DEST_OFF_SHIFT)
-#define I40E_VFQF_HENA1(_i, _VF) (0x00230800 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...1, _VF=0...127 */ /* Reset: CORER */
-#define I40E_VFQF_HENA1_MAX_INDEX 1
-#define I40E_VFQF_HENA1_PTYPE_ENA_SHIFT 0
-#define I40E_VFQF_HENA1_PTYPE_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_VFQF_HENA1_PTYPE_ENA_SHIFT)
-#define I40E_VFQF_HKEY1(_i, _VF) (0x00228000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...12, _VF=0...127 */ /* Reset: CORER */
-#define I40E_VFQF_HKEY1_MAX_INDEX 12
-#define I40E_VFQF_HKEY1_KEY_0_SHIFT 0
-#define I40E_VFQF_HKEY1_KEY_0_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_0_SHIFT)
-#define I40E_VFQF_HKEY1_KEY_1_SHIFT 8
-#define I40E_VFQF_HKEY1_KEY_1_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_1_SHIFT)
-#define I40E_VFQF_HKEY1_KEY_2_SHIFT 16
-#define I40E_VFQF_HKEY1_KEY_2_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_2_SHIFT)
-#define I40E_VFQF_HKEY1_KEY_3_SHIFT 24
-#define I40E_VFQF_HKEY1_KEY_3_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY1_KEY_3_SHIFT)
-#define I40E_VFQF_HLUT1(_i, _VF) (0x00220000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: CORER */
-#define I40E_VFQF_HLUT1_MAX_INDEX 15
-#define I40E_VFQF_HLUT1_LUT0_SHIFT 0
-#define I40E_VFQF_HLUT1_LUT0_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT0_SHIFT)
-#define I40E_VFQF_HLUT1_LUT1_SHIFT 8
-#define I40E_VFQF_HLUT1_LUT1_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT1_SHIFT)
-#define I40E_VFQF_HLUT1_LUT2_SHIFT 16
-#define I40E_VFQF_HLUT1_LUT2_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT2_SHIFT)
-#define I40E_VFQF_HLUT1_LUT3_SHIFT 24
-#define I40E_VFQF_HLUT1_LUT3_MASK I40E_MASK(0xF, I40E_VFQF_HLUT1_LUT3_SHIFT)
-#define I40E_VFQF_HREGION1(_i, _VF) (0x0022E000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...7, _VF=0...127 */ /* Reset: CORER */
-#define I40E_VFQF_HREGION1_MAX_INDEX 7
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_0_SHIFT 0
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_0_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_0_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_0_SHIFT 1
-#define I40E_VFQF_HREGION1_REGION_0_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_0_SHIFT)
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_1_SHIFT 4
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_1_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_1_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_1_SHIFT 5
-#define I40E_VFQF_HREGION1_REGION_1_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_1_SHIFT)
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_2_SHIFT 8
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_2_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_2_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_2_SHIFT 9
-#define I40E_VFQF_HREGION1_REGION_2_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_2_SHIFT)
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_3_SHIFT 12
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_3_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_3_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_3_SHIFT 13
-#define I40E_VFQF_HREGION1_REGION_3_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_3_SHIFT)
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_4_SHIFT 16
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_4_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_4_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_4_SHIFT 17
-#define I40E_VFQF_HREGION1_REGION_4_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_4_SHIFT)
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_5_SHIFT 20
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_5_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_5_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_5_SHIFT 21
-#define I40E_VFQF_HREGION1_REGION_5_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_5_SHIFT)
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_6_SHIFT 24
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_6_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_6_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_6_SHIFT 25
-#define I40E_VFQF_HREGION1_REGION_6_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_6_SHIFT)
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_7_SHIFT 28
-#define I40E_VFQF_HREGION1_OVERRIDE_ENA_7_MASK I40E_MASK(0x1, I40E_VFQF_HREGION1_OVERRIDE_ENA_7_SHIFT)
-#define I40E_VFQF_HREGION1_REGION_7_SHIFT 29
-#define I40E_VFQF_HREGION1_REGION_7_MASK I40E_MASK(0x7, I40E_VFQF_HREGION1_REGION_7_SHIFT)
-#define I40E_VPQF_CTL(_VF) (0x001C0000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: VFR */
-#define I40E_VPQF_CTL_MAX_INDEX 127
-#define I40E_VPQF_CTL_PEHSIZE_SHIFT 0
-#define I40E_VPQF_CTL_PEHSIZE_MASK I40E_MASK(0x1F, I40E_VPQF_CTL_PEHSIZE_SHIFT)
-#define I40E_VPQF_CTL_PEDSIZE_SHIFT 5
-#define I40E_VPQF_CTL_PEDSIZE_MASK I40E_MASK(0x1F, I40E_VPQF_CTL_PEDSIZE_SHIFT)
-#define I40E_VPQF_CTL_FCHSIZE_SHIFT 10
-#define I40E_VPQF_CTL_FCHSIZE_MASK I40E_MASK(0xF, I40E_VPQF_CTL_FCHSIZE_SHIFT)
-#define I40E_VPQF_CTL_FCDSIZE_SHIFT 14
-#define I40E_VPQF_CTL_FCDSIZE_MASK I40E_MASK(0x3, I40E_VPQF_CTL_FCDSIZE_SHIFT)
-#define I40E_VSIQF_CTL(_VSI) (0x0020D800 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: PFR */
-#define I40E_VSIQF_CTL_MAX_INDEX 383
-#define I40E_VSIQF_CTL_FCOE_ENA_SHIFT 0
-#define I40E_VSIQF_CTL_FCOE_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_FCOE_ENA_SHIFT)
-#define I40E_VSIQF_CTL_PETCP_ENA_SHIFT 1
-#define I40E_VSIQF_CTL_PETCP_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PETCP_ENA_SHIFT)
-#define I40E_VSIQF_CTL_PEUUDP_ENA_SHIFT 2
-#define I40E_VSIQF_CTL_PEUUDP_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEUUDP_ENA_SHIFT)
-#define I40E_VSIQF_CTL_PEMUDP_ENA_SHIFT 3
-#define I40E_VSIQF_CTL_PEMUDP_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEMUDP_ENA_SHIFT)
-#define I40E_VSIQF_CTL_PEUFRAG_ENA_SHIFT 4
-#define I40E_VSIQF_CTL_PEUFRAG_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEUFRAG_ENA_SHIFT)
-#define I40E_VSIQF_CTL_PEMFRAG_ENA_SHIFT 5
-#define I40E_VSIQF_CTL_PEMFRAG_ENA_MASK I40E_MASK(0x1, I40E_VSIQF_CTL_PEMFRAG_ENA_SHIFT)
-#define I40E_VSIQF_TCREGION(_i, _VSI) (0x00206000 + ((_i) * 2048 + (_VSI) * 4)) /* _i=0...3, _VSI=0...383 */ /* Reset: PFR */
-#define I40E_VSIQF_TCREGION_MAX_INDEX 3
-#define I40E_VSIQF_TCREGION_TC_OFFSET_SHIFT 0
-#define I40E_VSIQF_TCREGION_TC_OFFSET_MASK I40E_MASK(0x1FF, I40E_VSIQF_TCREGION_TC_OFFSET_SHIFT)
-#define I40E_VSIQF_TCREGION_TC_SIZE_SHIFT 9
-#define I40E_VSIQF_TCREGION_TC_SIZE_MASK I40E_MASK(0x7, I40E_VSIQF_TCREGION_TC_SIZE_SHIFT)
-#define I40E_VSIQF_TCREGION_TC_OFFSET2_SHIFT 16
-#define I40E_VSIQF_TCREGION_TC_OFFSET2_MASK I40E_MASK(0x1FF, I40E_VSIQF_TCREGION_TC_OFFSET2_SHIFT)
-#define I40E_VSIQF_TCREGION_TC_SIZE2_SHIFT 25
-#define I40E_VSIQF_TCREGION_TC_SIZE2_MASK I40E_MASK(0x7, I40E_VSIQF_TCREGION_TC_SIZE2_SHIFT)
-#define I40E_GL_FCOECRC(_i) (0x00314d80 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOECRC_MAX_INDEX 143
-#define I40E_GL_FCOECRC_FCOECRC_SHIFT 0
-#define I40E_GL_FCOECRC_FCOECRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOECRC_FCOECRC_SHIFT)
-#define I40E_GL_FCOEDDPC(_i) (0x00314480 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDDPC_MAX_INDEX 143
-#define I40E_GL_FCOEDDPC_FCOEDDPC_SHIFT 0
-#define I40E_GL_FCOEDDPC_FCOEDDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDDPC_FCOEDDPC_SHIFT)
-#define I40E_GL_FCOEDIFEC(_i) (0x00318480 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDIFEC_MAX_INDEX 143
-#define I40E_GL_FCOEDIFEC_FCOEDIFRC_SHIFT 0
-#define I40E_GL_FCOEDIFEC_FCOEDIFRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIFEC_FCOEDIFRC_SHIFT)
-#define I40E_GL_FCOEDIFTCL(_i) (0x00354000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDIFTCL_MAX_INDEX 143
-#define I40E_GL_FCOEDIFTCL_FCOEDIFTC_SHIFT 0
-#define I40E_GL_FCOEDIFTCL_FCOEDIFTC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIFTCL_FCOEDIFTC_SHIFT)
-#define I40E_GL_FCOEDIXEC(_i) (0x0034c000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDIXEC_MAX_INDEX 143
-#define I40E_GL_FCOEDIXEC_FCOEDIXEC_SHIFT 0
-#define I40E_GL_FCOEDIXEC_FCOEDIXEC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIXEC_FCOEDIXEC_SHIFT)
-#define I40E_GL_FCOEDIXVC(_i) (0x00350000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDIXVC_MAX_INDEX 143
-#define I40E_GL_FCOEDIXVC_FCOEDIXVC_SHIFT 0
-#define I40E_GL_FCOEDIXVC_FCOEDIXVC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDIXVC_FCOEDIXVC_SHIFT)
-#define I40E_GL_FCOEDWRCH(_i) (0x00320004 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDWRCH_MAX_INDEX 143
-#define I40E_GL_FCOEDWRCH_FCOEDWRCH_SHIFT 0
-#define I40E_GL_FCOEDWRCH_FCOEDWRCH_MASK I40E_MASK(0xFFFF, I40E_GL_FCOEDWRCH_FCOEDWRCH_SHIFT)
-#define I40E_GL_FCOEDWRCL(_i) (0x00320000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDWRCL_MAX_INDEX 143
-#define I40E_GL_FCOEDWRCL_FCOEDWRCL_SHIFT 0
-#define I40E_GL_FCOEDWRCL_FCOEDWRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDWRCL_FCOEDWRCL_SHIFT)
-#define I40E_GL_FCOEDWTCH(_i) (0x00348084 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDWTCH_MAX_INDEX 143
-#define I40E_GL_FCOEDWTCH_FCOEDWTCH_SHIFT 0
-#define I40E_GL_FCOEDWTCH_FCOEDWTCH_MASK I40E_MASK(0xFFFF, I40E_GL_FCOEDWTCH_FCOEDWTCH_SHIFT)
-#define I40E_GL_FCOEDWTCL(_i) (0x00348080 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEDWTCL_MAX_INDEX 143
-#define I40E_GL_FCOEDWTCL_FCOEDWTCL_SHIFT 0
-#define I40E_GL_FCOEDWTCL_FCOEDWTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEDWTCL_FCOEDWTCL_SHIFT)
-#define I40E_GL_FCOELAST(_i) (0x00314000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOELAST_MAX_INDEX 143
-#define I40E_GL_FCOELAST_FCOELAST_SHIFT 0
-#define I40E_GL_FCOELAST_FCOELAST_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOELAST_FCOELAST_SHIFT)
-#define I40E_GL_FCOEPRC(_i) (0x00315200 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEPRC_MAX_INDEX 143
-#define I40E_GL_FCOEPRC_FCOEPRC_SHIFT 0
-#define I40E_GL_FCOEPRC_FCOEPRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEPRC_FCOEPRC_SHIFT)
-#define I40E_GL_FCOEPTC(_i) (0x00344C00 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOEPTC_MAX_INDEX 143
-#define I40E_GL_FCOEPTC_FCOEPTC_SHIFT 0
-#define I40E_GL_FCOEPTC_FCOEPTC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOEPTC_FCOEPTC_SHIFT)
-#define I40E_GL_FCOERPDC(_i) (0x00324000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_FCOERPDC_MAX_INDEX 143
-#define I40E_GL_FCOERPDC_FCOERPDC_SHIFT 0
-#define I40E_GL_FCOERPDC_FCOERPDC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_FCOERPDC_FCOERPDC_SHIFT)
-#define I40E_GL_RXERR1_L(_i) (0x00318000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_RXERR1_L_MAX_INDEX 143
-#define I40E_GL_RXERR1_L_FCOEDIFRC_SHIFT 0
-#define I40E_GL_RXERR1_L_FCOEDIFRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1_L_FCOEDIFRC_SHIFT)
-#define I40E_GL_RXERR2_L(_i) (0x0031c000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */
-#define I40E_GL_RXERR2_L_MAX_INDEX 143
-#define I40E_GL_RXERR2_L_FCOEDIXAC_SHIFT 0
-#define I40E_GL_RXERR2_L_FCOEDIXAC_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR2_L_FCOEDIXAC_SHIFT)
-#define I40E_GLPRT_BPRCH(_i) (0x003005E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_BPRCH_MAX_INDEX 3
-#define I40E_GLPRT_BPRCH_UPRCH_SHIFT 0
-#define I40E_GLPRT_BPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_BPRCH_UPRCH_SHIFT)
-#define I40E_GLPRT_BPRCL(_i) (0x003005E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_BPRCL_MAX_INDEX 3
-#define I40E_GLPRT_BPRCL_UPRCH_SHIFT 0
-#define I40E_GLPRT_BPRCL_UPRCH_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_BPRCL_UPRCH_SHIFT)
-#define I40E_GLPRT_BPTCH(_i) (0x00300A04 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_BPTCH_MAX_INDEX 3
-#define I40E_GLPRT_BPTCH_UPRCH_SHIFT 0
-#define I40E_GLPRT_BPTCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_BPTCH_UPRCH_SHIFT)
-#define I40E_GLPRT_BPTCL(_i) (0x00300A00 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_BPTCL_MAX_INDEX 3
-#define I40E_GLPRT_BPTCL_UPRCH_SHIFT 0
-#define I40E_GLPRT_BPTCL_UPRCH_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_BPTCL_UPRCH_SHIFT)
-#define I40E_GLPRT_CRCERRS(_i) (0x00300080 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_CRCERRS_MAX_INDEX 3
-#define I40E_GLPRT_CRCERRS_CRCERRS_SHIFT 0
-#define I40E_GLPRT_CRCERRS_CRCERRS_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_CRCERRS_CRCERRS_SHIFT)
-#define I40E_GLPRT_GORCH(_i) (0x00300004 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_GORCH_MAX_INDEX 3
-#define I40E_GLPRT_GORCH_GORCH_SHIFT 0
-#define I40E_GLPRT_GORCH_GORCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_GORCH_GORCH_SHIFT)
-#define I40E_GLPRT_GORCL(_i) (0x00300000 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_GORCL_MAX_INDEX 3
-#define I40E_GLPRT_GORCL_GORCL_SHIFT 0
-#define I40E_GLPRT_GORCL_GORCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_GORCL_GORCL_SHIFT)
-#define I40E_GLPRT_GOTCH(_i) (0x00300684 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_GOTCH_MAX_INDEX 3
-#define I40E_GLPRT_GOTCH_GOTCH_SHIFT 0
-#define I40E_GLPRT_GOTCH_GOTCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_GOTCH_GOTCH_SHIFT)
-#define I40E_GLPRT_GOTCL(_i) (0x00300680 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_GOTCL_MAX_INDEX 3
-#define I40E_GLPRT_GOTCL_GOTCL_SHIFT 0
-#define I40E_GLPRT_GOTCL_GOTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_GOTCL_GOTCL_SHIFT)
-#define I40E_GLPRT_ILLERRC(_i) (0x003000E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_ILLERRC_MAX_INDEX 3
-#define I40E_GLPRT_ILLERRC_ILLERRC_SHIFT 0
-#define I40E_GLPRT_ILLERRC_ILLERRC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_ILLERRC_ILLERRC_SHIFT)
-#define I40E_GLPRT_LDPC(_i) (0x00300620 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_LDPC_MAX_INDEX 3
-#define I40E_GLPRT_LDPC_LDPC_SHIFT 0
-#define I40E_GLPRT_LDPC_LDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LDPC_LDPC_SHIFT)
-#define I40E_GLPRT_LXOFFRXC(_i) (0x00300160 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_LXOFFRXC_MAX_INDEX 3
-#define I40E_GLPRT_LXOFFRXC_LXOFFRXCNT_SHIFT 0
-#define I40E_GLPRT_LXOFFRXC_LXOFFRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXOFFRXC_LXOFFRXCNT_SHIFT)
-#define I40E_GLPRT_LXOFFTXC(_i) (0x003009A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_LXOFFTXC_MAX_INDEX 3
-#define I40E_GLPRT_LXOFFTXC_LXOFFTXC_SHIFT 0
-#define I40E_GLPRT_LXOFFTXC_LXOFFTXC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXOFFTXC_LXOFFTXC_SHIFT)
-#define I40E_GLPRT_LXONRXC(_i) (0x00300140 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_LXONRXC_MAX_INDEX 3
-#define I40E_GLPRT_LXONRXC_LXONRXCNT_SHIFT 0
-#define I40E_GLPRT_LXONRXC_LXONRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXONRXC_LXONRXCNT_SHIFT)
-#define I40E_GLPRT_LXONTXC(_i) (0x00300980 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_LXONTXC_MAX_INDEX 3
-#define I40E_GLPRT_LXONTXC_LXONTXC_SHIFT 0
-#define I40E_GLPRT_LXONTXC_LXONTXC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_LXONTXC_LXONTXC_SHIFT)
-#define I40E_GLPRT_MLFC(_i) (0x00300020 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_MLFC_MAX_INDEX 3
-#define I40E_GLPRT_MLFC_MLFC_SHIFT 0
-#define I40E_GLPRT_MLFC_MLFC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MLFC_MLFC_SHIFT)
-#define I40E_GLPRT_MPRCH(_i) (0x003005C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_MPRCH_MAX_INDEX 3
-#define I40E_GLPRT_MPRCH_MPRCH_SHIFT 0
-#define I40E_GLPRT_MPRCH_MPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_MPRCH_MPRCH_SHIFT)
-#define I40E_GLPRT_MPRCL(_i) (0x003005C0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_MPRCL_MAX_INDEX 3
-#define I40E_GLPRT_MPRCL_MPRCL_SHIFT 0
-#define I40E_GLPRT_MPRCL_MPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MPRCL_MPRCL_SHIFT)
-#define I40E_GLPRT_MPTCH(_i) (0x003009E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_MPTCH_MAX_INDEX 3
-#define I40E_GLPRT_MPTCH_MPTCH_SHIFT 0
-#define I40E_GLPRT_MPTCH_MPTCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_MPTCH_MPTCH_SHIFT)
-#define I40E_GLPRT_MPTCL(_i) (0x003009E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_MPTCL_MAX_INDEX 3
-#define I40E_GLPRT_MPTCL_MPTCL_SHIFT 0
-#define I40E_GLPRT_MPTCL_MPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MPTCL_MPTCL_SHIFT)
-#define I40E_GLPRT_MRFC(_i) (0x00300040 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_MRFC_MAX_INDEX 3
-#define I40E_GLPRT_MRFC_MRFC_SHIFT 0
-#define I40E_GLPRT_MRFC_MRFC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_MRFC_MRFC_SHIFT)
-#define I40E_GLPRT_PRC1023H(_i) (0x00300504 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC1023H_MAX_INDEX 3
-#define I40E_GLPRT_PRC1023H_PRC1023H_SHIFT 0
-#define I40E_GLPRT_PRC1023H_PRC1023H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC1023H_PRC1023H_SHIFT)
-#define I40E_GLPRT_PRC1023L(_i) (0x00300500 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC1023L_MAX_INDEX 3
-#define I40E_GLPRT_PRC1023L_PRC1023L_SHIFT 0
-#define I40E_GLPRT_PRC1023L_PRC1023L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC1023L_PRC1023L_SHIFT)
-#define I40E_GLPRT_PRC127H(_i) (0x003004A4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC127H_MAX_INDEX 3
-#define I40E_GLPRT_PRC127H_PRC127H_SHIFT 0
-#define I40E_GLPRT_PRC127H_PRC127H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC127H_PRC127H_SHIFT)
-#define I40E_GLPRT_PRC127L(_i) (0x003004A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC127L_MAX_INDEX 3
-#define I40E_GLPRT_PRC127L_PRC127L_SHIFT 0
-#define I40E_GLPRT_PRC127L_PRC127L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC127L_PRC127L_SHIFT)
-#define I40E_GLPRT_PRC1522H(_i) (0x00300524 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC1522H_MAX_INDEX 3
-#define I40E_GLPRT_PRC1522H_PRC1522H_SHIFT 0
-#define I40E_GLPRT_PRC1522H_PRC1522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC1522H_PRC1522H_SHIFT)
-#define I40E_GLPRT_PRC1522L(_i) (0x00300520 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC1522L_MAX_INDEX 3
-#define I40E_GLPRT_PRC1522L_PRC1522L_SHIFT 0
-#define I40E_GLPRT_PRC1522L_PRC1522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC1522L_PRC1522L_SHIFT)
-#define I40E_GLPRT_PRC255H(_i) (0x003004C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC255H_MAX_INDEX 3
-#define I40E_GLPRT_PRC255H_PRTPRC255H_SHIFT 0
-#define I40E_GLPRT_PRC255H_PRTPRC255H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC255H_PRTPRC255H_SHIFT)
-#define I40E_GLPRT_PRC255L(_i) (0x003004C0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC255L_MAX_INDEX 3
-#define I40E_GLPRT_PRC255L_PRC255L_SHIFT 0
-#define I40E_GLPRT_PRC255L_PRC255L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC255L_PRC255L_SHIFT)
-#define I40E_GLPRT_PRC511H(_i) (0x003004E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC511H_MAX_INDEX 3
-#define I40E_GLPRT_PRC511H_PRC511H_SHIFT 0
-#define I40E_GLPRT_PRC511H_PRC511H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC511H_PRC511H_SHIFT)
-#define I40E_GLPRT_PRC511L(_i) (0x003004E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC511L_MAX_INDEX 3
-#define I40E_GLPRT_PRC511L_PRC511L_SHIFT 0
-#define I40E_GLPRT_PRC511L_PRC511L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC511L_PRC511L_SHIFT)
-#define I40E_GLPRT_PRC64H(_i) (0x00300484 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC64H_MAX_INDEX 3
-#define I40E_GLPRT_PRC64H_PRC64H_SHIFT 0
-#define I40E_GLPRT_PRC64H_PRC64H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC64H_PRC64H_SHIFT)
-#define I40E_GLPRT_PRC64L(_i) (0x00300480 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC64L_MAX_INDEX 3
-#define I40E_GLPRT_PRC64L_PRC64L_SHIFT 0
-#define I40E_GLPRT_PRC64L_PRC64L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC64L_PRC64L_SHIFT)
-#define I40E_GLPRT_PRC9522H(_i) (0x00300544 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC9522H_MAX_INDEX 3
-#define I40E_GLPRT_PRC9522H_PRC1522H_SHIFT 0
-#define I40E_GLPRT_PRC9522H_PRC1522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PRC9522H_PRC1522H_SHIFT)
-#define I40E_GLPRT_PRC9522L(_i) (0x00300540 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PRC9522L_MAX_INDEX 3
-#define I40E_GLPRT_PRC9522L_PRC1522L_SHIFT 0
-#define I40E_GLPRT_PRC9522L_PRC1522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PRC9522L_PRC1522L_SHIFT)
-#define I40E_GLPRT_PTC1023H(_i) (0x00300724 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PTC1023H_MAX_INDEX 3
-#define I40E_GLPRT_PTC1023H_PTC1023H_SHIFT 0
-#define I40E_GLPRT_PTC1023H_PTC1023H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC1023H_PTC1023H_SHIFT)
-#define I40E_GLPRT_PTC1023L(_i) (0x00300720 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PTC1023L_MAX_INDEX 3
-#define I40E_GLPRT_PTC1023L_PTC1023L_SHIFT 0
-#define I40E_GLPRT_PTC1023L_PTC1023L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC1023L_PTC1023L_SHIFT)
-#define I40E_GLPRT_PTC127H(_i) (0x003006C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PTC127H_MAX_INDEX 3
-#define I40E_GLPRT_PTC127H_PTC127H_SHIFT 0
-#define I40E_GLPRT_PTC127H_PTC127H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC127H_PTC127H_SHIFT)
-#define I40E_GLPRT_PTC127L(_i) (0x003006C0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PTC127L_MAX_INDEX 3
-#define I40E_GLPRT_PTC127L_PTC127L_SHIFT 0
-#define I40E_GLPRT_PTC127L_PTC127L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC127L_PTC127L_SHIFT)
-#define I40E_GLPRT_PTC1522H(_i) (0x00300744 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
-#define I40E_GLPRT_PTC1522H_MAX_INDEX 3
-#define I40E_GLPRT_PTC1522H_PTC1522H_SHIFT 0
-#define I40E_GLPRT_PTC1522H_PTC1522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC1522H_PTC1522H_SHIFT)
-#define I40E_GLPRT_PTC1522L(_i) (0x00300740 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
I40E_GLPRT_PTC1522L(_i) (0x00300740 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC1522L_MAX_INDEX 3 -#define I40E_GLPRT_PTC1522L_PTC1522L_SHIFT 0 -#define I40E_GLPRT_PTC1522L_PTC1522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC1522L_PTC1522L_SHIFT) -#define I40E_GLPRT_PTC255H(_i) (0x003006E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC255H_MAX_INDEX 3 -#define I40E_GLPRT_PTC255H_PTC255H_SHIFT 0 -#define I40E_GLPRT_PTC255H_PTC255H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC255H_PTC255H_SHIFT) -#define I40E_GLPRT_PTC255L(_i) (0x003006E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC255L_MAX_INDEX 3 -#define I40E_GLPRT_PTC255L_PTC255L_SHIFT 0 -#define I40E_GLPRT_PTC255L_PTC255L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC255L_PTC255L_SHIFT) -#define I40E_GLPRT_PTC511H(_i) (0x00300704 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC511H_MAX_INDEX 3 -#define I40E_GLPRT_PTC511H_PTC511H_SHIFT 0 -#define I40E_GLPRT_PTC511H_PTC511H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC511H_PTC511H_SHIFT) -#define I40E_GLPRT_PTC511L(_i) (0x00300700 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC511L_MAX_INDEX 3 -#define I40E_GLPRT_PTC511L_PTC511L_SHIFT 0 -#define I40E_GLPRT_PTC511L_PTC511L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC511L_PTC511L_SHIFT) -#define I40E_GLPRT_PTC64H(_i) (0x003006A4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC64H_MAX_INDEX 3 -#define I40E_GLPRT_PTC64H_PTC64H_SHIFT 0 -#define I40E_GLPRT_PTC64H_PTC64H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC64H_PTC64H_SHIFT) -#define I40E_GLPRT_PTC64L(_i) (0x003006A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC64L_MAX_INDEX 3 -#define I40E_GLPRT_PTC64L_PTC64L_SHIFT 0 -#define I40E_GLPRT_PTC64L_PTC64L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC64L_PTC64L_SHIFT) -#define I40E_GLPRT_PTC9522H(_i) (0x00300764 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC9522H_MAX_INDEX 3 -#define I40E_GLPRT_PTC9522H_PTC9522H_SHIFT 0 -#define I40E_GLPRT_PTC9522H_PTC9522H_MASK I40E_MASK(0xFFFF, I40E_GLPRT_PTC9522H_PTC9522H_SHIFT) -#define I40E_GLPRT_PTC9522L(_i) (0x00300760 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_PTC9522L_MAX_INDEX 3 -#define I40E_GLPRT_PTC9522L_PTC9522L_SHIFT 0 -#define I40E_GLPRT_PTC9522L_PTC9522L_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PTC9522L_PTC9522L_SHIFT) -#define I40E_GLPRT_PXOFFRXC(_i, _j) (0x00300280 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ -#define I40E_GLPRT_PXOFFRXC_MAX_INDEX 3 -#define I40E_GLPRT_PXOFFRXC_PRPXOFFRXCNT_SHIFT 0 -#define I40E_GLPRT_PXOFFRXC_PRPXOFFRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXOFFRXC_PRPXOFFRXCNT_SHIFT) -#define I40E_GLPRT_PXOFFTXC(_i, _j) (0x00300880 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ -#define I40E_GLPRT_PXOFFTXC_MAX_INDEX 3 -#define I40E_GLPRT_PXOFFTXC_PRPXOFFTXCNT_SHIFT 0 -#define I40E_GLPRT_PXOFFTXC_PRPXOFFTXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXOFFTXC_PRPXOFFTXCNT_SHIFT) -#define I40E_GLPRT_PXONRXC(_i, _j) (0x00300180 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ -#define I40E_GLPRT_PXONRXC_MAX_INDEX 3 -#define I40E_GLPRT_PXONRXC_PRPXONRXCNT_SHIFT 0 -#define I40E_GLPRT_PXONRXC_PRPXONRXCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXONRXC_PRPXONRXCNT_SHIFT) -#define I40E_GLPRT_PXONTXC(_i, _j) (0x00300780 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ -#define I40E_GLPRT_PXONTXC_MAX_INDEX 3 
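Aside: the port statistics above all follow one pattern: each counter is a 32-bit *L register plus a 16-bit *H register (hence the 0xFFFF masks on the H halves), addressed per port through the (_i) offset macros. A minimal sketch of combining such a pair into one 64-bit value, assuming the defines above are in scope and a hypothetical reg_read() stands in for the PMD's real osdep accessors:

    #include <stdint.h>

    /* Hypothetical 32-bit MMIO read; stands in for the PMD's osdep wrapper. */
    extern uint32_t reg_read(uint32_t offset);

    /* Combine a GLPRT low/high statistic pair into a 48-bit counter value. */
    static uint64_t read_stat48(uint32_t lo_reg, uint32_t hi_reg)
    {
        uint32_t lo = reg_read(lo_reg);
        uint64_t hi = reg_read(hi_reg) & 0xFFFF; /* H registers hold 16 bits */

        return (hi << 32) | lo;
    }

    /* Usage: 1024-1522 byte frames transmitted on port 0. */
    /* uint64_t ptc1522 = read_stat48(I40E_GLPRT_PTC1522L(0),
     *                                I40E_GLPRT_PTC1522H(0)); */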
-#define I40E_GLPRT_PXONTXC_PRPXONTXC_SHIFT 0 -#define I40E_GLPRT_PXONTXC_PRPXONTXC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_PXONTXC_PRPXONTXC_SHIFT) -#define I40E_GLPRT_RDPC(_i) (0x00300600 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_RDPC_MAX_INDEX 3 -#define I40E_GLPRT_RDPC_RDPC_SHIFT 0 -#define I40E_GLPRT_RDPC_RDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RDPC_RDPC_SHIFT) -#define I40E_GLPRT_RFC(_i) (0x00300560 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_RFC_MAX_INDEX 3 -#define I40E_GLPRT_RFC_RFC_SHIFT 0 -#define I40E_GLPRT_RFC_RFC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RFC_RFC_SHIFT) -#define I40E_GLPRT_RJC(_i) (0x00300580 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_RJC_MAX_INDEX 3 -#define I40E_GLPRT_RJC_RJC_SHIFT 0 -#define I40E_GLPRT_RJC_RJC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RJC_RJC_SHIFT) -#define I40E_GLPRT_RLEC(_i) (0x003000A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_RLEC_MAX_INDEX 3 -#define I40E_GLPRT_RLEC_RLEC_SHIFT 0 -#define I40E_GLPRT_RLEC_RLEC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RLEC_RLEC_SHIFT) -#define I40E_GLPRT_ROC(_i) (0x00300120 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_ROC_MAX_INDEX 3 -#define I40E_GLPRT_ROC_ROC_SHIFT 0 -#define I40E_GLPRT_ROC_ROC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_ROC_ROC_SHIFT) -#define I40E_GLPRT_RUC(_i) (0x00300100 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_RUC_MAX_INDEX 3 -#define I40E_GLPRT_RUC_RUC_SHIFT 0 -#define I40E_GLPRT_RUC_RUC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RUC_RUC_SHIFT) -#define I40E_GLPRT_RUPP(_i) (0x00300660 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_RUPP_MAX_INDEX 3 -#define I40E_GLPRT_RUPP_RUPP_SHIFT 0 -#define I40E_GLPRT_RUPP_RUPP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RUPP_RUPP_SHIFT) -#define I40E_GLPRT_RXON2OFFCNT(_i, _j) (0x00300380 + ((_i) * 8 + (_j) * 32)) /* _i=0...3, _j=0...7 */ /* Reset: CORER */ -#define I40E_GLPRT_RXON2OFFCNT_MAX_INDEX 3 -#define I40E_GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_SHIFT 0 -#define I40E_GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_RXON2OFFCNT_PRRXON2OFFCNT_SHIFT) -#define I40E_GLPRT_TDOLD(_i) (0x00300A20 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_TDOLD_MAX_INDEX 3 -#define I40E_GLPRT_TDOLD_GLPRT_TDOLD_SHIFT 0 -#define I40E_GLPRT_TDOLD_GLPRT_TDOLD_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_TDOLD_GLPRT_TDOLD_SHIFT) -#define I40E_GLPRT_TDPC(_i) (0x00375400 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_TDPC_MAX_INDEX 3 -#define I40E_GLPRT_TDPC_TDPC_SHIFT 0 -#define I40E_GLPRT_TDPC_TDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_TDPC_TDPC_SHIFT) -#define I40E_GLPRT_UPRCH(_i) (0x003005A4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_UPRCH_MAX_INDEX 3 -#define I40E_GLPRT_UPRCH_UPRCH_SHIFT 0 -#define I40E_GLPRT_UPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_UPRCH_UPRCH_SHIFT) -#define I40E_GLPRT_UPRCL(_i) (0x003005A0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_UPRCL_MAX_INDEX 3 -#define I40E_GLPRT_UPRCL_UPRCL_SHIFT 0 -#define I40E_GLPRT_UPRCL_UPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_UPRCL_UPRCL_SHIFT) -#define I40E_GLPRT_UPTCH(_i) (0x003009C4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_GLPRT_UPTCH_MAX_INDEX 3 -#define I40E_GLPRT_UPTCH_UPTCH_SHIFT 0 -#define I40E_GLPRT_UPTCH_UPTCH_MASK I40E_MASK(0xFFFF, I40E_GLPRT_UPTCH_UPTCH_SHIFT) -#define I40E_GLPRT_UPTCL(_i) (0x003009C0 + ((_i) * 8)) /* _i=0...3 */ 
/* Reset: CORER */ -#define I40E_GLPRT_UPTCL_MAX_INDEX 3 -#define I40E_GLPRT_UPTCL_VUPTCH_SHIFT 0 -#define I40E_GLPRT_UPTCL_VUPTCH_MASK I40E_MASK(0xFFFFFFFF, I40E_GLPRT_UPTCL_VUPTCH_SHIFT) -#define I40E_GLSW_BPRCH(_i) (0x00370104 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_BPRCH_MAX_INDEX 15 -#define I40E_GLSW_BPRCH_BPRCH_SHIFT 0 -#define I40E_GLSW_BPRCH_BPRCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_BPRCH_BPRCH_SHIFT) -#define I40E_GLSW_BPRCL(_i) (0x00370100 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_BPRCL_MAX_INDEX 15 -#define I40E_GLSW_BPRCL_BPRCL_SHIFT 0 -#define I40E_GLSW_BPRCL_BPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_BPRCL_BPRCL_SHIFT) -#define I40E_GLSW_BPTCH(_i) (0x00340104 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_BPTCH_MAX_INDEX 15 -#define I40E_GLSW_BPTCH_BPTCH_SHIFT 0 -#define I40E_GLSW_BPTCH_BPTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_BPTCH_BPTCH_SHIFT) -#define I40E_GLSW_BPTCL(_i) (0x00340100 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_BPTCL_MAX_INDEX 15 -#define I40E_GLSW_BPTCL_BPTCL_SHIFT 0 -#define I40E_GLSW_BPTCL_BPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_BPTCL_BPTCL_SHIFT) -#define I40E_GLSW_GORCH(_i) (0x0035C004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_GORCH_MAX_INDEX 15 -#define I40E_GLSW_GORCH_GORCH_SHIFT 0 -#define I40E_GLSW_GORCH_GORCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_GORCH_GORCH_SHIFT) -#define I40E_GLSW_GORCL(_i) (0x0035c000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_GORCL_MAX_INDEX 15 -#define I40E_GLSW_GORCL_GORCL_SHIFT 0 -#define I40E_GLSW_GORCL_GORCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_GORCL_GORCL_SHIFT) -#define I40E_GLSW_GOTCH(_i) (0x0032C004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_GOTCH_MAX_INDEX 15 -#define I40E_GLSW_GOTCH_GOTCH_SHIFT 0 -#define I40E_GLSW_GOTCH_GOTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_GOTCH_GOTCH_SHIFT) -#define I40E_GLSW_GOTCL(_i) (0x0032c000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_GOTCL_MAX_INDEX 15 -#define I40E_GLSW_GOTCL_GOTCL_SHIFT 0 -#define I40E_GLSW_GOTCL_GOTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_GOTCL_GOTCL_SHIFT) -#define I40E_GLSW_MPRCH(_i) (0x00370084 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_MPRCH_MAX_INDEX 15 -#define I40E_GLSW_MPRCH_MPRCH_SHIFT 0 -#define I40E_GLSW_MPRCH_MPRCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_MPRCH_MPRCH_SHIFT) -#define I40E_GLSW_MPRCL(_i) (0x00370080 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_MPRCL_MAX_INDEX 15 -#define I40E_GLSW_MPRCL_MPRCL_SHIFT 0 -#define I40E_GLSW_MPRCL_MPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_MPRCL_MPRCL_SHIFT) -#define I40E_GLSW_MPTCH(_i) (0x00340084 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_MPTCH_MAX_INDEX 15 -#define I40E_GLSW_MPTCH_MPTCH_SHIFT 0 -#define I40E_GLSW_MPTCH_MPTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_MPTCH_MPTCH_SHIFT) -#define I40E_GLSW_MPTCL(_i) (0x00340080 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_MPTCL_MAX_INDEX 15 -#define I40E_GLSW_MPTCL_MPTCL_SHIFT 0 -#define I40E_GLSW_MPTCL_MPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_MPTCL_MPTCL_SHIFT) -#define I40E_GLSW_RUPP(_i) (0x00370180 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_RUPP_MAX_INDEX 15 -#define I40E_GLSW_RUPP_RUPP_SHIFT 0 -#define I40E_GLSW_RUPP_RUPP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_RUPP_RUPP_SHIFT) -#define I40E_GLSW_TDPC(_i) (0x00348000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ 
-#define I40E_GLSW_TDPC_MAX_INDEX 15 -#define I40E_GLSW_TDPC_TDPC_SHIFT 0 -#define I40E_GLSW_TDPC_TDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_TDPC_TDPC_SHIFT) -#define I40E_GLSW_UPRCH(_i) (0x00370004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_UPRCH_MAX_INDEX 15 -#define I40E_GLSW_UPRCH_UPRCH_SHIFT 0 -#define I40E_GLSW_UPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_UPRCH_UPRCH_SHIFT) -#define I40E_GLSW_UPRCL(_i) (0x00370000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_UPRCL_MAX_INDEX 15 -#define I40E_GLSW_UPRCL_UPRCL_SHIFT 0 -#define I40E_GLSW_UPRCL_UPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_UPRCL_UPRCL_SHIFT) -#define I40E_GLSW_UPTCH(_i) (0x00340004 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_UPTCH_MAX_INDEX 15 -#define I40E_GLSW_UPTCH_UPTCH_SHIFT 0 -#define I40E_GLSW_UPTCH_UPTCH_MASK I40E_MASK(0xFFFF, I40E_GLSW_UPTCH_UPTCH_SHIFT) -#define I40E_GLSW_UPTCL(_i) (0x00340000 + ((_i) * 8)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_GLSW_UPTCL_MAX_INDEX 15 -#define I40E_GLSW_UPTCL_UPTCL_SHIFT 0 -#define I40E_GLSW_UPTCL_UPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLSW_UPTCL_UPTCL_SHIFT) -#define I40E_GLV_BPRCH(_i) (0x0036D804 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_BPRCH_MAX_INDEX 383 -#define I40E_GLV_BPRCH_BPRCH_SHIFT 0 -#define I40E_GLV_BPRCH_BPRCH_MASK I40E_MASK(0xFFFF, I40E_GLV_BPRCH_BPRCH_SHIFT) -#define I40E_GLV_BPRCL(_i) (0x0036d800 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_BPRCL_MAX_INDEX 383 -#define I40E_GLV_BPRCL_BPRCL_SHIFT 0 -#define I40E_GLV_BPRCL_BPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_BPRCL_BPRCL_SHIFT) -#define I40E_GLV_BPTCH(_i) (0x0033D804 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_BPTCH_MAX_INDEX 383 -#define I40E_GLV_BPTCH_BPTCH_SHIFT 0 -#define I40E_GLV_BPTCH_BPTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_BPTCH_BPTCH_SHIFT) -#define I40E_GLV_BPTCL(_i) (0x0033d800 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_BPTCL_MAX_INDEX 383 -#define I40E_GLV_BPTCL_BPTCL_SHIFT 0 -#define I40E_GLV_BPTCL_BPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_BPTCL_BPTCL_SHIFT) -#define I40E_GLV_GORCH(_i) (0x00358004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_GORCH_MAX_INDEX 383 -#define I40E_GLV_GORCH_GORCH_SHIFT 0 -#define I40E_GLV_GORCH_GORCH_MASK I40E_MASK(0xFFFF, I40E_GLV_GORCH_GORCH_SHIFT) -#define I40E_GLV_GORCL(_i) (0x00358000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_GORCL_MAX_INDEX 383 -#define I40E_GLV_GORCL_GORCL_SHIFT 0 -#define I40E_GLV_GORCL_GORCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_GORCL_GORCL_SHIFT) -#define I40E_GLV_GOTCH(_i) (0x00328004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_GOTCH_MAX_INDEX 383 -#define I40E_GLV_GOTCH_GOTCH_SHIFT 0 -#define I40E_GLV_GOTCH_GOTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_GOTCH_GOTCH_SHIFT) -#define I40E_GLV_GOTCL(_i) (0x00328000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_GOTCL_MAX_INDEX 383 -#define I40E_GLV_GOTCL_GOTCL_SHIFT 0 -#define I40E_GLV_GOTCL_GOTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_GOTCL_GOTCL_SHIFT) -#define I40E_GLV_MPRCH(_i) (0x0036CC04 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_MPRCH_MAX_INDEX 383 -#define I40E_GLV_MPRCH_MPRCH_SHIFT 0 -#define I40E_GLV_MPRCH_MPRCH_MASK I40E_MASK(0xFFFF, I40E_GLV_MPRCH_MPRCH_SHIFT) -#define I40E_GLV_MPRCL(_i) (0x0036cc00 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_MPRCL_MAX_INDEX 383 -#define 
I40E_GLV_MPRCL_MPRCL_SHIFT 0 -#define I40E_GLV_MPRCL_MPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_MPRCL_MPRCL_SHIFT) -#define I40E_GLV_MPTCH(_i) (0x0033CC04 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_MPTCH_MAX_INDEX 383 -#define I40E_GLV_MPTCH_MPTCH_SHIFT 0 -#define I40E_GLV_MPTCH_MPTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_MPTCH_MPTCH_SHIFT) -#define I40E_GLV_MPTCL(_i) (0x0033cc00 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_MPTCL_MAX_INDEX 383 -#define I40E_GLV_MPTCL_MPTCL_SHIFT 0 -#define I40E_GLV_MPTCL_MPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_MPTCL_MPTCL_SHIFT) -#define I40E_GLV_RDPC(_i) (0x00310000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_RDPC_MAX_INDEX 383 -#define I40E_GLV_RDPC_RDPC_SHIFT 0 -#define I40E_GLV_RDPC_RDPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_RDPC_RDPC_SHIFT) -#define I40E_GLV_RUPP(_i) (0x0036E400 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_RUPP_MAX_INDEX 383 -#define I40E_GLV_RUPP_RUPP_SHIFT 0 -#define I40E_GLV_RUPP_RUPP_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_RUPP_RUPP_SHIFT) -#define I40E_GLV_TEPC(_VSI) (0x00344000 + ((_VSI) * 4)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_TEPC_MAX_INDEX 383 -#define I40E_GLV_TEPC_TEPC_SHIFT 0 -#define I40E_GLV_TEPC_TEPC_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_TEPC_TEPC_SHIFT) -#define I40E_GLV_UPRCH(_i) (0x0036C004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_UPRCH_MAX_INDEX 383 -#define I40E_GLV_UPRCH_UPRCH_SHIFT 0 -#define I40E_GLV_UPRCH_UPRCH_MASK I40E_MASK(0xFFFF, I40E_GLV_UPRCH_UPRCH_SHIFT) -#define I40E_GLV_UPRCL(_i) (0x0036c000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_UPRCL_MAX_INDEX 383 -#define I40E_GLV_UPRCL_UPRCL_SHIFT 0 -#define I40E_GLV_UPRCL_UPRCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_UPRCL_UPRCL_SHIFT) -#define I40E_GLV_UPTCH(_i) (0x0033C004 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_UPTCH_MAX_INDEX 383 -#define I40E_GLV_UPTCH_GLVUPTCH_SHIFT 0 -#define I40E_GLV_UPTCH_GLVUPTCH_MASK I40E_MASK(0xFFFF, I40E_GLV_UPTCH_GLVUPTCH_SHIFT) -#define I40E_GLV_UPTCL(_i) (0x0033c000 + ((_i) * 8)) /* _i=0...383 */ /* Reset: CORER */ -#define I40E_GLV_UPTCL_MAX_INDEX 383 -#define I40E_GLV_UPTCL_UPTCL_SHIFT 0 -#define I40E_GLV_UPTCL_UPTCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLV_UPTCL_UPTCL_SHIFT) -#define I40E_GLVEBTC_RBCH(_i, _j) (0x00364004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_RBCH_MAX_INDEX 7 -#define I40E_GLVEBTC_RBCH_TCBCH_SHIFT 0 -#define I40E_GLVEBTC_RBCH_TCBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_RBCH_TCBCH_SHIFT) -#define I40E_GLVEBTC_RBCL(_i, _j) (0x00364000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_RBCL_MAX_INDEX 7 -#define I40E_GLVEBTC_RBCL_TCBCL_SHIFT 0 -#define I40E_GLVEBTC_RBCL_TCBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_RBCL_TCBCL_SHIFT) -#define I40E_GLVEBTC_RPCH(_i, _j) (0x00368004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_RPCH_MAX_INDEX 7 -#define I40E_GLVEBTC_RPCH_TCPCH_SHIFT 0 -#define I40E_GLVEBTC_RPCH_TCPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_RPCH_TCPCH_SHIFT) -#define I40E_GLVEBTC_RPCL(_i, _j) (0x00368000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_RPCL_MAX_INDEX 7 -#define I40E_GLVEBTC_RPCL_TCPCL_SHIFT 0 -#define I40E_GLVEBTC_RPCL_TCPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_RPCL_TCPCL_SHIFT) -#define I40E_GLVEBTC_TBCH(_i, _j) (0x00334004 + ((_i) * 
8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_TBCH_MAX_INDEX 7 -#define I40E_GLVEBTC_TBCH_TCBCH_SHIFT 0 -#define I40E_GLVEBTC_TBCH_TCBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_TBCH_TCBCH_SHIFT) -#define I40E_GLVEBTC_TBCL(_i, _j) (0x00334000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_TBCL_MAX_INDEX 7 -#define I40E_GLVEBTC_TBCL_TCBCL_SHIFT 0 -#define I40E_GLVEBTC_TBCL_TCBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_TBCL_TCBCL_SHIFT) -#define I40E_GLVEBTC_TPCH(_i, _j) (0x00338004 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_TPCH_MAX_INDEX 7 -#define I40E_GLVEBTC_TPCH_TCPCH_SHIFT 0 -#define I40E_GLVEBTC_TPCH_TCPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBTC_TPCH_TCPCH_SHIFT) -#define I40E_GLVEBTC_TPCL(_i, _j) (0x00338000 + ((_i) * 8 + (_j) * 64)) /* _i=0...7, _j=0...15 */ /* Reset: CORER */ -#define I40E_GLVEBTC_TPCL_MAX_INDEX 7 -#define I40E_GLVEBTC_TPCL_TCPCL_SHIFT 0 -#define I40E_GLVEBTC_TPCL_TCPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBTC_TPCL_TCPCL_SHIFT) -#define I40E_GLVEBVL_BPCH(_i) (0x00374804 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_BPCH_MAX_INDEX 127 -#define I40E_GLVEBVL_BPCH_VLBPCH_SHIFT 0 -#define I40E_GLVEBVL_BPCH_VLBPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_BPCH_VLBPCH_SHIFT) -#define I40E_GLVEBVL_BPCL(_i) (0x00374800 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_BPCL_MAX_INDEX 127 -#define I40E_GLVEBVL_BPCL_VLBPCL_SHIFT 0 -#define I40E_GLVEBVL_BPCL_VLBPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_BPCL_VLBPCL_SHIFT) -#define I40E_GLVEBVL_GORCH(_i) (0x00360004 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_GORCH_MAX_INDEX 127 -#define I40E_GLVEBVL_GORCH_VLBCH_SHIFT 0 -#define I40E_GLVEBVL_GORCH_VLBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_GORCH_VLBCH_SHIFT) -#define I40E_GLVEBVL_GORCL(_i) (0x00360000 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_GORCL_MAX_INDEX 127 -#define I40E_GLVEBVL_GORCL_VLBCL_SHIFT 0 -#define I40E_GLVEBVL_GORCL_VLBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_GORCL_VLBCL_SHIFT) -#define I40E_GLVEBVL_GOTCH(_i) (0x00330004 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_GOTCH_MAX_INDEX 127 -#define I40E_GLVEBVL_GOTCH_VLBCH_SHIFT 0 -#define I40E_GLVEBVL_GOTCH_VLBCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_GOTCH_VLBCH_SHIFT) -#define I40E_GLVEBVL_GOTCL(_i) (0x00330000 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_GOTCL_MAX_INDEX 127 -#define I40E_GLVEBVL_GOTCL_VLBCL_SHIFT 0 -#define I40E_GLVEBVL_GOTCL_VLBCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_GOTCL_VLBCL_SHIFT) -#define I40E_GLVEBVL_MPCH(_i) (0x00374404 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_MPCH_MAX_INDEX 127 -#define I40E_GLVEBVL_MPCH_VLMPCH_SHIFT 0 -#define I40E_GLVEBVL_MPCH_VLMPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_MPCH_VLMPCH_SHIFT) -#define I40E_GLVEBVL_MPCL(_i) (0x00374400 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_MPCL_MAX_INDEX 127 -#define I40E_GLVEBVL_MPCL_VLMPCL_SHIFT 0 -#define I40E_GLVEBVL_MPCL_VLMPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_MPCL_VLMPCL_SHIFT) -#define I40E_GLVEBVL_UPCH(_i) (0x00374004 + ((_i) * 8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_UPCH_MAX_INDEX 127 -#define I40E_GLVEBVL_UPCH_VLUPCH_SHIFT 0 -#define I40E_GLVEBVL_UPCH_VLUPCH_MASK I40E_MASK(0xFFFF, I40E_GLVEBVL_UPCH_VLUPCH_SHIFT) -#define I40E_GLVEBVL_UPCL(_i) (0x00374000 + ((_i) * 
8)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_GLVEBVL_UPCL_MAX_INDEX 127 -#define I40E_GLVEBVL_UPCL_VLUPCL_SHIFT 0 -#define I40E_GLVEBVL_UPCL_VLUPCL_MASK I40E_MASK(0xFFFFFFFF, I40E_GLVEBVL_UPCL_VLUPCL_SHIFT) -#define I40E_GL_MTG_FLU_MSK_H 0x00269F4C /* Reset: CORER */ -#define I40E_GL_MTG_FLU_MSK_H_MASK_HIGH_SHIFT 0 -#define I40E_GL_MTG_FLU_MSK_H_MASK_HIGH_MASK I40E_MASK(0xFFFF, I40E_GL_MTG_FLU_MSK_H_MASK_HIGH_SHIFT) -#define I40E_GL_SWR_DEF_ACT(_i) (0x00270200 + ((_i) * 4)) /* _i=0...35 */ /* Reset: CORER */ -#define I40E_GL_SWR_DEF_ACT_MAX_INDEX 35 -#define I40E_GL_SWR_DEF_ACT_DEF_ACTION_SHIFT 0 -#define I40E_GL_SWR_DEF_ACT_DEF_ACTION_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_DEF_ACT_DEF_ACTION_SHIFT) -#define I40E_GL_SWR_DEF_ACT_EN(_i) (0x0026CFB8 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ -#define I40E_GL_SWR_DEF_ACT_EN_MAX_INDEX 1 -#define I40E_GL_SWR_DEF_ACT_EN_DEF_ACT_EN_BITMAP_SHIFT 0 -#define I40E_GL_SWR_DEF_ACT_EN_DEF_ACT_EN_BITMAP_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_SWR_DEF_ACT_EN_DEF_ACT_EN_BITMAP_SHIFT) -#define I40E_PRTTSYN_ADJ 0x001E4280 /* Reset: GLOBR */ -#define I40E_PRTTSYN_ADJ_TSYNADJ_SHIFT 0 -#define I40E_PRTTSYN_ADJ_TSYNADJ_MASK I40E_MASK(0x7FFFFFFF, I40E_PRTTSYN_ADJ_TSYNADJ_SHIFT) -#define I40E_PRTTSYN_ADJ_SIGN_SHIFT 31 -#define I40E_PRTTSYN_ADJ_SIGN_MASK I40E_MASK(0x1, I40E_PRTTSYN_ADJ_SIGN_SHIFT) -#define I40E_PRTTSYN_AUX_0(_i) (0x001E42A0 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ -#define I40E_PRTTSYN_AUX_0_MAX_INDEX 1 -#define I40E_PRTTSYN_AUX_0_OUT_ENA_SHIFT 0 -#define I40E_PRTTSYN_AUX_0_OUT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_0_OUT_ENA_SHIFT) -#define I40E_PRTTSYN_AUX_0_OUTMOD_SHIFT 1 -#define I40E_PRTTSYN_AUX_0_OUTMOD_MASK I40E_MASK(0x3, I40E_PRTTSYN_AUX_0_OUTMOD_SHIFT) -#define I40E_PRTTSYN_AUX_0_OUTLVL_SHIFT 3 -#define I40E_PRTTSYN_AUX_0_OUTLVL_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_0_OUTLVL_SHIFT) -#define I40E_PRTTSYN_AUX_0_PULSEW_SHIFT 8 -#define I40E_PRTTSYN_AUX_0_PULSEW_MASK I40E_MASK(0xF, I40E_PRTTSYN_AUX_0_PULSEW_SHIFT) -#define I40E_PRTTSYN_AUX_0_EVNTLVL_SHIFT 16 -#define I40E_PRTTSYN_AUX_0_EVNTLVL_MASK I40E_MASK(0x3, I40E_PRTTSYN_AUX_0_EVNTLVL_SHIFT) -#define I40E_PRTTSYN_AUX_1(_i) (0x001E42E0 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ -#define I40E_PRTTSYN_AUX_1_MAX_INDEX 1 -#define I40E_PRTTSYN_AUX_1_INSTNT_SHIFT 0 -#define I40E_PRTTSYN_AUX_1_INSTNT_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_1_INSTNT_SHIFT) -#define I40E_PRTTSYN_AUX_1_SAMPLE_TIME_SHIFT 1 -#define I40E_PRTTSYN_AUX_1_SAMPLE_TIME_MASK I40E_MASK(0x1, I40E_PRTTSYN_AUX_1_SAMPLE_TIME_SHIFT) -#define I40E_PRTTSYN_CLKO(_i) (0x001E4240 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ -#define I40E_PRTTSYN_CLKO_MAX_INDEX 1 -#define I40E_PRTTSYN_CLKO_TSYNCLKO_SHIFT 0 -#define I40E_PRTTSYN_CLKO_TSYNCLKO_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_CLKO_TSYNCLKO_SHIFT) -#define I40E_PRTTSYN_CTL0 0x001E4200 /* Reset: GLOBR */ -#define I40E_PRTTSYN_CTL0_CLEAR_TSYNTIMER_SHIFT 0 -#define I40E_PRTTSYN_CTL0_CLEAR_TSYNTIMER_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_CLEAR_TSYNTIMER_SHIFT) -#define I40E_PRTTSYN_CTL0_TXTIME_INT_ENA_SHIFT 1 -#define I40E_PRTTSYN_CTL0_TXTIME_INT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_TXTIME_INT_ENA_SHIFT) -#define I40E_PRTTSYN_CTL0_EVENT_INT_ENA_SHIFT 2 -#define I40E_PRTTSYN_CTL0_EVENT_INT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_EVENT_INT_ENA_SHIFT) -#define I40E_PRTTSYN_CTL0_TGT_INT_ENA_SHIFT 3 -#define I40E_PRTTSYN_CTL0_TGT_INT_ENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_TGT_INT_ENA_SHIFT) -#define I40E_PRTTSYN_CTL0_PF_ID_SHIFT 8 -#define 
I40E_PRTTSYN_CTL0_PF_ID_MASK I40E_MASK(0xF, I40E_PRTTSYN_CTL0_PF_ID_SHIFT) -#define I40E_PRTTSYN_CTL0_TSYNACT_SHIFT 12 -#define I40E_PRTTSYN_CTL0_TSYNACT_MASK I40E_MASK(0x3, I40E_PRTTSYN_CTL0_TSYNACT_SHIFT) -#define I40E_PRTTSYN_CTL0_TSYNENA_SHIFT 31 -#define I40E_PRTTSYN_CTL0_TSYNENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL0_TSYNENA_SHIFT) -#define I40E_PRTTSYN_CTL1 0x00085020 /* Reset: CORER */ -#define I40E_PRTTSYN_CTL1_V1MESSTYPE0_SHIFT 0 -#define I40E_PRTTSYN_CTL1_V1MESSTYPE0_MASK I40E_MASK(0xFF, I40E_PRTTSYN_CTL1_V1MESSTYPE0_SHIFT) -#define I40E_PRTTSYN_CTL1_V1MESSTYPE1_SHIFT 8 -#define I40E_PRTTSYN_CTL1_V1MESSTYPE1_MASK I40E_MASK(0xFF, I40E_PRTTSYN_CTL1_V1MESSTYPE1_SHIFT) -#define I40E_PRTTSYN_CTL1_V2MESSTYPE0_SHIFT 16 -#define I40E_PRTTSYN_CTL1_V2MESSTYPE0_MASK I40E_MASK(0xF, I40E_PRTTSYN_CTL1_V2MESSTYPE0_SHIFT) -#define I40E_PRTTSYN_CTL1_V2MESSTYPE1_SHIFT 20 -#define I40E_PRTTSYN_CTL1_V2MESSTYPE1_MASK I40E_MASK(0xF, I40E_PRTTSYN_CTL1_V2MESSTYPE1_SHIFT) -#define I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT 24 -#define I40E_PRTTSYN_CTL1_TSYNTYPE_MASK I40E_MASK(0x3, I40E_PRTTSYN_CTL1_TSYNTYPE_SHIFT) -#define I40E_PRTTSYN_CTL1_UDP_ENA_SHIFT 26 -#define I40E_PRTTSYN_CTL1_UDP_ENA_MASK I40E_MASK(0x3, I40E_PRTTSYN_CTL1_UDP_ENA_SHIFT) -#define I40E_PRTTSYN_CTL1_TSYNENA_SHIFT 31 -#define I40E_PRTTSYN_CTL1_TSYNENA_MASK I40E_MASK(0x1, I40E_PRTTSYN_CTL1_TSYNENA_SHIFT) -#define I40E_PRTTSYN_EVNT_H(_i) (0x001E40C0 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ -#define I40E_PRTTSYN_EVNT_H_MAX_INDEX 1 -#define I40E_PRTTSYN_EVNT_H_TSYNEVNT_H_SHIFT 0 -#define I40E_PRTTSYN_EVNT_H_TSYNEVNT_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_EVNT_H_TSYNEVNT_H_SHIFT) -#define I40E_PRTTSYN_EVNT_L(_i) (0x001E4080 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ -#define I40E_PRTTSYN_EVNT_L_MAX_INDEX 1 -#define I40E_PRTTSYN_EVNT_L_TSYNEVNT_L_SHIFT 0 -#define I40E_PRTTSYN_EVNT_L_TSYNEVNT_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_EVNT_L_TSYNEVNT_L_SHIFT) -#define I40E_PRTTSYN_INC_H 0x001E4060 /* Reset: GLOBR */ -#define I40E_PRTTSYN_INC_H_TSYNINC_H_SHIFT 0 -#define I40E_PRTTSYN_INC_H_TSYNINC_H_MASK I40E_MASK(0x3F, I40E_PRTTSYN_INC_H_TSYNINC_H_SHIFT) -#define I40E_PRTTSYN_INC_L 0x001E4040 /* Reset: GLOBR */ -#define I40E_PRTTSYN_INC_L_TSYNINC_L_SHIFT 0 -#define I40E_PRTTSYN_INC_L_TSYNINC_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_INC_L_TSYNINC_L_SHIFT) -#define I40E_PRTTSYN_RXTIME_H(_i) (0x00085040 + ((_i) * 32)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_PRTTSYN_RXTIME_H_MAX_INDEX 3 -#define I40E_PRTTSYN_RXTIME_H_RXTIEM_H_SHIFT 0 -#define I40E_PRTTSYN_RXTIME_H_RXTIEM_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_RXTIME_H_RXTIEM_H_SHIFT) -#define I40E_PRTTSYN_RXTIME_L(_i) (0x000850C0 + ((_i) * 32)) /* _i=0...3 */ /* Reset: CORER */ -#define I40E_PRTTSYN_RXTIME_L_MAX_INDEX 3 -#define I40E_PRTTSYN_RXTIME_L_RXTIEM_L_SHIFT 0 -#define I40E_PRTTSYN_RXTIME_L_RXTIEM_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_RXTIME_L_RXTIEM_L_SHIFT) -#define I40E_PRTTSYN_STAT_0 0x001E4220 /* Reset: GLOBR */ -#define I40E_PRTTSYN_STAT_0_EVENT0_SHIFT 0 -#define I40E_PRTTSYN_STAT_0_EVENT0_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_EVENT0_SHIFT) -#define I40E_PRTTSYN_STAT_0_EVENT1_SHIFT 1 -#define I40E_PRTTSYN_STAT_0_EVENT1_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_EVENT1_SHIFT) -#define I40E_PRTTSYN_STAT_0_TGT0_SHIFT 2 -#define I40E_PRTTSYN_STAT_0_TGT0_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_TGT0_SHIFT) -#define I40E_PRTTSYN_STAT_0_TGT1_SHIFT 3 -#define I40E_PRTTSYN_STAT_0_TGT1_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_TGT1_SHIFT) -#define 
I40E_PRTTSYN_STAT_0_TXTIME_SHIFT 4 -#define I40E_PRTTSYN_STAT_0_TXTIME_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_0_TXTIME_SHIFT) -#define I40E_PRTTSYN_STAT_1 0x00085140 /* Reset: CORER */ -#define I40E_PRTTSYN_STAT_1_RXT0_SHIFT 0 -#define I40E_PRTTSYN_STAT_1_RXT0_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT0_SHIFT) -#define I40E_PRTTSYN_STAT_1_RXT1_SHIFT 1 -#define I40E_PRTTSYN_STAT_1_RXT1_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT1_SHIFT) -#define I40E_PRTTSYN_STAT_1_RXT2_SHIFT 2 -#define I40E_PRTTSYN_STAT_1_RXT2_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT2_SHIFT) -#define I40E_PRTTSYN_STAT_1_RXT3_SHIFT 3 -#define I40E_PRTTSYN_STAT_1_RXT3_MASK I40E_MASK(0x1, I40E_PRTTSYN_STAT_1_RXT3_SHIFT) -#define I40E_PRTTSYN_TGT_H(_i) (0x001E4180 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ -#define I40E_PRTTSYN_TGT_H_MAX_INDEX 1 -#define I40E_PRTTSYN_TGT_H_TSYNTGTT_H_SHIFT 0 -#define I40E_PRTTSYN_TGT_H_TSYNTGTT_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TGT_H_TSYNTGTT_H_SHIFT) -#define I40E_PRTTSYN_TGT_L(_i) (0x001E4140 + ((_i) * 32)) /* _i=0...1 */ /* Reset: GLOBR */ -#define I40E_PRTTSYN_TGT_L_MAX_INDEX 1 -#define I40E_PRTTSYN_TGT_L_TSYNTGTT_L_SHIFT 0 -#define I40E_PRTTSYN_TGT_L_TSYNTGTT_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TGT_L_TSYNTGTT_L_SHIFT) -#define I40E_PRTTSYN_TIME_H 0x001E4120 /* Reset: GLOBR */ -#define I40E_PRTTSYN_TIME_H_TSYNTIME_H_SHIFT 0 -#define I40E_PRTTSYN_TIME_H_TSYNTIME_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TIME_H_TSYNTIME_H_SHIFT) -#define I40E_PRTTSYN_TIME_L 0x001E4100 /* Reset: GLOBR */ -#define I40E_PRTTSYN_TIME_L_TSYNTIME_L_SHIFT 0 -#define I40E_PRTTSYN_TIME_L_TSYNTIME_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TIME_L_TSYNTIME_L_SHIFT) -#define I40E_PRTTSYN_TXTIME_H 0x001E41E0 /* Reset: GLOBR */ -#define I40E_PRTTSYN_TXTIME_H_TXTIEM_H_SHIFT 0 -#define I40E_PRTTSYN_TXTIME_H_TXTIEM_H_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TXTIME_H_TXTIEM_H_SHIFT) -#define I40E_PRTTSYN_TXTIME_L 0x001E41C0 /* Reset: GLOBR */ -#define I40E_PRTTSYN_TXTIME_L_TXTIEM_L_SHIFT 0 -#define I40E_PRTTSYN_TXTIME_L_TXTIEM_L_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTTSYN_TXTIME_L_TXTIEM_L_SHIFT) -#define I40E_GLSCD_QUANTA 0x000B2080 /* Reset: CORER */ -#define I40E_GLSCD_QUANTA_TSCDQUANTA_SHIFT 0 -#define I40E_GLSCD_QUANTA_TSCDQUANTA_MASK I40E_MASK(0x7, I40E_GLSCD_QUANTA_TSCDQUANTA_SHIFT) -#define I40E_GL_MDET_RX 0x0012A510 /* Reset: CORER */ -#define I40E_GL_MDET_RX_FUNCTION_SHIFT 0 -#define I40E_GL_MDET_RX_FUNCTION_MASK I40E_MASK(0xFF, I40E_GL_MDET_RX_FUNCTION_SHIFT) -#define I40E_GL_MDET_RX_EVENT_SHIFT 8 -#define I40E_GL_MDET_RX_EVENT_MASK I40E_MASK(0x1FF, I40E_GL_MDET_RX_EVENT_SHIFT) -#define I40E_GL_MDET_RX_QUEUE_SHIFT 17 -#define I40E_GL_MDET_RX_QUEUE_MASK I40E_MASK(0x3FFF, I40E_GL_MDET_RX_QUEUE_SHIFT) -#define I40E_GL_MDET_RX_VALID_SHIFT 31 -#define I40E_GL_MDET_RX_VALID_MASK I40E_MASK(0x1, I40E_GL_MDET_RX_VALID_SHIFT) -#define I40E_GL_MDET_TX 0x000E6480 /* Reset: CORER */ -#define I40E_GL_MDET_TX_QUEUE_SHIFT 0 -#define I40E_GL_MDET_TX_QUEUE_MASK I40E_MASK(0xFFF, I40E_GL_MDET_TX_QUEUE_SHIFT) -#define I40E_GL_MDET_TX_VF_NUM_SHIFT 12 -#define I40E_GL_MDET_TX_VF_NUM_MASK I40E_MASK(0x1FF, I40E_GL_MDET_TX_VF_NUM_SHIFT) -#define I40E_GL_MDET_TX_PF_NUM_SHIFT 21 -#define I40E_GL_MDET_TX_PF_NUM_MASK I40E_MASK(0xF, I40E_GL_MDET_TX_PF_NUM_SHIFT) -#define I40E_GL_MDET_TX_EVENT_SHIFT 25 -#define I40E_GL_MDET_TX_EVENT_MASK I40E_MASK(0x1F, I40E_GL_MDET_TX_EVENT_SHIFT) -#define I40E_GL_MDET_TX_VALID_SHIFT 31 -#define I40E_GL_MDET_TX_VALID_MASK I40E_MASK(0x1, I40E_GL_MDET_TX_VALID_SHIFT) -#define I40E_PF_MDET_RX 
0x0012A400 /* Reset: CORER */ -#define I40E_PF_MDET_RX_VALID_SHIFT 0 -#define I40E_PF_MDET_RX_VALID_MASK I40E_MASK(0x1, I40E_PF_MDET_RX_VALID_SHIFT) -#define I40E_PF_MDET_TX 0x000E6400 /* Reset: CORER */ -#define I40E_PF_MDET_TX_VALID_SHIFT 0 -#define I40E_PF_MDET_TX_VALID_MASK I40E_MASK(0x1, I40E_PF_MDET_TX_VALID_SHIFT) -#define I40E_PF_VT_PFALLOC 0x001C0500 /* Reset: CORER */ -#define I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT 0 -#define I40E_PF_VT_PFALLOC_FIRSTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_FIRSTVF_SHIFT) -#define I40E_PF_VT_PFALLOC_LASTVF_SHIFT 8 -#define I40E_PF_VT_PFALLOC_LASTVF_MASK I40E_MASK(0xFF, I40E_PF_VT_PFALLOC_LASTVF_SHIFT) -#define I40E_PF_VT_PFALLOC_VALID_SHIFT 31 -#define I40E_PF_VT_PFALLOC_VALID_MASK I40E_MASK(0x1, I40E_PF_VT_PFALLOC_VALID_SHIFT) -#define I40E_VP_MDET_RX(_VF) (0x0012A000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_VP_MDET_RX_MAX_INDEX 127 -#define I40E_VP_MDET_RX_VALID_SHIFT 0 -#define I40E_VP_MDET_RX_VALID_MASK I40E_MASK(0x1, I40E_VP_MDET_RX_VALID_SHIFT) -#define I40E_VP_MDET_TX(_VF) (0x000E6000 + ((_VF) * 4)) /* _i=0...127 */ /* Reset: CORER */ -#define I40E_VP_MDET_TX_MAX_INDEX 127 -#define I40E_VP_MDET_TX_VALID_SHIFT 0 -#define I40E_VP_MDET_TX_VALID_MASK I40E_MASK(0x1, I40E_VP_MDET_TX_VALID_SHIFT) -#define I40E_GLPM_WUMC 0x0006C800 /* Reset: POR */ -#define I40E_GLPM_WUMC_NOTCO_SHIFT 0 -#define I40E_GLPM_WUMC_NOTCO_MASK I40E_MASK(0x1, I40E_GLPM_WUMC_NOTCO_SHIFT) -#define I40E_GLPM_WUMC_SRST_PIN_VAL_SHIFT 1 -#define I40E_GLPM_WUMC_SRST_PIN_VAL_MASK I40E_MASK(0x1, I40E_GLPM_WUMC_SRST_PIN_VAL_SHIFT) -#define I40E_GLPM_WUMC_ROL_MODE_SHIFT 2 -#define I40E_GLPM_WUMC_ROL_MODE_MASK I40E_MASK(0x1, I40E_GLPM_WUMC_ROL_MODE_SHIFT) -#define I40E_GLPM_WUMC_RESERVED_4_SHIFT 3 -#define I40E_GLPM_WUMC_RESERVED_4_MASK I40E_MASK(0x1FFF, I40E_GLPM_WUMC_RESERVED_4_SHIFT) -#define I40E_GLPM_WUMC_MNG_WU_PF_SHIFT 16 -#define I40E_GLPM_WUMC_MNG_WU_PF_MASK I40E_MASK(0xFFFF, I40E_GLPM_WUMC_MNG_WU_PF_SHIFT) -#define I40E_PFPM_APM 0x000B8080 /* Reset: POR */ -#define I40E_PFPM_APM_APME_SHIFT 0 -#define I40E_PFPM_APM_APME_MASK I40E_MASK(0x1, I40E_PFPM_APM_APME_SHIFT) -#define I40E_PFPM_FHFT_LENGTH(_i) (0x0006A000 + ((_i) * 128)) /* _i=0...7 */ /* Reset: POR */ -#define I40E_PFPM_FHFT_LENGTH_MAX_INDEX 7 -#define I40E_PFPM_FHFT_LENGTH_LENGTH_SHIFT 0 -#define I40E_PFPM_FHFT_LENGTH_LENGTH_MASK I40E_MASK(0xFF, I40E_PFPM_FHFT_LENGTH_LENGTH_SHIFT) -#define I40E_PFPM_WUC 0x0006B200 /* Reset: POR */ -#define I40E_PFPM_WUC_EN_APM_D0_SHIFT 5 -#define I40E_PFPM_WUC_EN_APM_D0_MASK I40E_MASK(0x1, I40E_PFPM_WUC_EN_APM_D0_SHIFT) -#define I40E_PFPM_WUFC 0x0006B400 /* Reset: POR */ -#define I40E_PFPM_WUFC_LNKC_SHIFT 0 -#define I40E_PFPM_WUFC_LNKC_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_LNKC_SHIFT) -#define I40E_PFPM_WUFC_MAG_SHIFT 1 -#define I40E_PFPM_WUFC_MAG_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_MAG_SHIFT) -#define I40E_PFPM_WUFC_MNG_SHIFT 3 -#define I40E_PFPM_WUFC_MNG_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_MNG_SHIFT) -#define I40E_PFPM_WUFC_FLX0_ACT_SHIFT 4 -#define I40E_PFPM_WUFC_FLX0_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX0_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX1_ACT_SHIFT 5 -#define I40E_PFPM_WUFC_FLX1_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX1_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX2_ACT_SHIFT 6 -#define I40E_PFPM_WUFC_FLX2_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX2_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX3_ACT_SHIFT 7 -#define I40E_PFPM_WUFC_FLX3_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX3_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX4_ACT_SHIFT 8 -#define I40E_PFPM_WUFC_FLX4_ACT_MASK 
I40E_MASK(0x1, I40E_PFPM_WUFC_FLX4_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX5_ACT_SHIFT 9 -#define I40E_PFPM_WUFC_FLX5_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX5_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX6_ACT_SHIFT 10 -#define I40E_PFPM_WUFC_FLX6_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX6_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX7_ACT_SHIFT 11 -#define I40E_PFPM_WUFC_FLX7_ACT_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX7_ACT_SHIFT) -#define I40E_PFPM_WUFC_FLX0_SHIFT 16 -#define I40E_PFPM_WUFC_FLX0_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX0_SHIFT) -#define I40E_PFPM_WUFC_FLX1_SHIFT 17 -#define I40E_PFPM_WUFC_FLX1_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX1_SHIFT) -#define I40E_PFPM_WUFC_FLX2_SHIFT 18 -#define I40E_PFPM_WUFC_FLX2_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX2_SHIFT) -#define I40E_PFPM_WUFC_FLX3_SHIFT 19 -#define I40E_PFPM_WUFC_FLX3_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX3_SHIFT) -#define I40E_PFPM_WUFC_FLX4_SHIFT 20 -#define I40E_PFPM_WUFC_FLX4_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX4_SHIFT) -#define I40E_PFPM_WUFC_FLX5_SHIFT 21 -#define I40E_PFPM_WUFC_FLX5_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX5_SHIFT) -#define I40E_PFPM_WUFC_FLX6_SHIFT 22 -#define I40E_PFPM_WUFC_FLX6_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX6_SHIFT) -#define I40E_PFPM_WUFC_FLX7_SHIFT 23 -#define I40E_PFPM_WUFC_FLX7_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FLX7_SHIFT) -#define I40E_PFPM_WUFC_FW_RST_WK_SHIFT 31 -#define I40E_PFPM_WUFC_FW_RST_WK_MASK I40E_MASK(0x1, I40E_PFPM_WUFC_FW_RST_WK_SHIFT) -#define I40E_PFPM_WUS 0x0006B600 /* Reset: POR */ -#define I40E_PFPM_WUS_LNKC_SHIFT 0 -#define I40E_PFPM_WUS_LNKC_MASK I40E_MASK(0x1, I40E_PFPM_WUS_LNKC_SHIFT) -#define I40E_PFPM_WUS_MAG_SHIFT 1 -#define I40E_PFPM_WUS_MAG_MASK I40E_MASK(0x1, I40E_PFPM_WUS_MAG_SHIFT) -#define I40E_PFPM_WUS_PME_STATUS_SHIFT 2 -#define I40E_PFPM_WUS_PME_STATUS_MASK I40E_MASK(0x1, I40E_PFPM_WUS_PME_STATUS_SHIFT) -#define I40E_PFPM_WUS_MNG_SHIFT 3 -#define I40E_PFPM_WUS_MNG_MASK I40E_MASK(0x1, I40E_PFPM_WUS_MNG_SHIFT) -#define I40E_PFPM_WUS_FLX0_SHIFT 16 -#define I40E_PFPM_WUS_FLX0_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX0_SHIFT) -#define I40E_PFPM_WUS_FLX1_SHIFT 17 -#define I40E_PFPM_WUS_FLX1_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX1_SHIFT) -#define I40E_PFPM_WUS_FLX2_SHIFT 18 -#define I40E_PFPM_WUS_FLX2_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX2_SHIFT) -#define I40E_PFPM_WUS_FLX3_SHIFT 19 -#define I40E_PFPM_WUS_FLX3_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX3_SHIFT) -#define I40E_PFPM_WUS_FLX4_SHIFT 20 -#define I40E_PFPM_WUS_FLX4_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX4_SHIFT) -#define I40E_PFPM_WUS_FLX5_SHIFT 21 -#define I40E_PFPM_WUS_FLX5_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX5_SHIFT) -#define I40E_PFPM_WUS_FLX6_SHIFT 22 -#define I40E_PFPM_WUS_FLX6_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX6_SHIFT) -#define I40E_PFPM_WUS_FLX7_SHIFT 23 -#define I40E_PFPM_WUS_FLX7_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FLX7_SHIFT) -#define I40E_PFPM_WUS_FW_RST_WK_SHIFT 31 -#define I40E_PFPM_WUS_FW_RST_WK_MASK I40E_MASK(0x1, I40E_PFPM_WUS_FW_RST_WK_SHIFT) -#define I40E_PRTPM_FHFHR 0x0006C000 /* Reset: POR */ -#define I40E_PRTPM_FHFHR_UNICAST_SHIFT 0 -#define I40E_PRTPM_FHFHR_UNICAST_MASK I40E_MASK(0x1, I40E_PRTPM_FHFHR_UNICAST_SHIFT) -#define I40E_PRTPM_FHFHR_MULTICAST_SHIFT 1 -#define I40E_PRTPM_FHFHR_MULTICAST_MASK I40E_MASK(0x1, I40E_PRTPM_FHFHR_MULTICAST_SHIFT) -#define I40E_PRTPM_SAH(_i) (0x001E44C0 + ((_i) * 32)) /* _i=0...3 */ /* Reset: PFR */ -#define I40E_PRTPM_SAH_MAX_INDEX 3 -#define I40E_PRTPM_SAH_PFPM_SAH_SHIFT 0 -#define I40E_PRTPM_SAH_PFPM_SAH_MASK I40E_MASK(0xFFFF, I40E_PRTPM_SAH_PFPM_SAH_SHIFT) 
-#define I40E_PRTPM_SAH_PF_NUM_SHIFT 26 -#define I40E_PRTPM_SAH_PF_NUM_MASK I40E_MASK(0xF, I40E_PRTPM_SAH_PF_NUM_SHIFT) -#define I40E_PRTPM_SAH_MC_MAG_EN_SHIFT 30 -#define I40E_PRTPM_SAH_MC_MAG_EN_MASK I40E_MASK(0x1, I40E_PRTPM_SAH_MC_MAG_EN_SHIFT) -#define I40E_PRTPM_SAH_AV_SHIFT 31 -#define I40E_PRTPM_SAH_AV_MASK I40E_MASK(0x1, I40E_PRTPM_SAH_AV_SHIFT) -#define I40E_PRTPM_SAL(_i) (0x001E4440 + ((_i) * 32)) /* _i=0...3 */ /* Reset: PFR */ -#define I40E_PRTPM_SAL_MAX_INDEX 3 -#define I40E_PRTPM_SAL_PFPM_SAL_SHIFT 0 -#define I40E_PRTPM_SAL_PFPM_SAL_MASK I40E_MASK(0xFFFFFFFF, I40E_PRTPM_SAL_PFPM_SAL_SHIFT) -#define I40E_VF_ARQBAH1 0x00006000 /* Reset: EMPR */ -#define I40E_VF_ARQBAH1_ARQBAH_SHIFT 0 -#define I40E_VF_ARQBAH1_ARQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAH1_ARQBAH_SHIFT) -#define I40E_VF_ARQBAL1 0x00006C00 /* Reset: EMPR */ -#define I40E_VF_ARQBAL1_ARQBAL_SHIFT 0 -#define I40E_VF_ARQBAL1_ARQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ARQBAL1_ARQBAL_SHIFT) -#define I40E_VF_ARQH1 0x00007400 /* Reset: EMPR */ -#define I40E_VF_ARQH1_ARQH_SHIFT 0 -#define I40E_VF_ARQH1_ARQH_MASK I40E_MASK(0x3FF, I40E_VF_ARQH1_ARQH_SHIFT) -#define I40E_VF_ARQLEN1 0x00008000 /* Reset: EMPR */ -#define I40E_VF_ARQLEN1_ARQLEN_SHIFT 0 -#define I40E_VF_ARQLEN1_ARQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ARQLEN1_ARQLEN_SHIFT) -#define I40E_VF_ARQLEN1_ARQVFE_SHIFT 28 -#define I40E_VF_ARQLEN1_ARQVFE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQVFE_SHIFT) -#define I40E_VF_ARQLEN1_ARQOVFL_SHIFT 29 -#define I40E_VF_ARQLEN1_ARQOVFL_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQOVFL_SHIFT) -#define I40E_VF_ARQLEN1_ARQCRIT_SHIFT 30 -#define I40E_VF_ARQLEN1_ARQCRIT_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQCRIT_SHIFT) -#define I40E_VF_ARQLEN1_ARQENABLE_SHIFT 31 -#define I40E_VF_ARQLEN1_ARQENABLE_MASK I40E_MASK(0x1, I40E_VF_ARQLEN1_ARQENABLE_SHIFT) -#define I40E_VF_ARQT1 0x00007000 /* Reset: EMPR */ -#define I40E_VF_ARQT1_ARQT_SHIFT 0 -#define I40E_VF_ARQT1_ARQT_MASK I40E_MASK(0x3FF, I40E_VF_ARQT1_ARQT_SHIFT) -#define I40E_VF_ATQBAH1 0x00007800 /* Reset: EMPR */ -#define I40E_VF_ATQBAH1_ATQBAH_SHIFT 0 -#define I40E_VF_ATQBAH1_ATQBAH_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAH1_ATQBAH_SHIFT) -#define I40E_VF_ATQBAL1 0x00007C00 /* Reset: EMPR */ -#define I40E_VF_ATQBAL1_ATQBAL_SHIFT 0 -#define I40E_VF_ATQBAL1_ATQBAL_MASK I40E_MASK(0xFFFFFFFF, I40E_VF_ATQBAL1_ATQBAL_SHIFT) -#define I40E_VF_ATQH1 0x00006400 /* Reset: EMPR */ -#define I40E_VF_ATQH1_ATQH_SHIFT 0 -#define I40E_VF_ATQH1_ATQH_MASK I40E_MASK(0x3FF, I40E_VF_ATQH1_ATQH_SHIFT) -#define I40E_VF_ATQLEN1 0x00006800 /* Reset: EMPR */ -#define I40E_VF_ATQLEN1_ATQLEN_SHIFT 0 -#define I40E_VF_ATQLEN1_ATQLEN_MASK I40E_MASK(0x3FF, I40E_VF_ATQLEN1_ATQLEN_SHIFT) -#define I40E_VF_ATQLEN1_ATQVFE_SHIFT 28 -#define I40E_VF_ATQLEN1_ATQVFE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQVFE_SHIFT) -#define I40E_VF_ATQLEN1_ATQOVFL_SHIFT 29 -#define I40E_VF_ATQLEN1_ATQOVFL_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQOVFL_SHIFT) -#define I40E_VF_ATQLEN1_ATQCRIT_SHIFT 30 -#define I40E_VF_ATQLEN1_ATQCRIT_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQCRIT_SHIFT) -#define I40E_VF_ATQLEN1_ATQENABLE_SHIFT 31 -#define I40E_VF_ATQLEN1_ATQENABLE_MASK I40E_MASK(0x1, I40E_VF_ATQLEN1_ATQENABLE_SHIFT) -#define I40E_VF_ATQT1 0x00008400 /* Reset: EMPR */ -#define I40E_VF_ATQT1_ATQT_SHIFT 0 -#define I40E_VF_ATQT1_ATQT_MASK I40E_MASK(0x3FF, I40E_VF_ATQT1_ATQT_SHIFT) -#define I40E_VFGEN_RSTAT 0x00008800 /* Reset: VFR */ -#define I40E_VFGEN_RSTAT_VFR_STATE_SHIFT 0 -#define I40E_VFGEN_RSTAT_VFR_STATE_MASK I40E_MASK(0x3, 
I40E_VFGEN_RSTAT_VFR_STATE_SHIFT) -#define I40E_VFINT_DYN_CTL01 0x00005C00 /* Reset: VFR */ -#define I40E_VFINT_DYN_CTL01_INTENA_SHIFT 0 -#define I40E_VFINT_DYN_CTL01_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_INTENA_SHIFT) -#define I40E_VFINT_DYN_CTL01_CLEARPBA_SHIFT 1 -#define I40E_VFINT_DYN_CTL01_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_CLEARPBA_SHIFT) -#define I40E_VFINT_DYN_CTL01_SWINT_TRIG_SHIFT 2 -#define I40E_VFINT_DYN_CTL01_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_SWINT_TRIG_SHIFT) -#define I40E_VFINT_DYN_CTL01_ITR_INDX_SHIFT 3 -#define I40E_VFINT_DYN_CTL01_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL01_ITR_INDX_SHIFT) -#define I40E_VFINT_DYN_CTL01_INTERVAL_SHIFT 5 -#define I40E_VFINT_DYN_CTL01_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTL01_INTERVAL_SHIFT) -#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT 24 -#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_SW_ITR_INDX_ENA_SHIFT) -#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_SHIFT 25 -#define I40E_VFINT_DYN_CTL01_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTL01_SW_ITR_INDX_SHIFT) -#define I40E_VFINT_DYN_CTL01_INTENA_MSK_SHIFT 31 -#define I40E_VFINT_DYN_CTL01_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTL01_INTENA_MSK_SHIFT) -#define I40E_VFINT_DYN_CTLN1(_INTVF) (0x00003800 + ((_INTVF) * 4)) /* _i=0...15 */ /* Reset: VFR */ -#define I40E_VFINT_DYN_CTLN1_MAX_INDEX 15 -#define I40E_VFINT_DYN_CTLN1_INTENA_SHIFT 0 -#define I40E_VFINT_DYN_CTLN1_INTENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_INTENA_SHIFT) -#define I40E_VFINT_DYN_CTLN1_CLEARPBA_SHIFT 1 -#define I40E_VFINT_DYN_CTLN1_CLEARPBA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_CLEARPBA_SHIFT) -#define I40E_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT 2 -#define I40E_VFINT_DYN_CTLN1_SWINT_TRIG_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_SWINT_TRIG_SHIFT) -#define I40E_VFINT_DYN_CTLN1_ITR_INDX_SHIFT 3 -#define I40E_VFINT_DYN_CTLN1_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN1_ITR_INDX_SHIFT) -#define I40E_VFINT_DYN_CTLN1_INTERVAL_SHIFT 5 -#define I40E_VFINT_DYN_CTLN1_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_DYN_CTLN1_INTERVAL_SHIFT) -#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT 24 -#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_ENA_SHIFT) -#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT 25 -#define I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_DYN_CTLN1_SW_ITR_INDX_SHIFT) -#define I40E_VFINT_DYN_CTLN1_INTENA_MSK_SHIFT 31 -#define I40E_VFINT_DYN_CTLN1_INTENA_MSK_MASK I40E_MASK(0x1, I40E_VFINT_DYN_CTLN1_INTENA_MSK_SHIFT) -#define I40E_VFINT_ICR0_ENA1 0x00005000 /* Reset: CORER */ -#define I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT 25 -#define I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_LINK_STAT_CHANGE_SHIFT) -#define I40E_VFINT_ICR0_ENA1_ADMINQ_SHIFT 30 -#define I40E_VFINT_ICR0_ENA1_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_ADMINQ_SHIFT) -#define I40E_VFINT_ICR0_ENA1_RSVD_SHIFT 31 -#define I40E_VFINT_ICR0_ENA1_RSVD_MASK I40E_MASK(0x1, I40E_VFINT_ICR0_ENA1_RSVD_SHIFT) -#define I40E_VFINT_ICR01 0x00004800 /* Reset: CORER */ -#define I40E_VFINT_ICR01_INTEVENT_SHIFT 0 -#define I40E_VFINT_ICR01_INTEVENT_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_INTEVENT_SHIFT) -#define I40E_VFINT_ICR01_QUEUE_0_SHIFT 1 -#define I40E_VFINT_ICR01_QUEUE_0_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_0_SHIFT) -#define I40E_VFINT_ICR01_QUEUE_1_SHIFT 2 -#define I40E_VFINT_ICR01_QUEUE_1_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_1_SHIFT) 
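Aside: multi-bit fields such as VFINT_DYN_CTL01's ITR_INDX are placed with (value << _SHIFT) and clipped by _MASK. A sketch of composing a VF interrupt re-enable write from the fields above; reg_write() is hypothetical, and index 3 is assumed here to be the "no interval update" ITR slot:

    #include <stdint.h>

    extern void reg_write(uint32_t offset, uint32_t value);

    /* Re-enable VF misc interrupts, clear the PBA bit, and select ITR
     * index 3 so no throttling interval is latched by this write. */
    static void vf_irq_enable(void)
    {
        uint32_t val = I40E_VFINT_DYN_CTL01_INTENA_MASK |
                       I40E_VFINT_DYN_CTL01_CLEARPBA_MASK |
                       ((3u << I40E_VFINT_DYN_CTL01_ITR_INDX_SHIFT) &
                        I40E_VFINT_DYN_CTL01_ITR_INDX_MASK);

        reg_write(I40E_VFINT_DYN_CTL01, val);
    }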
-#define I40E_VFINT_ICR01_QUEUE_2_SHIFT 3 -#define I40E_VFINT_ICR01_QUEUE_2_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_2_SHIFT) -#define I40E_VFINT_ICR01_QUEUE_3_SHIFT 4 -#define I40E_VFINT_ICR01_QUEUE_3_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_QUEUE_3_SHIFT) -#define I40E_VFINT_ICR01_LINK_STAT_CHANGE_SHIFT 25 -#define I40E_VFINT_ICR01_LINK_STAT_CHANGE_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_LINK_STAT_CHANGE_SHIFT) -#define I40E_VFINT_ICR01_ADMINQ_SHIFT 30 -#define I40E_VFINT_ICR01_ADMINQ_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_ADMINQ_SHIFT) -#define I40E_VFINT_ICR01_SWINT_SHIFT 31 -#define I40E_VFINT_ICR01_SWINT_MASK I40E_MASK(0x1, I40E_VFINT_ICR01_SWINT_SHIFT) -#define I40E_VFINT_ITR01(_i) (0x00004C00 + ((_i) * 4)) /* _i=0...2 */ /* Reset: VFR */ -#define I40E_VFINT_ITR01_MAX_INDEX 2 -#define I40E_VFINT_ITR01_INTERVAL_SHIFT 0 -#define I40E_VFINT_ITR01_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITR01_INTERVAL_SHIFT) -#define I40E_VFINT_ITRN1(_i, _INTVF) (0x00002800 + ((_i) * 64 + (_INTVF) * 4)) /* _i=0...2, _INTVF=0...15 */ /* Reset: VFR */ -#define I40E_VFINT_ITRN1_MAX_INDEX 2 -#define I40E_VFINT_ITRN1_INTERVAL_SHIFT 0 -#define I40E_VFINT_ITRN1_INTERVAL_MASK I40E_MASK(0xFFF, I40E_VFINT_ITRN1_INTERVAL_SHIFT) -#define I40E_VFINT_STAT_CTL01 0x00005400 /* Reset: VFR */ -#define I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT 2 -#define I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_MASK I40E_MASK(0x3, I40E_VFINT_STAT_CTL01_OTHER_ITR_INDX_SHIFT) -#define I40E_QRX_TAIL1(_Q) (0x00002000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_QRX_TAIL1_MAX_INDEX 15 -#define I40E_QRX_TAIL1_TAIL_SHIFT 0 -#define I40E_QRX_TAIL1_TAIL_MASK I40E_MASK(0x1FFF, I40E_QRX_TAIL1_TAIL_SHIFT) -#define I40E_QTX_TAIL1(_Q) (0x00000000 + ((_Q) * 4)) /* _i=0...15 */ /* Reset: PFR */ -#define I40E_QTX_TAIL1_MAX_INDEX 15 -#define I40E_QTX_TAIL1_TAIL_SHIFT 0 -#define I40E_QTX_TAIL1_TAIL_MASK I40E_MASK(0x1FFF, I40E_QTX_TAIL1_TAIL_SHIFT) -#define I40E_VFMSIX_PBA 0x00002000 /* Reset: VFLR */ -#define I40E_VFMSIX_PBA_PENBIT_SHIFT 0 -#define I40E_VFMSIX_PBA_PENBIT_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_PBA_PENBIT_SHIFT) -#define I40E_VFMSIX_TADD(_i) (0x00000000 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TADD_MAX_INDEX 16 -#define I40E_VFMSIX_TADD_MSIXTADD10_SHIFT 0 -#define I40E_VFMSIX_TADD_MSIXTADD10_MASK I40E_MASK(0x3, I40E_VFMSIX_TADD_MSIXTADD10_SHIFT) -#define I40E_VFMSIX_TADD_MSIXTADD_SHIFT 2 -#define I40E_VFMSIX_TADD_MSIXTADD_MASK I40E_MASK(0x3FFFFFFF, I40E_VFMSIX_TADD_MSIXTADD_SHIFT) -#define I40E_VFMSIX_TMSG(_i) (0x00000008 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TMSG_MAX_INDEX 16 -#define I40E_VFMSIX_TMSG_MSIXTMSG_SHIFT 0 -#define I40E_VFMSIX_TMSG_MSIXTMSG_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TMSG_MSIXTMSG_SHIFT) -#define I40E_VFMSIX_TUADD(_i) (0x00000004 + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TUADD_MAX_INDEX 16 -#define I40E_VFMSIX_TUADD_MSIXTUADD_SHIFT 0 -#define I40E_VFMSIX_TUADD_MSIXTUADD_MASK I40E_MASK(0xFFFFFFFF, I40E_VFMSIX_TUADD_MSIXTUADD_SHIFT) -#define I40E_VFMSIX_TVCTRL(_i) (0x0000000C + ((_i) * 16)) /* _i=0...16 */ /* Reset: VFLR */ -#define I40E_VFMSIX_TVCTRL_MAX_INDEX 16 -#define I40E_VFMSIX_TVCTRL_MASK_SHIFT 0 -#define I40E_VFMSIX_TVCTRL_MASK_MASK I40E_MASK(0x1, I40E_VFMSIX_TVCTRL_MASK_SHIFT) -#define I40E_VFCM_PE_ERRDATA 0x0000DC00 /* Reset: VFR */ -#define I40E_VFCM_PE_ERRDATA_ERROR_CODE_SHIFT 0 -#define I40E_VFCM_PE_ERRDATA_ERROR_CODE_MASK I40E_MASK(0xF, I40E_VFCM_PE_ERRDATA_ERROR_CODE_SHIFT) -#define 
I40E_VFCM_PE_ERRDATA_Q_TYPE_SHIFT 4 -#define I40E_VFCM_PE_ERRDATA_Q_TYPE_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRDATA_Q_TYPE_SHIFT) -#define I40E_VFCM_PE_ERRDATA_Q_NUM_SHIFT 8 -#define I40E_VFCM_PE_ERRDATA_Q_NUM_MASK I40E_MASK(0x3FFFF, I40E_VFCM_PE_ERRDATA_Q_NUM_SHIFT) -#define I40E_VFCM_PE_ERRINFO 0x0000D800 /* Reset: VFR */ -#define I40E_VFCM_PE_ERRINFO_ERROR_VALID_SHIFT 0 -#define I40E_VFCM_PE_ERRINFO_ERROR_VALID_MASK I40E_MASK(0x1, I40E_VFCM_PE_ERRINFO_ERROR_VALID_SHIFT) -#define I40E_VFCM_PE_ERRINFO_ERROR_INST_SHIFT 4 -#define I40E_VFCM_PE_ERRINFO_ERROR_INST_MASK I40E_MASK(0x7, I40E_VFCM_PE_ERRINFO_ERROR_INST_SHIFT) -#define I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT 8 -#define I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_DBL_ERROR_CNT_SHIFT) -#define I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT 16 -#define I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_RLU_ERROR_CNT_SHIFT) -#define I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT 24 -#define I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_MASK I40E_MASK(0xFF, I40E_VFCM_PE_ERRINFO_RLS_ERROR_CNT_SHIFT) -#define I40E_VFQF_HENA(_i) (0x0000C400 + ((_i) * 4)) /* _i=0...1 */ /* Reset: CORER */ -#define I40E_VFQF_HENA_MAX_INDEX 1 -#define I40E_VFQF_HENA_PTYPE_ENA_SHIFT 0 -#define I40E_VFQF_HENA_PTYPE_ENA_MASK I40E_MASK(0xFFFFFFFF, I40E_VFQF_HENA_PTYPE_ENA_SHIFT) -#define I40E_VFQF_HKEY(_i) (0x0000CC00 + ((_i) * 4)) /* _i=0...12 */ /* Reset: CORER */ -#define I40E_VFQF_HKEY_MAX_INDEX 12 -#define I40E_VFQF_HKEY_KEY_0_SHIFT 0 -#define I40E_VFQF_HKEY_KEY_0_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_0_SHIFT) -#define I40E_VFQF_HKEY_KEY_1_SHIFT 8 -#define I40E_VFQF_HKEY_KEY_1_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_1_SHIFT) -#define I40E_VFQF_HKEY_KEY_2_SHIFT 16 -#define I40E_VFQF_HKEY_KEY_2_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_2_SHIFT) -#define I40E_VFQF_HKEY_KEY_3_SHIFT 24 -#define I40E_VFQF_HKEY_KEY_3_MASK I40E_MASK(0xFF, I40E_VFQF_HKEY_KEY_3_SHIFT) -#define I40E_VFQF_HLUT(_i) (0x0000D000 + ((_i) * 4)) /* _i=0...15 */ /* Reset: CORER */ -#define I40E_VFQF_HLUT_MAX_INDEX 15 -#define I40E_VFQF_HLUT_LUT0_SHIFT 0 -#define I40E_VFQF_HLUT_LUT0_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT0_SHIFT) -#define I40E_VFQF_HLUT_LUT1_SHIFT 8 -#define I40E_VFQF_HLUT_LUT1_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT1_SHIFT) -#define I40E_VFQF_HLUT_LUT2_SHIFT 16 -#define I40E_VFQF_HLUT_LUT2_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT2_SHIFT) -#define I40E_VFQF_HLUT_LUT3_SHIFT 24 -#define I40E_VFQF_HLUT_LUT3_MASK I40E_MASK(0xF, I40E_VFQF_HLUT_LUT3_SHIFT) -#define I40E_VFQF_HREGION(_i) (0x0000D400 + ((_i) * 4)) /* _i=0...7 */ /* Reset: CORER */ -#define I40E_VFQF_HREGION_MAX_INDEX 7 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_0_SHIFT 0 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_0_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_0_SHIFT) -#define I40E_VFQF_HREGION_REGION_0_SHIFT 1 -#define I40E_VFQF_HREGION_REGION_0_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_0_SHIFT) -#define I40E_VFQF_HREGION_OVERRIDE_ENA_1_SHIFT 4 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_1_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_1_SHIFT) -#define I40E_VFQF_HREGION_REGION_1_SHIFT 5 -#define I40E_VFQF_HREGION_REGION_1_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_1_SHIFT) -#define I40E_VFQF_HREGION_OVERRIDE_ENA_2_SHIFT 8 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_2_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_2_SHIFT) -#define I40E_VFQF_HREGION_REGION_2_SHIFT 9 -#define I40E_VFQF_HREGION_REGION_2_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_2_SHIFT) -#define 
I40E_VFQF_HREGION_OVERRIDE_ENA_3_SHIFT 12 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_3_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_3_SHIFT) -#define I40E_VFQF_HREGION_REGION_3_SHIFT 13 -#define I40E_VFQF_HREGION_REGION_3_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_3_SHIFT) -#define I40E_VFQF_HREGION_OVERRIDE_ENA_4_SHIFT 16 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_4_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_4_SHIFT) -#define I40E_VFQF_HREGION_REGION_4_SHIFT 17 -#define I40E_VFQF_HREGION_REGION_4_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_4_SHIFT) -#define I40E_VFQF_HREGION_OVERRIDE_ENA_5_SHIFT 20 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_5_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_5_SHIFT) -#define I40E_VFQF_HREGION_REGION_5_SHIFT 21 -#define I40E_VFQF_HREGION_REGION_5_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_5_SHIFT) -#define I40E_VFQF_HREGION_OVERRIDE_ENA_6_SHIFT 24 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_6_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_6_SHIFT) -#define I40E_VFQF_HREGION_REGION_6_SHIFT 25 -#define I40E_VFQF_HREGION_REGION_6_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_6_SHIFT) -#define I40E_VFQF_HREGION_OVERRIDE_ENA_7_SHIFT 28 -#define I40E_VFQF_HREGION_OVERRIDE_ENA_7_MASK I40E_MASK(0x1, I40E_VFQF_HREGION_OVERRIDE_ENA_7_SHIFT) -#define I40E_VFQF_HREGION_REGION_7_SHIFT 29 -#define I40E_VFQF_HREGION_REGION_7_MASK I40E_MASK(0x7, I40E_VFQF_HREGION_REGION_7_SHIFT) -#endif diff --git a/lib/librte_pmd_i40e/i40e/i40e_status.h b/lib/librte_pmd_i40e/i40e/i40e_status.h deleted file mode 100644 index 2e693a3..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_status.h +++ /dev/null @@ -1,107 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. 
-
-***************************************************************************/
-
-#ifndef _I40E_STATUS_H_
-#define _I40E_STATUS_H_
-
-/* Error Codes */
-enum i40e_status_code {
-	I40E_SUCCESS = 0,
-	I40E_ERR_NVM = -1,
-	I40E_ERR_NVM_CHECKSUM = -2,
-	I40E_ERR_PHY = -3,
-	I40E_ERR_CONFIG = -4,
-	I40E_ERR_PARAM = -5,
-	I40E_ERR_MAC_TYPE = -6,
-	I40E_ERR_UNKNOWN_PHY = -7,
-	I40E_ERR_LINK_SETUP = -8,
-	I40E_ERR_ADAPTER_STOPPED = -9,
-	I40E_ERR_INVALID_MAC_ADDR = -10,
-	I40E_ERR_DEVICE_NOT_SUPPORTED = -11,
-	I40E_ERR_MASTER_REQUESTS_PENDING = -12,
-	I40E_ERR_INVALID_LINK_SETTINGS = -13,
-	I40E_ERR_AUTONEG_NOT_COMPLETE = -14,
-	I40E_ERR_RESET_FAILED = -15,
-	I40E_ERR_SWFW_SYNC = -16,
-	I40E_ERR_NO_AVAILABLE_VSI = -17,
-	I40E_ERR_NO_MEMORY = -18,
-	I40E_ERR_BAD_PTR = -19,
-	I40E_ERR_RING_FULL = -20,
-	I40E_ERR_INVALID_PD_ID = -21,
-	I40E_ERR_INVALID_QP_ID = -22,
-	I40E_ERR_INVALID_CQ_ID = -23,
-	I40E_ERR_INVALID_CEQ_ID = -24,
-	I40E_ERR_INVALID_AEQ_ID = -25,
-	I40E_ERR_INVALID_SIZE = -26,
-	I40E_ERR_INVALID_ARP_INDEX = -27,
-	I40E_ERR_INVALID_FPM_FUNC_ID = -28,
-	I40E_ERR_QP_INVALID_MSG_SIZE = -29,
-	I40E_ERR_QP_TOOMANY_WRS_POSTED = -30,
-	I40E_ERR_INVALID_FRAG_COUNT = -31,
-	I40E_ERR_QUEUE_EMPTY = -32,
-	I40E_ERR_INVALID_ALIGNMENT = -33,
-	I40E_ERR_FLUSHED_QUEUE = -34,
-	I40E_ERR_INVALID_PUSH_PAGE_INDEX = -35,
-	I40E_ERR_INVALID_IMM_DATA_SIZE = -36,
-	I40E_ERR_TIMEOUT = -37,
-	I40E_ERR_OPCODE_MISMATCH = -38,
-	I40E_ERR_CQP_COMPL_ERROR = -39,
-	I40E_ERR_INVALID_VF_ID = -40,
-	I40E_ERR_INVALID_HMCFN_ID = -41,
-	I40E_ERR_BACKING_PAGE_ERROR = -42,
-	I40E_ERR_NO_PBLCHUNKS_AVAILABLE = -43,
-	I40E_ERR_INVALID_PBLE_INDEX = -44,
-	I40E_ERR_INVALID_SD_INDEX = -45,
-	I40E_ERR_INVALID_PAGE_DESC_INDEX = -46,
-	I40E_ERR_INVALID_SD_TYPE = -47,
-	I40E_ERR_MEMCPY_FAILED = -48,
-	I40E_ERR_INVALID_HMC_OBJ_INDEX = -49,
-	I40E_ERR_INVALID_HMC_OBJ_COUNT = -50,
-	I40E_ERR_INVALID_SRQ_ARM_LIMIT = -51,
-	I40E_ERR_SRQ_ENABLED = -52,
-	I40E_ERR_ADMIN_QUEUE_ERROR = -53,
-	I40E_ERR_ADMIN_QUEUE_TIMEOUT = -54,
-	I40E_ERR_BUF_TOO_SHORT = -55,
-	I40E_ERR_ADMIN_QUEUE_FULL = -56,
-	I40E_ERR_ADMIN_QUEUE_NO_WORK = -57,
-	I40E_ERR_BAD_IWARP_CQE = -58,
-	I40E_ERR_NVM_BLANK_MODE = -59,
-	I40E_ERR_NOT_IMPLEMENTED = -60,
-	I40E_ERR_PE_DOORBELL_NOT_ENABLED = -61,
-	I40E_ERR_DIAG_TEST_FAILED = -62,
-	I40E_ERR_NOT_READY = -63,
-	I40E_NOT_SUPPORTED = -64,
-	I40E_ERR_FIRMWARE_API_VERSION = -65,
-};
-
-#endif /* _I40E_STATUS_H_ */
diff --git a/lib/librte_pmd_i40e/i40e/i40e_type.h b/lib/librte_pmd_i40e/i40e/i40e_type.h
deleted file mode 100644
index bb87640..0000000
--- a/lib/librte_pmd_i40e/i40e/i40e_type.h
+++ /dev/null
@@ -1,1425 +0,0 @@
-/*******************************************************************************
-
-Copyright (c) 2013 - 2014, Intel Corporation
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are met:
-
- 1. Redistributions of source code must retain the above copyright notice,
-    this list of conditions and the following disclaimer.
-
- 2. Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in the
-    documentation and/or other materials provided with the distribution.
-
- 3. Neither the name of the Intel Corporation nor the names of its
-    contributors may be used to endorse or promote products derived from
-    this software without specific prior written permission.
- -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#ifndef _I40E_TYPE_H_ -#define _I40E_TYPE_H_ - -#include "i40e_status.h" -#include "i40e_osdep.h" -#include "i40e_register.h" -#include "i40e_adminq.h" -#include "i40e_hmc.h" -#include "i40e_lan_hmc.h" - -#define UNREFERENCED_XPARAMETER -#define UNREFERENCED_1PARAMETER(_p) (_p); -#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q); -#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r); -#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s); -#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t); - -/* Vendor ID */ -#define I40E_INTEL_VENDOR_ID 0x8086 - -/* Device IDs */ -#define I40E_DEV_ID_SFP_XL710 0x1572 -#define I40E_DEV_ID_QEMU 0x1574 -#define I40E_DEV_ID_KX_A 0x157F -#define I40E_DEV_ID_KX_B 0x1580 -#define I40E_DEV_ID_KX_C 0x1581 -#define I40E_DEV_ID_QSFP_A 0x1583 -#define I40E_DEV_ID_QSFP_B 0x1584 -#define I40E_DEV_ID_QSFP_C 0x1585 -#define I40E_DEV_ID_10G_BASE_T 0x1586 -#define I40E_DEV_ID_VF 0x154C -#define I40E_DEV_ID_VF_HV 0x1571 - -#define i40e_is_40G_device(d) ((d) == I40E_DEV_ID_QSFP_A || \ - (d) == I40E_DEV_ID_QSFP_B || \ - (d) == I40E_DEV_ID_QSFP_C) - -#ifndef I40E_MASK -/* I40E_MASK is a macro used on 32 bit registers */ -#define I40E_MASK(mask, shift) (mask << shift) -#endif - -#define I40E_MAX_PF 16 -#define I40E_MAX_PF_VSI 64 -#define I40E_MAX_PF_QP 128 -#define I40E_MAX_VSI_QP 16 -#define I40E_MAX_VF_VSI 3 -#define I40E_MAX_CHAINED_RX_BUFFERS 5 -#define I40E_MAX_PF_UDP_OFFLOAD_PORTS 16 - -/* something less than 1 minute */ -#define I40E_HEARTBEAT_TIMEOUT (HZ * 50) - -/* Max default timeout in ms, */ -#define I40E_MAX_NVM_TIMEOUT 18000 - -/* Check whether address is multicast. */ -#define I40E_IS_MULTICAST(address) (bool)(((u8 *)(address))[0] & ((u8)0x01)) - -/* Check whether an address is broadcast. */ -#define I40E_IS_BROADCAST(address) \ - ((((u8 *)(address))[0] == ((u8)0xff)) && \ - (((u8 *)(address))[1] == ((u8)0xff))) - -/* Switch from ms to the 1usec global time (this is the GTIME resolution) */ -#define I40E_MS_TO_GTIME(time) ((time) * 1000) - -/* forward declaration */ -struct i40e_hw; -typedef void (*I40E_ADMINQ_CALLBACK)(struct i40e_hw *, struct i40e_aq_desc *); - -#define I40E_ETH_LENGTH_OF_ADDRESS 6 -/* Data type manipulation macros. */ -#define I40E_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF)) -#define I40E_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF)) - -#define I40E_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF)) -#define I40E_LO_WORD(x) ((u16)((x) & 0xFFFF)) - -#define I40E_HI_BYTE(x) ((u8)(((x) >> 8) & 0xFF)) -#define I40E_LO_BYTE(x) ((u8)((x) & 0xFF)) - -/* Number of Transmit Descriptors must be a multiple of 8. 
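Nearly every register field in these headers is described by a shift constant plus a mask built with I40E_MASK(mask, shift), i.e. the mask is given in field-relative form and shifted into place. A standalone sketch (not driver code) of the matching extract/insert helpers:

#include <stdint.h>
#include <stdio.h>

#define MASK(m, s)  ((m) << (s))            /* mirrors I40E_MASK(mask, shift) */

static uint32_t field_get(uint32_t reg, uint32_t m, unsigned s)
{
    return (reg & MASK(m, s)) >> s;
}

static uint32_t field_set(uint32_t reg, uint32_t m, unsigned s, uint32_t v)
{
    return (reg & ~MASK(m, s)) | ((v << s) & MASK(m, s));
}

int main(void)
{
    uint32_t reg = 0;
    reg = field_set(reg, 0x7, 13, 5);       /* a 3-bit field at bit 13 */
    printf("0x%x -> %u\n", reg, field_get(reg, 0x7, 13)); /* 0xa000 -> 5 */
    return 0;
}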
*/ -#define I40E_REQ_TX_DESCRIPTOR_MULTIPLE 8 -/* Number of Receive Descriptors must be a multiple of 32 if - * the number of descriptors is greater than 32. - */ -#define I40E_REQ_RX_DESCRIPTOR_MULTIPLE 32 - -#define I40E_DESC_UNUSED(R) \ - ((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \ - (R)->next_to_clean - (R)->next_to_use - 1) - -/* bitfields for Tx queue mapping in QTX_CTL */ -#define I40E_QTX_CTL_VF_QUEUE 0x0 -#define I40E_QTX_CTL_VM_QUEUE 0x1 -#define I40E_QTX_CTL_PF_QUEUE 0x2 - -/* debug masks - set these bits in hw->debug_mask to control output */ -enum i40e_debug_mask { - I40E_DEBUG_INIT = 0x00000001, - I40E_DEBUG_RELEASE = 0x00000002, - - I40E_DEBUG_LINK = 0x00000010, - I40E_DEBUG_PHY = 0x00000020, - I40E_DEBUG_HMC = 0x00000040, - I40E_DEBUG_NVM = 0x00000080, - I40E_DEBUG_LAN = 0x00000100, - I40E_DEBUG_FLOW = 0x00000200, - I40E_DEBUG_DCB = 0x00000400, - I40E_DEBUG_DIAG = 0x00000800, - I40E_DEBUG_FD = 0x00001000, - - I40E_DEBUG_AQ_MESSAGE = 0x01000000, - I40E_DEBUG_AQ_DESCRIPTOR = 0x02000000, - I40E_DEBUG_AQ_DESC_BUFFER = 0x04000000, - I40E_DEBUG_AQ_COMMAND = 0x06000000, - I40E_DEBUG_AQ = 0x0F000000, - - I40E_DEBUG_USER = 0xF0000000, - - I40E_DEBUG_ALL = 0xFFFFFFFF -}; - -/* PCI Bus Info */ -#define I40E_PCI_LINK_STATUS 0xB2 -#define I40E_PCI_LINK_WIDTH 0x3F0 -#define I40E_PCI_LINK_WIDTH_1 0x10 -#define I40E_PCI_LINK_WIDTH_2 0x20 -#define I40E_PCI_LINK_WIDTH_4 0x40 -#define I40E_PCI_LINK_WIDTH_8 0x80 -#define I40E_PCI_LINK_SPEED 0xF -#define I40E_PCI_LINK_SPEED_2500 0x1 -#define I40E_PCI_LINK_SPEED_5000 0x2 -#define I40E_PCI_LINK_SPEED_8000 0x3 - -/* Memory types */ -enum i40e_memset_type { - I40E_NONDMA_MEM = 0, - I40E_DMA_MEM -}; - -/* Memcpy types */ -enum i40e_memcpy_type { - I40E_NONDMA_TO_NONDMA = 0, - I40E_NONDMA_TO_DMA, - I40E_DMA_TO_DMA, - I40E_DMA_TO_NONDMA -}; - -/* These are structs for managing the hardware information and the operations. - * The structures of function pointers are filled out at init time when we - * know for sure exactly which hardware we're working with. This gives us the - * flexibility of using the same main driver code but adapting to slightly - * different hardware needs as new parts are developed. For this architecture, - * the Firmware and AdminQ are intended to insulate the driver from most of the - * future changes, but these structures will also do part of the job. 
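The I40E_DESC_UNUSED macro above computes how many descriptor slots software may still fill, keeping one slot permanently empty so that a full ring and an empty ring remain distinguishable. A standalone sketch of the same expression with two worked cases:

#include <stdio.h>

struct ring { unsigned next_to_use, next_to_clean, count; };

/* same arithmetic as I40E_DESC_UNUSED above */
static unsigned desc_unused(const struct ring *r)
{
    return ((r->next_to_clean > r->next_to_use) ? 0 : r->count)
           + r->next_to_clean - r->next_to_use - 1;
}

int main(void)
{
    struct ring r = { .next_to_use = 5, .next_to_clean = 2, .count = 8 };
    printf("%u\n", desc_unused(&r));  /* 4: slots 5,6,7,0 (slot 1 reserved) */
    r.next_to_use = 2; r.next_to_clean = 5;
    printf("%u\n", desc_unused(&r));  /* 2: slots 2,3 (slot 4 reserved) */
    return 0;
}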
- */ -enum i40e_mac_type { - I40E_MAC_UNKNOWN = 0, - I40E_MAC_X710, - I40E_MAC_XL710, - I40E_MAC_VF, - I40E_MAC_GENERIC, -}; - -enum i40e_media_type { - I40E_MEDIA_TYPE_UNKNOWN = 0, - I40E_MEDIA_TYPE_FIBER, - I40E_MEDIA_TYPE_BASET, - I40E_MEDIA_TYPE_BACKPLANE, - I40E_MEDIA_TYPE_CX4, - I40E_MEDIA_TYPE_DA, - I40E_MEDIA_TYPE_VIRTUAL -}; - -enum i40e_fc_mode { - I40E_FC_NONE = 0, - I40E_FC_RX_PAUSE, - I40E_FC_TX_PAUSE, - I40E_FC_FULL, - I40E_FC_PFC, - I40E_FC_DEFAULT -}; - -enum i40e_set_fc_aq_failures { - I40E_SET_FC_AQ_FAIL_NONE = 0, - I40E_SET_FC_AQ_FAIL_GET = 1, - I40E_SET_FC_AQ_FAIL_SET = 2, - I40E_SET_FC_AQ_FAIL_UPDATE = 4, - I40E_SET_FC_AQ_FAIL_SET_UPDATE = 6 -}; - -enum i40e_vsi_type { - I40E_VSI_MAIN = 0, - I40E_VSI_VMDQ1, - I40E_VSI_VMDQ2, - I40E_VSI_CTRL, - I40E_VSI_FCOE, - I40E_VSI_MIRROR, - I40E_VSI_SRIOV, - I40E_VSI_FDIR, - I40E_VSI_TYPE_UNKNOWN -}; - -enum i40e_queue_type { - I40E_QUEUE_TYPE_RX = 0, - I40E_QUEUE_TYPE_TX, - I40E_QUEUE_TYPE_PE_CEQ, - I40E_QUEUE_TYPE_UNKNOWN -}; - -struct i40e_link_status { - enum i40e_aq_phy_type phy_type; - enum i40e_aq_link_speed link_speed; - u8 link_info; - u8 an_info; - u8 ext_info; - u8 loopback; - bool an_enabled; - /* is Link Status Event notification to SW enabled */ - bool lse_enable; - u16 max_frame_size; - bool crc_enable; - u8 pacing; -}; - -struct i40e_phy_info { - struct i40e_link_status link_info; - struct i40e_link_status link_info_old; - u32 autoneg_advertised; - u32 phy_id; - u32 module_type; - bool get_link_info; - enum i40e_media_type media_type; -}; - -#define I40E_HW_CAP_MAX_GPIO 30 -#define I40E_HW_CAP_MDIO_PORT_MODE_MDIO 0 -#define I40E_HW_CAP_MDIO_PORT_MODE_I2C 1 - -/* Capabilities of a PF or a VF or the whole device */ -struct i40e_hw_capabilities { - u32 switch_mode; -#define I40E_NVM_IMAGE_TYPE_EVB 0x0 -#define I40E_NVM_IMAGE_TYPE_CLOUD 0x2 -#define I40E_NVM_IMAGE_TYPE_UDP_CLOUD 0x3 - - u32 management_mode; - u32 npar_enable; - u32 os2bmc; - u32 valid_functions; - bool sr_iov_1_1; - bool vmdq; - bool evb_802_1_qbg; /* Edge Virtual Bridging */ - bool evb_802_1_qbh; /* Bridge Port Extension */ - bool dcb; - bool fcoe; - bool mfp_mode_1; - bool mgmt_cem; - bool ieee_1588; - bool iwarp; - bool fd; - u32 fd_filters_guaranteed; - u32 fd_filters_best_effort; - bool rss; - u32 rss_table_size; - u32 rss_table_entry_width; - bool led[I40E_HW_CAP_MAX_GPIO]; - bool sdp[I40E_HW_CAP_MAX_GPIO]; - u32 nvm_image_type; - u32 num_flow_director_filters; - u32 num_vfs; - u32 vf_base_id; - u32 num_vsis; - u32 num_rx_qp; - u32 num_tx_qp; - u32 base_queue; - u32 num_msix_vectors; - u32 num_msix_vectors_vf; - u32 led_pin_num; - u32 sdp_pin_num; - u32 mdio_port_num; - u32 mdio_port_mode; - u8 rx_buf_chain_len; - u32 enabled_tcmap; - u32 maxtc; -}; - -struct i40e_mac_info { - enum i40e_mac_type type; - u8 addr[I40E_ETH_LENGTH_OF_ADDRESS]; - u8 perm_addr[I40E_ETH_LENGTH_OF_ADDRESS]; - u8 san_addr[I40E_ETH_LENGTH_OF_ADDRESS]; - u8 port_addr[I40E_ETH_LENGTH_OF_ADDRESS]; - u16 max_fcoeq; -}; - -enum i40e_aq_resources_ids { - I40E_NVM_RESOURCE_ID = 1 -}; - -enum i40e_aq_resource_access_type { - I40E_RESOURCE_READ = 1, - I40E_RESOURCE_WRITE -}; - -struct i40e_nvm_info { - u64 hw_semaphore_timeout; /* 2usec global time (GTIME resolution) */ - u64 hw_semaphore_wait; /* - || - */ - u32 timeout; /* [ms] */ - u16 sr_size; /* Shadow RAM size in words */ - bool blank_nvm_mode; /* is NVM empty (no FW present)*/ - u16 version; /* NVM package version */ - u32 eetrack; /* NVM data version */ -}; - -/* definitions used in NVM update support */ - -enum 
i40e_nvmupd_cmd { - I40E_NVMUPD_INVALID, - I40E_NVMUPD_READ_CON, - I40E_NVMUPD_READ_SNT, - I40E_NVMUPD_READ_LCB, - I40E_NVMUPD_READ_SA, - I40E_NVMUPD_WRITE_ERA, - I40E_NVMUPD_WRITE_CON, - I40E_NVMUPD_WRITE_SNT, - I40E_NVMUPD_WRITE_LCB, - I40E_NVMUPD_WRITE_SA, - I40E_NVMUPD_CSUM_CON, - I40E_NVMUPD_CSUM_SA, - I40E_NVMUPD_CSUM_LCB, -}; - -enum i40e_nvmupd_state { - I40E_NVMUPD_STATE_INIT, - I40E_NVMUPD_STATE_READING, - I40E_NVMUPD_STATE_WRITING -}; - -/* nvm_access definition and its masks/shifts need to be accessible to - * application, core driver, and shared code. Where is the right file? - */ -#define I40E_NVM_READ 0xB -#define I40E_NVM_WRITE 0xC - -#define I40E_NVM_MOD_PNT_MASK 0xFF - -#define I40E_NVM_TRANS_SHIFT 8 -#define I40E_NVM_TRANS_MASK (0xf << I40E_NVM_TRANS_SHIFT) -#define I40E_NVM_CON 0x0 -#define I40E_NVM_SNT 0x1 -#define I40E_NVM_LCB 0x2 -#define I40E_NVM_SA (I40E_NVM_SNT | I40E_NVM_LCB) -#define I40E_NVM_ERA 0x4 -#define I40E_NVM_CSUM 0x8 - -#define I40E_NVM_ADAPT_SHIFT 16 -#define I40E_NVM_ADAPT_MASK (0xffffULL << I40E_NVM_ADAPT_SHIFT) - -#define I40E_NVMUPD_MAX_DATA 4096 -#define I40E_NVMUPD_IFACE_TIMEOUT 2 /* seconds */ - -struct i40e_nvm_access { - u32 command; - u32 config; - u32 offset; /* in bytes */ - u32 data_size; /* in bytes */ - u8 data[1]; -}; - -/* PCI bus types */ -enum i40e_bus_type { - i40e_bus_type_unknown = 0, - i40e_bus_type_pci, - i40e_bus_type_pcix, - i40e_bus_type_pci_express, - i40e_bus_type_reserved -}; - -/* PCI bus speeds */ -enum i40e_bus_speed { - i40e_bus_speed_unknown = 0, - i40e_bus_speed_33 = 33, - i40e_bus_speed_66 = 66, - i40e_bus_speed_100 = 100, - i40e_bus_speed_120 = 120, - i40e_bus_speed_133 = 133, - i40e_bus_speed_2500 = 2500, - i40e_bus_speed_5000 = 5000, - i40e_bus_speed_8000 = 8000, - i40e_bus_speed_reserved -}; - -/* PCI bus widths */ -enum i40e_bus_width { - i40e_bus_width_unknown = 0, - i40e_bus_width_pcie_x1 = 1, - i40e_bus_width_pcie_x2 = 2, - i40e_bus_width_pcie_x4 = 4, - i40e_bus_width_pcie_x8 = 8, - i40e_bus_width_32 = 32, - i40e_bus_width_64 = 64, - i40e_bus_width_reserved -}; - -/* Bus parameters */ -struct i40e_bus_info { - enum i40e_bus_speed speed; - enum i40e_bus_width width; - enum i40e_bus_type type; - - u16 func; - u16 device; - u16 lan_id; -}; - -/* Flow control (FC) parameters */ -struct i40e_fc_info { - enum i40e_fc_mode current_mode; /* FC mode in effect */ - enum i40e_fc_mode requested_mode; /* FC mode requested by caller */ -}; - -#define I40E_MAX_TRAFFIC_CLASS 8 -#define I40E_MAX_USER_PRIORITY 8 -#define I40E_DCBX_MAX_APPS 32 -#define I40E_LLDPDU_SIZE 1500 - -/* IEEE 802.1Qaz ETS Configuration data */ -struct i40e_ieee_ets_config { - u8 willing; - u8 cbs; - u8 maxtcs; - u8 prioritytable[I40E_MAX_TRAFFIC_CLASS]; - u8 tcbwtable[I40E_MAX_TRAFFIC_CLASS]; - u8 tsatable[I40E_MAX_TRAFFIC_CLASS]; -}; - -/* IEEE 802.1Qaz ETS Recommendation data */ -struct i40e_ieee_ets_recommend { - u8 prioritytable[I40E_MAX_TRAFFIC_CLASS]; - u8 tcbwtable[I40E_MAX_TRAFFIC_CLASS]; - u8 tsatable[I40E_MAX_TRAFFIC_CLASS]; -}; - -/* IEEE 802.1Qaz PFC Configuration data */ -struct i40e_ieee_pfc_config { - u8 willing; - u8 mbc; - u8 pfccap; - u8 pfcenable; -}; - -/* IEEE 802.1Qaz Application Priority data */ -struct i40e_ieee_app_priority_table { - u8 priority; - u8 selector; - u16 protocolid; -}; - -struct i40e_dcbx_config { - u32 numapps; - struct i40e_ieee_ets_config etscfg; - struct i40e_ieee_ets_recommend etsrec; - struct i40e_ieee_pfc_config pfc; - struct i40e_ieee_app_priority_table app[I40E_DCBX_MAX_APPS]; -}; - -/* Port hardware 
description */ -struct i40e_hw { - u8 *hw_addr; - void *back; - - /* function pointer structs */ - struct i40e_phy_info phy; - struct i40e_mac_info mac; - struct i40e_bus_info bus; - struct i40e_nvm_info nvm; - struct i40e_fc_info fc; - - /* pci info */ - u16 device_id; - u16 vendor_id; - u16 subsystem_device_id; - u16 subsystem_vendor_id; - u8 revision_id; - u8 port; - bool adapter_stopped; - - /* capabilities for entire device and PCI func */ - struct i40e_hw_capabilities dev_caps; - struct i40e_hw_capabilities func_caps; - - /* Flow Director shared filter space */ - u16 fdir_shared_filter_count; - - /* device profile info */ - u8 pf_id; - u16 main_vsi_seid; - - /* Closest numa node to the device */ - u16 numa_node; - - /* Admin Queue info */ - struct i40e_adminq_info aq; - - /* state of nvm update process */ - enum i40e_nvmupd_state nvmupd_state; - - /* HMC info */ - struct i40e_hmc_info hmc; /* HMC info struct */ - - /* LLDP/DCBX Status */ - u16 dcbx_status; - - /* DCBX info */ - struct i40e_dcbx_config local_dcbx_config; - struct i40e_dcbx_config remote_dcbx_config; - - /* debug mask */ - u32 debug_mask; -}; - -struct i40e_driver_version { - u8 major_version; - u8 minor_version; - u8 build_version; - u8 subbuild_version; - u8 driver_string[32]; -}; - -/* RX Descriptors */ -union i40e_16byte_rx_desc { - struct { - __le64 pkt_addr; /* Packet buffer address */ - __le64 hdr_addr; /* Header buffer address */ - } read; - struct { - struct { - struct { - union { - __le16 mirroring_status; - __le16 fcoe_ctx_id; - } mirr_fcoe; - __le16 l2tag1; - } lo_dword; - union { - __le32 rss; /* RSS Hash */ - __le32 fd_id; /* Flow director filter id */ - __le32 fcoe_param; /* FCoE DDP Context id */ - } hi_dword; - } qword0; - struct { - /* ext status/error/pktype/length */ - __le64 status_error_len; - } qword1; - } wb; /* writeback */ -}; - -union i40e_32byte_rx_desc { - struct { - __le64 pkt_addr; /* Packet buffer address */ - __le64 hdr_addr; /* Header buffer address */ - /* bit 0 of hdr_buffer_addr is DD bit */ - __le64 rsvd1; - __le64 rsvd2; - } read; - struct { - struct { - struct { - union { - __le16 mirroring_status; - __le16 fcoe_ctx_id; - } mirr_fcoe; - __le16 l2tag1; - } lo_dword; - union { - __le32 rss; /* RSS Hash */ - __le32 fcoe_param; /* FCoE DDP Context id */ - /* Flow director filter id in case of - * Programming status desc WB - */ - __le32 fd_id; - } hi_dword; - } qword0; - struct { - /* status/error/pktype/length */ - __le64 status_error_len; - } qword1; - struct { - __le16 ext_status; /* extended status */ - __le16 rsvd; - __le16 l2tag2_1; - __le16 l2tag2_2; - } qword2; - struct { - union { - __le32 flex_bytes_lo; - __le32 pe_status; - } lo_dword; - union { - __le32 flex_bytes_hi; - __le32 fd_id; - } hi_dword; - } qword3; - } wb; /* writeback */ -}; - -#define I40E_RXD_QW0_MIRROR_STATUS_SHIFT 8 -#define I40E_RXD_QW0_MIRROR_STATUS_MASK (0x3FUL << \ - I40E_RXD_QW0_MIRROR_STATUS_SHIFT) -#define I40E_RXD_QW0_FCOEINDX_SHIFT 0 -#define I40E_RXD_QW0_FCOEINDX_MASK (0xFFFUL << \ - I40E_RXD_QW0_FCOEINDX_SHIFT) - -enum i40e_rx_desc_status_bits { - /* Note: These are predefined bit offsets */ - I40E_RX_DESC_STATUS_DD_SHIFT = 0, - I40E_RX_DESC_STATUS_EOF_SHIFT = 1, - I40E_RX_DESC_STATUS_L2TAG1P_SHIFT = 2, - I40E_RX_DESC_STATUS_L3L4P_SHIFT = 3, - I40E_RX_DESC_STATUS_CRCP_SHIFT = 4, - I40E_RX_DESC_STATUS_TSYNINDX_SHIFT = 5, /* 2 BITS */ - I40E_RX_DESC_STATUS_TSYNVALID_SHIFT = 7, - I40E_RX_DESC_STATUS_PIF_SHIFT = 8, - I40E_RX_DESC_STATUS_UMBCAST_SHIFT = 9, /* 2 BITS */ - I40E_RX_DESC_STATUS_FLM_SHIFT 
= 11, - I40E_RX_DESC_STATUS_FLTSTAT_SHIFT = 12, /* 2 BITS */ - I40E_RX_DESC_STATUS_LPBK_SHIFT = 14, - I40E_RX_DESC_STATUS_IPV6EXADD_SHIFT = 15, - I40E_RX_DESC_STATUS_RESERVED_SHIFT = 16, /* 2 BITS */ - I40E_RX_DESC_STATUS_UDP_0_SHIFT = 18, - I40E_RX_DESC_STATUS_LAST /* this entry must be last!!! */ -}; - -#define I40E_RXD_QW1_STATUS_SHIFT 0 -#define I40E_RXD_QW1_STATUS_MASK (((1 << I40E_RX_DESC_STATUS_LAST) - 1) << \ - I40E_RXD_QW1_STATUS_SHIFT) - -#define I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT I40E_RX_DESC_STATUS_TSYNINDX_SHIFT -#define I40E_RXD_QW1_STATUS_TSYNINDX_MASK (0x3UL << \ - I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT) - -#define I40E_RXD_QW1_STATUS_TSYNVALID_SHIFT I40E_RX_DESC_STATUS_TSYNVALID_SHIFT -#define I40E_RXD_QW1_STATUS_TSYNVALID_MASK (0x1UL << \ - I40E_RXD_QW1_STATUS_TSYNVALID_SHIFT) - -#define I40E_RXD_QW1_STATUS_UMBCAST_SHIFT I40E_RX_DESC_STATUS_UMBCAST -#define I40E_RXD_QW1_STATUS_UMBCAST_MASK (0x3UL << \ - I40E_RXD_QW1_STATUS_UMBCAST_SHIFT) - -enum i40e_rx_desc_fltstat_values { - I40E_RX_DESC_FLTSTAT_NO_DATA = 0, - I40E_RX_DESC_FLTSTAT_RSV_FD_ID = 1, /* 16byte desc? FD_ID : RSV */ - I40E_RX_DESC_FLTSTAT_RSV = 2, - I40E_RX_DESC_FLTSTAT_RSS_HASH = 3, -}; - -#define I40E_RXD_PACKET_TYPE_UNICAST 0 -#define I40E_RXD_PACKET_TYPE_MULTICAST 1 -#define I40E_RXD_PACKET_TYPE_BROADCAST 2 -#define I40E_RXD_PACKET_TYPE_MIRRORED 3 - -#define I40E_RXD_QW1_ERROR_SHIFT 19 -#define I40E_RXD_QW1_ERROR_MASK (0xFFUL << I40E_RXD_QW1_ERROR_SHIFT) - -enum i40e_rx_desc_error_bits { - /* Note: These are predefined bit offsets */ - I40E_RX_DESC_ERROR_RXE_SHIFT = 0, - I40E_RX_DESC_ERROR_RECIPE_SHIFT = 1, - I40E_RX_DESC_ERROR_HBO_SHIFT = 2, - I40E_RX_DESC_ERROR_L3L4E_SHIFT = 3, /* 3 BITS */ - I40E_RX_DESC_ERROR_IPE_SHIFT = 3, - I40E_RX_DESC_ERROR_L4E_SHIFT = 4, - I40E_RX_DESC_ERROR_EIPE_SHIFT = 5, - I40E_RX_DESC_ERROR_OVERSIZE_SHIFT = 6, - I40E_RX_DESC_ERROR_PPRS_SHIFT = 7 -}; - -enum i40e_rx_desc_error_l3l4e_fcoe_masks { - I40E_RX_DESC_ERROR_L3L4E_NONE = 0, - I40E_RX_DESC_ERROR_L3L4E_PROT = 1, - I40E_RX_DESC_ERROR_L3L4E_FC = 2, - I40E_RX_DESC_ERROR_L3L4E_DMAC_ERR = 3, - I40E_RX_DESC_ERROR_L3L4E_DMAC_WARN = 4 -}; - -#define I40E_RXD_QW1_PTYPE_SHIFT 30 -#define I40E_RXD_QW1_PTYPE_MASK (0xFFULL << I40E_RXD_QW1_PTYPE_SHIFT) - -/* Packet type non-ip values */ -enum i40e_rx_l2_ptype { - I40E_RX_PTYPE_L2_RESERVED = 0, - I40E_RX_PTYPE_L2_MAC_PAY2 = 1, - I40E_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, - I40E_RX_PTYPE_L2_FIP_PAY2 = 3, - I40E_RX_PTYPE_L2_OUI_PAY2 = 4, - I40E_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, - I40E_RX_PTYPE_L2_LLDP_PAY2 = 6, - I40E_RX_PTYPE_L2_ECP_PAY2 = 7, - I40E_RX_PTYPE_L2_EVB_PAY2 = 8, - I40E_RX_PTYPE_L2_QCN_PAY2 = 9, - I40E_RX_PTYPE_L2_EAPOL_PAY2 = 10, - I40E_RX_PTYPE_L2_ARP = 11, - I40E_RX_PTYPE_L2_FCOE_PAY3 = 12, - I40E_RX_PTYPE_L2_FCOE_FCDATA_PAY3 = 13, - I40E_RX_PTYPE_L2_FCOE_FCRDY_PAY3 = 14, - I40E_RX_PTYPE_L2_FCOE_FCRSP_PAY3 = 15, - I40E_RX_PTYPE_L2_FCOE_FCOTHER_PA = 16, - I40E_RX_PTYPE_L2_FCOE_VFT_PAY3 = 17, - I40E_RX_PTYPE_L2_FCOE_VFT_FCDATA = 18, - I40E_RX_PTYPE_L2_FCOE_VFT_FCRDY = 19, - I40E_RX_PTYPE_L2_FCOE_VFT_FCRSP = 20, - I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER = 21, - I40E_RX_PTYPE_GRENAT4_MAC_PAY3 = 58, - I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4 = 87, - I40E_RX_PTYPE_GRENAT6_MAC_PAY3 = 124, - I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4 = 153 -}; - -struct i40e_rx_ptype_decoded { - u32 ptype:8; - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:1; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; -}; - -enum 
i40e_rx_ptype_outer_ip { - I40E_RX_PTYPE_OUTER_L2 = 0, - I40E_RX_PTYPE_OUTER_IP = 1 -}; - -enum i40e_rx_ptype_outer_ip_ver { - I40E_RX_PTYPE_OUTER_NONE = 0, - I40E_RX_PTYPE_OUTER_IPV4 = 0, - I40E_RX_PTYPE_OUTER_IPV6 = 1 -}; - -enum i40e_rx_ptype_outer_fragmented { - I40E_RX_PTYPE_NOT_FRAG = 0, - I40E_RX_PTYPE_FRAG = 1 -}; - -enum i40e_rx_ptype_tunnel_type { - I40E_RX_PTYPE_TUNNEL_NONE = 0, - I40E_RX_PTYPE_TUNNEL_IP_IP = 1, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - I40E_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum i40e_rx_ptype_tunnel_end_prot { - I40E_RX_PTYPE_TUNNEL_END_NONE = 0, - I40E_RX_PTYPE_TUNNEL_END_IPV4 = 1, - I40E_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum i40e_rx_ptype_inner_prot { - I40E_RX_PTYPE_INNER_PROT_NONE = 0, - I40E_RX_PTYPE_INNER_PROT_UDP = 1, - I40E_RX_PTYPE_INNER_PROT_TCP = 2, - I40E_RX_PTYPE_INNER_PROT_SCTP = 3, - I40E_RX_PTYPE_INNER_PROT_ICMP = 4, - I40E_RX_PTYPE_INNER_PROT_TIMESYNC = 5 -}; - -enum i40e_rx_ptype_payload_layer { - I40E_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - I40E_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - -#define I40E_RX_PTYPE_BIT_MASK 0x0FFFFFFF -#define I40E_RX_PTYPE_SHIFT 56 - -#define I40E_RXD_QW1_LENGTH_PBUF_SHIFT 38 -#define I40E_RXD_QW1_LENGTH_PBUF_MASK (0x3FFFULL << \ - I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - -#define I40E_RXD_QW1_LENGTH_HBUF_SHIFT 52 -#define I40E_RXD_QW1_LENGTH_HBUF_MASK (0x7FFULL << \ - I40E_RXD_QW1_LENGTH_HBUF_SHIFT) - -#define I40E_RXD_QW1_LENGTH_SPH_SHIFT 63 -#define I40E_RXD_QW1_LENGTH_SPH_MASK (0x1ULL << \ - I40E_RXD_QW1_LENGTH_SPH_SHIFT) - -#define I40E_RXD_QW1_NEXTP_SHIFT 38 -#define I40E_RXD_QW1_NEXTP_MASK (0x1FFFULL << I40E_RXD_QW1_NEXTP_SHIFT) - -#define I40E_RXD_QW2_EXT_STATUS_SHIFT 0 -#define I40E_RXD_QW2_EXT_STATUS_MASK (0xFFFFFUL << \ - I40E_RXD_QW2_EXT_STATUS_SHIFT) - -enum i40e_rx_desc_ext_status_bits { - /* Note: These are predefined bit offsets */ - I40E_RX_DESC_EXT_STATUS_L2TAG2P_SHIFT = 0, - I40E_RX_DESC_EXT_STATUS_L2TAG3P_SHIFT = 1, - I40E_RX_DESC_EXT_STATUS_FLEXBL_SHIFT = 2, /* 2 BITS */ - I40E_RX_DESC_EXT_STATUS_FLEXBH_SHIFT = 4, /* 2 BITS */ - I40E_RX_DESC_EXT_STATUS_FDLONGB_SHIFT = 9, - I40E_RX_DESC_EXT_STATUS_FCOELONGB_SHIFT = 10, - I40E_RX_DESC_EXT_STATUS_PELONGB_SHIFT = 11, -}; - -#define I40E_RXD_QW2_L2TAG2_SHIFT 0 -#define I40E_RXD_QW2_L2TAG2_MASK (0xFFFFUL << I40E_RXD_QW2_L2TAG2_SHIFT) - -#define I40E_RXD_QW2_L2TAG3_SHIFT 16 -#define I40E_RXD_QW2_L2TAG3_MASK (0xFFFFUL << I40E_RXD_QW2_L2TAG3_SHIFT) - -enum i40e_rx_desc_pe_status_bits { - /* Note: These are predefined bit offsets */ - I40E_RX_DESC_PE_STATUS_QPID_SHIFT = 0, /* 18 BITS */ - I40E_RX_DESC_PE_STATUS_L4PORT_SHIFT = 0, /* 16 BITS */ - I40E_RX_DESC_PE_STATUS_IPINDEX_SHIFT = 16, /* 8 BITS */ - I40E_RX_DESC_PE_STATUS_QPIDHIT_SHIFT = 24, - I40E_RX_DESC_PE_STATUS_APBVTHIT_SHIFT = 25, - I40E_RX_DESC_PE_STATUS_PORTV_SHIFT = 26, - I40E_RX_DESC_PE_STATUS_URG_SHIFT = 27, - I40E_RX_DESC_PE_STATUS_IPFRAG_SHIFT = 28, - I40E_RX_DESC_PE_STATUS_IPOPT_SHIFT = 29 -}; - -#define I40E_RX_PROG_STATUS_DESC_LENGTH_SHIFT 38 -#define I40E_RX_PROG_STATUS_DESC_LENGTH 0x2000000 - -#define I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT 2 -#define I40E_RX_PROG_STATUS_DESC_QW1_PROGID_MASK (0x7UL << \ - I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT) - -#define I40E_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT 0 -#define I40E_RX_PROG_STATUS_DESC_QW1_STATUS_MASK (0x7FFFUL << \ - I40E_RX_PROG_STATUS_DESC_QW1_STATUS_SHIFT) - -#define I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT 
19 -#define I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK (0x3FUL << \ - I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT) - -enum i40e_rx_prog_status_desc_status_bits { - /* Note: These are predefined bit offsets */ - I40E_RX_PROG_STATUS_DESC_DD_SHIFT = 0, - I40E_RX_PROG_STATUS_DESC_PROG_ID_SHIFT = 2 /* 3 BITS */ -}; - -enum i40e_rx_prog_status_desc_prog_id_masks { - I40E_RX_PROG_STATUS_DESC_FD_FILTER_STATUS = 1, - I40E_RX_PROG_STATUS_DESC_FCOE_CTXT_PROG_STATUS = 2, - I40E_RX_PROG_STATUS_DESC_FCOE_CTXT_INVL_STATUS = 4, -}; - -enum i40e_rx_prog_status_desc_error_bits { - /* Note: These are predefined bit offsets */ - I40E_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT = 0, - I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT = 1, - I40E_RX_PROG_STATUS_DESC_FCOE_TBL_FULL_SHIFT = 2, - I40E_RX_PROG_STATUS_DESC_FCOE_CONFLICT_SHIFT = 3 -}; - -#define I40E_TWO_BIT_MASK 0x3 -#define I40E_THREE_BIT_MASK 0x7 -#define I40E_FOUR_BIT_MASK 0xF -#define I40E_EIGHTEEN_BIT_MASK 0x3FFFF - -/* TX Descriptor */ -struct i40e_tx_desc { - __le64 buffer_addr; /* Address of descriptor's data buf */ - __le64 cmd_type_offset_bsz; -}; - -#define I40E_TXD_QW1_DTYPE_SHIFT 0 -#define I40E_TXD_QW1_DTYPE_MASK (0xFUL << I40E_TXD_QW1_DTYPE_SHIFT) - -enum i40e_tx_desc_dtype_value { - I40E_TX_DESC_DTYPE_DATA = 0x0, - I40E_TX_DESC_DTYPE_NOP = 0x1, /* same as Context desc */ - I40E_TX_DESC_DTYPE_CONTEXT = 0x1, - I40E_TX_DESC_DTYPE_FCOE_CTX = 0x2, - I40E_TX_DESC_DTYPE_FILTER_PROG = 0x8, - I40E_TX_DESC_DTYPE_DDP_CTX = 0x9, - I40E_TX_DESC_DTYPE_FLEX_DATA = 0xB, - I40E_TX_DESC_DTYPE_FLEX_CTX_1 = 0xC, - I40E_TX_DESC_DTYPE_FLEX_CTX_2 = 0xD, - I40E_TX_DESC_DTYPE_DESC_DONE = 0xF -}; - -#define I40E_TXD_QW1_CMD_SHIFT 4 -#define I40E_TXD_QW1_CMD_MASK (0x3FFUL << I40E_TXD_QW1_CMD_SHIFT) - -enum i40e_tx_desc_cmd_bits { - I40E_TX_DESC_CMD_EOP = 0x0001, - I40E_TX_DESC_CMD_RS = 0x0002, - I40E_TX_DESC_CMD_ICRC = 0x0004, - I40E_TX_DESC_CMD_IL2TAG1 = 0x0008, - I40E_TX_DESC_CMD_DUMMY = 0x0010, - I40E_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */ - I40E_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */ - I40E_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */ - I40E_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */ - I40E_TX_DESC_CMD_FCOET = 0x0080, - I40E_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */ - I40E_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */ - I40E_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */ - I40E_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */ - I40E_TX_DESC_CMD_L4T_EOFT_EOF_N = 0x0000, /* 2 BITS */ - I40E_TX_DESC_CMD_L4T_EOFT_EOF_T = 0x0100, /* 2 BITS */ - I40E_TX_DESC_CMD_L4T_EOFT_EOF_NI = 0x0200, /* 2 BITS */ - I40E_TX_DESC_CMD_L4T_EOFT_EOF_A = 0x0300, /* 2 BITS */ -}; - -#define I40E_TXD_QW1_OFFSET_SHIFT 16 -#define I40E_TXD_QW1_OFFSET_MASK (0x3FFFFULL << \ - I40E_TXD_QW1_OFFSET_SHIFT) - -enum i40e_tx_desc_length_fields { - /* Note: These are predefined bit offsets */ - I40E_TX_DESC_LENGTH_MACLEN_SHIFT = 0, /* 7 BITS */ - I40E_TX_DESC_LENGTH_IPLEN_SHIFT = 7, /* 7 BITS */ - I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT = 14 /* 4 BITS */ -}; - -#define I40E_TXD_QW1_MACLEN_MASK (0x7FUL << I40E_TX_DESC_LENGTH_MACLEN_SHIFT) -#define I40E_TXD_QW1_IPLEN_MASK (0x7FUL << I40E_TX_DESC_LENGTH_IPLEN_SHIFT) -#define I40E_TXD_QW1_L4LEN_MASK (0xFUL << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) -#define I40E_TXD_QW1_FCLEN_MASK (0xFUL << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT) - -#define I40E_TXD_QW1_TX_BUF_SZ_SHIFT 34 -#define I40E_TXD_QW1_TX_BUF_SZ_MASK (0x3FFFULL << \ - I40E_TXD_QW1_TX_BUF_SZ_SHIFT) - -#define I40E_TXD_QW1_L2TAG1_SHIFT 48 -#define I40E_TXD_QW1_L2TAG1_MASK (0xFFFFULL << 
I40E_TXD_QW1_L2TAG1_SHIFT) - -/* Context descriptors */ -struct i40e_tx_context_desc { - __le32 tunneling_params; - __le16 l2tag2; - __le16 rsvd; - __le64 type_cmd_tso_mss; -}; - -#define I40E_TXD_CTX_QW1_DTYPE_SHIFT 0 -#define I40E_TXD_CTX_QW1_DTYPE_MASK (0xFUL << I40E_TXD_CTX_QW1_DTYPE_SHIFT) - -#define I40E_TXD_CTX_QW1_CMD_SHIFT 4 -#define I40E_TXD_CTX_QW1_CMD_MASK (0xFFFFUL << I40E_TXD_CTX_QW1_CMD_SHIFT) - -enum i40e_tx_ctx_desc_cmd_bits { - I40E_TX_CTX_DESC_TSO = 0x01, - I40E_TX_CTX_DESC_TSYN = 0x02, - I40E_TX_CTX_DESC_IL2TAG2 = 0x04, - I40E_TX_CTX_DESC_IL2TAG2_IL2H = 0x08, - I40E_TX_CTX_DESC_SWTCH_NOTAG = 0x00, - I40E_TX_CTX_DESC_SWTCH_UPLINK = 0x10, - I40E_TX_CTX_DESC_SWTCH_LOCAL = 0x20, - I40E_TX_CTX_DESC_SWTCH_VSI = 0x30, - I40E_TX_CTX_DESC_SWPE = 0x40 -}; - -#define I40E_TXD_CTX_QW1_TSO_LEN_SHIFT 30 -#define I40E_TXD_CTX_QW1_TSO_LEN_MASK (0x3FFFFULL << \ - I40E_TXD_CTX_QW1_TSO_LEN_SHIFT) - -#define I40E_TXD_CTX_QW1_MSS_SHIFT 50 -#define I40E_TXD_CTX_QW1_MSS_MASK (0x3FFFULL << \ - I40E_TXD_CTX_QW1_MSS_SHIFT) - -#define I40E_TXD_CTX_QW1_VSI_SHIFT 50 -#define I40E_TXD_CTX_QW1_VSI_MASK (0x1FFULL << I40E_TXD_CTX_QW1_VSI_SHIFT) - -#define I40E_TXD_CTX_QW0_EXT_IP_SHIFT 0 -#define I40E_TXD_CTX_QW0_EXT_IP_MASK (0x3ULL << \ - I40E_TXD_CTX_QW0_EXT_IP_SHIFT) - -enum i40e_tx_ctx_desc_eipt_offload { - I40E_TX_CTX_EXT_IP_NONE = 0x0, - I40E_TX_CTX_EXT_IP_IPV6 = 0x1, - I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2, - I40E_TX_CTX_EXT_IP_IPV4 = 0x3 -}; - -#define I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT 2 -#define I40E_TXD_CTX_QW0_EXT_IPLEN_MASK (0x3FULL << \ - I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT) - -#define I40E_TXD_CTX_QW0_NATT_SHIFT 9 -#define I40E_TXD_CTX_QW0_NATT_MASK (0x3ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) - -#define I40E_TXD_CTX_UDP_TUNNELING (0x1ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) -#define I40E_TXD_CTX_GRE_TUNNELING (0x2ULL << I40E_TXD_CTX_QW0_NATT_SHIFT) - -#define I40E_TXD_CTX_QW0_EIP_NOINC_SHIFT 11 -#define I40E_TXD_CTX_QW0_EIP_NOINC_MASK (0x1ULL << \ - I40E_TXD_CTX_QW0_EIP_NOINC_SHIFT) - -#define I40E_TXD_CTX_EIP_NOINC_IPID_CONST I40E_TXD_CTX_QW0_EIP_NOINC_MASK - -#define I40E_TXD_CTX_QW0_NATLEN_SHIFT 12 -#define I40E_TXD_CTX_QW0_NATLEN_MASK (0X7FULL << \ - I40E_TXD_CTX_QW0_NATLEN_SHIFT) - -#define I40E_TXD_CTX_QW0_DECTTL_SHIFT 19 -#define I40E_TXD_CTX_QW0_DECTTL_MASK (0xFULL << \ - I40E_TXD_CTX_QW0_DECTTL_SHIFT) - -struct i40e_nop_desc { - __le64 rsvd; - __le64 dtype_cmd; -}; - -#define I40E_TXD_NOP_QW1_DTYPE_SHIFT 0 -#define I40E_TXD_NOP_QW1_DTYPE_MASK (0xFUL << I40E_TXD_NOP_QW1_DTYPE_SHIFT) - -#define I40E_TXD_NOP_QW1_CMD_SHIFT 4 -#define I40E_TXD_NOP_QW1_CMD_MASK (0x7FUL << I40E_TXD_NOP_QW1_CMD_SHIFT) - -enum i40e_tx_nop_desc_cmd_bits { - /* Note: These are predefined bit offsets */ - I40E_TX_NOP_DESC_EOP_SHIFT = 0, - I40E_TX_NOP_DESC_RS_SHIFT = 1, - I40E_TX_NOP_DESC_RSV_SHIFT = 2 /* 5 bits */ -}; - -struct i40e_filter_program_desc { - __le32 qindex_flex_ptype_vsi; - __le32 rsvd; - __le32 dtype_cmd_cntindex; - __le32 fd_id; -}; -#define I40E_TXD_FLTR_QW0_QINDEX_SHIFT 0 -#define I40E_TXD_FLTR_QW0_QINDEX_MASK (0x7FFUL << \ - I40E_TXD_FLTR_QW0_QINDEX_SHIFT) -#define I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT 11 -#define I40E_TXD_FLTR_QW0_FLEXOFF_MASK (0x7UL << \ - I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT) -#define I40E_TXD_FLTR_QW0_PCTYPE_SHIFT 17 -#define I40E_TXD_FLTR_QW0_PCTYPE_MASK (0x3FUL << \ - I40E_TXD_FLTR_QW0_PCTYPE_SHIFT) - -/* Packet Classifier Types for filters */ -enum i40e_filter_pctype { - /* Note: Values 0-30 are reserved for future use */ - I40E_FILTER_PCTYPE_NONF_IPV4_UDP = 31, - /* Note: Value 32 is 
reserved for future use */ - I40E_FILTER_PCTYPE_NONF_IPV4_TCP = 33, - I40E_FILTER_PCTYPE_NONF_IPV4_SCTP = 34, - I40E_FILTER_PCTYPE_NONF_IPV4_OTHER = 35, - I40E_FILTER_PCTYPE_FRAG_IPV4 = 36, - /* Note: Values 37-40 are reserved for future use */ - I40E_FILTER_PCTYPE_NONF_IPV6_UDP = 41, - I40E_FILTER_PCTYPE_NONF_IPV6_TCP = 43, - I40E_FILTER_PCTYPE_NONF_IPV6_SCTP = 44, - I40E_FILTER_PCTYPE_NONF_IPV6_OTHER = 45, - I40E_FILTER_PCTYPE_FRAG_IPV6 = 46, - /* Note: Value 47 is reserved for future use */ - I40E_FILTER_PCTYPE_FCOE_OX = 48, - I40E_FILTER_PCTYPE_FCOE_RX = 49, - I40E_FILTER_PCTYPE_FCOE_OTHER = 50, - /* Note: Values 51-62 are reserved for future use */ - I40E_FILTER_PCTYPE_L2_PAYLOAD = 63, -}; - -enum i40e_filter_program_desc_dest { - I40E_FILTER_PROGRAM_DESC_DEST_DROP_PACKET = 0x0, - I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX = 0x1, - I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_OTHER = 0x2, -}; - -enum i40e_filter_program_desc_fd_status { - I40E_FILTER_PROGRAM_DESC_FD_STATUS_NONE = 0x0, - I40E_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID = 0x1, - I40E_FILTER_PROGRAM_DESC_FD_STATUS_FD_ID_4FLEX_BYTES = 0x2, - I40E_FILTER_PROGRAM_DESC_FD_STATUS_8FLEX_BYTES = 0x3, -}; - -#define I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT 23 -#define I40E_TXD_FLTR_QW0_DEST_VSI_MASK (0x1FFUL << \ - I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT) - -#define I40E_TXD_FLTR_QW1_DTYPE_SHIFT 0 -#define I40E_TXD_FLTR_QW1_DTYPE_MASK (0xFUL << I40E_TXD_FLTR_QW1_DTYPE_SHIFT) - -#define I40E_TXD_FLTR_QW1_CMD_SHIFT 4 -#define I40E_TXD_FLTR_QW1_CMD_MASK (0xFFFFULL << \ - I40E_TXD_FLTR_QW1_CMD_SHIFT) - -#define I40E_TXD_FLTR_QW1_PCMD_SHIFT (0x0ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) -#define I40E_TXD_FLTR_QW1_PCMD_MASK (0x7ULL << I40E_TXD_FLTR_QW1_PCMD_SHIFT) - -enum i40e_filter_program_desc_pcmd { - I40E_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE = 0x1, - I40E_FILTER_PROGRAM_DESC_PCMD_REMOVE = 0x2, -}; - -#define I40E_TXD_FLTR_QW1_DEST_SHIFT (0x3ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) -#define I40E_TXD_FLTR_QW1_DEST_MASK (0x3ULL << I40E_TXD_FLTR_QW1_DEST_SHIFT) - -#define I40E_TXD_FLTR_QW1_CNT_ENA_SHIFT (0x7ULL + I40E_TXD_FLTR_QW1_CMD_SHIFT) -#define I40E_TXD_FLTR_QW1_CNT_ENA_MASK (0x1ULL << \ - I40E_TXD_FLTR_QW1_CNT_ENA_SHIFT) - -#define I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT (0x9ULL + \ - I40E_TXD_FLTR_QW1_CMD_SHIFT) -#define I40E_TXD_FLTR_QW1_FD_STATUS_MASK (0x3ULL << \ - I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT) - -#define I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT 20 -#define I40E_TXD_FLTR_QW1_CNTINDEX_MASK (0x1FFUL << \ - I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) - -enum i40e_filter_type { - I40E_FLOW_DIRECTOR_FLTR = 0, - I40E_PE_QUAD_HASH_FLTR = 1, - I40E_ETHERTYPE_FLTR, - I40E_FCOE_CTX_FLTR, - I40E_MAC_VLAN_FLTR, - I40E_HASH_FLTR -}; - -struct i40e_vsi_context { - u16 seid; - u16 uplink_seid; - u16 vsi_number; - u16 vsis_allocated; - u16 vsis_unallocated; - u16 flags; - u8 pf_num; - u8 vf_num; - u8 connection_type; - struct i40e_aqc_vsi_properties_data info; -}; - -struct i40e_veb_context { - u16 seid; - u16 uplink_seid; - u16 veb_number; - u16 vebs_allocated; - u16 vebs_unallocated; - u16 flags; - struct i40e_aqc_get_veb_parameters_completion info; -}; - -/* Statistics collected by each port, VSI, VEB, and S-channel */ -struct i40e_eth_stats { - u64 rx_bytes; /* gorc */ - u64 rx_unicast; /* uprc */ - u64 rx_multicast; /* mprc */ - u64 rx_broadcast; /* bprc */ - u64 rx_discards; /* rdpc */ - u64 rx_unknown_protocol; /* rupp */ - u64 tx_bytes; /* gotc */ - u64 tx_unicast; /* uptc */ - u64 tx_multicast; /* mptc */ - u64 tx_broadcast; /* bptc */ - u64 tx_discards; /* tdpc */ - u64 
tx_errors; /* tepc */ -}; - -/* Statistics collected by the MAC */ -struct i40e_hw_port_stats { - /* eth stats collected by the port */ - struct i40e_eth_stats eth; - - /* additional port specific stats */ - u64 tx_dropped_link_down; /* tdold */ - u64 crc_errors; /* crcerrs */ - u64 illegal_bytes; /* illerrc */ - u64 error_bytes; /* errbc */ - u64 mac_local_faults; /* mlfc */ - u64 mac_remote_faults; /* mrfc */ - u64 rx_length_errors; /* rlec */ - u64 link_xon_rx; /* lxonrxc */ - u64 link_xoff_rx; /* lxoffrxc */ - u64 priority_xon_rx[8]; /* pxonrxc[8] */ - u64 priority_xoff_rx[8]; /* pxoffrxc[8] */ - u64 link_xon_tx; /* lxontxc */ - u64 link_xoff_tx; /* lxofftxc */ - u64 priority_xon_tx[8]; /* pxontxc[8] */ - u64 priority_xoff_tx[8]; /* pxofftxc[8] */ - u64 priority_xon_2_xoff[8]; /* pxon2offc[8] */ - u64 rx_size_64; /* prc64 */ - u64 rx_size_127; /* prc127 */ - u64 rx_size_255; /* prc255 */ - u64 rx_size_511; /* prc511 */ - u64 rx_size_1023; /* prc1023 */ - u64 rx_size_1522; /* prc1522 */ - u64 rx_size_big; /* prc9522 */ - u64 rx_undersize; /* ruc */ - u64 rx_fragments; /* rfc */ - u64 rx_oversize; /* roc */ - u64 rx_jabber; /* rjc */ - u64 tx_size_64; /* ptc64 */ - u64 tx_size_127; /* ptc127 */ - u64 tx_size_255; /* ptc255 */ - u64 tx_size_511; /* ptc511 */ - u64 tx_size_1023; /* ptc1023 */ - u64 tx_size_1522; /* ptc1522 */ - u64 tx_size_big; /* ptc9522 */ - u64 mac_short_packet_dropped; /* mspdc */ - u64 checksum_error; /* xec */ - /* flow director stats */ - u64 fd_atr_match; - u64 fd_sb_match; - /* EEE LPI */ - u32 tx_lpi_status; - u32 rx_lpi_status; - u64 tx_lpi_count; /* etlpic */ - u64 rx_lpi_count; /* erlpic */ -}; - -/* Checksum and Shadow RAM pointers */ -#define I40E_SR_NVM_CONTROL_WORD 0x00 -#define I40E_SR_PCIE_ANALOG_CONFIG_PTR 0x03 -#define I40E_SR_PHY_ANALOG_CONFIG_PTR 0x04 -#define I40E_SR_OPTION_ROM_PTR 0x05 -#define I40E_SR_RO_PCIR_REGS_AUTO_LOAD_PTR 0x06 -#define I40E_SR_AUTO_GENERATED_POINTERS_PTR 0x07 -#define I40E_SR_PCIR_REGS_AUTO_LOAD_PTR 0x08 -#define I40E_SR_EMP_GLOBAL_MODULE_PTR 0x09 -#define I40E_SR_RO_PCIE_LCB_PTR 0x0A -#define I40E_SR_EMP_IMAGE_PTR 0x0B -#define I40E_SR_PE_IMAGE_PTR 0x0C -#define I40E_SR_CSR_PROTECTED_LIST_PTR 0x0D -#define I40E_SR_MNG_CONFIG_PTR 0x0E -#define I40E_SR_EMP_MODULE_PTR 0x0F -#define I40E_SR_PBA_BLOCK_PTR 0x16 -#define I40E_SR_BOOT_CONFIG_PTR 0x17 -#define I40E_SR_NVM_IMAGE_VERSION 0x18 -#define I40E_SR_NVM_WAKE_ON_LAN 0x19 -#define I40E_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR 0x27 -#define I40E_SR_PERMANENT_SAN_MAC_ADDRESS_PTR 0x28 -#define I40E_SR_NVM_EETRACK_LO 0x2D -#define I40E_SR_NVM_EETRACK_HI 0x2E -#define I40E_SR_VPD_PTR 0x2F -#define I40E_SR_PXE_SETUP_PTR 0x30 -#define I40E_SR_PXE_CONFIG_CUST_OPTIONS_PTR 0x31 -#define I40E_SR_SW_ETHERNET_MAC_ADDRESS_PTR 0x37 -#define I40E_SR_POR_REGS_AUTO_LOAD_PTR 0x38 -#define I40E_SR_EMPR_REGS_AUTO_LOAD_PTR 0x3A -#define I40E_SR_GLOBR_REGS_AUTO_LOAD_PTR 0x3B -#define I40E_SR_CORER_REGS_AUTO_LOAD_PTR 0x3C -#define I40E_SR_PCIE_ALT_AUTO_LOAD_PTR 0x3E -#define I40E_SR_SW_CHECKSUM_WORD 0x3F -#define I40E_SR_1ST_FREE_PROVISION_AREA_PTR 0x40 -#define I40E_SR_4TH_FREE_PROVISION_AREA_PTR 0x42 -#define I40E_SR_3RD_FREE_PROVISION_AREA_PTR 0x44 -#define I40E_SR_2ND_FREE_PROVISION_AREA_PTR 0x46 -#define I40E_SR_EMP_SR_SETTINGS_PTR 0x48 - -/* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */ -#define I40E_SR_VPD_MODULE_MAX_SIZE 1024 -#define I40E_SR_PCIE_ALT_MODULE_MAX_SIZE 1024 -#define I40E_SR_CONTROL_WORD_1_SHIFT 0x06 -#define I40E_SR_CONTROL_WORD_1_MASK (0x03 << 
I40E_SR_CONTROL_WORD_1_SHIFT) - -/* Shadow RAM related */ -#define I40E_SR_SECTOR_SIZE_IN_WORDS 0x800 -#define I40E_SR_BUF_ALIGNMENT 4096 -#define I40E_SR_WORDS_IN_1KB 512 -/* Checksum should be calculated such that after adding all the words, - * including the checksum word itself, the sum should be 0xBABA. - */ -#define I40E_SR_SW_CHECKSUM_BASE 0xBABA - -#define I40E_SRRD_SRCTL_ATTEMPTS 100000 - -enum i40e_switch_element_types { - I40E_SWITCH_ELEMENT_TYPE_MAC = 1, - I40E_SWITCH_ELEMENT_TYPE_PF = 2, - I40E_SWITCH_ELEMENT_TYPE_VF = 3, - I40E_SWITCH_ELEMENT_TYPE_EMP = 4, - I40E_SWITCH_ELEMENT_TYPE_BMC = 6, - I40E_SWITCH_ELEMENT_TYPE_PE = 16, - I40E_SWITCH_ELEMENT_TYPE_VEB = 17, - I40E_SWITCH_ELEMENT_TYPE_PA = 18, - I40E_SWITCH_ELEMENT_TYPE_VSI = 19, -}; - -/* Supported EtherType filters */ -enum i40e_ether_type_index { - I40E_ETHER_TYPE_1588 = 0, - I40E_ETHER_TYPE_FIP = 1, - I40E_ETHER_TYPE_OUI_EXTENDED = 2, - I40E_ETHER_TYPE_MAC_CONTROL = 3, - I40E_ETHER_TYPE_LLDP = 4, - I40E_ETHER_TYPE_EVB_PROTOCOL1 = 5, - I40E_ETHER_TYPE_EVB_PROTOCOL2 = 6, - I40E_ETHER_TYPE_QCN_CNM = 7, - I40E_ETHER_TYPE_8021X = 8, - I40E_ETHER_TYPE_ARP = 9, - I40E_ETHER_TYPE_RSV1 = 10, - I40E_ETHER_TYPE_RSV2 = 11, -}; - -/* Filter context base size is 1K */ -#define I40E_HASH_FILTER_BASE_SIZE 1024 -/* Supported Hash filter values */ -enum i40e_hash_filter_size { - I40E_HASH_FILTER_SIZE_1K = 0, - I40E_HASH_FILTER_SIZE_2K = 1, - I40E_HASH_FILTER_SIZE_4K = 2, - I40E_HASH_FILTER_SIZE_8K = 3, - I40E_HASH_FILTER_SIZE_16K = 4, - I40E_HASH_FILTER_SIZE_32K = 5, - I40E_HASH_FILTER_SIZE_64K = 6, - I40E_HASH_FILTER_SIZE_128K = 7, - I40E_HASH_FILTER_SIZE_256K = 8, - I40E_HASH_FILTER_SIZE_512K = 9, - I40E_HASH_FILTER_SIZE_1M = 10, -}; - -/* DMA context base size is 0.5K */ -#define I40E_DMA_CNTX_BASE_SIZE 512 -/* Supported DMA context values */ -enum i40e_dma_cntx_size { - I40E_DMA_CNTX_SIZE_512 = 0, - I40E_DMA_CNTX_SIZE_1K = 1, - I40E_DMA_CNTX_SIZE_2K = 2, - I40E_DMA_CNTX_SIZE_4K = 3, - I40E_DMA_CNTX_SIZE_8K = 4, - I40E_DMA_CNTX_SIZE_16K = 5, - I40E_DMA_CNTX_SIZE_32K = 6, - I40E_DMA_CNTX_SIZE_64K = 7, - I40E_DMA_CNTX_SIZE_128K = 8, - I40E_DMA_CNTX_SIZE_256K = 9, -}; - -/* Supported Hash look up table (LUT) sizes */ -enum i40e_hash_lut_size { - I40E_HASH_LUT_SIZE_128 = 0, - I40E_HASH_LUT_SIZE_512 = 1, -}; - -/* Structure to hold a per PF filter control settings */ -struct i40e_filter_control_settings { - /* number of PE Quad Hash filter buckets */ - enum i40e_hash_filter_size pe_filt_num; - /* number of PE Quad Hash contexts */ - enum i40e_dma_cntx_size pe_cntx_num; - /* number of FCoE filter buckets */ - enum i40e_hash_filter_size fcoe_filt_num; - /* number of FCoE DDP contexts */ - enum i40e_dma_cntx_size fcoe_cntx_num; - /* size of the Hash LUT */ - enum i40e_hash_lut_size hash_lut_size; - /* enable FDIR filters for PF and its VFs */ - bool enable_fdir; - /* enable Ethertype filters for PF and its VFs */ - bool enable_ethtype; - /* enable MAC/VLAN filters for PF and its VFs */ - bool enable_macvlan; -}; - -/* Structure to hold device level control filter counts */ -struct i40e_control_filter_stats { - u16 mac_etype_used; /* Used perfect match MAC/EtherType filters */ - u16 etype_used; /* Used perfect EtherType filters */ - u16 mac_etype_free; /* Un-used perfect match MAC/EtherType filters */ - u16 etype_free; /* Un-used perfect EtherType filters */ -}; - -enum i40e_reset_type { - I40E_RESET_POR = 0, - I40E_RESET_CORER = 1, - I40E_RESET_GLOBR = 2, - I40E_RESET_EMPR = 3, -}; - -/* Offsets into Alternate Ram */ -#define 
I40E_ALT_STRUCT_FIRST_PF_OFFSET 0 /* in dwords */ -#define I40E_ALT_STRUCT_DWORDS_PER_PF 64 /* in dwords */ -#define I40E_ALT_STRUCT_OUTER_VLAN_TAG_OFFSET 0xD /* in dwords */ -#define I40E_ALT_STRUCT_USER_PRIORITY_OFFSET 0xC /* in dwords */ -#define I40E_ALT_STRUCT_MIN_BW_OFFSET 0xE /* in dwords */ -#define I40E_ALT_STRUCT_MAX_BW_OFFSET 0xF /* in dwords */ - -/* Alternate Ram Bandwidth Masks */ -#define I40E_ALT_BW_VALUE_MASK 0xFF -#define I40E_ALT_BW_RELATIVE_MASK 0x40000000 -#define I40E_ALT_BW_VALID_MASK 0x80000000 - -/* RSS Hash Table Size */ -#define I40E_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000 -#endif /* _I40E_TYPE_H_ */ diff --git a/lib/librte_pmd_i40e/i40e/i40e_virtchnl.h b/lib/librte_pmd_i40e/i40e/i40e_virtchnl.h deleted file mode 100644 index 5257b5d..0000000 --- a/lib/librte_pmd_i40e/i40e/i40e_virtchnl.h +++ /dev/null @@ -1,373 +0,0 @@ -/******************************************************************************* - -Copyright (c) 2013 - 2014, Intel Corporation -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - - 1. Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - 2. Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - 3. Neither the name of the Intel Corporation nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -POSSIBILITY OF SUCH DAMAGE. - -***************************************************************************/ - -#ifndef _I40E_VIRTCHNL_H_ -#define _I40E_VIRTCHNL_H_ - -#include "i40e_type.h" - -/* Description: - * This header file describes the VF-PF communication protocol used - * by the various i40e drivers. - * - * Admin queue buffer usage: - * desc->opcode is always i40e_aqc_opc_send_msg_to_pf - * flags, retval, datalen, and data addr are all used normally. - * Firmware copies the cookie fields when sending messages between the PF and - * VF, but uses all other fields internally. Due to this limitation, we - * must send all messages as "indirect", i.e. using an external buffer. - * - * All the vsi indexes are relative to the VF. Each VF can have maximum of - * three VSIs. All the queue indexes are relative to the VSI. Each VF can - * have a maximum of sixteen queues for all of its VSIs. - * - * The PF is required to return a status code in v_retval for all messages - * except RESET_VF, which does not require any response. 
The return value is of - * i40e_status_code type, defined in the i40e_type.h. - * - * In general, VF driver initialization should roughly follow the order of these - * opcodes. The VF driver must first validate the API version of the PF driver, - * then request a reset, then get resources, then configure queues and - * interrupts. After these operations are complete, the VF driver may start - * its queues, optionally add MAC and VLAN filters, and process traffic. - */ - -/* Opcodes for VF-PF communication. These are placed in the v_opcode field - * of the virtchnl_msg structure. - */ -enum i40e_virtchnl_ops { -/* VF sends req. to pf for the following - * ops. - */ - I40E_VIRTCHNL_OP_UNKNOWN = 0, - I40E_VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */ - I40E_VIRTCHNL_OP_RESET_VF, - I40E_VIRTCHNL_OP_GET_VF_RESOURCES, - I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE, - I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE, - I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES, - I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP, - I40E_VIRTCHNL_OP_ENABLE_QUEUES, - I40E_VIRTCHNL_OP_DISABLE_QUEUES, - I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS, - I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS, - I40E_VIRTCHNL_OP_ADD_VLAN, - I40E_VIRTCHNL_OP_DEL_VLAN, - I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, - I40E_VIRTCHNL_OP_GET_STATS, - I40E_VIRTCHNL_OP_FCOE, -/* PF sends status change events to vfs using - * the following op. - */ - I40E_VIRTCHNL_OP_EVENT, -}; - -/* Virtual channel message descriptor. This overlays the admin queue - * descriptor. All other data is passed in external buffers. - */ - -struct i40e_virtchnl_msg { - u8 pad[8]; /* AQ flags/opcode/len/retval fields */ - enum i40e_virtchnl_ops v_opcode; /* avoid confusion with desc->opcode */ - enum i40e_status_code v_retval; /* ditto for desc->retval */ - u32 vfid; /* used by PF when sending to VF */ -}; - -/* Message descriptions and data structures.*/ - -/* I40E_VIRTCHNL_OP_VERSION - * VF posts its version number to the PF. PF responds with its version number - * in the same format, along with a return code. - * Reply from PF has its major/minor versions also in param0 and param1. - * If there is a major version mismatch, then the VF cannot operate. - * If there is a minor version mismatch, then the VF can operate but should - * add a warning to the system log. - * - * This enum element MUST always be specified as == 1, regardless of other - * changes in the API. The PF must always respond to this message without - * error regardless of version mismatch. - */ -#define I40E_VIRTCHNL_VERSION_MAJOR 1 -#define I40E_VIRTCHNL_VERSION_MINOR 0 -struct i40e_virtchnl_version_info { - u32 major; - u32 minor; -}; - -/* I40E_VIRTCHNL_OP_RESET_VF - * VF sends this request to PF with no parameters - * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register - * until reset completion is indicated. The admin queue must be reinitialized - * after this operation. - * - * When reset is complete, PF must ensure that all queues in all VSIs associated - * with the VF are stopped, all queue configurations in the HMC are set to 0, - * and all MAC and VLAN filters (except the default MAC address) on all VSIs - * are cleared. - */ - -/* I40E_VIRTCHNL_OP_GET_VF_RESOURCES - * VF sends this request to PF with no parameters - * PF responds with an indirect message containing - * i40e_virtchnl_vf_resource and one or more - * i40e_virtchnl_vsi_resource structures. 
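A sketch of the version rule described above (a major mismatch is fatal, a minor mismatch only warrants a warning); the function name and error reporting are invented for illustration:

#include <stdint.h>
#include <stdio.h>

struct version_info { uint32_t major, minor; }; /* i40e_virtchnl_version_info */

static int vf_validate_api(struct version_info pf, struct version_info vf)
{
    if (pf.major != vf.major) {
        fprintf(stderr, "incompatible PF API %u.%u vs VF %u.%u\n",
                pf.major, pf.minor, vf.major, vf.minor);
        return -1;                          /* VF cannot operate */
    }
    if (pf.minor != vf.minor)
        fprintf(stderr, "minor version mismatch, continuing\n");
    return 0;
}

int main(void)
{
    struct version_info pf = { 1, 1 }, vf = { 1, 0 };
    return vf_validate_api(pf, vf);         /* warns, returns 0 */
}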
- */ - -struct i40e_virtchnl_vsi_resource { - u16 vsi_id; - u16 num_queue_pairs; - enum i40e_vsi_type vsi_type; - u16 qset_handle; - u8 default_mac_addr[I40E_ETH_LENGTH_OF_ADDRESS]; -}; -/* VF offload flags */ -#define I40E_VIRTCHNL_VF_OFFLOAD_L2 0x00000001 -#define I40E_VIRTCHNL_VF_OFFLOAD_IWARP 0x00000002 -#define I40E_VIRTCHNL_VF_OFFLOAD_FCOE 0x00000004 -#define I40E_VIRTCHNL_VF_OFFLOAD_VLAN 0x00010000 - -struct i40e_virtchnl_vf_resource { - u16 num_vsis; - u16 num_queue_pairs; - u16 max_vectors; - u16 max_mtu; - - u32 vf_offload_flags; - u32 max_fcoe_contexts; - u32 max_fcoe_filters; - - struct i40e_virtchnl_vsi_resource vsi_res[1]; -}; - -/* I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE - * VF sends this message to set up parameters for one TX queue. - * External data buffer contains one instance of i40e_virtchnl_txq_info. - * PF configures requested queue and returns a status code. - */ - -/* Tx queue config info */ -struct i40e_virtchnl_txq_info { - u16 vsi_id; - u16 queue_id; - u16 ring_len; /* number of descriptors, multiple of 8 */ - u16 headwb_enabled; - u64 dma_ring_addr; - u64 dma_headwb_addr; -}; - -/* I40E_VIRTCHNL_OP_CONFIG_RX_QUEUE - * VF sends this message to set up parameters for one RX queue. - * External data buffer contains one instance of i40e_virtchnl_rxq_info. - * PF configures requested queue and returns a status code. - */ - -/* Rx queue config info */ -struct i40e_virtchnl_rxq_info { - u16 vsi_id; - u16 queue_id; - u32 ring_len; /* number of descriptors, multiple of 32 */ - u16 hdr_size; - u16 splithdr_enabled; - u32 databuffer_size; - u32 max_pkt_size; - u64 dma_ring_addr; - enum i40e_hmc_obj_rx_hsplit_0 rx_split_pos; -}; - -/* I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES - * VF sends this message to set parameters for all active TX and RX queues - * associated with the specified VSI. - * PF configures queues and returns status. - * If the number of queues specified is greater than the number of queues - * associated with the VSI, an error is returned and no queues are configured. - */ -struct i40e_virtchnl_queue_pair_info { - /* NOTE: vsi_id and queue_id should be identical for both queues. */ - struct i40e_virtchnl_txq_info txq; - struct i40e_virtchnl_rxq_info rxq; -}; - -struct i40e_virtchnl_vsi_queue_config_info { - u16 vsi_id; - u16 num_queue_pairs; - struct i40e_virtchnl_queue_pair_info qpair[1]; -}; - -/* I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP - * VF uses this message to map vectors to queues. - * The rxq_map and txq_map fields are bitmaps used to indicate which queues - * are to be associated with the specified vector. - * The "other" causes are always mapped to vector 0. - * PF configures interrupt mapping and returns status. - */ -struct i40e_virtchnl_vector_map { - u16 vsi_id; - u16 vector_id; - u16 rxq_map; - u16 txq_map; - u16 rxitr_idx; - u16 txitr_idx; -}; - -struct i40e_virtchnl_irq_map_info { - u16 num_vectors; - struct i40e_virtchnl_vector_map vecmap[1]; -}; - -/* I40E_VIRTCHNL_OP_ENABLE_QUEUES - * I40E_VIRTCHNL_OP_DISABLE_QUEUES - * VF sends these message to enable or disable TX/RX queue pairs. - * The queues fields are bitmaps indicating which queues to act upon. - * (Currently, we only support 16 queues per VF, but we make the field - * u32 to allow for expansion.) - * PF performs requested action and returns status. 
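A sketch of the CONFIG_IRQ_MAP bitmap convention described above, with hypothetical IDs: bit n of rxq_map/txq_map attaches queue n of the VSI to the given vector, while the "other" causes stay on vector 0:

#include <stdint.h>
#include <stdio.h>

struct vector_map {                 /* mirrors i40e_virtchnl_vector_map */
    uint16_t vsi_id, vector_id, rxq_map, txq_map, rxitr_idx, txitr_idx;
};

int main(void)
{
    struct vector_map m = {
        .vsi_id    = 3,
        .vector_id = 1,                     /* vector 0 carries "other" causes */
        .rxq_map   = (1u << 0) | (1u << 1), /* RX queues 0 and 1 */
        .txq_map   = (1u << 0),             /* TX queue 0 */
    };
    printf("vector %u: rxq_map=0x%x txq_map=0x%x\n",
           m.vector_id, m.rxq_map, m.txq_map);
    return 0;
}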
- */ -struct i40e_virtchnl_queue_select { - u16 vsi_id; - u16 pad; - u32 rx_queues; - u32 tx_queues; -}; - -/* I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS - * VF sends this message in order to add one or more unicast or multicast - * address filters for the specified VSI. - * PF adds the filters and returns status. - */ - -/* I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS - * VF sends this message in order to remove one or more unicast or multicast - * filters for the specified VSI. - * PF removes the filters and returns status. - */ - -struct i40e_virtchnl_ether_addr { - u8 addr[I40E_ETH_LENGTH_OF_ADDRESS]; - u8 pad[2]; -}; - -struct i40e_virtchnl_ether_addr_list { - u16 vsi_id; - u16 num_elements; - struct i40e_virtchnl_ether_addr list[1]; -}; - -/* I40E_VIRTCHNL_OP_ADD_VLAN - * VF sends this message to add one or more VLAN tag filters for receives. - * PF adds the filters and returns status. - * If a port VLAN is configured by the PF, this operation will return an - * error to the VF. - */ - -/* I40E_VIRTCHNL_OP_DEL_VLAN - * VF sends this message to remove one or more VLAN tag filters for receives. - * PF removes the filters and returns status. - * If a port VLAN is configured by the PF, this operation will return an - * error to the VF. - */ - -struct i40e_virtchnl_vlan_filter_list { - u16 vsi_id; - u16 num_elements; - u16 vlan_id[1]; -}; - -/* I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE - * VF sends VSI id and flags. - * PF returns status code in retval. - * Note: we assume that broadcast accept mode is always enabled. - */ -struct i40e_virtchnl_promisc_info { - u16 vsi_id; - u16 flags; -}; - -#define I40E_FLAG_VF_UNICAST_PROMISC 0x00000001 -#define I40E_FLAG_VF_MULTICAST_PROMISC 0x00000002 - -/* I40E_VIRTCHNL_OP_GET_STATS - * VF sends this message to request stats for the selected VSI. VF uses - * the i40e_virtchnl_queue_select struct to specify the VSI. The queue_id - * field is ignored by the PF. - * - * PF replies with struct i40e_eth_stats in an external buffer. - */ - -/* I40E_VIRTCHNL_OP_EVENT - * PF sends this message to inform the VF driver of events that may affect it. - * No direct response is expected from the VF, though it may generate other - * messages in response to this one. - */ -enum i40e_virtchnl_event_codes { - I40E_VIRTCHNL_EVENT_UNKNOWN = 0, - I40E_VIRTCHNL_EVENT_LINK_CHANGE, - I40E_VIRTCHNL_EVENT_RESET_IMPENDING, - I40E_VIRTCHNL_EVENT_PF_DRIVER_CLOSE, -}; -#define I40E_PF_EVENT_SEVERITY_INFO 0 -#define I40E_PF_EVENT_SEVERITY_ATTENTION 1 -#define I40E_PF_EVENT_SEVERITY_ACTION_REQUIRED 2 -#define I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM 255 - -struct i40e_virtchnl_pf_event { - enum i40e_virtchnl_event_codes event; - union { - struct { - enum i40e_aq_link_speed link_speed; - bool link_status; - } link_event; - } event_data; - - int severity; -}; - -/* VF reset states - these are written into the RSTAT register: - * I40E_VFGEN_RSTAT1 on the PF - * I40E_VFGEN_RSTAT on the VF - * When the PF initiates a reset, it writes 0 - * When the reset is complete, it writes 1 - * When the PF detects that the VF has recovered, it writes 2 - * VF checks this register periodically to determine if a reset has occurred, - * then polls it to know when the reset is complete. - * If either the PF or VF reads the register while the hardware - * is in a reset state, it will return DEADBEEF, which, when masked - * will result in 3. 
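A sketch of the polling rule described above: mask the RSTAT read down to the 2-bit state, and note that a read taken mid-reset (0xDEADBEEF) masks to 3, i.e. I40E_VFR_UNKNOWN, so the poll simply continues:

#include <stdint.h>
#include <stdio.h>

enum vfr_state { VFR_INPROGRESS, VFR_COMPLETED, VFR_VFACTIVE, VFR_UNKNOWN };

static int reset_finished(uint32_t rstat)
{
    enum vfr_state s = (enum vfr_state)(rstat & 0x3); /* low state bits only */
    return s == VFR_COMPLETED || s == VFR_VFACTIVE;
}

int main(void)
{
    printf("%d\n", reset_finished(0xDEADBEEF)); /* 0: still in reset */
    printf("%d\n", reset_finished(1));          /* 1: VFR_COMPLETED */
    return 0;
}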
- */ -enum i40e_vfr_states { - I40E_VFR_INPROGRESS = 0, - I40E_VFR_COMPLETED, - I40E_VFR_VFACTIVE, - I40E_VFR_UNKNOWN, -}; - -#endif /* _I40E_VIRTCHNL_H_ */ diff --git a/lib/librte_pmd_i40e/i40e_ethdev.c b/lib/librte_pmd_i40e/i40e_ethdev.c deleted file mode 100644 index 43762f2..0000000 --- a/lib/librte_pmd_i40e/i40e_ethdev.c +++ /dev/null @@ -1,5716 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "i40e_logs.h" -#include "i40e/i40e_prototype.h" -#include "i40e/i40e_adminq_cmd.h" -#include "i40e/i40e_type.h" -#include "i40e_ethdev.h" -#include "i40e_rxtx.h" -#include "i40e_pf.h" - -/* Maximun number of MAC addresses */ -#define I40E_NUM_MACADDR_MAX 64 -#define I40E_CLEAR_PXE_WAIT_MS 200 - -/* Maximun number of capability elements */ -#define I40E_MAX_CAP_ELE_NUM 128 - -/* Wait count and inteval */ -#define I40E_CHK_Q_ENA_COUNT 1000 -#define I40E_CHK_Q_ENA_INTERVAL_US 1000 - -/* Maximun number of VSI */ -#define I40E_MAX_NUM_VSIS (384UL) - -/* Default queue interrupt throttling time in microseconds */ -#define I40E_ITR_INDEX_DEFAULT 0 -#define I40E_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */ -#define I40E_QUEUE_ITR_INTERVAL_MAX 8160 /* 8160 us */ - -#define I40E_PRE_TX_Q_CFG_WAIT_US 10 /* 10 us */ - -/* Mask of PF interrupt causes */ -#define I40E_PFINT_ICR0_ENA_MASK ( \ - I40E_PFINT_ICR0_ENA_ECC_ERR_MASK | \ - I40E_PFINT_ICR0_ENA_MAL_DETECT_MASK | \ - I40E_PFINT_ICR0_ENA_GRST_MASK | \ - I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_MASK | \ - I40E_PFINT_ICR0_ENA_STORM_DETECT_MASK | \ - I40E_PFINT_ICR0_ENA_LINK_STAT_CHANGE_MASK | \ - I40E_PFINT_ICR0_ENA_HMC_ERR_MASK | \ - I40E_PFINT_ICR0_ENA_PE_CRITERR_MASK | \ - I40E_PFINT_ICR0_ENA_VFLR_MASK | \ - I40E_PFINT_ICR0_ENA_ADMINQ_MASK) - -#define I40E_FLOW_TYPES ( \ - (1UL << RTE_ETH_FLOW_FRAG_IPV4) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV4_TCP) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV4_UDP) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV4_SCTP) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) | \ - (1UL << RTE_ETH_FLOW_FRAG_IPV6) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV6_TCP) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV6_UDP) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV6_SCTP) | \ - (1UL << RTE_ETH_FLOW_NONFRAG_IPV6_OTHER) | \ - (1UL << RTE_ETH_FLOW_L2_PAYLOAD)) - -static int eth_i40e_dev_init(struct rte_eth_dev *eth_dev); -static int i40e_dev_configure(struct rte_eth_dev *dev); -static int i40e_dev_start(struct rte_eth_dev *dev); -static void i40e_dev_stop(struct rte_eth_dev *dev); -static void i40e_dev_close(struct rte_eth_dev *dev); -static void i40e_dev_promiscuous_enable(struct rte_eth_dev *dev); -static void i40e_dev_promiscuous_disable(struct rte_eth_dev *dev); -static void i40e_dev_allmulticast_enable(struct rte_eth_dev *dev); -static void i40e_dev_allmulticast_disable(struct rte_eth_dev *dev); -static int i40e_dev_set_link_up(struct rte_eth_dev *dev); -static int i40e_dev_set_link_down(struct rte_eth_dev *dev); -static void i40e_dev_stats_get(struct rte_eth_dev *dev, - struct rte_eth_stats *stats); -static void i40e_dev_stats_reset(struct rte_eth_dev *dev); -static int i40e_dev_queue_stats_mapping_set(struct rte_eth_dev *dev, - uint16_t queue_id, - uint8_t stat_idx, - uint8_t is_rx); -static void i40e_dev_info_get(struct rte_eth_dev *dev, - struct rte_eth_dev_info *dev_info); -static int i40e_vlan_filter_set(struct rte_eth_dev *dev, - uint16_t vlan_id, - int on); -static void i40e_vlan_tpid_set(struct rte_eth_dev *dev, uint16_t tpid); -static void i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask); -static void i40e_vlan_strip_queue_set(struct rte_eth_dev *dev, - uint16_t queue, - int on); -static int i40e_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on); -static int i40e_dev_led_on(struct rte_eth_dev *dev); -static int i40e_dev_led_off(struct rte_eth_dev 
*dev); -static int i40e_flow_ctrl_set(struct rte_eth_dev *dev, - struct rte_eth_fc_conf *fc_conf); -static int i40e_priority_flow_ctrl_set(struct rte_eth_dev *dev, - struct rte_eth_pfc_conf *pfc_conf); -static void i40e_macaddr_add(struct rte_eth_dev *dev, - struct ether_addr *mac_addr, - uint32_t index, - uint32_t pool); -static void i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index); -static int i40e_dev_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); -static int i40e_dev_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); - -static int i40e_get_cap(struct i40e_hw *hw); -static int i40e_pf_parameter_init(struct rte_eth_dev *dev); -static int i40e_pf_setup(struct i40e_pf *pf); -static int i40e_dev_rxtx_init(struct i40e_pf *pf); -static int i40e_vmdq_setup(struct rte_eth_dev *dev); -static void i40e_stat_update_32(struct i40e_hw *hw, uint32_t reg, - bool offset_loaded, uint64_t *offset, uint64_t *stat); -static void i40e_stat_update_48(struct i40e_hw *hw, - uint32_t hireg, - uint32_t loreg, - bool offset_loaded, - uint64_t *offset, - uint64_t *stat); -static void i40e_pf_config_irq0(struct i40e_hw *hw); -static void i40e_dev_interrupt_handler( - __rte_unused struct rte_intr_handle *handle, void *param); -static int i40e_res_pool_init(struct i40e_res_pool_info *pool, - uint32_t base, uint32_t num); -static void i40e_res_pool_destroy(struct i40e_res_pool_info *pool); -static int i40e_res_pool_free(struct i40e_res_pool_info *pool, - uint32_t base); -static int i40e_res_pool_alloc(struct i40e_res_pool_info *pool, - uint16_t num); -static int i40e_dev_init_vlan(struct rte_eth_dev *dev); -static int i40e_veb_release(struct i40e_veb *veb); -static struct i40e_veb *i40e_veb_setup(struct i40e_pf *pf, - struct i40e_vsi *vsi); -static int i40e_pf_config_mq_rx(struct i40e_pf *pf); -static int i40e_vsi_config_double_vlan(struct i40e_vsi *vsi, int on); -static inline int i40e_find_all_vlan_for_mac(struct i40e_vsi *vsi, - struct i40e_macvlan_filter *mv_f, - int num, - struct ether_addr *addr); -static inline int i40e_find_all_mac_for_vlan(struct i40e_vsi *vsi, - struct i40e_macvlan_filter *mv_f, - int num, - uint16_t vlan); -static int i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi); -static int i40e_dev_rss_hash_update(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); -static int i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); -static int i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev, - struct rte_eth_udp_tunnel *udp_tunnel); -static int i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev, - struct rte_eth_udp_tunnel *udp_tunnel); -static int i40e_ethertype_filter_set(struct i40e_pf *pf, - struct rte_eth_ethertype_filter *filter, - bool add); -static int i40e_ethertype_filter_handle(struct rte_eth_dev *dev, - enum rte_filter_op filter_op, - void *arg); -static int i40e_dev_filter_ctrl(struct rte_eth_dev *dev, - enum rte_filter_type filter_type, - enum rte_filter_op filter_op, - void *arg); -static void i40e_configure_registers(struct i40e_hw *hw); -static void i40e_hw_init(struct i40e_hw *hw); - -static const struct rte_pci_id pci_id_i40e_map[] = { -#define RTE_PCI_DEV_ID_DECL_I40E(vend, dev) {RTE_PCI_DEVICE(vend, dev)}, -#include "rte_pci_dev_ids.h" -{ .vendor_id = 0, /* sentinel */ }, -}; - -static const struct eth_dev_ops i40e_eth_dev_ops = { - .dev_configure = i40e_dev_configure, - .dev_start = i40e_dev_start, - 
.dev_stop = i40e_dev_stop, - .dev_close = i40e_dev_close, - .promiscuous_enable = i40e_dev_promiscuous_enable, - .promiscuous_disable = i40e_dev_promiscuous_disable, - .allmulticast_enable = i40e_dev_allmulticast_enable, - .allmulticast_disable = i40e_dev_allmulticast_disable, - .dev_set_link_up = i40e_dev_set_link_up, - .dev_set_link_down = i40e_dev_set_link_down, - .link_update = i40e_dev_link_update, - .stats_get = i40e_dev_stats_get, - .stats_reset = i40e_dev_stats_reset, - .queue_stats_mapping_set = i40e_dev_queue_stats_mapping_set, - .dev_infos_get = i40e_dev_info_get, - .vlan_filter_set = i40e_vlan_filter_set, - .vlan_tpid_set = i40e_vlan_tpid_set, - .vlan_offload_set = i40e_vlan_offload_set, - .vlan_strip_queue_set = i40e_vlan_strip_queue_set, - .vlan_pvid_set = i40e_vlan_pvid_set, - .rx_queue_start = i40e_dev_rx_queue_start, - .rx_queue_stop = i40e_dev_rx_queue_stop, - .tx_queue_start = i40e_dev_tx_queue_start, - .tx_queue_stop = i40e_dev_tx_queue_stop, - .rx_queue_setup = i40e_dev_rx_queue_setup, - .rx_queue_release = i40e_dev_rx_queue_release, - .rx_queue_count = i40e_dev_rx_queue_count, - .rx_descriptor_done = i40e_dev_rx_descriptor_done, - .tx_queue_setup = i40e_dev_tx_queue_setup, - .tx_queue_release = i40e_dev_tx_queue_release, - .dev_led_on = i40e_dev_led_on, - .dev_led_off = i40e_dev_led_off, - .flow_ctrl_set = i40e_flow_ctrl_set, - .priority_flow_ctrl_set = i40e_priority_flow_ctrl_set, - .mac_addr_add = i40e_macaddr_add, - .mac_addr_remove = i40e_macaddr_remove, - .reta_update = i40e_dev_rss_reta_update, - .reta_query = i40e_dev_rss_reta_query, - .rss_hash_update = i40e_dev_rss_hash_update, - .rss_hash_conf_get = i40e_dev_rss_hash_conf_get, - .udp_tunnel_add = i40e_dev_udp_tunnel_add, - .udp_tunnel_del = i40e_dev_udp_tunnel_del, - .filter_ctrl = i40e_dev_filter_ctrl, -}; - -static struct eth_driver rte_i40e_pmd = { - { - .name = "rte_i40e_pmd", - .id_table = pci_id_i40e_map, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, - }, - .eth_dev_init = eth_i40e_dev_init, - .dev_private_size = sizeof(struct i40e_adapter), -}; - -static inline int -i40e_align_floor(int n) -{ - if (n == 0) - return 0; - return (1 << (sizeof(n) * CHAR_BIT - 1 - __builtin_clz(n))); -} - -static inline int -rte_i40e_dev_atomic_read_link_status(struct rte_eth_dev *dev, - struct rte_eth_link *link) -{ - struct rte_eth_link *dst = link; - struct rte_eth_link *src = &(dev->data->dev_link); - - if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst, - *(uint64_t *)src) == 0) - return -1; - - return 0; -} - -static inline int -rte_i40e_dev_atomic_write_link_status(struct rte_eth_dev *dev, - struct rte_eth_link *link) -{ - struct rte_eth_link *dst = &(dev->data->dev_link); - struct rte_eth_link *src = link; - - if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst, - *(uint64_t *)src) == 0) - return -1; - - return 0; -} - -/* - * Driver initialization routine. - * Invoked once at EAL init time. - * Registers itself as the Poll Mode Driver of PCI i40e devices. - */ -static int -rte_i40e_pmd_init(const char *name __rte_unused, - const char *params __rte_unused) -{ - PMD_INIT_FUNC_TRACE(); - rte_eth_driver_register(&rte_i40e_pmd); - - return 0; -} - -static struct rte_driver rte_i40e_driver = { - .type = PMD_PDEV, - .init = rte_i40e_pmd_init, -}; - -PMD_REGISTER_DRIVER(rte_i40e_driver); - -/* - * Initialize registers for flexible payload, which should be set by NVM. - * This should be removed from code once it is fixed in NVM.
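A note on the pair of rte_i40e_dev_atomic_*_link_status() helpers above: they lean on struct rte_eth_link fitting in 64 bits, so a whole link record can be published or snapshotted with a single compare-and-set, with no lock shared between the interrupt handler and callers of link_update. A minimal standalone sketch of the idiom, with __sync_bool_compare_and_swap() standing in for rte_atomic64_cmpset() and an invented 8-byte struct standing in for struct rte_eth_link:

    #include <stdint.h>

    struct link_info {      /* stand-in for struct rte_eth_link */
        uint32_t speed;
        uint16_t duplex;
        uint16_t status;
    };

    union link_word {       /* lets the 8-byte record travel as one word */
        struct link_info link;
        uint64_t word;
    };

    /* One-shot atomic copy: fails if a concurrent writer got in between
     * reading the old value and the compare-and-swap, mirroring the -1
     * return of the helpers above. */
    static int
    atomic_link_copy(union link_word *dst, const union link_word *src)
    {
        uint64_t old = dst->word;

        return __sync_bool_compare_and_swap(&dst->word, old, src->word) ? 0 : -1;
    }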
- */ -#ifndef I40E_GLQF_ORT -#define I40E_GLQF_ORT(_i) (0x00268900 + ((_i) * 4)) -#endif -#ifndef I40E_GLQF_PIT -#define I40E_GLQF_PIT(_i) (0x00268C80 + ((_i) * 4)) -#endif - -static inline void i40e_flex_payload_reg_init(struct i40e_hw *hw) -{ - I40E_WRITE_REG(hw, I40E_GLQF_ORT(18), 0x00000030); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(19), 0x00000030); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(26), 0x0000002B); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(30), 0x0000002B); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(33), 0x000000E0); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(34), 0x000000E3); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(35), 0x000000E6); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(20), 0x00000031); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(23), 0x00000031); - I40E_WRITE_REG(hw, I40E_GLQF_ORT(63), 0x0000002D); - - /* GLQF_PIT Registers */ - I40E_WRITE_REG(hw, I40E_GLQF_PIT(16), 0x00007480); - I40E_WRITE_REG(hw, I40E_GLQF_PIT(17), 0x00007440); -} - -static int -eth_i40e_dev_init(struct rte_eth_dev *dev) -{ - struct rte_pci_device *pci_dev; - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vsi *vsi; - int ret; - uint32_t len; - uint8_t aq_fail = 0; - - PMD_INIT_FUNC_TRACE(); - - dev->dev_ops = &i40e_eth_dev_ops; - dev->rx_pkt_burst = i40e_recv_pkts; - dev->tx_pkt_burst = i40e_xmit_pkts; - - /* for secondary processes, we don't initialise any further as primary - * has already done this work. Only check we don't need a different - * RX function */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY){ - if (dev->data->scattered_rx) - dev->rx_pkt_burst = i40e_recv_scattered_pkts; - return 0; - } - pci_dev = dev->pci_dev; - pf->adapter = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - pf->adapter->eth_dev = dev; - pf->dev_data = dev->data; - - hw->back = I40E_PF_TO_ADAPTER(pf); - hw->hw_addr = (uint8_t *)(pci_dev->mem_resource[0].addr); - if (!hw->hw_addr) { - PMD_INIT_LOG(ERR, "Hardware is not available, " - "as address is NULL"); - return -ENODEV; - } - - hw->vendor_id = pci_dev->id.vendor_id; - hw->device_id = pci_dev->id.device_id; - hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id; - hw->subsystem_device_id = pci_dev->id.subsystem_device_id; - hw->bus.device = pci_dev->addr.devid; - hw->bus.func = pci_dev->addr.function; - - /* Make sure all is clean before doing PF reset */ - i40e_clear_hw(hw); - - /* Initialize the hardware */ - i40e_hw_init(hw); - - /* Reset here to make sure all is clean for each PF */ - ret = i40e_pf_reset(hw); - if (ret) { - PMD_INIT_LOG(ERR, "Failed to reset pf: %d", ret); - return ret; - } - - /* Initialize the shared code (base driver) */ - ret = i40e_init_shared_code(hw); - if (ret) { - PMD_INIT_LOG(ERR, "Failed to init shared code (base driver): %d", ret); - return ret; - } - - /* - * To work around the NVM issue,initialize registers - * for flexible payload by software. - * It should be removed once issues are fixed in NVM. 
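i40e_flex_payload_reg_init() above is a fixed sequence of register writes; the same information can also be kept as data, which is easier to audit against future NVM updates. The sketch below shows that shape only, not the driver's actual code, with write32() as a hypothetical stand-in for I40E_WRITE_REG and the first entries copied from the function above:

    #include <stddef.h>
    #include <stdint.h>

    struct reg_default {
        uint32_t offset;
        uint32_t value;
    };

    /* Hypothetical MMIO write; I40E_WRITE_REG plays this role in the PMD. */
    static void
    write32(volatile uint8_t *bar, uint32_t offset, uint32_t value)
    {
        *(volatile uint32_t *)(bar + offset) = value;
    }

    static void
    apply_reg_defaults(volatile uint8_t *bar,
                       const struct reg_default *tbl, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            write32(bar, tbl[i].offset, tbl[i].value);
    }

    static const struct reg_default flex_payload_defaults[] = {
        { 0x00268900 + 18 * 4, 0x00000030 },    /* I40E_GLQF_ORT(18) */
        { 0x00268900 + 19 * 4, 0x00000030 },    /* I40E_GLQF_ORT(19) */
        { 0x00268C80 + 16 * 4, 0x00007480 },    /* I40E_GLQF_PIT(16) */
    };

The call site would then reduce to apply_reg_defaults(bar, flex_payload_defaults, sizeof(flex_payload_defaults) / sizeof(flex_payload_defaults[0])).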
- */ - i40e_flex_payload_reg_init(hw); - - /* Initialize the parameters for adminq */ - i40e_init_adminq_parameter(hw); - ret = i40e_init_adminq(hw); - if (ret != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "Failed to init adminq: %d", ret); - return -EIO; - } - PMD_INIT_LOG(INFO, "FW %d.%d API %d.%d NVM %02d.%02d.%02d eetrack %04x", - hw->aq.fw_maj_ver, hw->aq.fw_min_ver, - hw->aq.api_maj_ver, hw->aq.api_min_ver, - ((hw->nvm.version >> 12) & 0xf), - ((hw->nvm.version >> 4) & 0xff), - (hw->nvm.version & 0xf), hw->nvm.eetrack); - - /* Disable LLDP */ - ret = i40e_aq_stop_lldp(hw, true, NULL); - if (ret != I40E_SUCCESS) /* Its failure can be ignored */ - PMD_INIT_LOG(INFO, "Failed to stop lldp"); - - /* Clear PXE mode */ - i40e_clear_pxe_mode(hw); - - /* - * On X710, performance number is far from the expectation on recent - * firmware versions. The fix for this issue may not be integrated in - * the following firmware version. So the workaround in software driver - * is needed. It needs to modify the initial values of 3 internal only - * registers. Note that the workaround can be removed when it is fixed - * in firmware in the future. - */ - i40e_configure_registers(hw); - - /* Get hw capabilities */ - ret = i40e_get_cap(hw); - if (ret != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "Failed to get capabilities: %d", ret); - goto err_get_capabilities; - } - - /* Initialize parameters for PF */ - ret = i40e_pf_parameter_init(dev); - if (ret != 0) { - PMD_INIT_LOG(ERR, "Failed to do parameter init: %d", ret); - goto err_parameter_init; - } - - /* Initialize the queue management */ - ret = i40e_res_pool_init(&pf->qp_pool, 0, hw->func_caps.num_tx_qp); - if (ret < 0) { - PMD_INIT_LOG(ERR, "Failed to init queue pool"); - goto err_qp_pool_init; - } - ret = i40e_res_pool_init(&pf->msix_pool, 1, - hw->func_caps.num_msix_vectors - 1); - if (ret < 0) { - PMD_INIT_LOG(ERR, "Failed to init MSIX pool"); - goto err_msix_pool_init; - } - - /* Initialize lan hmc */ - ret = i40e_init_lan_hmc(hw, hw->func_caps.num_tx_qp, - hw->func_caps.num_rx_qp, 0, 0); - if (ret != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "Failed to init lan hmc: %d", ret); - goto err_init_lan_hmc; - } - - /* Configure lan hmc */ - ret = i40e_configure_lan_hmc(hw, I40E_HMC_MODEL_DIRECT_ONLY); - if (ret != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "Failed to configure lan hmc: %d", ret); - goto err_configure_lan_hmc; - } - - /* Get and check the mac address */ - i40e_get_mac_addr(hw, hw->mac.addr); - if (i40e_validate_mac_addr(hw->mac.addr) != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "mac address is not valid"); - ret = -EIO; - goto err_get_mac_addr; - } - /* Copy the permanent MAC address */ - ether_addr_copy((struct ether_addr *) hw->mac.addr, - (struct ether_addr *) hw->mac.perm_addr); - - /* Disable flow control */ - hw->fc.requested_mode = I40E_FC_NONE; - i40e_set_fc(hw, &aq_fail, TRUE); - - /* PF setup, which includes VSI setup */ - ret = i40e_pf_setup(pf); - if (ret) { - PMD_INIT_LOG(ERR, "Failed to setup pf switch: %d", ret); - goto err_setup_pf_switch; - } - - vsi = pf->main_vsi; - - /* Disable double vlan by default */ - i40e_vsi_config_double_vlan(vsi, FALSE); - - if (!vsi->max_macaddrs) - len = ETHER_ADDR_LEN; - else - len = ETHER_ADDR_LEN * vsi->max_macaddrs; - - /* Should be after VSI initialized */ - dev->data->mac_addrs = rte_zmalloc("i40e", len, 0); - if (!dev->data->mac_addrs) { - PMD_INIT_LOG(ERR, "Failed to allocated memory " - "for storing mac address"); - goto err_mac_alloc; - } - ether_addr_copy((struct ether_addr *)hw->mac.perm_addr, - 
&dev->data->mac_addrs[0]); - - /* initialize pf host driver to setup SRIOV resource if applicable */ - i40e_pf_host_init(dev); - - /* register callback func to eal lib */ - rte_intr_callback_register(&(pci_dev->intr_handle), - i40e_dev_interrupt_handler, (void *)dev); - - /* configure and enable device interrupt */ - i40e_pf_config_irq0(hw); - i40e_pf_enable_irq0(hw); - - /* enable uio intr after callback register */ - rte_intr_enable(&(pci_dev->intr_handle)); - - return 0; - -err_mac_alloc: - i40e_vsi_release(pf->main_vsi); -err_setup_pf_switch: -err_get_mac_addr: -err_configure_lan_hmc: - (void)i40e_shutdown_lan_hmc(hw); -err_init_lan_hmc: - i40e_res_pool_destroy(&pf->msix_pool); -err_msix_pool_init: - i40e_res_pool_destroy(&pf->qp_pool); -err_qp_pool_init: -err_parameter_init: -err_get_capabilities: - (void)i40e_shutdown_adminq(hw); - - return ret; -} - -static int -i40e_dev_configure(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - enum rte_eth_rx_mq_mode mq_mode = dev->data->dev_conf.rxmode.mq_mode; - int ret; - - if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_PERFECT) { - ret = i40e_fdir_setup(pf); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to setup flow director."); - return -ENOTSUP; - } - ret = i40e_fdir_configure(dev); - if (ret < 0) { - PMD_DRV_LOG(ERR, "failed to configure fdir."); - goto err; - } - } else - i40e_fdir_teardown(pf); - - ret = i40e_dev_init_vlan(dev); - if (ret < 0) - goto err; - - /* VMDQ setup. - * Needs to move VMDQ setting out of i40e_pf_config_mq_rx() as VMDQ and - * RSS setting have different requirements. - * General PMD driver call sequence are NIC init, configure, - * rx/tx_queue_setup and dev_start. In rx/tx_queue_setup() function, it - * will try to lookup the VSI that specific queue belongs to if VMDQ - * applicable. So, VMDQ setting has to be done before - * rx/tx_queue_setup(). This function is good to place vmdq_setup. - * For RSS setting, it will try to calculate actual configured RX queue - * number, which will be available after rx_queue_setup(). dev_start() - * function is good to place RSS setup. 
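The comment above pins down a call-ordering contract that is easy to miss from the driver side alone. Seen from the application, it is the standard ethdev bring-up sequence; a minimal sketch against the rte_ethdev API of this era (error paths shortened, and the mempool mp assumed to be created elsewhere):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    port_bringup(uint8_t port, uint16_t nb_rxq, uint16_t nb_txq,
                 const struct rte_eth_conf *conf, struct rte_mempool *mp)
    {
        /* VMDQ VSIs come to life here, inside the PMD's dev_configure... */
        int ret = rte_eth_dev_configure(port, nb_rxq, nb_txq, conf);
        if (ret < 0)
            return ret;

        /* ...so each queue can already be mapped to its VSI here... */
        for (uint16_t q = 0; q < nb_rxq; q++) {
            ret = rte_eth_rx_queue_setup(port, q, 512, rte_socket_id(),
                                         NULL, mp);
            if (ret < 0)
                return ret;
        }
        for (uint16_t q = 0; q < nb_txq; q++) {
            ret = rte_eth_tx_queue_setup(port, q, 512, rte_socket_id(),
                                         NULL);
            if (ret < 0)
                return ret;
        }

        /* ...and RSS is sized here, once the real queue count is known. */
        return rte_eth_dev_start(port);
    }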
- */ - if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) { - ret = i40e_vmdq_setup(dev); - if (ret) - goto err; - } - return 0; -err: - i40e_fdir_teardown(pf); - return ret; -} - -void -i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - uint16_t msix_vect = vsi->msix_intr; - uint16_t i; - - for (i = 0; i < vsi->nb_qps; i++) { - I40E_WRITE_REG(hw, I40E_QINT_TQCTL(vsi->base_queue + i), 0); - I40E_WRITE_REG(hw, I40E_QINT_RQCTL(vsi->base_queue + i), 0); - rte_wmb(); - } - - if (vsi->type != I40E_VSI_SRIOV) { - I40E_WRITE_REG(hw, I40E_PFINT_LNKLSTN(msix_vect - 1), 0); - I40E_WRITE_REG(hw, I40E_PFINT_ITRN(I40E_ITR_INDEX_DEFAULT, - msix_vect - 1), 0); - } else { - uint32_t reg; - reg = (hw->func_caps.num_msix_vectors_vf - 1) * - vsi->user_param + (msix_vect - 1); - - I40E_WRITE_REG(hw, I40E_VPINT_LNKLSTN(reg), 0); - } - I40E_WRITE_FLUSH(hw); -} - -static inline uint16_t -i40e_calc_itr_interval(int16_t interval) -{ - if (interval < 0 || interval > I40E_QUEUE_ITR_INTERVAL_MAX) - interval = I40E_QUEUE_ITR_INTERVAL_DEFAULT; - - /* Convert to hardware count, as writing each 1 represents 2 us */ - return (interval/2); -} - -void -i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi) -{ - uint32_t val; - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - uint16_t msix_vect = vsi->msix_intr; - int i; - - for (i = 0; i < vsi->nb_qps; i++) - I40E_WRITE_REG(hw, I40E_QINT_TQCTL(vsi->base_queue + i), 0); - - /* Bind all RX queues to allocated MSIX interrupt */ - for (i = 0; i < vsi->nb_qps; i++) { - val = (msix_vect << I40E_QINT_RQCTL_MSIX_INDX_SHIFT) | - I40E_QINT_RQCTL_ITR_INDX_MASK | - ((vsi->base_queue + i + 1) << - I40E_QINT_RQCTL_NEXTQ_INDX_SHIFT) | - (0 << I40E_QINT_RQCTL_NEXTQ_TYPE_SHIFT) | - I40E_QINT_RQCTL_CAUSE_ENA_MASK; - - if (i == vsi->nb_qps - 1) - val |= I40E_QINT_RQCTL_NEXTQ_INDX_MASK; - I40E_WRITE_REG(hw, I40E_QINT_RQCTL(vsi->base_queue + i), val); - } - - /* Write first RX queue to Link list register as the head element */ - if (vsi->type != I40E_VSI_SRIOV) { - uint16_t interval = - i40e_calc_itr_interval(RTE_LIBRTE_I40E_ITR_INTERVAL); - - I40E_WRITE_REG(hw, I40E_PFINT_LNKLSTN(msix_vect - 1), - (vsi->base_queue << - I40E_PFINT_LNKLSTN_FIRSTQ_INDX_SHIFT) | - (0x0 << I40E_PFINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)); - - I40E_WRITE_REG(hw, I40E_PFINT_ITRN(I40E_ITR_INDEX_DEFAULT, - msix_vect - 1), interval); - -#ifndef I40E_GLINT_CTL -#define I40E_GLINT_CTL 0x0003F800 -#define I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK 0x4 -#endif - /* Disable auto-mask on enabling of all none-zero interrupt */ - I40E_WRITE_REG(hw, I40E_GLINT_CTL, - I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK); - } else { - uint32_t reg; - - /* num_msix_vectors_vf needs to minus irq0 */ - reg = (hw->func_caps.num_msix_vectors_vf - 1) * - vsi->user_param + (msix_vect - 1); - - I40E_WRITE_REG(hw, I40E_VPINT_LNKLSTN(reg), (vsi->base_queue << - I40E_VPINT_LNKLSTN_FIRSTQ_INDX_SHIFT) | - (0x0 << I40E_VPINT_LNKLSTN_FIRSTQ_TYPE_SHIFT)); - } - - I40E_WRITE_FLUSH(hw); -} - -static void -i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - uint16_t interval = i40e_calc_itr_interval(\ - RTE_LIBRTE_I40E_ITR_INTERVAL); - - I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTLN(vsi->msix_intr - 1), - I40E_PFINT_DYN_CTLN_INTENA_MASK | - I40E_PFINT_DYN_CTLN_CLEARPBA_MASK | - (0 << I40E_PFINT_DYN_CTLN_ITR_INDX_SHIFT) | - (interval << I40E_PFINT_DYN_CTLN_INTERVAL_SHIFT)); -} - -static void -i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - - I40E_WRITE_REG(hw, 
I40E_PFINT_DYN_CTLN(vsi->msix_intr - 1), 0); -} - -static inline uint8_t -i40e_parse_link_speed(uint16_t eth_link_speed) -{ - uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN; - - switch (eth_link_speed) { - case ETH_LINK_SPEED_40G: - link_speed = I40E_LINK_SPEED_40GB; - break; - case ETH_LINK_SPEED_20G: - link_speed = I40E_LINK_SPEED_20GB; - break; - case ETH_LINK_SPEED_10G: - link_speed = I40E_LINK_SPEED_10GB; - break; - case ETH_LINK_SPEED_1000: - link_speed = I40E_LINK_SPEED_1GB; - break; - case ETH_LINK_SPEED_100: - link_speed = I40E_LINK_SPEED_100MB; - break; - } - - return link_speed; -} - -static int -i40e_phy_conf_link(struct i40e_hw *hw, uint8_t abilities, uint8_t force_speed) -{ - enum i40e_status_code status; - struct i40e_aq_get_phy_abilities_resp phy_ab; - struct i40e_aq_set_phy_config phy_conf; - const uint8_t mask = I40E_AQ_PHY_FLAG_PAUSE_TX | - I40E_AQ_PHY_FLAG_PAUSE_RX | - I40E_AQ_PHY_FLAG_LOW_POWER; - const uint8_t advt = I40E_LINK_SPEED_40GB | - I40E_LINK_SPEED_10GB | - I40E_LINK_SPEED_1GB | - I40E_LINK_SPEED_100MB; - int ret = -ENOTSUP; - - status = i40e_aq_get_phy_capabilities(hw, false, false, &phy_ab, - NULL); - if (status) - return ret; - - memset(&phy_conf, 0, sizeof(phy_conf)); - - /* bits 0-2 use the values from get_phy_abilities_resp */ - abilities &= ~mask; - abilities |= phy_ab.abilities & mask; - - /* update ablities and speed */ - if (abilities & I40E_AQ_PHY_AN_ENABLED) - phy_conf.link_speed = advt; - else - phy_conf.link_speed = force_speed; - - phy_conf.abilities = abilities; - - /* use get_phy_abilities_resp value for the rest */ - phy_conf.phy_type = phy_ab.phy_type; - phy_conf.eee_capability = phy_ab.eee_capability; - phy_conf.eeer = phy_ab.eeer_val; - phy_conf.low_power_ctrl = phy_ab.d3_lpan; - - PMD_DRV_LOG(DEBUG, "\tCurrent: abilities %x, link_speed %x", - phy_ab.abilities, phy_ab.link_speed); - PMD_DRV_LOG(DEBUG, "\tConfig: abilities %x, link_speed %x", - phy_conf.abilities, phy_conf.link_speed); - - status = i40e_aq_set_phy_config(hw, &phy_conf, NULL); - if (status) - return ret; - - return I40E_SUCCESS; -} - -static int -i40e_apply_link_speed(struct rte_eth_dev *dev) -{ - uint8_t speed; - uint8_t abilities = 0; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_eth_conf *conf = &dev->data->dev_conf; - - speed = i40e_parse_link_speed(conf->link_speed); - abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK; - if (conf->link_speed == ETH_LINK_SPEED_AUTONEG) - abilities |= I40E_AQ_PHY_AN_ENABLED; - else - abilities |= I40E_AQ_PHY_LINK_ENABLED; - - return i40e_phy_conf_link(hw, abilities, speed); -} - -static int -i40e_dev_start(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vsi *main_vsi = pf->main_vsi; - int ret, i; - - if ((dev->data->dev_conf.link_duplex != ETH_LINK_AUTONEG_DUPLEX) && - (dev->data->dev_conf.link_duplex != ETH_LINK_FULL_DUPLEX)) { - PMD_INIT_LOG(ERR, "Invalid link_duplex (%hu) for port %hhu", - dev->data->dev_conf.link_duplex, - dev->data->port_id); - return -EINVAL; - } - - /* Initialize VSI */ - ret = i40e_dev_rxtx_init(pf); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to init rx/tx queues"); - goto err_up; - } - - /* Map queues with MSIX interrupt */ - i40e_vsi_queues_bind_intr(main_vsi); - i40e_vsi_enable_queues_intr(main_vsi); - - /* Map VMDQ VSI queues with MSIX interrupt */ - for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) { - 
i40e_vsi_queues_bind_intr(pf->vmdq[i].vsi); - i40e_vsi_enable_queues_intr(pf->vmdq[i].vsi); - } - - /* enable FDIR MSIX interrupt */ - if (pf->fdir.fdir_vsi) { - i40e_vsi_queues_bind_intr(pf->fdir.fdir_vsi); - i40e_vsi_enable_queues_intr(pf->fdir.fdir_vsi); - } - - /* Enable all queues which have been configured */ - ret = i40e_dev_switch_queues(pf, TRUE); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to enable VSI"); - goto err_up; - } - - /* Enable receiving broadcast packets */ - ret = i40e_aq_set_vsi_broadcast(hw, main_vsi->seid, true, NULL); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(INFO, "fail to set vsi broadcast"); - - for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) { - ret = i40e_aq_set_vsi_broadcast(hw, pf->vmdq[i].vsi->seid, - true, NULL); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(INFO, "fail to set vsi broadcast"); - } - - /* Apply link configuration */ - ret = i40e_apply_link_speed(dev); - if (I40E_SUCCESS != ret) { - PMD_DRV_LOG(ERR, "Fail to apply link setting"); - goto err_up; - } - - return I40E_SUCCESS; - -err_up: - i40e_dev_switch_queues(pf, FALSE); - i40e_dev_clear_queues(dev); - - return ret; -} - -static void -i40e_dev_stop(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_vsi *main_vsi = pf->main_vsi; - int i; - - /* Disable all queues */ - i40e_dev_switch_queues(pf, FALSE); - - /* un-map queues with interrupt registers */ - i40e_vsi_disable_queues_intr(main_vsi); - i40e_vsi_queues_unbind_intr(main_vsi); - - for (i = 0; i < pf->nb_cfg_vmdq_vsi; i++) { - i40e_vsi_disable_queues_intr(pf->vmdq[i].vsi); - i40e_vsi_queues_unbind_intr(pf->vmdq[i].vsi); - } - - /* likewise disable and un-map the FDIR interrupt on the stop path */ - if (pf->fdir.fdir_vsi) { - i40e_vsi_disable_queues_intr(pf->fdir.fdir_vsi); - i40e_vsi_queues_unbind_intr(pf->fdir.fdir_vsi); - } - /* Clear all queues and release memory */ - i40e_dev_clear_queues(dev); - - /* Set link down */ - i40e_dev_set_link_down(dev); - -} - -static void -i40e_dev_close(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t reg; - - PMD_INIT_FUNC_TRACE(); - - i40e_dev_stop(dev); - - /* Disable interrupt */ - i40e_pf_disable_irq0(hw); - rte_intr_disable(&(dev->pci_dev->intr_handle)); - - /* shutdown and destroy the HMC */ - i40e_shutdown_lan_hmc(hw); - - /* release all the existing VSIs and VEBs */ - i40e_fdir_teardown(pf); - i40e_vsi_release(pf->main_vsi); - - /* shutdown the adminq */ - i40e_aq_queue_shutdown(hw, true); - i40e_shutdown_adminq(hw); - - i40e_res_pool_destroy(&pf->qp_pool); - i40e_res_pool_destroy(&pf->msix_pool); - - /* force a PF reset to clean anything leftover */ - reg = I40E_READ_REG(hw, I40E_PFGEN_CTRL); - I40E_WRITE_REG(hw, I40E_PFGEN_CTRL, - (reg | I40E_PFGEN_CTRL_PFSWR_MASK)); - I40E_WRITE_FLUSH(hw); -} - -static void -i40e_dev_promiscuous_enable(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - int status; - - status = i40e_aq_set_vsi_unicast_promiscuous(hw, vsi->seid, - true, NULL); - if (status != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to enable unicast promiscuous"); - - status = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid, - TRUE, NULL); - if (status != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to enable multicast promiscuous"); - -} - -static void -i40e_dev_promiscuous_disable(struct rte_eth_dev *dev) -{ - struct
i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - int status; - - status = i40e_aq_set_vsi_unicast_promiscuous(hw, vsi->seid, - false, NULL); - if (status != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to disable unicast promiscuous"); - - status = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid, - false, NULL); - if (status != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to disable multicast promiscuous"); -} - -static void -i40e_dev_allmulticast_enable(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - int ret; - - ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid, TRUE, NULL); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to enable multicast promiscuous"); -} - -static void -i40e_dev_allmulticast_disable(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - int ret; - - if (dev->data->promiscuous == 1) - return; /* must remain in all_multicast mode */ - - ret = i40e_aq_set_vsi_multicast_promiscuous(hw, - vsi->seid, FALSE, NULL); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to disable multicast promiscuous"); -} - -/* - * Set device link up. - */ -static int -i40e_dev_set_link_up(struct rte_eth_dev *dev) -{ - /* re-apply link speed setting */ - return i40e_apply_link_speed(dev); -} - -/* - * Set device link down. - */ -static int -i40e_dev_set_link_down(__rte_unused struct rte_eth_dev *dev) -{ - uint8_t speed = I40E_LINK_SPEED_UNKNOWN; - uint8_t abilities = I40E_AQ_PHY_ENABLE_ATOMIC_LINK; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - return i40e_phy_conf_link(hw, abilities, speed); -} - -int -i40e_dev_link_update(struct rte_eth_dev *dev, - int wait_to_complete) -{ -#define CHECK_INTERVAL 100 /* 100ms */ -#define MAX_REPEAT_TIME 10 /* 1s (10 * 100ms) in total */ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_link_status link_status; - struct rte_eth_link link, old; - int status; - unsigned rep_cnt = MAX_REPEAT_TIME; - - memset(&link, 0, sizeof(link)); - memset(&old, 0, sizeof(old)); - memset(&link_status, 0, sizeof(link_status)); - rte_i40e_dev_atomic_read_link_status(dev, &old); - - do { - /* Get link status information from hardware */ - status = i40e_aq_get_link_info(hw, false, &link_status, NULL); - if (status != I40E_SUCCESS) { - link.link_speed = ETH_LINK_SPEED_100; - link.link_duplex = ETH_LINK_FULL_DUPLEX; - PMD_DRV_LOG(ERR, "Failed to get link info"); - goto out; - } - - link.link_status = link_status.link_info & I40E_AQ_LINK_UP; - if (!wait_to_complete) - break; - - rte_delay_ms(CHECK_INTERVAL); - } while (!link.link_status && rep_cnt--); - - if (!link.link_status) - goto out; - - /* i40e uses full duplex only */ - link.link_duplex = ETH_LINK_FULL_DUPLEX; - - /* Parse the link status */ - switch (link_status.link_speed) { - case I40E_LINK_SPEED_100MB: - link.link_speed = ETH_LINK_SPEED_100; - break; - case I40E_LINK_SPEED_1GB: - link.link_speed = ETH_LINK_SPEED_1000; - break; - case I40E_LINK_SPEED_10GB: - link.link_speed = ETH_LINK_SPEED_10G; - break; - case I40E_LINK_SPEED_20GB: - link.link_speed = ETH_LINK_SPEED_20G; - break; - case I40E_LINK_SPEED_40GB: - 
link.link_speed = ETH_LINK_SPEED_40G; - break; - default: - link.link_speed = ETH_LINK_SPEED_100; - break; - } - -out: - rte_i40e_dev_atomic_write_link_status(dev, &link); - if (link.link_status == old.link_status) - return -1; - - return 0; -} - -/* Get all the statistics of a VSI */ -void -i40e_update_vsi_stats(struct i40e_vsi *vsi) -{ - struct i40e_eth_stats *oes = &vsi->eth_stats_offset; - struct i40e_eth_stats *nes = &vsi->eth_stats; - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - int idx = rte_le_to_cpu_16(vsi->info.stat_counter_idx); - - i40e_stat_update_48(hw, I40E_GLV_GORCH(idx), I40E_GLV_GORCL(idx), - vsi->offset_loaded, &oes->rx_bytes, - &nes->rx_bytes); - i40e_stat_update_48(hw, I40E_GLV_UPRCH(idx), I40E_GLV_UPRCL(idx), - vsi->offset_loaded, &oes->rx_unicast, - &nes->rx_unicast); - i40e_stat_update_48(hw, I40E_GLV_MPRCH(idx), I40E_GLV_MPRCL(idx), - vsi->offset_loaded, &oes->rx_multicast, - &nes->rx_multicast); - i40e_stat_update_48(hw, I40E_GLV_BPRCH(idx), I40E_GLV_BPRCL(idx), - vsi->offset_loaded, &oes->rx_broadcast, - &nes->rx_broadcast); - i40e_stat_update_32(hw, I40E_GLV_RDPC(idx), vsi->offset_loaded, - &oes->rx_discards, &nes->rx_discards); - /* GLV_REPC not supported */ - /* GLV_RMPC not supported */ - i40e_stat_update_32(hw, I40E_GLV_RUPP(idx), vsi->offset_loaded, - &oes->rx_unknown_protocol, - &nes->rx_unknown_protocol); - i40e_stat_update_48(hw, I40E_GLV_GOTCH(idx), I40E_GLV_GOTCL(idx), - vsi->offset_loaded, &oes->tx_bytes, - &nes->tx_bytes); - i40e_stat_update_48(hw, I40E_GLV_UPTCH(idx), I40E_GLV_UPTCL(idx), - vsi->offset_loaded, &oes->tx_unicast, - &nes->tx_unicast); - i40e_stat_update_48(hw, I40E_GLV_MPTCH(idx), I40E_GLV_MPTCL(idx), - vsi->offset_loaded, &oes->tx_multicast, - &nes->tx_multicast); - i40e_stat_update_48(hw, I40E_GLV_BPTCH(idx), I40E_GLV_BPTCL(idx), - vsi->offset_loaded, &oes->tx_broadcast, - &nes->tx_broadcast); - /* GLV_TDPC not supported */ - i40e_stat_update_32(hw, I40E_GLV_TEPC(idx), vsi->offset_loaded, - &oes->tx_errors, &nes->tx_errors); - vsi->offset_loaded = true; - - PMD_DRV_LOG(DEBUG, "***************** VSI[%u] stats start *******************", - vsi->vsi_id); - PMD_DRV_LOG(DEBUG, "rx_bytes: %lu", nes->rx_bytes); - PMD_DRV_LOG(DEBUG, "rx_unicast: %lu", nes->rx_unicast); - PMD_DRV_LOG(DEBUG, "rx_multicast: %lu", nes->rx_multicast); - PMD_DRV_LOG(DEBUG, "rx_broadcast: %lu", nes->rx_broadcast); - PMD_DRV_LOG(DEBUG, "rx_discards: %lu", nes->rx_discards); - PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %lu", - nes->rx_unknown_protocol); - PMD_DRV_LOG(DEBUG, "tx_bytes: %lu", nes->tx_bytes); - PMD_DRV_LOG(DEBUG, "tx_unicast: %lu", nes->tx_unicast); - PMD_DRV_LOG(DEBUG, "tx_multicast: %lu", nes->tx_multicast); - PMD_DRV_LOG(DEBUG, "tx_broadcast: %lu", nes->tx_broadcast); - PMD_DRV_LOG(DEBUG, "tx_discards: %lu", nes->tx_discards); - PMD_DRV_LOG(DEBUG, "tx_errors: %lu", nes->tx_errors); - PMD_DRV_LOG(DEBUG, "***************** VSI[%u] stats end *******************", - vsi->vsi_id); -} - -/* Get all statistics of a port */ -static void -i40e_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) -{ - uint32_t i; - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_hw_port_stats *ns = &pf->stats; /* new stats */ - struct i40e_hw_port_stats *os = &pf->stats_offset; /* old stats */ - - /* Get statistics of struct i40e_eth_stats */ - i40e_stat_update_48(hw, I40E_GLPRT_GORCH(hw->port), - I40E_GLPRT_GORCL(hw->port), - pf->offset_loaded, 
&os->eth.rx_bytes, - &ns->eth.rx_bytes); - i40e_stat_update_48(hw, I40E_GLPRT_UPRCH(hw->port), - I40E_GLPRT_UPRCL(hw->port), - pf->offset_loaded, &os->eth.rx_unicast, - &ns->eth.rx_unicast); - i40e_stat_update_48(hw, I40E_GLPRT_MPRCH(hw->port), - I40E_GLPRT_MPRCL(hw->port), - pf->offset_loaded, &os->eth.rx_multicast, - &ns->eth.rx_multicast); - i40e_stat_update_48(hw, I40E_GLPRT_BPRCH(hw->port), - I40E_GLPRT_BPRCL(hw->port), - pf->offset_loaded, &os->eth.rx_broadcast, - &ns->eth.rx_broadcast); - i40e_stat_update_32(hw, I40E_GLPRT_RDPC(hw->port), - pf->offset_loaded, &os->eth.rx_discards, - &ns->eth.rx_discards); - /* GLPRT_REPC not supported */ - /* GLPRT_RMPC not supported */ - i40e_stat_update_32(hw, I40E_GLPRT_RUPP(hw->port), - pf->offset_loaded, - &os->eth.rx_unknown_protocol, - &ns->eth.rx_unknown_protocol); - i40e_stat_update_48(hw, I40E_GLPRT_GOTCH(hw->port), - I40E_GLPRT_GOTCL(hw->port), - pf->offset_loaded, &os->eth.tx_bytes, - &ns->eth.tx_bytes); - i40e_stat_update_48(hw, I40E_GLPRT_UPTCH(hw->port), - I40E_GLPRT_UPTCL(hw->port), - pf->offset_loaded, &os->eth.tx_unicast, - &ns->eth.tx_unicast); - i40e_stat_update_48(hw, I40E_GLPRT_MPTCH(hw->port), - I40E_GLPRT_MPTCL(hw->port), - pf->offset_loaded, &os->eth.tx_multicast, - &ns->eth.tx_multicast); - i40e_stat_update_48(hw, I40E_GLPRT_BPTCH(hw->port), - I40E_GLPRT_BPTCL(hw->port), - pf->offset_loaded, &os->eth.tx_broadcast, - &ns->eth.tx_broadcast); - i40e_stat_update_32(hw, I40E_GLPRT_TDPC(hw->port), - pf->offset_loaded, &os->eth.tx_discards, - &ns->eth.tx_discards); - /* GLPRT_TEPC not supported */ - - /* additional port specific stats */ - i40e_stat_update_32(hw, I40E_GLPRT_TDOLD(hw->port), - pf->offset_loaded, &os->tx_dropped_link_down, - &ns->tx_dropped_link_down); - i40e_stat_update_32(hw, I40E_GLPRT_CRCERRS(hw->port), - pf->offset_loaded, &os->crc_errors, - &ns->crc_errors); - i40e_stat_update_32(hw, I40E_GLPRT_ILLERRC(hw->port), - pf->offset_loaded, &os->illegal_bytes, - &ns->illegal_bytes); - /* GLPRT_ERRBC not supported */ - i40e_stat_update_32(hw, I40E_GLPRT_MLFC(hw->port), - pf->offset_loaded, &os->mac_local_faults, - &ns->mac_local_faults); - i40e_stat_update_32(hw, I40E_GLPRT_MRFC(hw->port), - pf->offset_loaded, &os->mac_remote_faults, - &ns->mac_remote_faults); - i40e_stat_update_32(hw, I40E_GLPRT_RLEC(hw->port), - pf->offset_loaded, &os->rx_length_errors, - &ns->rx_length_errors); - i40e_stat_update_32(hw, I40E_GLPRT_LXONRXC(hw->port), - pf->offset_loaded, &os->link_xon_rx, - &ns->link_xon_rx); - i40e_stat_update_32(hw, I40E_GLPRT_LXOFFRXC(hw->port), - pf->offset_loaded, &os->link_xoff_rx, - &ns->link_xoff_rx); - for (i = 0; i < 8; i++) { - i40e_stat_update_32(hw, I40E_GLPRT_PXONRXC(hw->port, i), - pf->offset_loaded, - &os->priority_xon_rx[i], - &ns->priority_xon_rx[i]); - i40e_stat_update_32(hw, I40E_GLPRT_PXOFFRXC(hw->port, i), - pf->offset_loaded, - &os->priority_xoff_rx[i], - &ns->priority_xoff_rx[i]); - } - i40e_stat_update_32(hw, I40E_GLPRT_LXONTXC(hw->port), - pf->offset_loaded, &os->link_xon_tx, - &ns->link_xon_tx); - i40e_stat_update_32(hw, I40E_GLPRT_LXOFFTXC(hw->port), - pf->offset_loaded, &os->link_xoff_tx, - &ns->link_xoff_tx); - for (i = 0; i < 8; i++) { - i40e_stat_update_32(hw, I40E_GLPRT_PXONTXC(hw->port, i), - pf->offset_loaded, - &os->priority_xon_tx[i], - &ns->priority_xon_tx[i]); - i40e_stat_update_32(hw, I40E_GLPRT_PXOFFTXC(hw->port, i), - pf->offset_loaded, - &os->priority_xoff_tx[i], - &ns->priority_xoff_tx[i]); - i40e_stat_update_32(hw, I40E_GLPRT_RXON2OFFCNT(hw->port, i), - pf->offset_loaded, 
- &os->priority_xon_2_xoff[i], - &ns->priority_xon_2_xoff[i]); - } - i40e_stat_update_48(hw, I40E_GLPRT_PRC64H(hw->port), - I40E_GLPRT_PRC64L(hw->port), - pf->offset_loaded, &os->rx_size_64, - &ns->rx_size_64); - i40e_stat_update_48(hw, I40E_GLPRT_PRC127H(hw->port), - I40E_GLPRT_PRC127L(hw->port), - pf->offset_loaded, &os->rx_size_127, - &ns->rx_size_127); - i40e_stat_update_48(hw, I40E_GLPRT_PRC255H(hw->port), - I40E_GLPRT_PRC255L(hw->port), - pf->offset_loaded, &os->rx_size_255, - &ns->rx_size_255); - i40e_stat_update_48(hw, I40E_GLPRT_PRC511H(hw->port), - I40E_GLPRT_PRC511L(hw->port), - pf->offset_loaded, &os->rx_size_511, - &ns->rx_size_511); - i40e_stat_update_48(hw, I40E_GLPRT_PRC1023H(hw->port), - I40E_GLPRT_PRC1023L(hw->port), - pf->offset_loaded, &os->rx_size_1023, - &ns->rx_size_1023); - i40e_stat_update_48(hw, I40E_GLPRT_PRC1522H(hw->port), - I40E_GLPRT_PRC1522L(hw->port), - pf->offset_loaded, &os->rx_size_1522, - &ns->rx_size_1522); - i40e_stat_update_48(hw, I40E_GLPRT_PRC9522H(hw->port), - I40E_GLPRT_PRC9522L(hw->port), - pf->offset_loaded, &os->rx_size_big, - &ns->rx_size_big); - i40e_stat_update_32(hw, I40E_GLPRT_RUC(hw->port), - pf->offset_loaded, &os->rx_undersize, - &ns->rx_undersize); - i40e_stat_update_32(hw, I40E_GLPRT_RFC(hw->port), - pf->offset_loaded, &os->rx_fragments, - &ns->rx_fragments); - i40e_stat_update_32(hw, I40E_GLPRT_ROC(hw->port), - pf->offset_loaded, &os->rx_oversize, - &ns->rx_oversize); - i40e_stat_update_32(hw, I40E_GLPRT_RJC(hw->port), - pf->offset_loaded, &os->rx_jabber, - &ns->rx_jabber); - i40e_stat_update_48(hw, I40E_GLPRT_PTC64H(hw->port), - I40E_GLPRT_PTC64L(hw->port), - pf->offset_loaded, &os->tx_size_64, - &ns->tx_size_64); - i40e_stat_update_48(hw, I40E_GLPRT_PTC127H(hw->port), - I40E_GLPRT_PTC127L(hw->port), - pf->offset_loaded, &os->tx_size_127, - &ns->tx_size_127); - i40e_stat_update_48(hw, I40E_GLPRT_PTC255H(hw->port), - I40E_GLPRT_PTC255L(hw->port), - pf->offset_loaded, &os->tx_size_255, - &ns->tx_size_255); - i40e_stat_update_48(hw, I40E_GLPRT_PTC511H(hw->port), - I40E_GLPRT_PTC511L(hw->port), - pf->offset_loaded, &os->tx_size_511, - &ns->tx_size_511); - i40e_stat_update_48(hw, I40E_GLPRT_PTC1023H(hw->port), - I40E_GLPRT_PTC1023L(hw->port), - pf->offset_loaded, &os->tx_size_1023, - &ns->tx_size_1023); - i40e_stat_update_48(hw, I40E_GLPRT_PTC1522H(hw->port), - I40E_GLPRT_PTC1522L(hw->port), - pf->offset_loaded, &os->tx_size_1522, - &ns->tx_size_1522); - i40e_stat_update_48(hw, I40E_GLPRT_PTC9522H(hw->port), - I40E_GLPRT_PTC9522L(hw->port), - pf->offset_loaded, &os->tx_size_big, - &ns->tx_size_big); - i40e_stat_update_32(hw, I40E_GLQF_PCNT(pf->fdir.match_counter_index), - pf->offset_loaded, - &os->fd_sb_match, &ns->fd_sb_match); - /* GLPRT_MSPDC not supported */ - /* GLPRT_XEC not supported */ - - pf->offset_loaded = true; - - if (pf->main_vsi) - i40e_update_vsi_stats(pf->main_vsi); - - stats->ipackets = ns->eth.rx_unicast + ns->eth.rx_multicast + - ns->eth.rx_broadcast; - stats->opackets = ns->eth.tx_unicast + ns->eth.tx_multicast + - ns->eth.tx_broadcast; - stats->ibytes = ns->eth.rx_bytes; - stats->obytes = ns->eth.tx_bytes; - stats->oerrors = ns->eth.tx_errors; - stats->imcasts = ns->eth.rx_multicast; - stats->fdirmatch = ns->fd_sb_match; - - /* Rx Errors */ - stats->ibadcrc = ns->crc_errors; - stats->ibadlen = ns->rx_length_errors + ns->rx_undersize + - ns->rx_oversize + ns->rx_fragments + ns->rx_jabber; - stats->imissed = ns->eth.rx_discards; - stats->ierrors = stats->ibadcrc + stats->ibadlen + stats->imissed; - - 
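/*
 * Reference sketch: every read above funnels through i40e_stat_update_32()
 * or i40e_stat_update_48(), declared near the top of this file with their
 * bodies later in it. The essential trick, reconstructed here as a
 * standalone function (an approximation, not the driver's exact body), is
 * to latch the first post-reset reading as a baseline and report the
 * wrap-corrected distance from it on every later read:
 */
    #include <stdbool.h>
    #include <stdint.h>

    #define COUNTER_48_MASK 0xFFFFFFFFFFFFULL

    static void
    stat_update_48_sketch(uint64_t hi, uint64_t lo, bool offset_loaded,
                          uint64_t *offset, uint64_t *stat)
    {
        /* hi/lo model the paired register reads, e.g. GORCH/GORCL */
        uint64_t new_data = (lo & 0xFFFFFFFFULL) | ((hi & 0xFFFFULL) << 32);

        if (!offset_loaded)
            *offset = new_data;     /* first read after a stats reset */
        if (new_data >= *offset)
            *stat = new_data - *offset;
        else                        /* the 48-bit counter wrapped once */
            *stat = (new_data + (1ULL << 48)) - *offset;
        *stat &= COUNTER_48_MASK;
    }
/*
 * i40e_dev_stats_reset() below relies on exactly this: clearing
 * offset_loaded makes the next read re-latch the baseline.
 */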
PMD_DRV_LOG(DEBUG, "***************** PF stats start *******************"); - PMD_DRV_LOG(DEBUG, "rx_bytes: %lu", ns->eth.rx_bytes); - PMD_DRV_LOG(DEBUG, "rx_unicast: %lu", ns->eth.rx_unicast); - PMD_DRV_LOG(DEBUG, "rx_multicast: %lu", ns->eth.rx_multicast); - PMD_DRV_LOG(DEBUG, "rx_broadcast: %lu", ns->eth.rx_broadcast); - PMD_DRV_LOG(DEBUG, "rx_discards: %lu", ns->eth.rx_discards); - PMD_DRV_LOG(DEBUG, "rx_unknown_protocol: %lu", - ns->eth.rx_unknown_protocol); - PMD_DRV_LOG(DEBUG, "tx_bytes: %lu", ns->eth.tx_bytes); - PMD_DRV_LOG(DEBUG, "tx_unicast: %lu", ns->eth.tx_unicast); - PMD_DRV_LOG(DEBUG, "tx_multicast: %lu", ns->eth.tx_multicast); - PMD_DRV_LOG(DEBUG, "tx_broadcast: %lu", ns->eth.tx_broadcast); - PMD_DRV_LOG(DEBUG, "tx_discards: %lu", ns->eth.tx_discards); - PMD_DRV_LOG(DEBUG, "tx_errors: %lu", ns->eth.tx_errors); - - PMD_DRV_LOG(DEBUG, "tx_dropped_link_down: %lu", - ns->tx_dropped_link_down); - PMD_DRV_LOG(DEBUG, "crc_errors: %lu", ns->crc_errors); - PMD_DRV_LOG(DEBUG, "illegal_bytes: %lu", - ns->illegal_bytes); - PMD_DRV_LOG(DEBUG, "error_bytes: %lu", ns->error_bytes); - PMD_DRV_LOG(DEBUG, "mac_local_faults: %lu", - ns->mac_local_faults); - PMD_DRV_LOG(DEBUG, "mac_remote_faults: %lu", - ns->mac_remote_faults); - PMD_DRV_LOG(DEBUG, "rx_length_errors: %lu", - ns->rx_length_errors); - PMD_DRV_LOG(DEBUG, "link_xon_rx: %lu", ns->link_xon_rx); - PMD_DRV_LOG(DEBUG, "link_xoff_rx: %lu", ns->link_xoff_rx); - for (i = 0; i < 8; i++) { - PMD_DRV_LOG(DEBUG, "priority_xon_rx[%d]: %lu", - i, ns->priority_xon_rx[i]); - PMD_DRV_LOG(DEBUG, "priority_xoff_rx[%d]: %lu", - i, ns->priority_xoff_rx[i]); - } - PMD_DRV_LOG(DEBUG, "link_xon_tx: %lu", ns->link_xon_tx); - PMD_DRV_LOG(DEBUG, "link_xoff_tx: %lu", ns->link_xoff_tx); - for (i = 0; i < 8; i++) { - PMD_DRV_LOG(DEBUG, "priority_xon_tx[%d]: %lu", - i, ns->priority_xon_tx[i]); - PMD_DRV_LOG(DEBUG, "priority_xoff_tx[%d]: %lu", - i, ns->priority_xoff_tx[i]); - PMD_DRV_LOG(DEBUG, "priority_xon_2_xoff[%d]: %lu", - i, ns->priority_xon_2_xoff[i]); - } - PMD_DRV_LOG(DEBUG, "rx_size_64: %lu", ns->rx_size_64); - PMD_DRV_LOG(DEBUG, "rx_size_127: %lu", ns->rx_size_127); - PMD_DRV_LOG(DEBUG, "rx_size_255: %lu", ns->rx_size_255); - PMD_DRV_LOG(DEBUG, "rx_size_511: %lu", ns->rx_size_511); - PMD_DRV_LOG(DEBUG, "rx_size_1023: %lu", ns->rx_size_1023); - PMD_DRV_LOG(DEBUG, "rx_size_1522: %lu", ns->rx_size_1522); - PMD_DRV_LOG(DEBUG, "rx_size_big: %lu", ns->rx_size_big); - PMD_DRV_LOG(DEBUG, "rx_undersize: %lu", ns->rx_undersize); - PMD_DRV_LOG(DEBUG, "rx_fragments: %lu", ns->rx_fragments); - PMD_DRV_LOG(DEBUG, "rx_oversize: %lu", ns->rx_oversize); - PMD_DRV_LOG(DEBUG, "rx_jabber: %lu", ns->rx_jabber); - PMD_DRV_LOG(DEBUG, "tx_size_64: %lu", ns->tx_size_64); - PMD_DRV_LOG(DEBUG, "tx_size_127: %lu", ns->tx_size_127); - PMD_DRV_LOG(DEBUG, "tx_size_255: %lu", ns->tx_size_255); - PMD_DRV_LOG(DEBUG, "tx_size_511: %lu", ns->tx_size_511); - PMD_DRV_LOG(DEBUG, "tx_size_1023: %lu", ns->tx_size_1023); - PMD_DRV_LOG(DEBUG, "tx_size_1522: %lu", ns->tx_size_1522); - PMD_DRV_LOG(DEBUG, "tx_size_big: %lu", ns->tx_size_big); - PMD_DRV_LOG(DEBUG, "mac_short_packet_dropped: %lu", - ns->mac_short_packet_dropped); - PMD_DRV_LOG(DEBUG, "checksum_error: %lu", - ns->checksum_error); - PMD_DRV_LOG(DEBUG, "fdir_match: %lu", ns->fd_sb_match); - PMD_DRV_LOG(DEBUG, "***************** PF stats end ********************"); -} - -/* Reset the statistics */ -static void -i40e_dev_stats_reset(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - - /* 
It results in reloading the start point of each counter */ - pf->offset_loaded = false; -} - -static int -i40e_dev_queue_stats_mapping_set(__rte_unused struct rte_eth_dev *dev, - __rte_unused uint16_t queue_id, - __rte_unused uint8_t stat_idx, - __rte_unused uint8_t is_rx) -{ - PMD_INIT_FUNC_TRACE(); - - return -ENOSYS; -} - -static void -i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - - dev_info->max_rx_queues = vsi->nb_qps; - dev_info->max_tx_queues = vsi->nb_qps; - dev_info->min_rx_bufsize = I40E_BUF_SIZE_MIN; - dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX; - dev_info->max_mac_addrs = vsi->max_macaddrs; - dev_info->max_vfs = dev->pci_dev->max_vfs; - dev_info->rx_offload_capa = - DEV_RX_OFFLOAD_VLAN_STRIP | - DEV_RX_OFFLOAD_IPV4_CKSUM | - DEV_RX_OFFLOAD_UDP_CKSUM | - DEV_RX_OFFLOAD_TCP_CKSUM; - dev_info->tx_offload_capa = - DEV_TX_OFFLOAD_VLAN_INSERT | - DEV_TX_OFFLOAD_IPV4_CKSUM | - DEV_TX_OFFLOAD_UDP_CKSUM | - DEV_TX_OFFLOAD_TCP_CKSUM | - DEV_TX_OFFLOAD_SCTP_CKSUM | - DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | - DEV_TX_OFFLOAD_TCP_TSO; - dev_info->reta_size = pf->hash_lut_size; - dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL; - - dev_info->default_rxconf = (struct rte_eth_rxconf) { - .rx_thresh = { - .pthresh = I40E_DEFAULT_RX_PTHRESH, - .hthresh = I40E_DEFAULT_RX_HTHRESH, - .wthresh = I40E_DEFAULT_RX_WTHRESH, - }, - .rx_free_thresh = I40E_DEFAULT_RX_FREE_THRESH, - .rx_drop_en = 0, - }; - - dev_info->default_txconf = (struct rte_eth_txconf) { - .tx_thresh = { - .pthresh = I40E_DEFAULT_TX_PTHRESH, - .hthresh = I40E_DEFAULT_TX_HTHRESH, - .wthresh = I40E_DEFAULT_TX_WTHRESH, - }, - .tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH, - .tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH, - .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | - ETH_TXQ_FLAGS_NOOFFLOADS, - }; - - if (pf->flags | I40E_FLAG_VMDQ) { - dev_info->max_vmdq_pools = pf->max_nb_vmdq_vsi; - dev_info->vmdq_queue_base = dev_info->max_rx_queues; - dev_info->vmdq_queue_num = pf->vmdq_nb_qps * - pf->max_nb_vmdq_vsi; - dev_info->vmdq_pool_base = I40E_VMDQ_POOL_BASE; - dev_info->max_rx_queues += dev_info->vmdq_queue_num; - dev_info->max_tx_queues += dev_info->vmdq_queue_num; - } -} - -static int -i40e_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - PMD_INIT_FUNC_TRACE(); - - if (on) - return i40e_vsi_add_vlan(vsi, vlan_id); - else - return i40e_vsi_delete_vlan(vsi, vlan_id); -} - -static void -i40e_vlan_tpid_set(__rte_unused struct rte_eth_dev *dev, - __rte_unused uint16_t tpid) -{ - PMD_INIT_FUNC_TRACE(); -} - -static void -i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - - if (mask & ETH_VLAN_STRIP_MASK) { - /* Enable or disable VLAN stripping */ - if (dev->data->dev_conf.rxmode.hw_vlan_strip) - i40e_vsi_config_vlan_stripping(vsi, TRUE); - else - i40e_vsi_config_vlan_stripping(vsi, FALSE); - } - - if (mask & ETH_VLAN_EXTEND_MASK) { - if (dev->data->dev_conf.rxmode.hw_vlan_extend) - i40e_vsi_config_double_vlan(vsi, TRUE); - else - i40e_vsi_config_double_vlan(vsi, FALSE); - } -} - -static void -i40e_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev, - __rte_unused uint16_t queue, - __rte_unused int on) -{ - PMD_INIT_FUNC_TRACE(); -} - -static int 
-i40e_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_vsi *vsi = pf->main_vsi; - struct rte_eth_dev_data *data = I40E_VSI_TO_DEV_DATA(vsi); - struct i40e_vsi_vlan_pvid_info info; - - memset(&info, 0, sizeof(info)); - info.on = on; - if (info.on) - info.config.pvid = pvid; - else { - info.config.reject.tagged = - data->dev_conf.txmode.hw_vlan_reject_tagged; - info.config.reject.untagged = - data->dev_conf.txmode.hw_vlan_reject_untagged; - } - - return i40e_vsi_vlan_pvid_set(vsi, &info); -} - -static int -i40e_dev_led_on(struct rte_eth_dev *dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t mode = i40e_led_get(hw); - - if (mode == 0) - i40e_led_set(hw, 0xf, true); /* 0xf means led always true */ - - return 0; -} - -static int -i40e_dev_led_off(struct rte_eth_dev *dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t mode = i40e_led_get(hw); - - if (mode != 0) - i40e_led_set(hw, 0, false); - - return 0; -} - -static int -i40e_flow_ctrl_set(__rte_unused struct rte_eth_dev *dev, - __rte_unused struct rte_eth_fc_conf *fc_conf) -{ - PMD_INIT_FUNC_TRACE(); - - return -ENOSYS; -} - -static int -i40e_priority_flow_ctrl_set(__rte_unused struct rte_eth_dev *dev, - __rte_unused struct rte_eth_pfc_conf *pfc_conf) -{ - PMD_INIT_FUNC_TRACE(); - - return -ENOSYS; -} - -/* Add a MAC address, and update filters */ -static void -i40e_macaddr_add(struct rte_eth_dev *dev, - struct ether_addr *mac_addr, - __rte_unused uint32_t index, - uint32_t pool) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_mac_filter_info mac_filter; - struct i40e_vsi *vsi; - int ret; - - /* If VMDQ not enabled or configured, return */ - if (pool != 0 && (!(pf->flags | I40E_FLAG_VMDQ) || !pf->nb_cfg_vmdq_vsi)) { - PMD_DRV_LOG(ERR, "VMDQ not %s, can't set mac to pool %u", - pf->flags | I40E_FLAG_VMDQ ? "configured" : "enabled", - pool); - return; - } - - if (pool > pf->nb_cfg_vmdq_vsi) { - PMD_DRV_LOG(ERR, "Pool number %u invalid. 
Max pool is %u", - pool, pf->nb_cfg_vmdq_vsi); - return; - } - - (void)rte_memcpy(&mac_filter.mac_addr, mac_addr, ETHER_ADDR_LEN); - mac_filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; - - if (pool == 0) - vsi = pf->main_vsi; - else - vsi = pf->vmdq[pool - 1].vsi; - - ret = i40e_vsi_add_mac(vsi, &mac_filter); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter"); - return; - } -} - -/* Remove a MAC address, and update filters */ -static void -i40e_macaddr_remove(struct rte_eth_dev *dev, uint32_t index) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_vsi *vsi; - struct rte_eth_dev_data *data = dev->data; - struct ether_addr *macaddr; - int ret; - uint32_t i; - uint64_t pool_sel; - - macaddr = &(data->mac_addrs[index]); - - pool_sel = dev->data->mac_pool_sel[index]; - - for (i = 0; i < sizeof(pool_sel) * CHAR_BIT; i++) { - if (pool_sel & (1ULL << i)) { - if (i == 0) - vsi = pf->main_vsi; - else { - /* No VMDQ pool enabled or configured */ - if (!(pf->flags | I40E_FLAG_VMDQ) || - (i > pf->nb_cfg_vmdq_vsi)) { - PMD_DRV_LOG(ERR, "No VMDQ pool enabled" - "/configured"); - return; - } - vsi = pf->vmdq[i - 1].vsi; - } - ret = i40e_vsi_delete_mac(vsi, macaddr); - - if (ret) { - PMD_DRV_LOG(ERR, "Failed to remove MACVLAN filter"); - return; - } - } - } -} - -/* Set perfect match or hash match of MAC and VLAN for a VF */ -static int -i40e_vf_mac_filter_set(struct i40e_pf *pf, - struct rte_eth_mac_filter *filter, - bool add) -{ - struct i40e_hw *hw; - struct i40e_mac_filter_info mac_filter; - struct ether_addr old_mac; - struct ether_addr *new_mac; - struct i40e_pf_vf *vf = NULL; - uint16_t vf_id; - int ret; - - if (pf == NULL) { - PMD_DRV_LOG(ERR, "Invalid PF argument."); - return -EINVAL; - } - hw = I40E_PF_TO_HW(pf); - - if (filter == NULL) { - PMD_DRV_LOG(ERR, "Invalid mac filter argument."); - return -EINVAL; - } - - new_mac = &filter->mac_addr; - - if (is_zero_ether_addr(new_mac)) { - PMD_DRV_LOG(ERR, "Invalid ethernet address."); - return -EINVAL; - } - - vf_id = filter->dst_id; - - if (vf_id > pf->vf_num - 1 || !pf->vfs) { - PMD_DRV_LOG(ERR, "Invalid argument."); - return -EINVAL; - } - vf = &pf->vfs[vf_id]; - - if (add && is_same_ether_addr(new_mac, &(pf->dev_addr))) { - PMD_DRV_LOG(INFO, "Ignore adding permanent MAC address."); - return -EINVAL; - } - - if (add) { - (void)rte_memcpy(&old_mac, hw->mac.addr, ETHER_ADDR_LEN); - (void)rte_memcpy(hw->mac.addr, new_mac->addr_bytes, - ETHER_ADDR_LEN); - (void)rte_memcpy(&mac_filter.mac_addr, &filter->mac_addr, - ETHER_ADDR_LEN); - - mac_filter.filter_type = filter->filter_type; - ret = i40e_vsi_add_mac(vf->vsi, &mac_filter); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to add MAC filter."); - return -1; - } - ether_addr_copy(new_mac, &pf->dev_addr); - } else { - (void)rte_memcpy(hw->mac.addr, hw->mac.perm_addr, - ETHER_ADDR_LEN); - ret = i40e_vsi_delete_mac(vf->vsi, &filter->mac_addr); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to delete MAC filter."); - return -1; - } - - /* Clear device address as it has been removed */ - if (is_same_ether_addr(&(pf->dev_addr), new_mac)) - memset(&pf->dev_addr, 0, sizeof(struct ether_addr)); - } - - return 0; -} - -/* MAC filter handle */ -static int -i40e_mac_filter_handle(struct rte_eth_dev *dev, enum rte_filter_op filter_op, - void *arg) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct rte_eth_mac_filter *filter; - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - int ret = I40E_NOT_SUPPORTED; - 
- filter = (struct rte_eth_mac_filter *)(arg); - - switch (filter_op) { - case RTE_ETH_FILTER_NOP: - ret = I40E_SUCCESS; - break; - case RTE_ETH_FILTER_ADD: - i40e_pf_disable_irq0(hw); - if (filter->is_vf) - ret = i40e_vf_mac_filter_set(pf, filter, 1); - i40e_pf_enable_irq0(hw); - break; - case RTE_ETH_FILTER_DELETE: - i40e_pf_disable_irq0(hw); - if (filter->is_vf) - ret = i40e_vf_mac_filter_set(pf, filter, 0); - i40e_pf_enable_irq0(hw); - break; - default: - PMD_DRV_LOG(ERR, "unknown operation %u", filter_op); - ret = I40E_ERR_PARAM; - break; - } - - return ret; -} - -static int -i40e_dev_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t lut, l; - uint16_t i, j, lut_size = pf->hash_lut_size; - uint16_t idx, shift; - uint8_t mask; - - if (reta_size != lut_size || - reta_size > ETH_RSS_RETA_SIZE_512) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)\n", reta_size, lut_size); - return -EINVAL; - } - - for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) { - idx = i / RTE_RETA_GROUP_SIZE; - shift = i % RTE_RETA_GROUP_SIZE; - mask = (uint8_t)((reta_conf[idx].mask >> shift) & - I40E_4_BIT_MASK); - if (!mask) - continue; - if (mask == I40E_4_BIT_MASK) - l = 0; - else - l = I40E_READ_REG(hw, I40E_PFQF_HLUT(i >> 2)); - for (j = 0, lut = 0; j < I40E_4_BIT_WIDTH; j++) { - if (mask & (0x1 << j)) - lut |= reta_conf[idx].reta[shift + j] << - (CHAR_BIT * j); - else - lut |= l & (I40E_8_BIT_MASK << (CHAR_BIT * j)); - } - I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut); - } - - return 0; -} - -static int -i40e_dev_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t lut; - uint16_t i, j, lut_size = pf->hash_lut_size; - uint16_t idx, shift; - uint8_t mask; - - if (reta_size != lut_size || - reta_size > ETH_RSS_RETA_SIZE_512) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)\n", reta_size, lut_size); - return -EINVAL; - } - - for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) { - idx = i / RTE_RETA_GROUP_SIZE; - shift = i % RTE_RETA_GROUP_SIZE; - mask = (uint8_t)((reta_conf[idx].mask >> shift) & - I40E_4_BIT_MASK); - if (!mask) - continue; - - lut = I40E_READ_REG(hw, I40E_PFQF_HLUT(i >> 2)); - for (j = 0; j < I40E_4_BIT_WIDTH; j++) { - if (mask & (0x1 << j)) - reta_conf[idx].reta[shift + j] = ((lut >> - (CHAR_BIT * j)) & I40E_8_BIT_MASK); - } - } - - return 0; -} - -/** - * i40e_allocate_dma_mem_d - specific memory alloc for shared code (base driver) - * @hw: pointer to the HW structure - * @mem: pointer to mem struct to fill out - * @size: size of memory requested - * @alignment: what to align the allocation to - **/ -enum i40e_status_code -i40e_allocate_dma_mem_d(__attribute__((unused)) struct i40e_hw *hw, - struct i40e_dma_mem *mem, - u64 size, - u32 alignment) -{ - static uint64_t id = 0; - const struct rte_memzone *mz = NULL; - char z_name[RTE_MEMZONE_NAMESIZE]; - - if (!mem) - return I40E_ERR_PARAM; - - id++; - snprintf(z_name, sizeof(z_name), "i40e_dma_%"PRIu64, id); -#ifdef RTE_LIBRTE_XEN_DOM0 - mz = 
rte_memzone_reserve_bounded(z_name, size, 0, 0, alignment, - RTE_PGSIZE_2M); -#else - mz = rte_memzone_reserve_aligned(z_name, size, 0, 0, alignment); -#endif - if (!mz) - return I40E_ERR_NO_MEMORY; - - mem->id = id; - mem->size = size; - mem->va = mz->addr; -#ifdef RTE_LIBRTE_XEN_DOM0 - mem->pa = rte_mem_phy2mch(mz->memseg_id, mz->phys_addr); -#else - mem->pa = mz->phys_addr; -#endif - - return I40E_SUCCESS; -} - -/** - * i40e_free_dma_mem_d - specific memory free for shared code (base driver) - * @hw: pointer to the HW structure - * @mem: ptr to mem struct to free - **/ -enum i40e_status_code -i40e_free_dma_mem_d(__attribute__((unused)) struct i40e_hw *hw, - struct i40e_dma_mem *mem) -{ - if (!mem || !mem->va) - return I40E_ERR_PARAM; - - mem->va = NULL; - mem->pa = (u64)0; - - return I40E_SUCCESS; -} - -/** - * i40e_allocate_virt_mem_d - specific memory alloc for shared code (base driver) - * @hw: pointer to the HW structure - * @mem: pointer to mem struct to fill out - * @size: size of memory requested - **/ -enum i40e_status_code -i40e_allocate_virt_mem_d(__attribute__((unused)) struct i40e_hw *hw, - struct i40e_virt_mem *mem, - u32 size) -{ - if (!mem) - return I40E_ERR_PARAM; - - mem->size = size; - mem->va = rte_zmalloc("i40e", size, 0); - - if (mem->va) - return I40E_SUCCESS; - else - return I40E_ERR_NO_MEMORY; -} - -/** - * i40e_free_virt_mem_d - specific memory free for shared code (base driver) - * @hw: pointer to the HW structure - * @mem: pointer to mem struct to free - **/ -enum i40e_status_code -i40e_free_virt_mem_d(__attribute__((unused)) struct i40e_hw *hw, - struct i40e_virt_mem *mem) -{ - if (!mem) - return I40E_ERR_PARAM; - - rte_free(mem->va); - mem->va = NULL; - - return I40E_SUCCESS; -} - -void -i40e_init_spinlock_d(struct i40e_spinlock *sp) -{ - rte_spinlock_init(&sp->spinlock); -} - -void -i40e_acquire_spinlock_d(struct i40e_spinlock *sp) -{ - rte_spinlock_lock(&sp->spinlock); -} - -void -i40e_release_spinlock_d(struct i40e_spinlock *sp) -{ - rte_spinlock_unlock(&sp->spinlock); -} - -void -i40e_destroy_spinlock_d(__attribute__((unused)) struct i40e_spinlock *sp) -{ - return; -} - -/** - * Get the hardware capabilities, which will be parsed - * and saved into struct i40e_hw. 
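The block of *_d hooks above is the PMD's half of the base driver's OS-abstraction contract: the shared code only ever calls i40e_allocate_dma_mem(), i40e_acquire_spinlock() and friends, and each host environment supplies the primitives, here rte_memzone, rte_zmalloc and rte_spinlock (the DMA variant additionally generates a unique memzone name from a monotonically increasing id, since memzones are looked up by name). A sketch of the same contract on plain POSIX, with hypothetical osal_* names:

    #include <pthread.h>
    #include <stdlib.h>

    struct osal_lock { pthread_mutex_t m; };

    static void osal_lock_init(struct osal_lock *l)    { pthread_mutex_init(&l->m, NULL); }
    static void osal_lock_acquire(struct osal_lock *l) { pthread_mutex_lock(&l->m); }
    static void osal_lock_release(struct osal_lock *l) { pthread_mutex_unlock(&l->m); }

    struct osal_mem { void *va; size_t size; };

    static int
    osal_mem_alloc(struct osal_mem *mem, size_t size)
    {
        mem->va = calloc(1, size);    /* rte_zmalloc() plays this role above */
        mem->size = size;
        return mem->va != NULL ? 0 : -1;
    }

    static void
    osal_mem_free(struct osal_mem *mem)
    {
        free(mem->va);
        mem->va = NULL;
    }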
- */ -static int -i40e_get_cap(struct i40e_hw *hw) -{ - struct i40e_aqc_list_capabilities_element_resp *buf; - uint16_t len, size = 0; - int ret; - - /* Allocate a buffer large enough to hold the response data temporarily */ - len = sizeof(struct i40e_aqc_list_capabilities_element_resp) * - I40E_MAX_CAP_ELE_NUM; - buf = rte_zmalloc("i40e", len, 0); - if (!buf) { - PMD_DRV_LOG(ERR, "Failed to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - /* Get and parse the capabilities, then save them to hw */ - ret = i40e_aq_discover_capabilities(hw, buf, len, &size, - i40e_aqc_opc_list_func_capabilities, NULL); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to discover capabilities"); - - /* Free the temporary buffer after use */ - rte_free(buf); - - return ret; -} - -static int -i40e_pf_parameter_init(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - uint16_t sum_queues = 0, sum_vsis, left_queues; - - /* First check whether the firmware supports SR-IOV */ - if (dev->pci_dev->max_vfs && !hw->func_caps.sr_iov_1_1) { - PMD_INIT_LOG(ERR, "HW configuration doesn't support SRIOV"); - return -EINVAL; - } - - pf->flags = I40E_FLAG_HEADER_SPLIT_DISABLED; - pf->max_num_vsi = RTE_MIN(hw->func_caps.num_vsis, I40E_MAX_NUM_VSIS); - PMD_INIT_LOG(INFO, "Max supported VSIs:%u", pf->max_num_vsi); - /* Allocate queues for the pf */ - if (hw->func_caps.rss) { - pf->flags |= I40E_FLAG_RSS; - pf->lan_nb_qps = RTE_MIN(hw->func_caps.num_tx_qp, - (uint32_t)(1 << hw->func_caps.rss_table_entry_width)); - pf->lan_nb_qps = i40e_align_floor(pf->lan_nb_qps); - } else - pf->lan_nb_qps = 1; - sum_queues = pf->lan_nb_qps; - /* The default VSI is not counted in */ - sum_vsis = 0; - PMD_INIT_LOG(INFO, "PF queue pairs:%u", pf->lan_nb_qps); - - if (hw->func_caps.sr_iov_1_1 && dev->pci_dev->max_vfs) { - pf->flags |= I40E_FLAG_SRIOV; - pf->vf_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF; - if (dev->pci_dev->max_vfs > hw->func_caps.num_vfs) { - PMD_INIT_LOG(ERR, "Configured VF number %u, " - "max supported %u.", - dev->pci_dev->max_vfs, - hw->func_caps.num_vfs); - return -EINVAL; - } - if (pf->vf_nb_qps > I40E_MAX_QP_NUM_PER_VF) { - PMD_INIT_LOG(ERR, "VF queue number %u, " - "max supported %u queues.", - pf->vf_nb_qps, I40E_MAX_QP_NUM_PER_VF); - return -EINVAL; - } - pf->vf_num = dev->pci_dev->max_vfs; - sum_queues += pf->vf_nb_qps * pf->vf_num; - sum_vsis += pf->vf_num; - PMD_INIT_LOG(INFO, "Max VF num:%u each has queue pairs:%u", - pf->vf_num, pf->vf_nb_qps); - } else - pf->vf_num = 0; - - if (hw->func_caps.vmdq) { - pf->flags |= I40E_FLAG_VMDQ; - pf->vmdq_nb_qps = RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM; - pf->max_nb_vmdq_vsi = 1; - /* - * If VMDQ is available, assume a single VSI can be created; - * adjusted later. - */ - sum_queues += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi; - sum_vsis += pf->max_nb_vmdq_vsi; - } else { - pf->vmdq_nb_qps = 0; - pf->max_nb_vmdq_vsi = 0; - } - pf->nb_cfg_vmdq_vsi = 0; - - if (hw->func_caps.fd) { - pf->flags |= I40E_FLAG_FDIR; - pf->fdir_nb_qps = I40E_DEFAULT_QP_NUM_FDIR; - /** - * Each flow director consumes one VSI and one queue, - * but the exact usage cannot be predicted here.
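A worked example of the accounting done in i40e_pf_parameter_init() above, under the assumption that i40e_align_floor() returns the largest power of two not exceeding its argument; the capability numbers below are made up:

#include <stdint.h>
#include <stdio.h>

/* Largest power of two <= n; uses the GCC/Clang clz builtin. */
static uint32_t align_floor(uint32_t n)
{
	if (n == 0)
		return 0;
	return 1u << (31 - __builtin_clz(n));
}

int main(void)
{
	uint32_t lan_qps  = align_floor(24);          /* -> 16 */
	uint32_t vf_qps   = 4, vf_num = 8;            /* SR-IOV share */
	uint32_t vmdq_qps = 4, vmdq_vsi = 1;          /* one VMDQ VSI assumed */
	uint32_t sum = lan_qps + vf_qps * vf_num + vmdq_qps * vmdq_vsi;

	printf("lan=%u, total asked=%u\n", lan_qps, sum); /* lan=16, total=52 */
	return sum <= 128 ? 0 : 1;   /* compared against num_rx_qp, e.g. 128 */
}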
- */ - } - - if (sum_vsis > pf->max_num_vsi || - sum_queues > hw->func_caps.num_rx_qp) { - PMD_INIT_LOG(ERR, "VSI/QUEUE setting can't be satisfied"); - PMD_INIT_LOG(ERR, "Max VSIs: %u, asked:%u", - pf->max_num_vsi, sum_vsis); - PMD_INIT_LOG(ERR, "Total queue pairs:%u, asked:%u", - hw->func_caps.num_rx_qp, sum_queues); - return -EINVAL; - } - - /* Adjust the VMDQ setting to support as many VMs as possible */ - if (pf->flags & I40E_FLAG_VMDQ) { - left_queues = hw->func_caps.num_rx_qp - sum_queues; - - pf->max_nb_vmdq_vsi += RTE_MIN(left_queues / pf->vmdq_nb_qps, - pf->max_num_vsi - sum_vsis); - - /* Limit the max VMDQ number to what rte_ether can support */ - pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi, - ETH_64_POOLS - 1); - - PMD_INIT_LOG(INFO, "Max VMDQ VSI num:%u", - pf->max_nb_vmdq_vsi); - PMD_INIT_LOG(INFO, "VMDQ queue pairs:%u", pf->vmdq_nb_qps); - } - - /* Each VSI occupies at least one MSI-X interrupt, plus IRQ0 for the - * misc interrupt cause */ - if (sum_vsis > hw->func_caps.num_msix_vectors - 1) { - PMD_INIT_LOG(ERR, "Too many VSIs(%u), MSIX intr(%u) not enough", - sum_vsis, hw->func_caps.num_msix_vectors); - return -EINVAL; - } - return I40E_SUCCESS; -} - -static int -i40e_pf_get_switch_config(struct i40e_pf *pf) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_aqc_get_switch_config_resp *switch_config; - struct i40e_aqc_switch_config_element_resp *element; - uint16_t start_seid = 0, num_reported; - int ret; - - switch_config = (struct i40e_aqc_get_switch_config_resp *)\ - rte_zmalloc("i40e", I40E_AQ_LARGE_BUF, 0); - if (!switch_config) { - PMD_DRV_LOG(ERR, "Failed to allocate memory"); - return -ENOMEM; - } - - /* Get the switch configurations */ - ret = i40e_aq_get_switch_config(hw, switch_config, - I40E_AQ_LARGE_BUF, &start_seid, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to get switch configurations"); - goto fail; - } - num_reported = rte_le_to_cpu_16(switch_config->header.num_reported); - if (num_reported != 1) { /* The number should be 1 */ - PMD_DRV_LOG(ERR, "Wrong number of switch config reported"); - goto fail; - } - - /* Parse the switch configuration elements */ - element = &(switch_config->element[0]); - if (element->element_type == I40E_SWITCH_ELEMENT_TYPE_VSI) { - pf->mac_seid = rte_le_to_cpu_16(element->uplink_seid); - pf->main_vsi_seid = rte_le_to_cpu_16(element->seid); - } else - PMD_DRV_LOG(INFO, "Unknown element type"); - -fail: - rte_free(switch_config); - - return ret; -} - -static int -i40e_res_pool_init (struct i40e_res_pool_info *pool, uint32_t base, - uint32_t num) -{ - struct pool_entry *entry; - - if (pool == NULL || num == 0) - return -EINVAL; - - entry = rte_zmalloc("i40e", sizeof(*entry), 0); - if (entry == NULL) { - PMD_DRV_LOG(ERR, "Failed to allocate memory for resource pool"); - return -ENOMEM; - } - - /* Initialize the queue heap */ - pool->num_free = num; - pool->num_alloc = 0; - pool->base = base; - LIST_INIT(&pool->alloc_list); - LIST_INIT(&pool->free_list); - - /* Initialize the element */ - entry->base = 0; - entry->len = num; - - LIST_INSERT_HEAD(&pool->free_list, entry, next); - return 0; -} - -static void -i40e_res_pool_destroy(struct i40e_res_pool_info *pool) -{ - struct pool_entry *entry; - - if (pool == NULL) - return; - - LIST_FOREACH(entry, &pool->alloc_list, next) { - LIST_REMOVE(entry, next); - rte_free(entry); - } - - LIST_FOREACH(entry, &pool->free_list, next) { - LIST_REMOVE(entry, next); - rte_free(entry); - } - - pool->num_free = 0; - pool->num_alloc = 0; - pool->base = 0; - LIST_INIT(&pool->alloc_list); -
LIST_INIT(&pool->free_list); -} - -static int -i40e_res_pool_free(struct i40e_res_pool_info *pool, - uint32_t base) -{ - struct pool_entry *entry, *next, *prev, *valid_entry = NULL; - uint32_t pool_offset; - int insert; - - if (pool == NULL) { - PMD_DRV_LOG(ERR, "Invalid parameter"); - return -EINVAL; - } - - pool_offset = base - pool->base; - /* Look it up in the alloc list */ - LIST_FOREACH(entry, &pool->alloc_list, next) { - if (entry->base == pool_offset) { - valid_entry = entry; - LIST_REMOVE(entry, next); - break; - } - } - - /* Not found, return */ - if (valid_entry == NULL) { - PMD_DRV_LOG(ERR, "Failed to find entry"); - return -EINVAL; - } - - /** - * Found it; move it to the free list and try to merge. - * To make merging easier, the list is kept sorted by qbase. - * Find the adjacent prev and next entries. - */ - prev = next = NULL; - LIST_FOREACH(entry, &pool->free_list, next) { - if (entry->base > valid_entry->base) { - next = entry; - break; - } - prev = entry; - } - - insert = 0; - /* Try to merge with the next one */ - if (next != NULL) { - /* Merge with the next one */ - if (valid_entry->base + valid_entry->len == next->base) { - next->base = valid_entry->base; - next->len += valid_entry->len; - rte_free(valid_entry); - valid_entry = next; - insert = 1; - } - } - - if (prev != NULL) { - /* Merge with the previous one */ - if (prev->base + prev->len == valid_entry->base) { - prev->len += valid_entry->len; - /* If it already merged with the next one, remove that node */ - if (insert == 1) { - LIST_REMOVE(valid_entry, next); - rte_free(valid_entry); - } else { - rte_free(valid_entry); - insert = 1; - } - } - } - - /* No entry to merge with was found, insert */ - if (insert == 0) { - if (prev != NULL) - LIST_INSERT_AFTER(prev, valid_entry, next); - else if (next != NULL) - LIST_INSERT_BEFORE(next, valid_entry, next); - else /* The list is empty, insert at the head */ - LIST_INSERT_HEAD(&pool->free_list, valid_entry, next); - } - - pool->num_free += valid_entry->len; - pool->num_alloc -= valid_entry->len; - - return 0; -} - -static int -i40e_res_pool_alloc(struct i40e_res_pool_info *pool, - uint16_t num) -{ - struct pool_entry *entry, *valid_entry; - - if (pool == NULL || num == 0) { - PMD_DRV_LOG(ERR, "Invalid parameter"); - return -EINVAL; - } - - if (pool->num_free < num) { - PMD_DRV_LOG(ERR, "No resource. ask:%u, available:%u", - num, pool->num_free); - return -ENOMEM; - } - - valid_entry = NULL; - /* Look up the free list and find the best-fit entry */ - LIST_FOREACH(entry, &pool->free_list, next) { - if (entry->len >= num) { - /* An exact fit is the best one */ - if (entry->len == num) { - valid_entry = entry; - break; - } - if (valid_entry == NULL || valid_entry->len > entry->len) - valid_entry = entry; - } - } - - /* No entry satisfies the request, return */ - if (valid_entry == NULL) { - PMD_DRV_LOG(ERR, "No valid entry found"); - return -ENOMEM; - } - /** - * The entry has exactly the requested number of queues; - * remove it from the free list. - */ - if (valid_entry->len == num) { - LIST_REMOVE(valid_entry, next); - } else { - /** - * The entry has more queues than requested; - * create a new entry for the alloc list and shrink - * the base and length of the free-list entry.
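The free-list policy above is plain best fit: an exact-length entry is taken outright, otherwise the smallest entry that still fits is split. A self-contained sketch of the selection step over a flat array (the linked-list bookkeeping is omitted):

#include <stdint.h>
#include <stdio.h>

struct range { uint32_t base, len; };

/* Same policy as i40e_res_pool_alloc() above: return the index of the
 * smallest free range that still fits the request, or -1 if none. */
static int best_fit(const struct range *fl, int n, uint32_t num)
{
	int best = -1;
	for (int i = 0; i < n; i++) {
		if (fl[i].len < num)
			continue;
		if (fl[i].len == num)
			return i;        /* exact fit wins immediately */
		if (best < 0 || fl[i].len < fl[best].len)
			best = i;
	}
	return best;
}

int main(void)
{
	struct range fl[] = { {0, 8}, {16, 4}, {32, 64} };
	int i = best_fit(fl, 3, 4);
	if (i >= 0)
		printf("picked base=%u len=%u\n", fl[i].base, fl[i].len); /* 16, 4 */
	return 0;
}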
- */ - entry = rte_zmalloc("res_pool", sizeof(*entry), 0); - if (entry == NULL) { - PMD_DRV_LOG(ERR, "Failed to allocate memory for " - "resource pool"); - return -ENOMEM; - } - entry->base = valid_entry->base; - entry->len = num; - valid_entry->base += num; - valid_entry->len -= num; - valid_entry = entry; - } - - /* Insert it into alloc list, not sorted */ - LIST_INSERT_HEAD(&pool->alloc_list, valid_entry, next); - - pool->num_free -= valid_entry->len; - pool->num_alloc += valid_entry->len; - - return (valid_entry->base + pool->base); -} - -/** - * bitmap_is_subset - Check whether src2 is subset of src1 - **/ -static inline int -bitmap_is_subset(uint8_t src1, uint8_t src2) -{ - return !((src1 ^ src2) & src2); -} - -static int -validate_tcmap_parameter(struct i40e_vsi *vsi, uint8_t enabled_tcmap) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - - /* If DCB is not supported, only default TC is supported */ - if (!hw->func_caps.dcb && enabled_tcmap != I40E_DEFAULT_TCMAP) { - PMD_DRV_LOG(ERR, "DCB is not enabled, only TC0 is supported"); - return -EINVAL; - } - - if (!bitmap_is_subset(hw->func_caps.enabled_tcmap, enabled_tcmap)) { - PMD_DRV_LOG(ERR, "Enabled TC map 0x%x not applicable to " - "HW support 0x%x", hw->func_caps.enabled_tcmap, - enabled_tcmap); - return -EINVAL; - } - return I40E_SUCCESS; -} - -int -i40e_vsi_vlan_pvid_set(struct i40e_vsi *vsi, - struct i40e_vsi_vlan_pvid_info *info) -{ - struct i40e_hw *hw; - struct i40e_vsi_context ctxt; - uint8_t vlan_flags = 0; - int ret; - - if (vsi == NULL || info == NULL) { - PMD_DRV_LOG(ERR, "invalid parameters"); - return I40E_ERR_PARAM; - } - - if (info->on) { - vsi->info.pvid = info->config.pvid; - /** - * If insert pvid is enabled, only tagged pkts are - * allowed to be sent out. - */ - vlan_flags |= I40E_AQ_VSI_PVLAN_INSERT_PVID | - I40E_AQ_VSI_PVLAN_MODE_TAGGED; - } else { - vsi->info.pvid = 0; - if (info->config.reject.tagged == 0) - vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_TAGGED; - - if (info->config.reject.untagged == 0) - vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_UNTAGGED; - } - vsi->info.port_vlan_flags &= ~(I40E_AQ_VSI_PVLAN_INSERT_PVID | - I40E_AQ_VSI_PVLAN_MODE_MASK); - vsi->info.port_vlan_flags |= vlan_flags; - vsi->info.valid_sections = - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); - memset(&ctxt, 0, sizeof(ctxt)); - (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info)); - ctxt.seid = vsi->seid; - - hw = I40E_VSI_TO_HW(vsi); - ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to update VSI params"); - - return ret; -} - -static int -i40e_vsi_update_tc_bandwidth(struct i40e_vsi *vsi, uint8_t enabled_tcmap) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - int i, ret; - struct i40e_aqc_configure_vsi_tc_bw_data tc_bw_data; - - ret = validate_tcmap_parameter(vsi, enabled_tcmap); - if (ret != I40E_SUCCESS) - return ret; - - if (!vsi->seid) { - PMD_DRV_LOG(ERR, "seid not valid"); - return -EINVAL; - } - - memset(&tc_bw_data, 0, sizeof(tc_bw_data)); - tc_bw_data.tc_valid_bits = enabled_tcmap; - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) - tc_bw_data.tc_bw_credits[i] = - (enabled_tcmap & (1 << i)) ? 
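The subset test above is a two-operation bit trick: (src1 ^ src2) & src2 isolates exactly the bits of src2 that src1 lacks. A compilable demonstration:

#include <assert.h>
#include <stdint.h>

/* Same test as bitmap_is_subset() above: src2 is a subset of src1
 * iff no bit of src2 is missing from src1. */
static int is_subset(uint8_t src1, uint8_t src2)
{
	return !((src1 ^ src2) & src2);
}

int main(void)
{
	assert(is_subset(0x0F, 0x05));   /* TC0+TC2 within TC0..TC3: ok */
	assert(!is_subset(0x0F, 0x10));  /* TC4 not offered by HW: fails */
	return 0;
}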
1 : 0; - - ret = i40e_aq_config_vsi_tc_bw(hw, vsi->seid, &tc_bw_data, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to configure TC BW"); - return ret; - } - - (void)rte_memcpy(vsi->info.qs_handle, tc_bw_data.qs_handles, - sizeof(vsi->info.qs_handle)); - return I40E_SUCCESS; -} - -static int -i40e_vsi_config_tc_queue_mapping(struct i40e_vsi *vsi, - struct i40e_aqc_vsi_properties_data *info, - uint8_t enabled_tcmap) -{ - int ret, total_tc = 0, i; - uint16_t qpnum_per_tc, bsf, qp_idx; - - ret = validate_tcmap_parameter(vsi, enabled_tcmap); - if (ret != I40E_SUCCESS) - return ret; - - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) - if (enabled_tcmap & (1 << i)) - total_tc++; - vsi->enabled_tc = enabled_tcmap; - - /* Number of queues per enabled TC */ - qpnum_per_tc = i40e_align_floor(vsi->nb_qps / total_tc); - qpnum_per_tc = RTE_MIN(qpnum_per_tc, I40E_MAX_Q_PER_TC); - bsf = rte_bsf32(qpnum_per_tc); - - /* Adjust the queue number to the actual queues that can be applied */ - vsi->nb_qps = qpnum_per_tc * total_tc; - - /** - * Configure the TC and queue mapping parameters: each enabled TC - * is allocated qpnum_per_tc queues for its traffic; disabled TCs - * are served by the default queue. - */ - qp_idx = 0; - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { - if (vsi->enabled_tc & (1 << i)) { - info->tc_mapping[i] = rte_cpu_to_le_16((qp_idx << - I40E_AQ_VSI_TC_QUE_OFFSET_SHIFT) | - (bsf << I40E_AQ_VSI_TC_QUE_NUMBER_SHIFT)); - qp_idx += qpnum_per_tc; - } else - info->tc_mapping[i] = 0; - } - - /* Associate the queue numbers with the VSI */ - if (vsi->type == I40E_VSI_SRIOV) { - info->mapping_flags |= - rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_NONCONTIG); - for (i = 0; i < vsi->nb_qps; i++) - info->queue_mapping[i] = - rte_cpu_to_le_16(vsi->base_queue + i); - } else { - info->mapping_flags |= - rte_cpu_to_le_16(I40E_AQ_VSI_QUE_MAP_CONTIG); - info->queue_mapping[0] = rte_cpu_to_le_16(vsi->base_queue); - } - info->valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_QUEUE_MAP_VALID); - - return I40E_SUCCESS; -} - -static int -i40e_veb_release(struct i40e_veb *veb) -{ - struct i40e_vsi *vsi; - struct i40e_hw *hw; - - if (veb == NULL || veb->associate_vsi == NULL) - return -EINVAL; - - if (!TAILQ_EMPTY(&veb->head)) { - PMD_DRV_LOG(ERR, "VEB still has VSIs attached, can't remove"); - return -EACCES; - } - - vsi = veb->associate_vsi; - hw = I40E_VSI_TO_HW(vsi); - - vsi->uplink_seid = veb->uplink_seid; - i40e_aq_delete_element(hw, veb->seid, NULL); - rte_free(veb); - vsi->veb = NULL; - return I40E_SUCCESS; -} - -/* Set up a veb */ -static struct i40e_veb * -i40e_veb_setup(struct i40e_pf *pf, struct i40e_vsi *vsi) -{ - struct i40e_veb *veb; - int ret; - struct i40e_hw *hw; - - if (NULL == pf || vsi == NULL) { - PMD_DRV_LOG(ERR, "veb setup failed, " - "associated VSI shouldn't be NULL"); - return NULL; - } - hw = I40E_PF_TO_HW(pf); - - veb = rte_zmalloc("i40e_veb", sizeof(struct i40e_veb), 0); - if (!veb) { - PMD_DRV_LOG(ERR, "Failed to allocate memory for veb"); - goto fail; - } - - veb->associate_vsi = vsi; - TAILQ_INIT(&veb->head); - veb->uplink_seid = vsi->uplink_seid; - - ret = i40e_aq_add_veb(hw, veb->uplink_seid, vsi->seid, - I40E_DEFAULT_TCMAP, false, false, &veb->seid, NULL); - - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Add veb failed, aq_err: %d", - hw->aq.asq_last_status); - goto fail; - } - - /* get the statistics index */ - ret = i40e_aq_get_veb_parameters(hw, veb->seid, NULL, NULL, - &veb->stats_idx, NULL, NULL, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Get veb statistics index failed,
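For the TC queue mapping built above, each 16-bit tc_mapping word carries the first-queue offset together with the queue count encoded as a power-of-two exponent (rte_bsf32 of the per-TC queue number). A sketch assuming the adminq layout of offset at bit 0 and size exponent at bit 9; treat these shift values as assumptions, not quotes of the real I40E_AQ_VSI_TC_QUE_* constants:

#include <stdint.h>
#include <stdio.h>

/* Assumed layout: bits 0..8 first queue, bits 9..12 log2(queue count). */
static uint16_t tc_map_word(uint16_t first_q, uint16_t log2_nb_q)
{
	return (uint16_t)((first_q << 0) | (log2_nb_q << 9));
}

int main(void)
{
	/* TC0: queues 0..3 -> offset 0, 2^2 queues */
	printf("0x%04x\n", tc_map_word(0, 2));   /* 0x0400 */
	/* TC1: queues 4..7 -> offset 4, 2^2 queues */
	printf("0x%04x\n", tc_map_word(4, 2));   /* 0x0404 */
	return 0;
}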
aq_err: %d", - hw->aq.asq_last_status); - goto fail; - } - - /* Get VEB bandwidth, to be implemented */ - /* Now associated vsi binding to the VEB, set uplink to this VEB */ - vsi->uplink_seid = veb->seid; - - return veb; -fail: - rte_free(veb); - return NULL; -} - -int -i40e_vsi_release(struct i40e_vsi *vsi) -{ - struct i40e_pf *pf; - struct i40e_hw *hw; - struct i40e_vsi_list *vsi_list; - int ret; - struct i40e_mac_filter *f; - - if (!vsi) - return I40E_SUCCESS; - - pf = I40E_VSI_TO_PF(vsi); - hw = I40E_VSI_TO_HW(vsi); - - /* VSI has child to attach, release child first */ - if (vsi->veb) { - TAILQ_FOREACH(vsi_list, &vsi->veb->head, list) { - if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS) - return -1; - TAILQ_REMOVE(&vsi->veb->head, vsi_list, list); - } - i40e_veb_release(vsi->veb); - } - - /* Remove all macvlan filters of the VSI */ - i40e_vsi_remove_all_macvlan_filter(vsi); - TAILQ_FOREACH(f, &vsi->mac_list, next) - rte_free(f); - - if (vsi->type != I40E_VSI_MAIN) { - /* Remove vsi from parent's sibling list */ - if (vsi->parent_vsi == NULL || vsi->parent_vsi->veb == NULL) { - PMD_DRV_LOG(ERR, "VSI's parent VSI is NULL"); - return I40E_ERR_PARAM; - } - TAILQ_REMOVE(&vsi->parent_vsi->veb->head, - &vsi->sib_vsi_list, list); - - /* Remove all switch element of the VSI */ - ret = i40e_aq_delete_element(hw, vsi->seid, NULL); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to delete element"); - } - i40e_res_pool_free(&pf->qp_pool, vsi->base_queue); - - if (vsi->type != I40E_VSI_SRIOV) - i40e_res_pool_free(&pf->msix_pool, vsi->msix_intr); - rte_free(vsi); - - return I40E_SUCCESS; -} - -static int -i40e_update_default_filter_setting(struct i40e_vsi *vsi) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - struct i40e_aqc_remove_macvlan_element_data def_filter; - struct i40e_mac_filter_info filter; - int ret; - - if (vsi->type != I40E_VSI_MAIN) - return I40E_ERR_CONFIG; - memset(&def_filter, 0, sizeof(def_filter)); - (void)rte_memcpy(def_filter.mac_addr, hw->mac.perm_addr, - ETH_ADDR_LEN); - def_filter.vlan_tag = 0; - def_filter.flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH | - I40E_AQC_MACVLAN_DEL_IGNORE_VLAN; - ret = i40e_aq_remove_macvlan(hw, vsi->seid, &def_filter, 1, NULL); - if (ret != I40E_SUCCESS) { - struct i40e_mac_filter *f; - struct ether_addr *mac; - - PMD_DRV_LOG(WARNING, "Cannot remove the default " - "macvlan filter"); - /* It needs to add the permanent mac into mac list */ - f = rte_zmalloc("macv_filter", sizeof(*f), 0); - if (f == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - mac = &f->mac_info.mac_addr; - (void)rte_memcpy(&mac->addr_bytes, hw->mac.perm_addr, - ETH_ADDR_LEN); - f->mac_info.filter_type = RTE_MACVLAN_PERFECT_MATCH; - TAILQ_INSERT_TAIL(&vsi->mac_list, f, next); - vsi->mac_num++; - - return ret; - } - (void)rte_memcpy(&filter.mac_addr, - (struct ether_addr *)(hw->mac.perm_addr), ETH_ADDR_LEN); - filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; - return i40e_vsi_add_mac(vsi, &filter); -} - -static int -i40e_vsi_dump_bw_config(struct i40e_vsi *vsi) -{ - struct i40e_aqc_query_vsi_bw_config_resp bw_config; - struct i40e_aqc_query_vsi_ets_sla_config_resp ets_sla_config; - struct i40e_hw *hw = &vsi->adapter->hw; - i40e_status ret; - int i; - - memset(&bw_config, 0, sizeof(bw_config)); - ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "VSI failed to get bandwidth configuration %u", - hw->aq.asq_last_status); - return ret; - } - - memset(&ets_sla_config, 
0, sizeof(ets_sla_config)); - ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid, - &ets_sla_config, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "VSI failed to get TC bandwidth " - "configuration %u", hw->aq.asq_last_status); - return ret; - } - - /* Don't store the info yet, just print it out */ - PMD_DRV_LOG(INFO, "VSI bw limit:%u", bw_config.port_bw_limit); - PMD_DRV_LOG(INFO, "VSI max_bw:%u", bw_config.max_bw); - for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) { - PMD_DRV_LOG(INFO, "\tVSI TC%u:share credits %u", i, - ets_sla_config.share_credits[i]); - PMD_DRV_LOG(INFO, "\tVSI TC%u:credits %u", i, - rte_le_to_cpu_16(ets_sla_config.credits[i])); - PMD_DRV_LOG(INFO, "\tVSI TC%u: max credits: %u", i, - rte_le_to_cpu_16(ets_sla_config.credits[i / 4]) >> - (i * 4)); - } - - return 0; -} - -/* Set up a VSI */ -struct i40e_vsi * -i40e_vsi_setup(struct i40e_pf *pf, - enum i40e_vsi_type type, - struct i40e_vsi *uplink_vsi, - uint16_t user_param) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_vsi *vsi; - struct i40e_mac_filter_info filter; - int ret; - struct i40e_vsi_context ctxt; - struct ether_addr broadcast = - {.addr_bytes = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff}}; - - if (type != I40E_VSI_MAIN && uplink_vsi == NULL) { - PMD_DRV_LOG(ERR, "VSI setup failed, " - "uplink VSI shouldn't be NULL"); - return NULL; - } - - if (type == I40E_VSI_MAIN && uplink_vsi != NULL) { - PMD_DRV_LOG(ERR, "VSI setup failed, MAIN VSI " - "uplink VSI should be NULL"); - return NULL; - } - - /* If the uplink VSI hasn't set up a VEB, create one first */ - if (type != I40E_VSI_MAIN && uplink_vsi->veb == NULL) { - uplink_vsi->veb = i40e_veb_setup(pf, uplink_vsi); - - if (NULL == uplink_vsi->veb) { - PMD_DRV_LOG(ERR, "VEB setup failed"); - return NULL; - } - } - - vsi = rte_zmalloc("i40e_vsi", sizeof(struct i40e_vsi), 0); - if (!vsi) { - PMD_DRV_LOG(ERR, "Failed to allocate memory for vsi"); - return NULL; - } - TAILQ_INIT(&vsi->mac_list); - vsi->type = type; - vsi->adapter = I40E_PF_TO_ADAPTER(pf); - vsi->max_macaddrs = I40E_NUM_MACADDR_MAX; - vsi->parent_vsi = uplink_vsi; - vsi->user_param = user_param; - /* Allocate queues */ - switch (vsi->type) { - case I40E_VSI_MAIN : - vsi->nb_qps = pf->lan_nb_qps; - break; - case I40E_VSI_SRIOV : - vsi->nb_qps = pf->vf_nb_qps; - break; - case I40E_VSI_VMDQ2: - vsi->nb_qps = pf->vmdq_nb_qps; - break; - case I40E_VSI_FDIR: - vsi->nb_qps = pf->fdir_nb_qps; - break; - default: - goto fail_mem; - } - /* - * The filter status descriptor is reported on rx queue 0, - * while the tx queue for fdir filter programming has no - * such constraint and can be any queue. - * To simplify, make the FDIR VSI use queue pair 0.
- * To ensure it uses queue pair 0, queue allocation - * must be done before this function is called - */ - if (type != I40E_VSI_FDIR) { - ret = i40e_res_pool_alloc(&pf->qp_pool, vsi->nb_qps); - if (ret < 0) { - PMD_DRV_LOG(ERR, "VSI %d allocate queue failed %d", - vsi->seid, ret); - goto fail_mem; - } - vsi->base_queue = ret; - } else - vsi->base_queue = I40E_FDIR_QUEUE_ID; - - /* A VF has its MSIX interrupts in the VF range, don't allocate here */ - if (type != I40E_VSI_SRIOV) { - ret = i40e_res_pool_alloc(&pf->msix_pool, 1); - if (ret < 0) { - PMD_DRV_LOG(ERR, "VSI %d get heap failed %d", vsi->seid, ret); - goto fail_queue_alloc; - } - vsi->msix_intr = ret; - } else - vsi->msix_intr = 0; - /* Add the VSI */ - if (type == I40E_VSI_MAIN) { - /* For the main VSI, no need to add it since it's the default one */ - vsi->uplink_seid = pf->mac_seid; - vsi->seid = pf->main_vsi_seid; - /* Bind queues to a specific MSIX interrupt */ - /** - * At least two interrupts are needed: one for the misc cause, - * enabled from the OS side, and another for binding the queues - * to the interrupt from the device side only. - */ - - /* Get the default VSI parameters from hardware */ - memset(&ctxt, 0, sizeof(ctxt)); - ctxt.seid = vsi->seid; - ctxt.pf_num = hw->pf_id; - ctxt.uplink_seid = vsi->uplink_seid; - ctxt.vf_num = 0; - ret = i40e_aq_get_vsi_params(hw, &ctxt, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to get VSI params"); - goto fail_msix_alloc; - } - (void)rte_memcpy(&vsi->info, &ctxt.info, - sizeof(struct i40e_aqc_vsi_properties_data)); - vsi->vsi_id = ctxt.vsi_number; - vsi->info.valid_sections = 0; - - /* Configure TC, enable TC0 only */ - if (i40e_vsi_update_tc_bandwidth(vsi, I40E_DEFAULT_TCMAP) != - I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to update TC bandwidth"); - goto fail_msix_alloc; - } - - /* TC, queue mapping */ - memset(&ctxt, 0, sizeof(ctxt)); - vsi->info.valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); - vsi->info.port_vlan_flags = I40E_AQ_VSI_PVLAN_MODE_ALL | - I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH; - (void)rte_memcpy(&ctxt.info, &vsi->info, - sizeof(struct i40e_aqc_vsi_properties_data)); - ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, - I40E_DEFAULT_TCMAP); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to configure " - "TC queue mapping"); - goto fail_msix_alloc; - } - ctxt.seid = vsi->seid; - ctxt.pf_num = hw->pf_id; - ctxt.uplink_seid = vsi->uplink_seid; - ctxt.vf_num = 0; - - /* Update the VSI parameters */ - ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to update VSI params"); - goto fail_msix_alloc; - } - - (void)rte_memcpy(&vsi->info.tc_mapping, &ctxt.info.tc_mapping, - sizeof(vsi->info.tc_mapping)); - (void)rte_memcpy(&vsi->info.queue_mapping, - &ctxt.info.queue_mapping, - sizeof(vsi->info.queue_mapping)); - vsi->info.mapping_flags = ctxt.info.mapping_flags; - vsi->info.valid_sections = 0; - - (void)rte_memcpy(pf->dev_addr.addr_bytes, hw->mac.perm_addr, - ETH_ADDR_LEN); - - /** - * Updating the default filter settings is necessary to prevent - * reception of tagged packets. - * Some old firmware configurations load a default macvlan - * filter which accepts both tagged and untagged packets. - * The update replaces it with a normal filter if needed. - * For NVM 4.2.2 or later, the update is no longer needed. - * Firmware with a correct configuration loads the expected - * default macvlan filter, which cannot be removed.
- */ - i40e_update_default_filter_setting(vsi); - } else if (type == I40E_VSI_SRIOV) { - memset(&ctxt, 0, sizeof(ctxt)); - /** - * For other VSI, the uplink_seid equals to uplink VSI's - * uplink_seid since they share same VEB - */ - vsi->uplink_seid = uplink_vsi->uplink_seid; - ctxt.pf_num = hw->pf_id; - ctxt.vf_num = hw->func_caps.vf_base_id + user_param; - ctxt.uplink_seid = vsi->uplink_seid; - ctxt.connection_type = 0x1; - ctxt.flags = I40E_AQ_VSI_TYPE_VF; - - /** - * Do not configure switch ID to enable VEB switch by - * I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB. Because in Fortville, - * if the source mac address of packet sent from VF is not - * listed in the VEB's mac table, the VEB will switch the - * packet back to the VF. Need to enable it when HW issue - * is fixed. - */ - - /* Configure port/vlan */ - ctxt.info.valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); - ctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL; - ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, - I40E_DEFAULT_TCMAP); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to configure " - "TC queue mapping"); - goto fail_msix_alloc; - } - ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP; - ctxt.info.valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID); - /** - * Since VSI is not created yet, only configure parameter, - * will add vsi below. - */ - } else if (type == I40E_VSI_VMDQ2) { - memset(&ctxt, 0, sizeof(ctxt)); - /* - * For other VSI, the uplink_seid equals to uplink VSI's - * uplink_seid since they share same VEB - */ - vsi->uplink_seid = uplink_vsi->uplink_seid; - ctxt.pf_num = hw->pf_id; - ctxt.vf_num = 0; - ctxt.uplink_seid = vsi->uplink_seid; - ctxt.connection_type = 0x1; - ctxt.flags = I40E_AQ_VSI_TYPE_VMDQ2; - - ctxt.info.valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SWITCH_VALID); - /* user_param carries flag to enable loop back */ - if (user_param) { - ctxt.info.switch_id = - rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_LOCAL_LB); - ctxt.info.switch_id |= - rte_cpu_to_le_16(I40E_AQ_VSI_SW_ID_FLAG_ALLOW_LB); - } - - /* Configure port/vlan */ - ctxt.info.valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); - ctxt.info.port_vlan_flags |= I40E_AQ_VSI_PVLAN_MODE_ALL; - ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, - I40E_DEFAULT_TCMAP); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to configure " - "TC queue mapping"); - goto fail_msix_alloc; - } - ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP; - ctxt.info.valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID); - } else if (type == I40E_VSI_FDIR) { - memset(&ctxt, 0, sizeof(ctxt)); - vsi->uplink_seid = uplink_vsi->uplink_seid; - ctxt.pf_num = hw->pf_id; - ctxt.vf_num = 0; - ctxt.uplink_seid = vsi->uplink_seid; - ctxt.connection_type = 0x1; /* regular data port */ - ctxt.flags = I40E_AQ_VSI_TYPE_PF; - ret = i40e_vsi_config_tc_queue_mapping(vsi, &ctxt.info, - I40E_DEFAULT_TCMAP); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to configure " - "TC queue mapping."); - goto fail_msix_alloc; - } - ctxt.info.up_enable_bits = I40E_DEFAULT_TCMAP; - ctxt.info.valid_sections |= - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_SCHED_VALID); - } else { - PMD_DRV_LOG(ERR, "VSI: Not support other type VSI yet"); - goto fail_msix_alloc; - } - - if (vsi->type != I40E_VSI_MAIN) { - ret = i40e_aq_add_vsi(hw, &ctxt, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "add vsi failed, aq_err=%d", - hw->aq.asq_last_status); - goto fail_msix_alloc; - } - memcpy(&vsi->info, &ctxt.info, 
sizeof(ctxt.info)); - vsi->info.valid_sections = 0; - vsi->seid = ctxt.seid; - vsi->vsi_id = ctxt.vsi_number; - vsi->sib_vsi_list.vsi = vsi; - TAILQ_INSERT_TAIL(&uplink_vsi->veb->head, - &vsi->sib_vsi_list, list); - } - - /* MAC/VLAN configuration */ - (void)rte_memcpy(&filter.mac_addr, &broadcast, ETHER_ADDR_LEN); - filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; - - ret = i40e_vsi_add_mac(vsi, &filter); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to add MACVLAN filter"); - goto fail_msix_alloc; - } - - /* Get VSI BW information */ - i40e_vsi_dump_bw_config(vsi); - return vsi; -fail_msix_alloc: - i40e_res_pool_free(&pf->msix_pool,vsi->msix_intr); -fail_queue_alloc: - i40e_res_pool_free(&pf->qp_pool,vsi->base_queue); -fail_mem: - rte_free(vsi); - return NULL; -} - -/* Configure vlan stripping on or off */ -int -i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - struct i40e_vsi_context ctxt; - uint8_t vlan_flags; - int ret = I40E_SUCCESS; - - /* Check if it has been already on or off */ - if (vsi->info.valid_sections & - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID)) { - if (on) { - if ((vsi->info.port_vlan_flags & - I40E_AQ_VSI_PVLAN_EMOD_MASK) == 0) - return 0; /* already on */ - } else { - if ((vsi->info.port_vlan_flags & - I40E_AQ_VSI_PVLAN_EMOD_MASK) == - I40E_AQ_VSI_PVLAN_EMOD_MASK) - return 0; /* already off */ - } - } - - if (on) - vlan_flags = I40E_AQ_VSI_PVLAN_EMOD_STR_BOTH; - else - vlan_flags = I40E_AQ_VSI_PVLAN_EMOD_NOTHING; - vsi->info.valid_sections = - rte_cpu_to_le_16(I40E_AQ_VSI_PROP_VLAN_VALID); - vsi->info.port_vlan_flags &= ~(I40E_AQ_VSI_PVLAN_EMOD_MASK); - vsi->info.port_vlan_flags |= vlan_flags; - ctxt.seid = vsi->seid; - (void)rte_memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info)); - ret = i40e_aq_update_vsi_params(hw, &ctxt, NULL); - if (ret) - PMD_DRV_LOG(INFO, "Update VSI failed to %s vlan stripping", - on ? 
"enable" : "disable"); - - return ret; -} - -static int -i40e_dev_init_vlan(struct rte_eth_dev *dev) -{ - struct rte_eth_dev_data *data = dev->data; - int ret; - - /* Apply vlan offload setting */ - i40e_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK); - - /* Apply double-vlan setting, not implemented yet */ - - /* Apply pvid setting */ - ret = i40e_vlan_pvid_set(dev, data->dev_conf.txmode.pvid, - data->dev_conf.txmode.hw_vlan_insert_pvid); - if (ret) - PMD_DRV_LOG(INFO, "Failed to update VSI params"); - - return ret; -} - -static int -i40e_vsi_config_double_vlan(struct i40e_vsi *vsi, int on) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - - return i40e_aq_set_port_parameters(hw, vsi->seid, 0, 1, on, NULL); -} - -static int -i40e_update_flow_control(struct i40e_hw *hw) -{ -#define I40E_LINK_PAUSE_RXTX (I40E_AQ_LINK_PAUSE_RX | I40E_AQ_LINK_PAUSE_TX) - struct i40e_link_status link_status; - uint32_t rxfc = 0, txfc = 0, reg; - uint8_t an_info; - int ret; - - memset(&link_status, 0, sizeof(link_status)); - ret = i40e_aq_get_link_info(hw, FALSE, &link_status, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to get link status information"); - goto write_reg; /* Disable flow control */ - } - - an_info = hw->phy.link_info.an_info; - if (!(an_info & I40E_AQ_AN_COMPLETED)) { - PMD_DRV_LOG(INFO, "Link auto negotiation not completed"); - ret = I40E_ERR_NOT_READY; - goto write_reg; /* Disable flow control */ - } - /** - * If link auto negotiation is enabled, flow control needs to - * be configured according to it - */ - switch (an_info & I40E_LINK_PAUSE_RXTX) { - case I40E_LINK_PAUSE_RXTX: - rxfc = 1; - txfc = 1; - hw->fc.current_mode = I40E_FC_FULL; - break; - case I40E_AQ_LINK_PAUSE_RX: - rxfc = 1; - hw->fc.current_mode = I40E_FC_RX_PAUSE; - break; - case I40E_AQ_LINK_PAUSE_TX: - txfc = 1; - hw->fc.current_mode = I40E_FC_TX_PAUSE; - break; - default: - hw->fc.current_mode = I40E_FC_NONE; - break; - } - -write_reg: - I40E_WRITE_REG(hw, I40E_PRTDCB_FCCFG, - txfc << I40E_PRTDCB_FCCFG_TFCE_SHIFT); - reg = I40E_READ_REG(hw, I40E_PRTDCB_MFLCN); - reg &= ~I40E_PRTDCB_MFLCN_RFCE_MASK; - reg |= rxfc << I40E_PRTDCB_MFLCN_RFCE_SHIFT; - I40E_WRITE_REG(hw, I40E_PRTDCB_MFLCN, reg); - - return ret; -} - -/* PF setup */ -static int -i40e_pf_setup(struct i40e_pf *pf) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_filter_control_settings settings; - struct i40e_vsi *vsi; - int ret; - - /* Clear all stats counters */ - pf->offset_loaded = FALSE; - memset(&pf->stats, 0, sizeof(struct i40e_hw_port_stats)); - memset(&pf->stats_offset, 0, sizeof(struct i40e_hw_port_stats)); - - ret = i40e_pf_get_switch_config(pf); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Could not get switch config, err %d", ret); - return ret; - } - if (pf->flags & I40E_FLAG_FDIR) { - /* make queue allocated first, let FDIR use queue pair 0*/ - ret = i40e_res_pool_alloc(&pf->qp_pool, I40E_DEFAULT_QP_NUM_FDIR); - if (ret != I40E_FDIR_QUEUE_ID) { - PMD_DRV_LOG(ERR, "queue allocation fails for FDIR :" - " ret =%d", ret); - pf->flags &= ~I40E_FLAG_FDIR; - } - } - /* main VSI setup */ - vsi = i40e_vsi_setup(pf, I40E_VSI_MAIN, NULL, 0); - if (!vsi) { - PMD_DRV_LOG(ERR, "Setup of main vsi failed"); - return I40E_ERR_NOT_READY; - } - pf->main_vsi = vsi; - - /* Configure filter control */ - memset(&settings, 0, sizeof(settings)); - if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128) - settings.hash_lut_size = I40E_HASH_LUT_SIZE_128; - else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512) - settings.hash_lut_size = 
I40E_HASH_LUT_SIZE_512; - else { - PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported\n", - hw->func_caps.rss_table_size); - return I40E_ERR_PARAM; - } - PMD_DRV_LOG(INFO, "Hardware capability of hash lookup table " - "size: %u\n", hw->func_caps.rss_table_size); - pf->hash_lut_size = hw->func_caps.rss_table_size; - - /* Enable ethtype and macvlan filters */ - settings.enable_ethtype = TRUE; - settings.enable_macvlan = TRUE; - ret = i40e_set_filter_control(hw, &settings); - if (ret) - PMD_INIT_LOG(WARNING, "setup_pf_filter_control failed: %d", - ret); - - /* Update flow control according to the auto negotiation */ - i40e_update_flow_control(hw); - - return I40E_SUCCESS; -} - -int -i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on) -{ - uint32_t reg; - uint16_t j; - - /** - * Set or clear TX Queue Disable flags, - * which is required by hardware. - */ - i40e_pre_tx_queue_cfg(hw, q_idx, on); - rte_delay_us(I40E_PRE_TX_Q_CFG_WAIT_US); - - /* Wait until the request is finished */ - for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { - rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); - reg = I40E_READ_REG(hw, I40E_QTX_ENA(q_idx)); - if (!(((reg >> I40E_QTX_ENA_QENA_REQ_SHIFT) & 0x1) ^ - ((reg >> I40E_QTX_ENA_QENA_STAT_SHIFT) - & 0x1))) { - break; - } - } - if (on) { - if (reg & I40E_QTX_ENA_QENA_STAT_MASK) - return I40E_SUCCESS; /* already on, skip next steps */ - - I40E_WRITE_REG(hw, I40E_QTX_HEAD(q_idx), 0); - reg |= I40E_QTX_ENA_QENA_REQ_MASK; - } else { - if (!(reg & I40E_QTX_ENA_QENA_STAT_MASK)) - return I40E_SUCCESS; /* already off, skip next steps */ - reg &= ~I40E_QTX_ENA_QENA_REQ_MASK; - } - /* Write the register */ - I40E_WRITE_REG(hw, I40E_QTX_ENA(q_idx), reg); - /* Check the result */ - for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { - rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); - reg = I40E_READ_REG(hw, I40E_QTX_ENA(q_idx)); - if (on) { - if ((reg & I40E_QTX_ENA_QENA_REQ_MASK) && - (reg & I40E_QTX_ENA_QENA_STAT_MASK)) - break; - } else { - if (!(reg & I40E_QTX_ENA_QENA_REQ_MASK) && - !(reg & I40E_QTX_ENA_QENA_STAT_MASK)) - break; - } - } - /* Check if it is timeout */ - if (j >= I40E_CHK_Q_ENA_COUNT) { - PMD_DRV_LOG(ERR, "Failed to %s tx queue[%u]", - (on ? 
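The queue enable/disable above follows a request/status handshake: software flips the REQ bit, hardware confirms by mirroring it into the STAT bit, and the driver polls with a bounded loop. A toy model with the device simulated in software and illustrative bit positions:

#include <stdint.h>
#include <stdio.h>

#define QENA_REQ  (1u << 0)   /* illustrative, not the real shift */
#define QENA_STAT (1u << 2)

static uint32_t qtx_ena;      /* simulated queue-enable register */

static void device_tick(void) /* pretend hardware latency */
{
	if (qtx_ena & QENA_REQ)
		qtx_ena |= QENA_STAT;
	else
		qtx_ena &= ~QENA_STAT;
}

int main(void)
{
	qtx_ena |= QENA_REQ;                  /* request enable */
	for (int j = 0; j < 10; j++) {        /* bounded poll, like the driver */
		device_tick();
		if (qtx_ena & QENA_STAT) {
			printf("queue enabled after %d polls\n", j + 1);
			return 0;
		}
	}
	printf("timeout\n");                  /* maps to I40E_ERR_TIMEOUT */
	return 1;
}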
"enable" : "disable"), q_idx); - return I40E_ERR_TIMEOUT; - } - - return I40E_SUCCESS; -} - -/* Swith on or off the tx queues */ -static int -i40e_dev_switch_tx_queues(struct i40e_pf *pf, bool on) -{ - struct rte_eth_dev_data *dev_data = pf->dev_data; - struct i40e_tx_queue *txq; - struct rte_eth_dev *dev = pf->adapter->eth_dev; - uint16_t i; - int ret; - - for (i = 0; i < dev_data->nb_tx_queues; i++) { - txq = dev_data->tx_queues[i]; - /* Don't operate the queue if not configured or - * if starting only per queue */ - if (!txq || !txq->q_set || (on && txq->tx_deferred_start)) - continue; - if (on) - ret = i40e_dev_tx_queue_start(dev, i); - else - ret = i40e_dev_tx_queue_stop(dev, i); - if ( ret != I40E_SUCCESS) - return ret; - } - - return I40E_SUCCESS; -} - -int -i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on) -{ - uint32_t reg; - uint16_t j; - - /* Wait until the request is finished */ - for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { - rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); - reg = I40E_READ_REG(hw, I40E_QRX_ENA(q_idx)); - if (!((reg >> I40E_QRX_ENA_QENA_REQ_SHIFT) & 0x1) ^ - ((reg >> I40E_QRX_ENA_QENA_STAT_SHIFT) & 0x1)) - break; - } - - if (on) { - if (reg & I40E_QRX_ENA_QENA_STAT_MASK) - return I40E_SUCCESS; /* Already on, skip next steps */ - reg |= I40E_QRX_ENA_QENA_REQ_MASK; - } else { - if (!(reg & I40E_QRX_ENA_QENA_STAT_MASK)) - return I40E_SUCCESS; /* Already off, skip next steps */ - reg &= ~I40E_QRX_ENA_QENA_REQ_MASK; - } - - /* Write the register */ - I40E_WRITE_REG(hw, I40E_QRX_ENA(q_idx), reg); - /* Check the result */ - for (j = 0; j < I40E_CHK_Q_ENA_COUNT; j++) { - rte_delay_us(I40E_CHK_Q_ENA_INTERVAL_US); - reg = I40E_READ_REG(hw, I40E_QRX_ENA(q_idx)); - if (on) { - if ((reg & I40E_QRX_ENA_QENA_REQ_MASK) && - (reg & I40E_QRX_ENA_QENA_STAT_MASK)) - break; - } else { - if (!(reg & I40E_QRX_ENA_QENA_REQ_MASK) && - !(reg & I40E_QRX_ENA_QENA_STAT_MASK)) - break; - } - } - - /* Check if it is timeout */ - if (j >= I40E_CHK_Q_ENA_COUNT) { - PMD_DRV_LOG(ERR, "Failed to %s rx queue[%u]", - (on ? 
"enable" : "disable"), q_idx); - return I40E_ERR_TIMEOUT; - } - - return I40E_SUCCESS; -} -/* Switch on or off the rx queues */ -static int -i40e_dev_switch_rx_queues(struct i40e_pf *pf, bool on) -{ - struct rte_eth_dev_data *dev_data = pf->dev_data; - struct i40e_rx_queue *rxq; - struct rte_eth_dev *dev = pf->adapter->eth_dev; - uint16_t i; - int ret; - - for (i = 0; i < dev_data->nb_rx_queues; i++) { - rxq = dev_data->rx_queues[i]; - /* Don't operate the queue if not configured or - * if starting only per queue */ - if (!rxq || !rxq->q_set || (on && rxq->rx_deferred_start)) - continue; - if (on) - ret = i40e_dev_rx_queue_start(dev, i); - else - ret = i40e_dev_rx_queue_stop(dev, i); - if (ret != I40E_SUCCESS) - return ret; - } - - return I40E_SUCCESS; -} - -/* Switch on or off all the rx/tx queues */ -int -i40e_dev_switch_queues(struct i40e_pf *pf, bool on) -{ - int ret; - - if (on) { - /* enable rx queues before enabling tx queues */ - ret = i40e_dev_switch_rx_queues(pf, on); - if (ret) { - PMD_DRV_LOG(ERR, "Failed to switch rx queues"); - return ret; - } - ret = i40e_dev_switch_tx_queues(pf, on); - } else { - /* Stop tx queues before stopping rx queues */ - ret = i40e_dev_switch_tx_queues(pf, on); - if (ret) { - PMD_DRV_LOG(ERR, "Failed to switch tx queues"); - return ret; - } - ret = i40e_dev_switch_rx_queues(pf, on); - } - - return ret; -} - -/* Initialize VSI for TX */ -static int -i40e_dev_tx_init(struct i40e_pf *pf) -{ - struct rte_eth_dev_data *data = pf->dev_data; - uint16_t i; - uint32_t ret = I40E_SUCCESS; - struct i40e_tx_queue *txq; - - for (i = 0; i < data->nb_tx_queues; i++) { - txq = data->tx_queues[i]; - if (!txq || !txq->q_set) - continue; - ret = i40e_tx_queue_init(txq); - if (ret != I40E_SUCCESS) - break; - } - - return ret; -} - -/* Initialize VSI for RX */ -static int -i40e_dev_rx_init(struct i40e_pf *pf) -{ - struct rte_eth_dev_data *data = pf->dev_data; - int ret = I40E_SUCCESS; - uint16_t i; - struct i40e_rx_queue *rxq; - - i40e_pf_config_mq_rx(pf); - for (i = 0; i < data->nb_rx_queues; i++) { - rxq = data->rx_queues[i]; - if (!rxq || !rxq->q_set) - continue; - - ret = i40e_rx_queue_init(rxq); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to do RX queue " - "initialization"); - break; - } - } - - return ret; -} - -static int -i40e_dev_rxtx_init(struct i40e_pf *pf) -{ - int err; - - err = i40e_dev_tx_init(pf); - if (err) { - PMD_DRV_LOG(ERR, "Failed to do TX initialization"); - return err; - } - err = i40e_dev_rx_init(pf); - if (err) { - PMD_DRV_LOG(ERR, "Failed to do RX initialization"); - return err; - } - - return err; -} - -static int -i40e_vmdq_setup(struct rte_eth_dev *dev) -{ - struct rte_eth_conf *conf = &dev->data->dev_conf; - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - int i, err, conf_vsis, j, loop; - struct i40e_vsi *vsi; - struct i40e_vmdq_info *vmdq_info; - struct rte_eth_vmdq_rx_conf *vmdq_conf; - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - - /* - * Disable interrupt to avoid message from VF. Furthermore, it will - * avoid race condition in VSI creation/destroy. 
- */ - i40e_pf_disable_irq0(hw); - - if ((pf->flags & I40E_FLAG_VMDQ) == 0) { - PMD_INIT_LOG(ERR, "FW doesn't support VMDQ"); - return -ENOTSUP; - } - - conf_vsis = conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools; - if (conf_vsis > pf->max_nb_vmdq_vsi) { - PMD_INIT_LOG(ERR, "VMDQ config: %u, max support:%u", - conf->rx_adv_conf.vmdq_rx_conf.nb_queue_pools, - pf->max_nb_vmdq_vsi); - return -ENOTSUP; - } - - if (pf->vmdq != NULL) { - PMD_INIT_LOG(INFO, "VMDQ already configured"); - return 0; - } - - pf->vmdq = rte_zmalloc("vmdq_info_struct", - sizeof(*vmdq_info) * conf_vsis, 0); - - if (pf->vmdq == NULL) { - PMD_INIT_LOG(ERR, "Failed to allocate memory"); - return -ENOMEM; - } - - vmdq_conf = &conf->rx_adv_conf.vmdq_rx_conf; - - /* Create VMDQ VSI */ - for (i = 0; i < conf_vsis; i++) { - vsi = i40e_vsi_setup(pf, I40E_VSI_VMDQ2, pf->main_vsi, - vmdq_conf->enable_loop_back); - if (vsi == NULL) { - PMD_INIT_LOG(ERR, "Failed to create VMDQ VSI"); - err = -1; - goto err_vsi_setup; - } - vmdq_info = &pf->vmdq[i]; - vmdq_info->pf = pf; - vmdq_info->vsi = vsi; - } - pf->nb_cfg_vmdq_vsi = conf_vsis; - - /* Configure Vlan */ - loop = sizeof(vmdq_conf->pool_map[0].pools) * CHAR_BIT; - for (i = 0; i < vmdq_conf->nb_pool_maps; i++) { - for (j = 0; j < loop && j < pf->nb_cfg_vmdq_vsi; j++) { - if (vmdq_conf->pool_map[i].pools & (1UL << j)) { - PMD_INIT_LOG(INFO, "Add vlan %u to vmdq pool %u", - vmdq_conf->pool_map[i].vlan_id, j); - - err = i40e_vsi_add_vlan(pf->vmdq[j].vsi, - vmdq_conf->pool_map[i].vlan_id); - if (err) { - PMD_INIT_LOG(ERR, "Failed to add vlan"); - err = -1; - goto err_vsi_setup; - } - } - } - } - - i40e_pf_enable_irq0(hw); - - return 0; - -err_vsi_setup: - for (i = 0; i < conf_vsis; i++) - if (pf->vmdq[i].vsi == NULL) - break; - else - i40e_vsi_release(pf->vmdq[i].vsi); - - rte_free(pf->vmdq); - pf->vmdq = NULL; - i40e_pf_enable_irq0(hw); - return err; -} - -static void -i40e_stat_update_32(struct i40e_hw *hw, - uint32_t reg, - bool offset_loaded, - uint64_t *offset, - uint64_t *stat) -{ - uint64_t new_data; - - new_data = (uint64_t)I40E_READ_REG(hw, reg); - if (!offset_loaded) - *offset = new_data; - - if (new_data >= *offset) - *stat = (uint64_t)(new_data - *offset); - else - *stat = (uint64_t)((new_data + - ((uint64_t)1 << I40E_32_BIT_WIDTH)) - *offset); -} - -static void -i40e_stat_update_48(struct i40e_hw *hw, - uint32_t hireg, - uint32_t loreg, - bool offset_loaded, - uint64_t *offset, - uint64_t *stat) -{ - uint64_t new_data; - - new_data = (uint64_t)I40E_READ_REG(hw, loreg); - new_data |= ((uint64_t)(I40E_READ_REG(hw, hireg) & - I40E_16_BIT_MASK)) << I40E_32_BIT_WIDTH; - - if (!offset_loaded) - *offset = new_data; - - if (new_data >= *offset) - *stat = new_data - *offset; - else - *stat = (uint64_t)((new_data + - ((uint64_t)1 << I40E_48_BIT_WIDTH)) - *offset); - - *stat &= I40E_48_BIT_MASK; -} - -/* Disable IRQ0 */ -void -i40e_pf_disable_irq0(struct i40e_hw *hw) -{ - /* Disable all interrupt types */ - I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, 0); - I40E_WRITE_FLUSH(hw); -} - -/* Enable IRQ0 */ -void -i40e_pf_enable_irq0(struct i40e_hw *hw) -{ - I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0, - I40E_PFINT_DYN_CTL0_INTENA_MASK | - I40E_PFINT_DYN_CTL0_CLEARPBA_MASK | - I40E_PFINT_DYN_CTL0_ITR_INDX_MASK); - I40E_WRITE_FLUSH(hw); -} - -static void -i40e_pf_config_irq0(struct i40e_hw *hw) -{ - /* read pending request and disable first */ - i40e_pf_disable_irq0(hw); - I40E_WRITE_REG(hw, I40E_PFINT_ICR0_ENA, I40E_PFINT_ICR0_ENA_MASK); - I40E_WRITE_REG(hw, I40E_PFINT_STAT_CTL0, - 
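The wrap-around arithmetic in i40e_stat_update_48() above, reproduced standalone: when the 48-bit hardware counter has wrapped below the saved offset, one full 2^48 period is added back before subtracting.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define BIT48_MASK ((1ULL << 48) - 1)

static uint64_t stat_delta48(uint64_t offset, uint64_t new_data)
{
	uint64_t stat;

	if (new_data >= offset)
		stat = new_data - offset;
	else
		stat = (new_data + (1ULL << 48)) - offset; /* counter wrapped */
	return stat & BIT48_MASK;
}

int main(void)
{
	/* counter was near the top, now small again: 6 + 10 = 16 events */
	printf("%" PRIu64 "\n", stat_delta48(BIT48_MASK - 5, 10)); /* 16 */
	return 0;
}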
I40E_PFINT_STAT_CTL0_OTHER_ITR_INDX_MASK); - - /* Link no queues to irq0 */ - I40E_WRITE_REG(hw, I40E_PFINT_LNKLST0, - I40E_PFINT_LNKLST0_FIRSTQ_INDX_MASK); -} - -static void -i40e_dev_handle_vfr_event(struct rte_eth_dev *dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - int i; - uint16_t abs_vf_id; - uint32_t index, offset, val; - - if (!pf->vfs) - return; - /** - * Try to find which VF triggered a reset; use the absolute VF id for - * the access since the register is a global one. - */ - for (i = 0; i < pf->vf_num; i++) { - abs_vf_id = hw->func_caps.vf_base_id + i; - index = abs_vf_id / I40E_UINT32_BIT_SIZE; - offset = abs_vf_id % I40E_UINT32_BIT_SIZE; - val = I40E_READ_REG(hw, I40E_GLGEN_VFLRSTAT(index)); - /* A VFR event occurred */ - if (val & (0x1 << offset)) { - int ret; - - /* Clear the event first */ - I40E_WRITE_REG(hw, I40E_GLGEN_VFLRSTAT(index), - (0x1 << offset)); - PMD_DRV_LOG(INFO, "VF %u reset occurred", abs_vf_id); - /** - * Only notify that a VF reset event occurred; - * don't trigger another SW reset. - */ - ret = i40e_pf_host_vf_reset(&pf->vfs[i], 0); - if (ret != I40E_SUCCESS) - PMD_DRV_LOG(ERR, "Failed to do VF reset"); - } - } -} - -static void -i40e_dev_handle_aq_msg(struct rte_eth_dev *dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_arq_event_info info; - uint16_t pending, opcode; - int ret; - - info.buf_len = I40E_AQ_BUF_SZ; - info.msg_buf = rte_zmalloc("msg_buffer", info.buf_len, 0); - if (!info.msg_buf) { - PMD_DRV_LOG(ERR, "Failed to allocate memory"); - return; - } - - pending = 1; - while (pending) { - ret = i40e_clean_arq_element(hw, &info, &pending); - - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(INFO, "Failed to read msg from AdminQ, " - "aq_err: %u", hw->aq.asq_last_status); - break; - } - opcode = rte_le_to_cpu_16(info.desc.opcode); - - switch (opcode) { - case i40e_aqc_opc_send_msg_to_pf: - /* Refer to i40e_aq_send_msg_to_pf() for the argument layout */ - i40e_pf_host_handle_vf_msg(dev, - rte_le_to_cpu_16(info.desc.retval), - rte_le_to_cpu_32(info.desc.cookie_high), - rte_le_to_cpu_32(info.desc.cookie_low), - info.msg_buf, - info.msg_len); - break; - default: - PMD_DRV_LOG(ERR, "Request %u is not supported yet", - opcode); - break; - } - } - rte_free(info.msg_buf); -} - -/* - * This interrupt handler is registered as an alarm callback to handle the - * LSC interrupt after a fixed delay, giving the NIC time to settle into a - * stable state. Currently i40e waits 1 second for the link-up interrupt; - * no wait is needed for the link-down interrupt.
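The VFLRSTAT scan above locates each VF's bit with plain divide/modulo addressing, one bit per VF across consecutive 32-bit registers. The same arithmetic in isolation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t abs_vf_id = 37;                    /* hypothetical VF id */
	uint32_t index  = abs_vf_id / 32;           /* which VFLRSTAT register */
	uint32_t offset = abs_vf_id % 32;           /* which bit inside it */

	printf("reg %u, bit %u\n", index, offset);  /* reg 1, bit 5 */
	return 0;
}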
- */ -static void -i40e_dev_interrupt_delayed_handler(void *param) -{ - struct rte_eth_dev *dev = (struct rte_eth_dev *)param; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t icr0; - - /* read the interrupt causes again */ - icr0 = I40E_READ_REG(hw, I40E_PFINT_ICR0); - -#ifdef RTE_LIBRTE_I40E_DEBUG_DRIVER - if (icr0 & I40E_PFINT_ICR0_ECC_ERR_MASK) - PMD_DRV_LOG(ERR, "ICR0: unrecoverable ECC error\n"); - if (icr0 & I40E_PFINT_ICR0_MAL_DETECT_MASK) - PMD_DRV_LOG(ERR, "ICR0: malicious programming detected\n"); - if (icr0 & I40E_PFINT_ICR0_GRST_MASK) - PMD_DRV_LOG(INFO, "ICR0: global reset requested\n"); - if (icr0 & I40E_PFINT_ICR0_PCI_EXCEPTION_MASK) - PMD_DRV_LOG(INFO, "ICR0: PCI exception activated\n"); - if (icr0 & I40E_PFINT_ICR0_STORM_DETECT_MASK) - PMD_DRV_LOG(INFO, "ICR0: a change in the storm control " - "state\n"); - if (icr0 & I40E_PFINT_ICR0_HMC_ERR_MASK) - PMD_DRV_LOG(ERR, "ICR0: HMC error\n"); - if (icr0 & I40E_PFINT_ICR0_PE_CRITERR_MASK) - PMD_DRV_LOG(ERR, "ICR0: protocol engine critical error\n"); -#endif /* RTE_LIBRTE_I40E_DEBUG_DRIVER */ - - if (icr0 & I40E_PFINT_ICR0_VFLR_MASK) { - PMD_DRV_LOG(INFO, "INT:VF reset detected\n"); - i40e_dev_handle_vfr_event(dev); - } - if (icr0 & I40E_PFINT_ICR0_ADMINQ_MASK) { - PMD_DRV_LOG(INFO, "INT:ADMINQ event\n"); - i40e_dev_handle_aq_msg(dev); - } - - /* handle the link-up interrupt in the alarm callback */ - i40e_dev_link_update(dev, 0); - _rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC); - - i40e_pf_enable_irq0(hw); - rte_intr_enable(&(dev->pci_dev->intr_handle)); -} - -/** - * Interrupt handler triggered by the NIC for handling a - * specific interrupt. - * - * @param handle - * Pointer to interrupt handle. - * @param param - * The address of the parameter (struct rte_eth_dev *) registered before.
- * - * @return - * void - */ -static void -i40e_dev_interrupt_handler(__rte_unused struct rte_intr_handle *handle, - void *param) -{ - struct rte_eth_dev *dev = (struct rte_eth_dev *)param; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t icr0; - - /* Disable interrupt */ - i40e_pf_disable_irq0(hw); - - /* read out interrupt causes */ - icr0 = I40E_READ_REG(hw, I40E_PFINT_ICR0); - - /* No interrupt event indicated */ - if (!(icr0 & I40E_PFINT_ICR0_INTEVENT_MASK)) { - PMD_DRV_LOG(INFO, "No interrupt event"); - goto done; - } -#ifdef RTE_LIBRTE_I40E_DEBUG_DRIVER - if (icr0 & I40E_PFINT_ICR0_ECC_ERR_MASK) - PMD_DRV_LOG(ERR, "ICR0: unrecoverable ECC error"); - if (icr0 & I40E_PFINT_ICR0_MAL_DETECT_MASK) - PMD_DRV_LOG(ERR, "ICR0: malicious programming detected"); - if (icr0 & I40E_PFINT_ICR0_GRST_MASK) - PMD_DRV_LOG(INFO, "ICR0: global reset requested"); - if (icr0 & I40E_PFINT_ICR0_PCI_EXCEPTION_MASK) - PMD_DRV_LOG(INFO, "ICR0: PCI exception activated"); - if (icr0 & I40E_PFINT_ICR0_STORM_DETECT_MASK) - PMD_DRV_LOG(INFO, "ICR0: a change in the storm control state"); - if (icr0 & I40E_PFINT_ICR0_HMC_ERR_MASK) - PMD_DRV_LOG(ERR, "ICR0: HMC error"); - if (icr0 & I40E_PFINT_ICR0_PE_CRITERR_MASK) - PMD_DRV_LOG(ERR, "ICR0: protocol engine critical error"); -#endif /* RTE_LIBRTE_I40E_DEBUG_DRIVER */ - - if (icr0 & I40E_PFINT_ICR0_VFLR_MASK) { - PMD_DRV_LOG(INFO, "ICR0: VF reset detected"); - i40e_dev_handle_vfr_event(dev); - } - if (icr0 & I40E_PFINT_ICR0_ADMINQ_MASK) { - PMD_DRV_LOG(INFO, "ICR0: adminq event"); - i40e_dev_handle_aq_msg(dev); - } - - /* Link Status Change interrupt */ - if (icr0 & I40E_PFINT_ICR0_LINK_STAT_CHANGE_MASK) { -#define I40E_US_PER_SECOND 1000000 - struct rte_eth_link link; - - PMD_DRV_LOG(INFO, "ICR0: link status changed\n"); - memset(&link, 0, sizeof(link)); - rte_i40e_dev_atomic_read_link_status(dev, &link); - i40e_dev_link_update(dev, 0); - - /* - * For link up interrupt, it needs to wait 1 second to let the - * hardware be a stable state. Otherwise several consecutive - * interrupts can be observed. - * For link down interrupt, no need to wait. - */ - if (!link.link_status && rte_eal_alarm_set(I40E_US_PER_SECOND, - i40e_dev_interrupt_delayed_handler, (void *)dev) >= 0) - return; - else - _rte_eth_dev_callback_process(dev, - RTE_ETH_EVENT_INTR_LSC); - } - -done: - /* Enable interrupt */ - i40e_pf_enable_irq0(hw); - rte_intr_enable(&(dev->pci_dev->intr_handle)); -} - -static int -i40e_add_macvlan_filters(struct i40e_vsi *vsi, - struct i40e_macvlan_filter *filter, - int total) -{ - int ele_num, ele_buff_size; - int num, actual_num, i; - uint16_t flags; - int ret = I40E_SUCCESS; - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - struct i40e_aqc_add_macvlan_element_data *req_list; - - if (filter == NULL || total == 0) - return I40E_ERR_PARAM; - ele_num = hw->aq.asq_buf_size / sizeof(*req_list); - ele_buff_size = hw->aq.asq_buf_size; - - req_list = rte_zmalloc("macvlan_add", ele_buff_size, 0); - if (req_list == NULL) { - PMD_DRV_LOG(ERR, "Fail to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - num = 0; - do { - actual_num = (num + ele_num > total) ? 
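The add/remove filter helpers below chunk their work so each adminq submission fits into the ASQ buffer (ele_num full entries at a time, with a short tail batch). The chunking arithmetic in isolation, with made-up sizes:

#include <stdio.h>

int main(void)
{
	int total = 10, ele_num = 4, num = 0;  /* hypothetical counts */

	do {
		/* same expression as the actual_num computation below */
		int actual = (num + ele_num > total) ? (total - num) : ele_num;
		printf("submit %d filters (%d/%d done)\n",
		       actual, num + actual, total);   /* 4, 4, then 2 */
		num += actual;
	} while (num < total);
	return 0;
}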
(total - num) : ele_num; - memset(req_list, 0, ele_buff_size); - - for (i = 0; i < actual_num; i++) { - (void)rte_memcpy(req_list[i].mac_addr, - &filter[num + i].macaddr, ETH_ADDR_LEN); - req_list[i].vlan_tag = - rte_cpu_to_le_16(filter[num + i].vlan_id); - - switch (filter[num + i].filter_type) { - case RTE_MAC_PERFECT_MATCH: - flags = I40E_AQC_MACVLAN_ADD_PERFECT_MATCH | - I40E_AQC_MACVLAN_ADD_IGNORE_VLAN; - break; - case RTE_MACVLAN_PERFECT_MATCH: - flags = I40E_AQC_MACVLAN_ADD_PERFECT_MATCH; - break; - case RTE_MAC_HASH_MATCH: - flags = I40E_AQC_MACVLAN_ADD_HASH_MATCH | - I40E_AQC_MACVLAN_ADD_IGNORE_VLAN; - break; - case RTE_MACVLAN_HASH_MATCH: - flags = I40E_AQC_MACVLAN_ADD_HASH_MATCH; - break; - default: - PMD_DRV_LOG(ERR, "Invalid MAC match type\n"); - ret = I40E_ERR_PARAM; - goto DONE; - } - - req_list[i].queue_number = 0; - - req_list[i].flags = rte_cpu_to_le_16(flags); - } - - ret = i40e_aq_add_macvlan(hw, vsi->seid, req_list, - actual_num, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to add macvlan filter"); - goto DONE; - } - num += actual_num; - } while (num < total); - -DONE: - rte_free(req_list); - return ret; -} - -static int -i40e_remove_macvlan_filters(struct i40e_vsi *vsi, - struct i40e_macvlan_filter *filter, - int total) -{ - int ele_num, ele_buff_size; - int num, actual_num, i; - uint16_t flags; - int ret = I40E_SUCCESS; - struct i40e_hw *hw = I40E_VSI_TO_HW(vsi); - struct i40e_aqc_remove_macvlan_element_data *req_list; - - if (filter == NULL || total == 0) - return I40E_ERR_PARAM; - - ele_num = hw->aq.asq_buf_size / sizeof(*req_list); - ele_buff_size = hw->aq.asq_buf_size; - - req_list = rte_zmalloc("macvlan_remove", ele_buff_size, 0); - if (req_list == NULL) { - PMD_DRV_LOG(ERR, "Fail to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - num = 0; - do { - actual_num = (num + ele_num > total) ? 
(total - num) : ele_num; - memset(req_list, 0, ele_buff_size); - - for (i = 0; i < actual_num; i++) { - (void)rte_memcpy(req_list[i].mac_addr, - &filter[num + i].macaddr, ETH_ADDR_LEN); - req_list[i].vlan_tag = - rte_cpu_to_le_16(filter[num + i].vlan_id); - - switch (filter[num + i].filter_type) { - case RTE_MAC_PERFECT_MATCH: - flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH | - I40E_AQC_MACVLAN_DEL_IGNORE_VLAN; - break; - case RTE_MACVLAN_PERFECT_MATCH: - flags = I40E_AQC_MACVLAN_DEL_PERFECT_MATCH; - break; - case RTE_MAC_HASH_MATCH: - flags = I40E_AQC_MACVLAN_DEL_HASH_MATCH | - I40E_AQC_MACVLAN_DEL_IGNORE_VLAN; - break; - case RTE_MACVLAN_HASH_MATCH: - flags = I40E_AQC_MACVLAN_DEL_HASH_MATCH; - break; - default: - PMD_DRV_LOG(ERR, "Invalid MAC filter type\n"); - ret = I40E_ERR_PARAM; - goto DONE; - } - req_list[i].flags = rte_cpu_to_le_16(flags); - } - - ret = i40e_aq_remove_macvlan(hw, vsi->seid, req_list, - actual_num, NULL); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to remove macvlan filter"); - goto DONE; - } - num += actual_num; - } while (num < total); - -DONE: - rte_free(req_list); - return ret; -} - -/* Find out specific MAC filter */ -static struct i40e_mac_filter * -i40e_find_mac_filter(struct i40e_vsi *vsi, - struct ether_addr *macaddr) -{ - struct i40e_mac_filter *f; - - TAILQ_FOREACH(f, &vsi->mac_list, next) { - if (is_same_ether_addr(macaddr, &f->mac_info.mac_addr)) - return f; - } - - return NULL; -} - -static bool -i40e_find_vlan_filter(struct i40e_vsi *vsi, - uint16_t vlan_id) -{ - uint32_t vid_idx, vid_bit; - - if (vlan_id > ETH_VLAN_ID_MAX) - return 0; - - vid_idx = I40E_VFTA_IDX(vlan_id); - vid_bit = I40E_VFTA_BIT(vlan_id); - - if (vsi->vfta[vid_idx] & vid_bit) - return 1; - else - return 0; -} - -static void -i40e_set_vlan_filter(struct i40e_vsi *vsi, - uint16_t vlan_id, bool on) -{ - uint32_t vid_idx, vid_bit; - - if (vlan_id > ETH_VLAN_ID_MAX) - return; - - vid_idx = I40E_VFTA_IDX(vlan_id); - vid_bit = I40E_VFTA_BIT(vlan_id); - - if (on) - vsi->vfta[vid_idx] |= vid_bit; - else - vsi->vfta[vid_idx] &= ~vid_bit; -} - -/** - * Find all vlan options for specific mac addr, - * return with actual vlan found. - */ -static inline int -i40e_find_all_vlan_for_mac(struct i40e_vsi *vsi, - struct i40e_macvlan_filter *mv_f, - int num, struct ether_addr *addr) -{ - int i; - uint32_t j, k; - - /** - * Not to use i40e_find_vlan_filter to decrease the loop time, - * although the code looks complex. 
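The VFTA used above is a 4096-bit software shadow of the VLAN table, one bit per VLAN ID in 128 32-bit words. A standalone sketch, assuming I40E_VFTA_IDX/I40E_VFTA_BIT are equivalent to the macros below:

#include <stdint.h>
#include <stdio.h>

#define VFTA_IDX(v) ((v) >> 5)            /* word index: vlan_id / 32 */
#define VFTA_BIT(v) (1u << ((v) & 0x1F))  /* bit inside the word */

int main(void)
{
	uint32_t vfta[128] = {0};
	uint16_t vlan = 100;

	vfta[VFTA_IDX(vlan)] |= VFTA_BIT(vlan);  /* set: i40e_set_vlan_filter */
	printf("word %u = 0x%08x\n", (unsigned int)VFTA_IDX(vlan),
	       (unsigned int)vfta[VFTA_IDX(vlan)]);
	/* word 3 = 0x00000010, since 100 = 3 * 32 + 4 */
	return 0;
}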
- */ - if (num < vsi->vlan_num) - return I40E_ERR_PARAM; - - i = 0; - for (j = 0; j < I40E_VFTA_SIZE; j++) { - if (vsi->vfta[j]) { - for (k = 0; k < I40E_UINT32_BIT_SIZE; k++) { - if (vsi->vfta[j] & (1 << k)) { - if (i > num - 1) { - PMD_DRV_LOG(ERR, "vlan number " - "not match"); - return I40E_ERR_PARAM; - } - (void)rte_memcpy(&mv_f[i].macaddr, - addr, ETH_ADDR_LEN); - mv_f[i].vlan_id = - j * I40E_UINT32_BIT_SIZE + k; - i++; - } - } - } - } - return I40E_SUCCESS; -} - -static inline int -i40e_find_all_mac_for_vlan(struct i40e_vsi *vsi, - struct i40e_macvlan_filter *mv_f, - int num, - uint16_t vlan) -{ - int i = 0; - struct i40e_mac_filter *f; - - if (num < vsi->mac_num) - return I40E_ERR_PARAM; - - TAILQ_FOREACH(f, &vsi->mac_list, next) { - if (i > num - 1) { - PMD_DRV_LOG(ERR, "buffer number not match"); - return I40E_ERR_PARAM; - } - (void)rte_memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr, - ETH_ADDR_LEN); - mv_f[i].vlan_id = vlan; - mv_f[i].filter_type = f->mac_info.filter_type; - i++; - } - - return I40E_SUCCESS; -} - -static int -i40e_vsi_remove_all_macvlan_filter(struct i40e_vsi *vsi) -{ - int i, num; - struct i40e_mac_filter *f; - struct i40e_macvlan_filter *mv_f; - int ret = I40E_SUCCESS; - - if (vsi == NULL || vsi->mac_num == 0) - return I40E_ERR_PARAM; - - /* Case that no vlan is set */ - if (vsi->vlan_num == 0) - num = vsi->mac_num; - else - num = vsi->mac_num * vsi->vlan_num; - - mv_f = rte_zmalloc("macvlan_data", num * sizeof(*mv_f), 0); - if (mv_f == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - i = 0; - if (vsi->vlan_num == 0) { - TAILQ_FOREACH(f, &vsi->mac_list, next) { - (void)rte_memcpy(&mv_f[i].macaddr, - &f->mac_info.mac_addr, ETH_ADDR_LEN); - mv_f[i].vlan_id = 0; - i++; - } - } else { - TAILQ_FOREACH(f, &vsi->mac_list, next) { - ret = i40e_find_all_vlan_for_mac(vsi,&mv_f[i], - vsi->vlan_num, &f->mac_info.mac_addr); - if (ret != I40E_SUCCESS) - goto DONE; - i += vsi->vlan_num; - } - } - - ret = i40e_remove_macvlan_filters(vsi, mv_f, num); -DONE: - rte_free(mv_f); - - return ret; -} - -int -i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan) -{ - struct i40e_macvlan_filter *mv_f; - int mac_num; - int ret = I40E_SUCCESS; - - if (!vsi || vlan > ETHER_MAX_VLAN_ID) - return I40E_ERR_PARAM; - - /* If it's already set, just return */ - if (i40e_find_vlan_filter(vsi,vlan)) - return I40E_SUCCESS; - - mac_num = vsi->mac_num; - - if (mac_num == 0) { - PMD_DRV_LOG(ERR, "Error! VSI doesn't have a mac addr"); - return I40E_ERR_PARAM; - } - - mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0); - - if (mv_f == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan); - - if (ret != I40E_SUCCESS) - goto DONE; - - ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num); - - if (ret != I40E_SUCCESS) - goto DONE; - - i40e_set_vlan_filter(vsi, vlan, 1); - - vsi->vlan_num++; - ret = I40E_SUCCESS; -DONE: - rte_free(mv_f); - return ret; -} - -int -i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan) -{ - struct i40e_macvlan_filter *mv_f; - int mac_num; - int ret = I40E_SUCCESS; - - /** - * Vlan 0 is the generic filter for untagged packets - * and can't be removed. - */ - if (!vsi || vlan == 0 || vlan > ETHER_MAX_VLAN_ID) - return I40E_ERR_PARAM; - - /* If can't find it, just return */ - if (!i40e_find_vlan_filter(vsi, vlan)) - return I40E_ERR_PARAM; - - mac_num = vsi->mac_num; - - if (mac_num == 0) { - PMD_DRV_LOG(ERR, "Error! 
VSI doesn't have a mac addr"); - return I40E_ERR_PARAM; - } - - mv_f = rte_zmalloc("macvlan_data", mac_num * sizeof(*mv_f), 0); - - if (mv_f == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, vlan); - - if (ret != I40E_SUCCESS) - goto DONE; - - ret = i40e_remove_macvlan_filters(vsi, mv_f, mac_num); - - if (ret != I40E_SUCCESS) - goto DONE; - - /* This is last vlan to remove, replace all mac filter with vlan 0 */ - if (vsi->vlan_num == 1) { - ret = i40e_find_all_mac_for_vlan(vsi, mv_f, mac_num, 0); - if (ret != I40E_SUCCESS) - goto DONE; - - ret = i40e_add_macvlan_filters(vsi, mv_f, mac_num); - if (ret != I40E_SUCCESS) - goto DONE; - } - - i40e_set_vlan_filter(vsi, vlan, 0); - - vsi->vlan_num--; - ret = I40E_SUCCESS; -DONE: - rte_free(mv_f); - return ret; -} - -int -i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *mac_filter) -{ - struct i40e_mac_filter *f; - struct i40e_macvlan_filter *mv_f; - int i, vlan_num = 0; - int ret = I40E_SUCCESS; - - /* If it's add and we've config it, return */ - f = i40e_find_mac_filter(vsi, &mac_filter->mac_addr); - if (f != NULL) - return I40E_SUCCESS; - if ((mac_filter->filter_type == RTE_MACVLAN_PERFECT_MATCH) || - (mac_filter->filter_type == RTE_MACVLAN_HASH_MATCH)) { - - /** - * If vlan_num is 0, that's the first time to add mac, - * set mask for vlan_id 0. - */ - if (vsi->vlan_num == 0) { - i40e_set_vlan_filter(vsi, 0, 1); - vsi->vlan_num = 1; - } - vlan_num = vsi->vlan_num; - } else if ((mac_filter->filter_type == RTE_MAC_PERFECT_MATCH) || - (mac_filter->filter_type == RTE_MAC_HASH_MATCH)) - vlan_num = 1; - - mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0); - if (mv_f == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - for (i = 0; i < vlan_num; i++) { - mv_f[i].filter_type = mac_filter->filter_type; - (void)rte_memcpy(&mv_f[i].macaddr, &mac_filter->mac_addr, - ETH_ADDR_LEN); - } - - if (mac_filter->filter_type == RTE_MACVLAN_PERFECT_MATCH || - mac_filter->filter_type == RTE_MACVLAN_HASH_MATCH) { - ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num, - &mac_filter->mac_addr); - if (ret != I40E_SUCCESS) - goto DONE; - } - - ret = i40e_add_macvlan_filters(vsi, mv_f, vlan_num); - if (ret != I40E_SUCCESS) - goto DONE; - - /* Add the mac addr into mac list */ - f = rte_zmalloc("macv_filter", sizeof(*f), 0); - if (f == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate memory"); - ret = I40E_ERR_NO_MEMORY; - goto DONE; - } - (void)rte_memcpy(&f->mac_info.mac_addr, &mac_filter->mac_addr, - ETH_ADDR_LEN); - f->mac_info.filter_type = mac_filter->filter_type; - TAILQ_INSERT_TAIL(&vsi->mac_list, f, next); - vsi->mac_num++; - - ret = I40E_SUCCESS; -DONE: - rte_free(mv_f); - - return ret; -} - -int -i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct ether_addr *addr) -{ - struct i40e_mac_filter *f; - struct i40e_macvlan_filter *mv_f; - int i, vlan_num; - enum rte_mac_filter_type filter_type; - int ret = I40E_SUCCESS; - - /* Can't find it, return an error */ - f = i40e_find_mac_filter(vsi, addr); - if (f == NULL) - return I40E_ERR_PARAM; - - vlan_num = vsi->vlan_num; - filter_type = f->mac_info.filter_type; - if (filter_type == RTE_MACVLAN_PERFECT_MATCH || - filter_type == RTE_MACVLAN_HASH_MATCH) { - if (vlan_num == 0) { - PMD_DRV_LOG(ERR, "VLAN number shouldn't be 0\n"); - return I40E_ERR_PARAM; - } - } else if (filter_type == RTE_MAC_PERFECT_MATCH || - filter_type == RTE_MAC_HASH_MATCH) - 
vlan_num = 1; - - mv_f = rte_zmalloc("macvlan_data", vlan_num * sizeof(*mv_f), 0); - if (mv_f == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate memory"); - return I40E_ERR_NO_MEMORY; - } - - for (i = 0; i < vlan_num; i++) { - mv_f[i].filter_type = filter_type; - (void)rte_memcpy(&mv_f[i].macaddr, &f->mac_info.mac_addr, - ETH_ADDR_LEN); - } - if (filter_type == RTE_MACVLAN_PERFECT_MATCH || - filter_type == RTE_MACVLAN_HASH_MATCH) { - ret = i40e_find_all_vlan_for_mac(vsi, mv_f, vlan_num, addr); - if (ret != I40E_SUCCESS) - goto DONE; - } - - ret = i40e_remove_macvlan_filters(vsi, mv_f, vlan_num); - if (ret != I40E_SUCCESS) - goto DONE; - - /* Remove the mac addr into mac list */ - TAILQ_REMOVE(&vsi->mac_list, f, next); - rte_free(f); - vsi->mac_num--; - - ret = I40E_SUCCESS; -DONE: - rte_free(mv_f); - return ret; -} - -/* Configure hash enable flags for RSS */ -uint64_t -i40e_config_hena(uint64_t flags) -{ - uint64_t hena = 0; - - if (!flags) - return hena; - - if (flags & ETH_RSS_FRAG_IPV4) - hena |= 1ULL << I40E_FILTER_PCTYPE_FRAG_IPV4; - if (flags & ETH_RSS_NONFRAG_IPV4_TCP) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_TCP; - if (flags & ETH_RSS_NONFRAG_IPV4_UDP) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_UDP; - if (flags & ETH_RSS_NONFRAG_IPV4_SCTP) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_SCTP; - if (flags & ETH_RSS_NONFRAG_IPV4_OTHER) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_OTHER; - if (flags & ETH_RSS_FRAG_IPV6) - hena |= 1ULL << I40E_FILTER_PCTYPE_FRAG_IPV6; - if (flags & ETH_RSS_NONFRAG_IPV6_TCP) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_TCP; - if (flags & ETH_RSS_NONFRAG_IPV6_UDP) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_UDP; - if (flags & ETH_RSS_NONFRAG_IPV6_SCTP) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_SCTP; - if (flags & ETH_RSS_NONFRAG_IPV6_OTHER) - hena |= 1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_OTHER; - if (flags & ETH_RSS_L2_PAYLOAD) - hena |= 1ULL << I40E_FILTER_PCTYPE_L2_PAYLOAD; - - return hena; -} - -/* Parse the hash enable flags */ -uint64_t -i40e_parse_hena(uint64_t flags) -{ - uint64_t rss_hf = 0; - - if (!flags) - return rss_hf; - if (flags & (1ULL << I40E_FILTER_PCTYPE_FRAG_IPV4)) - rss_hf |= ETH_RSS_FRAG_IPV4; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_TCP)) - rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_UDP)) - rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_SCTP)) - rss_hf |= ETH_RSS_NONFRAG_IPV4_SCTP; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_OTHER)) - rss_hf |= ETH_RSS_NONFRAG_IPV4_OTHER; - if (flags & (1ULL << I40E_FILTER_PCTYPE_FRAG_IPV6)) - rss_hf |= ETH_RSS_FRAG_IPV6; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_TCP)) - rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_UDP)) - rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_SCTP)) - rss_hf |= ETH_RSS_NONFRAG_IPV6_SCTP; - if (flags & (1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_OTHER)) - rss_hf |= ETH_RSS_NONFRAG_IPV6_OTHER; - if (flags & (1ULL << I40E_FILTER_PCTYPE_L2_PAYLOAD)) - rss_hf |= ETH_RSS_L2_PAYLOAD; - - return rss_hf; -} - -/* Disable RSS */ -static void -i40e_pf_disable_rss(struct i40e_pf *pf) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - uint64_t hena; - - hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; - hena &= ~I40E_RSS_HENA_ALL; - I40E_WRITE_REG(hw, I40E_PFQF_HENA(0), (uint32_t)hena); - I40E_WRITE_REG(hw, 
I40E_PFQF_HENA(1), (uint32_t)(hena >> 32)); - I40E_WRITE_FLUSH(hw); -} - -static int -i40e_hw_rss_hash_set(struct i40e_hw *hw, struct rte_eth_rss_conf *rss_conf) -{ - uint32_t *hash_key; - uint8_t hash_key_len; - uint64_t rss_hf; - uint16_t i; - uint64_t hena; - - hash_key = (uint32_t *)(rss_conf->rss_key); - hash_key_len = rss_conf->rss_key_len; - if (hash_key != NULL && hash_key_len >= - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) { - /* Fill in RSS hash key */ - for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) - I40E_WRITE_REG(hw, I40E_PFQF_HKEY(i), hash_key[i]); - } - - rss_hf = rss_conf->rss_hf; - hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; - hena &= ~I40E_RSS_HENA_ALL; - hena |= i40e_config_hena(rss_hf); - I40E_WRITE_REG(hw, I40E_PFQF_HENA(0), (uint32_t)hena); - I40E_WRITE_REG(hw, I40E_PFQF_HENA(1), (uint32_t)(hena >> 32)); - I40E_WRITE_FLUSH(hw); - - return 0; -} - -static int -i40e_dev_rss_hash_update(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint64_t rss_hf = rss_conf->rss_hf & I40E_RSS_OFFLOAD_ALL; - uint64_t hena; - - hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; - if (!(hena & I40E_RSS_HENA_ALL)) { /* RSS disabled */ - if (rss_hf != 0) /* Enable RSS */ - return -EINVAL; - return 0; /* Nothing to do */ - } - /* RSS enabled */ - if (rss_hf == 0) /* Disable RSS */ - return -EINVAL; - - return i40e_hw_rss_hash_set(hw, rss_conf); -} - -static int -i40e_dev_rss_hash_conf_get(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t *hash_key = (uint32_t *)(rss_conf->rss_key); - uint64_t hena; - uint16_t i; - - if (hash_key != NULL) { - for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++) - hash_key[i] = I40E_READ_REG(hw, I40E_PFQF_HKEY(i)); - rss_conf->rss_key_len = i * sizeof(uint32_t); - } - hena = (uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_PFQF_HENA(1))) << 32; - rss_conf->rss_hf = i40e_parse_hena(hena); - - return 0; -} - -static int -i40e_dev_get_filter_type(uint16_t filter_type, uint16_t *flag) -{ - switch (filter_type) { - case RTE_TUNNEL_FILTER_IMAC_IVLAN: - *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN; - break; - case RTE_TUNNEL_FILTER_IMAC_IVLAN_TENID: - *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_IVLAN_TEN_ID; - break; - case RTE_TUNNEL_FILTER_IMAC_TENID: - *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC_TEN_ID; - break; - case RTE_TUNNEL_FILTER_OMAC_TENID_IMAC: - *flag = I40E_AQC_ADD_CLOUD_FILTER_OMAC_TEN_ID_IMAC; - break; - case ETH_TUNNEL_FILTER_IMAC: - *flag = I40E_AQC_ADD_CLOUD_FILTER_IMAC; - break; - default: - PMD_DRV_LOG(ERR, "invalid tunnel filter type"); - return -EINVAL; - } - - return 0; -} - -static int -i40e_dev_tunnel_filter_set(struct i40e_pf *pf, - struct rte_eth_tunnel_filter_conf *tunnel_filter, - uint8_t add) -{ - uint16_t ip_type; - uint8_t tun_type = 0; - int val, ret = 0; - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_vsi *vsi = pf->main_vsi; - struct i40e_aqc_add_remove_cloud_filters_element_data *cld_filter; - struct i40e_aqc_add_remove_cloud_filters_element_data *pfilter; - - cld_filter = rte_zmalloc("tunnel_filter", - sizeof(struct i40e_aqc_add_remove_cloud_filters_element_data), - 0); - - if (NULL == cld_filter) { - PMD_DRV_LOG(ERR, "Failed to alloc memory."); - return 
-EINVAL; - } - pfilter = cld_filter; - - (void)rte_memcpy(&pfilter->outer_mac, tunnel_filter->outer_mac, - sizeof(struct ether_addr)); - (void)rte_memcpy(&pfilter->inner_mac, tunnel_filter->inner_mac, - sizeof(struct ether_addr)); - - pfilter->inner_vlan = tunnel_filter->inner_vlan; - if (tunnel_filter->ip_type == RTE_TUNNEL_IPTYPE_IPV4) { - ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV4; - (void)rte_memcpy(&pfilter->ipaddr.v4.data, - &tunnel_filter->ip_addr, - sizeof(pfilter->ipaddr.v4.data)); - } else { - ip_type = I40E_AQC_ADD_CLOUD_FLAGS_IPV6; - (void)rte_memcpy(&pfilter->ipaddr.v6.data, - &tunnel_filter->ip_addr, - sizeof(pfilter->ipaddr.v6.data)); - } - - /* check tunneled type */ - switch (tunnel_filter->tunnel_type) { - case RTE_TUNNEL_TYPE_VXLAN: - tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_XVLAN; - break; - case RTE_TUNNEL_TYPE_NVGRE: - tun_type = I40E_AQC_ADD_CLOUD_TNL_TYPE_NVGRE_OMAC; - break; - default: - /* Other tunnel types is not supported. */ - PMD_DRV_LOG(ERR, "tunnel type is not supported."); - rte_free(cld_filter); - return -EINVAL; - } - - val = i40e_dev_get_filter_type(tunnel_filter->filter_type, - &pfilter->flags); - if (val < 0) { - rte_free(cld_filter); - return -EINVAL; - } - - pfilter->flags |= I40E_AQC_ADD_CLOUD_FLAGS_TO_QUEUE | ip_type | - (tun_type << I40E_AQC_ADD_CLOUD_TNL_TYPE_SHIFT); - pfilter->tenant_id = tunnel_filter->tenant_id; - pfilter->queue_number = tunnel_filter->queue_id; - - if (add) - ret = i40e_aq_add_cloud_filters(hw, vsi->seid, cld_filter, 1); - else - ret = i40e_aq_remove_cloud_filters(hw, vsi->seid, - cld_filter, 1); - - rte_free(cld_filter); - return ret; -} - -static int -i40e_get_vxlan_port_idx(struct i40e_pf *pf, uint16_t port) -{ - uint8_t i; - - for (i = 0; i < I40E_MAX_PF_UDP_OFFLOAD_PORTS; i++) { - if (pf->vxlan_ports[i] == port) - return i; - } - - return -1; -} - -static int -i40e_add_vxlan_port(struct i40e_pf *pf, uint16_t port) -{ - int idx, ret; - uint8_t filter_idx; - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - - idx = i40e_get_vxlan_port_idx(pf, port); - - /* Check if port already exists */ - if (idx >= 0) { - PMD_DRV_LOG(ERR, "Port %d already offloaded", port); - return -EINVAL; - } - - /* Now check if there is space to add the new port */ - idx = i40e_get_vxlan_port_idx(pf, 0); - if (idx < 0) { - PMD_DRV_LOG(ERR, "Maximum number of UDP ports reached," - "not adding port %d", port); - return -ENOSPC; - } - - ret = i40e_aq_add_udp_tunnel(hw, port, I40E_AQC_TUNNEL_TYPE_VXLAN, - &filter_idx, NULL); - if (ret < 0) { - PMD_DRV_LOG(ERR, "Failed to add VXLAN UDP port %d", port); - return -1; - } - - PMD_DRV_LOG(INFO, "Added port %d with AQ command with index %d", - port, filter_idx); - - /* New port: add it and mark its index in the bitmap */ - pf->vxlan_ports[idx] = port; - pf->vxlan_bitmap |= (1 << idx); - - if (!(pf->flags & I40E_FLAG_VXLAN)) - pf->flags |= I40E_FLAG_VXLAN; - - return 0; -} - -static int -i40e_del_vxlan_port(struct i40e_pf *pf, uint16_t port) -{ - int idx; - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - - if (!(pf->flags & I40E_FLAG_VXLAN)) { - PMD_DRV_LOG(ERR, "VXLAN UDP port was not configured."); - return -EINVAL; - } - - idx = i40e_get_vxlan_port_idx(pf, port); - - if (idx < 0) { - PMD_DRV_LOG(ERR, "Port %d doesn't exist", port); - return -EINVAL; - } - - if (i40e_aq_del_udp_tunnel(hw, idx, NULL) < 0) { - PMD_DRV_LOG(ERR, "Failed to delete VXLAN UDP port %d", port); - return -1; - } - - PMD_DRV_LOG(INFO, "Deleted port %d with AQ command with index %d", - port, idx); - - pf->vxlan_ports[idx] = 0; - pf->vxlan_bitmap &= ~(1 << 
idx);
-
-	if (!pf->vxlan_bitmap)
-		pf->flags &= ~I40E_FLAG_VXLAN;
-
-	return 0;
-}
-
-/* Add UDP tunneling port */
-static int
-i40e_dev_udp_tunnel_add(struct rte_eth_dev *dev,
-			struct rte_eth_udp_tunnel *udp_tunnel)
-{
-	int ret = 0;
-	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-
-	if (udp_tunnel == NULL)
-		return -EINVAL;
-
-	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port);
-		break;
-
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
-		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
-		ret = -1;
-		break;
-
-	default:
-		PMD_DRV_LOG(ERR, "Invalid tunnel type");
-		ret = -1;
-		break;
-	}
-
-	return ret;
-}
-
-/* Remove UDP tunneling port */
-static int
-i40e_dev_udp_tunnel_del(struct rte_eth_dev *dev,
-			struct rte_eth_udp_tunnel *udp_tunnel)
-{
-	int ret = 0;
-	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-
-	if (udp_tunnel == NULL)
-		return -EINVAL;
-
-	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
-		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
-		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
-		ret = -1;
-		break;
-	default:
-		PMD_DRV_LOG(ERR, "Invalid tunnel type");
-		ret = -1;
-		break;
-	}
-
-	return ret;
-}
-
-/* Calculate the maximum number of contiguous PF queues that are configured */
-static int
-i40e_pf_calc_configured_queues_num(struct i40e_pf *pf)
-{
-	struct rte_eth_dev_data *data = pf->dev_data;
-	int i, num;
-	struct i40e_rx_queue *rxq;
-
-	num = 0;
-	for (i = 0; i < pf->lan_nb_qps; i++) {
-		rxq = data->rx_queues[i];
-		if (rxq && rxq->q_set)
-			num++;
-		else
-			break;
-	}
-
-	return num;
-}
-
-/* Configure RSS */
-static int
-i40e_pf_config_rss(struct i40e_pf *pf)
-{
-	struct i40e_hw *hw = I40E_PF_TO_HW(pf);
-	struct rte_eth_rss_conf rss_conf;
-	uint32_t i, lut = 0;
-	uint16_t j, num;
-
-	/*
-	 * If both VMDQ and RSS are enabled, not all PF queues are configured.
-	 * It's necessary to calculate the actual PF queues that are configured.
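Further down, the same function fills the RSS lookup table by packing four 8-bit queue indices into each 32-bit HLUT register, cycling j through the usable queues. A worked miniature of that packing (table size and mask are illustrative; the real code masks with the hardware entry width):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t lut = 0, i;
		uint16_t j = 0, num = 3;       /* pretend 3 RX queues are usable */

		for (i = 0; i < 8; i++) {      /* 8 LUT entries instead of 512 */
			if (j == num)
				j = 0;
			lut = (lut << 8) | (j & 0xFF);
			if ((i & 3) == 3)      /* one register per 4 entries */
				printf("HLUT[%u] = 0x%08x\n", i >> 2, lut);
			j++;
		}
		return 0;
	}

With three queues this prints 0x00010200 and 0x01020001: the queue pattern 0,1,2,0,1,2,0,1 with the earliest entry landing in the most significant byte of each register.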
- */ - if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) { - num = i40e_pf_calc_configured_queues_num(pf); - num = i40e_align_floor(num); - } else - num = i40e_align_floor(pf->dev_data->nb_rx_queues); - - PMD_INIT_LOG(INFO, "Max of contiguous %u PF queues are configured", - num); - - if (num == 0) { - PMD_INIT_LOG(ERR, "No PF queues are configured to enable RSS"); - return -ENOTSUP; - } - - for (i = 0, j = 0; i < hw->func_caps.rss_table_size; i++, j++) { - if (j == num) - j = 0; - lut = (lut << 8) | (j & ((0x1 << - hw->func_caps.rss_table_entry_width) - 1)); - if ((i & 3) == 3) - I40E_WRITE_REG(hw, I40E_PFQF_HLUT(i >> 2), lut); - } - - rss_conf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf; - if ((rss_conf.rss_hf & I40E_RSS_OFFLOAD_ALL) == 0) { - i40e_pf_disable_rss(pf); - return 0; - } - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len < - (I40E_PFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) { - /* Random default keys */ - static uint32_t rss_key_default[] = {0x6b793944, - 0x23504cb5, 0x5bea75b6, 0x309f4f12, 0x3dc0a2b8, - 0x024ddcdf, 0x339b8ca0, 0x4c4af64a, 0x34fac605, - 0x55d85839, 0x3a58997d, 0x2ec938e1, 0x66031581}; - - rss_conf.rss_key = (uint8_t *)rss_key_default; - rss_conf.rss_key_len = (I40E_PFQF_HKEY_MAX_INDEX + 1) * - sizeof(uint32_t); - } - - return i40e_hw_rss_hash_set(hw, &rss_conf); -} - -static int -i40e_tunnel_filter_param_check(struct i40e_pf *pf, - struct rte_eth_tunnel_filter_conf *filter) -{ - if (pf == NULL || filter == NULL) { - PMD_DRV_LOG(ERR, "Invalid parameter"); - return -EINVAL; - } - - if (filter->queue_id >= pf->dev_data->nb_rx_queues) { - PMD_DRV_LOG(ERR, "Invalid queue ID"); - return -EINVAL; - } - - if (filter->inner_vlan > ETHER_MAX_VLAN_ID) { - PMD_DRV_LOG(ERR, "Invalid inner VLAN ID"); - return -EINVAL; - } - - if ((filter->filter_type & ETH_TUNNEL_FILTER_OMAC) && - (is_zero_ether_addr(filter->outer_mac))) { - PMD_DRV_LOG(ERR, "Cannot add NULL outer MAC address"); - return -EINVAL; - } - - if ((filter->filter_type & ETH_TUNNEL_FILTER_IMAC) && - (is_zero_ether_addr(filter->inner_mac))) { - PMD_DRV_LOG(ERR, "Cannot add NULL inner MAC address"); - return -EINVAL; - } - - return 0; -} - -static int -i40e_tunnel_filter_handle(struct rte_eth_dev *dev, enum rte_filter_op filter_op, - void *arg) -{ - struct rte_eth_tunnel_filter_conf *filter; - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - int ret = I40E_SUCCESS; - - filter = (struct rte_eth_tunnel_filter_conf *)(arg); - - if (i40e_tunnel_filter_param_check(pf, filter) < 0) - return I40E_ERR_PARAM; - - switch (filter_op) { - case RTE_ETH_FILTER_NOP: - if (!(pf->flags & I40E_FLAG_VXLAN)) - ret = I40E_NOT_SUPPORTED; - case RTE_ETH_FILTER_ADD: - ret = i40e_dev_tunnel_filter_set(pf, filter, 1); - break; - case RTE_ETH_FILTER_DELETE: - ret = i40e_dev_tunnel_filter_set(pf, filter, 0); - break; - default: - PMD_DRV_LOG(ERR, "unknown operation %u", filter_op); - ret = I40E_ERR_PARAM; - break; - } - - return ret; -} - -static int -i40e_pf_config_mq_rx(struct i40e_pf *pf) -{ - int ret = 0; - enum rte_eth_rx_mq_mode mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode; - - if (mq_mode & ETH_MQ_RX_DCB_FLAG) { - PMD_INIT_LOG(ERR, "i40e doesn't support DCB yet"); - return -ENOTSUP; - } - - /* RSS setup */ - if (mq_mode & ETH_MQ_RX_RSS_FLAG) - ret = i40e_pf_config_rss(pf); - else - i40e_pf_disable_rss(pf); - - return ret; -} - -/* Get the symmetric hash enable configurations per port */ -static void -i40e_get_symmetric_hash_enable_per_port(struct i40e_hw *hw, uint8_t *enable) -{ - uint32_t 
reg = I40E_READ_REG(hw, I40E_PRTQF_CTL_0);
-
-	*enable = reg & I40E_PRTQF_CTL_0_HSYM_ENA_MASK ? 1 : 0;
-}
-
-/* Set the symmetric hash enable configurations per port */
-static void
-i40e_set_symmetric_hash_enable_per_port(struct i40e_hw *hw, uint8_t enable)
-{
-	uint32_t reg = I40E_READ_REG(hw, I40E_PRTQF_CTL_0);
-
-	if (enable > 0) {
-		if (reg & I40E_PRTQF_CTL_0_HSYM_ENA_MASK) {
-			PMD_DRV_LOG(INFO, "Symmetric hash has already "
-				    "been enabled");
-			return;
-		}
-		reg |= I40E_PRTQF_CTL_0_HSYM_ENA_MASK;
-	} else {
-		if (!(reg & I40E_PRTQF_CTL_0_HSYM_ENA_MASK)) {
-			PMD_DRV_LOG(INFO, "Symmetric hash has already "
-				    "been disabled");
-			return;
-		}
-		reg &= ~I40E_PRTQF_CTL_0_HSYM_ENA_MASK;
-	}
-	I40E_WRITE_REG(hw, I40E_PRTQF_CTL_0, reg);
-	I40E_WRITE_FLUSH(hw);
-}
-
-/*
- * Get global configurations of hash function type and symmetric hash enable
- * per flow type (pctype). Note that global configuration means it affects all
- * the ports on the same NIC.
- */
-static int
-i40e_get_hash_filter_global_config(struct i40e_hw *hw,
-				   struct rte_eth_hash_global_conf *g_cfg)
-{
-	uint32_t reg, mask = I40E_FLOW_TYPES;
-	uint16_t i;
-	enum i40e_filter_pctype pctype;
-
-	memset(g_cfg, 0, sizeof(*g_cfg));
-	reg = I40E_READ_REG(hw, I40E_GLQF_CTL);
-	if (reg & I40E_GLQF_CTL_HTOEP_MASK)
-		g_cfg->hash_func = RTE_ETH_HASH_FUNCTION_TOEPLITZ;
-	else
-		g_cfg->hash_func = RTE_ETH_HASH_FUNCTION_SIMPLE_XOR;
-	PMD_DRV_LOG(DEBUG, "Hash function is %s",
-		    (reg & I40E_GLQF_CTL_HTOEP_MASK) ? "Toeplitz" : "Simple XOR");
-
-	for (i = 0; mask && i < RTE_ETH_FLOW_MAX; i++) {
-		if (!(mask & (1UL << i)))
-			continue;
-		mask &= ~(1UL << i);
-		/* A set bit indicates the corresponding flow type is supported */
-		g_cfg->valid_bit_mask[0] |= (1UL << i);
-		pctype = i40e_flowtype_to_pctype(i);
-		reg = I40E_READ_REG(hw, I40E_GLQF_HSYM(pctype));
-		if (reg & I40E_GLQF_HSYM_SYMH_ENA_MASK)
-			g_cfg->sym_hash_enable_mask[0] |= (1UL << i);
-	}
-
-	return 0;
-}
-
-static int
-i40e_hash_global_config_check(struct rte_eth_hash_global_conf *g_cfg)
-{
-	uint32_t i;
-	uint32_t mask0, i40e_mask = I40E_FLOW_TYPES;
-
-	if (g_cfg->hash_func != RTE_ETH_HASH_FUNCTION_TOEPLITZ &&
-	    g_cfg->hash_func != RTE_ETH_HASH_FUNCTION_SIMPLE_XOR &&
-	    g_cfg->hash_func != RTE_ETH_HASH_FUNCTION_DEFAULT) {
-		PMD_DRV_LOG(ERR, "Unsupported hash function type %d",
-			    g_cfg->hash_func);
-		return -EINVAL;
-	}
-
-	/*
-	 * As i40e supports fewer than 32 flow types, only the first 32 bits
-	 * need to be checked.
-	 */
-	mask0 = g_cfg->valid_bit_mask[0];
-	for (i = 0; i < RTE_SYM_HASH_MASK_ARRAY_SIZE; i++) {
-		if (i == 0) {
-			/* Check if any unsupported flow type is configured */
-			if ((mask0 | i40e_mask) ^ i40e_mask)
-				goto mask_err;
-		} else {
-			if (g_cfg->valid_bit_mask[i])
-				goto mask_err;
-		}
-	}
-
-	return 0;
-
-mask_err:
-	PMD_DRV_LOG(ERR, "i40e unsupported flow type bit(s) configured");
-
-	return -EINVAL;
-}
-
-/*
- * Set global configurations of hash function type and symmetric hash enable
- * per flow type (pctype). Note that modifying the global configuration will
- * affect all the ports on the same NIC.
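The validity test in i40e_hash_global_config_check above leans on a compact idiom: (mask | supported) ^ supported is non-zero exactly when mask sets a bit outside the supported set. A tiny self-check of the idiom (the mask values are arbitrary stand-ins for I40E_FLOW_TYPES):

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t supported = 0x0003FFFE;   /* stand-in for I40E_FLOW_TYPES */
		uint32_t ok  = 0x00000006;         /* only supported bits set */
		uint32_t bad = 0x00100000;         /* bit 20 is not supported */

		assert(((ok  | supported) ^ supported) == 0);
		assert(((bad | supported) ^ supported) != 0);
		return 0;
	}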
- */ -static int -i40e_set_hash_filter_global_config(struct i40e_hw *hw, - struct rte_eth_hash_global_conf *g_cfg) -{ - int ret; - uint16_t i; - uint32_t reg; - uint32_t mask0 = g_cfg->valid_bit_mask[0]; - enum i40e_filter_pctype pctype; - - /* Check the input parameters */ - ret = i40e_hash_global_config_check(g_cfg); - if (ret < 0) - return ret; - - for (i = 0; mask0 && i < UINT32_BIT; i++) { - if (!(mask0 & (1UL << i))) - continue; - mask0 &= ~(1UL << i); - pctype = i40e_flowtype_to_pctype(i); - reg = (g_cfg->sym_hash_enable_mask[0] & (1UL << i)) ? - I40E_GLQF_HSYM_SYMH_ENA_MASK : 0; - I40E_WRITE_REG(hw, I40E_GLQF_HSYM(pctype), reg); - } - - reg = I40E_READ_REG(hw, I40E_GLQF_CTL); - if (g_cfg->hash_func == RTE_ETH_HASH_FUNCTION_TOEPLITZ) { - /* Toeplitz */ - if (reg & I40E_GLQF_CTL_HTOEP_MASK) { - PMD_DRV_LOG(DEBUG, "Hash function already set to " - "Toeplitz"); - goto out; - } - reg |= I40E_GLQF_CTL_HTOEP_MASK; - } else if (g_cfg->hash_func == RTE_ETH_HASH_FUNCTION_SIMPLE_XOR) { - /* Simple XOR */ - if (!(reg & I40E_GLQF_CTL_HTOEP_MASK)) { - PMD_DRV_LOG(DEBUG, "Hash function already set to " - "Simple XOR"); - goto out; - } - reg &= ~I40E_GLQF_CTL_HTOEP_MASK; - } else - /* Use the default, and keep it as it is */ - goto out; - - I40E_WRITE_REG(hw, I40E_GLQF_CTL, reg); - -out: - I40E_WRITE_FLUSH(hw); - - return 0; -} - -static int -i40e_hash_filter_get(struct i40e_hw *hw, struct rte_eth_hash_filter_info *info) -{ - int ret = 0; - - if (!hw || !info) { - PMD_DRV_LOG(ERR, "Invalid pointer"); - return -EFAULT; - } - - switch (info->info_type) { - case RTE_ETH_HASH_FILTER_SYM_HASH_ENA_PER_PORT: - i40e_get_symmetric_hash_enable_per_port(hw, - &(info->info.enable)); - break; - case RTE_ETH_HASH_FILTER_GLOBAL_CONFIG: - ret = i40e_get_hash_filter_global_config(hw, - &(info->info.global_conf)); - break; - default: - PMD_DRV_LOG(ERR, "Hash filter info type (%d) not supported", - info->info_type); - ret = -EINVAL; - break; - } - - return ret; -} - -static int -i40e_hash_filter_set(struct i40e_hw *hw, struct rte_eth_hash_filter_info *info) -{ - int ret = 0; - - if (!hw || !info) { - PMD_DRV_LOG(ERR, "Invalid pointer"); - return -EFAULT; - } - - switch (info->info_type) { - case RTE_ETH_HASH_FILTER_SYM_HASH_ENA_PER_PORT: - i40e_set_symmetric_hash_enable_per_port(hw, info->info.enable); - break; - case RTE_ETH_HASH_FILTER_GLOBAL_CONFIG: - ret = i40e_set_hash_filter_global_config(hw, - &(info->info.global_conf)); - break; - default: - PMD_DRV_LOG(ERR, "Hash filter info type (%d) not supported", - info->info_type); - ret = -EINVAL; - break; - } - - return ret; -} - -/* Operations for hash function */ -static int -i40e_hash_filter_ctrl(struct rte_eth_dev *dev, - enum rte_filter_op filter_op, - void *arg) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - int ret = 0; - - switch (filter_op) { - case RTE_ETH_FILTER_NOP: - break; - case RTE_ETH_FILTER_GET: - ret = i40e_hash_filter_get(hw, - (struct rte_eth_hash_filter_info *)arg); - break; - case RTE_ETH_FILTER_SET: - ret = i40e_hash_filter_set(hw, - (struct rte_eth_hash_filter_info *)arg); - break; - default: - PMD_DRV_LOG(WARNING, "Filter operation (%d) not supported", - filter_op); - ret = -ENOTSUP; - break; - } - - return ret; -} - -/* - * Configure ethertype filter, which can director packet by filtering - * with mac address and ether_type or only ether_type - */ -static int -i40e_ethertype_filter_set(struct i40e_pf *pf, - struct rte_eth_ethertype_filter *filter, - bool add) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - 
struct i40e_control_filter_stats stats; - uint16_t flags = 0; - int ret; - - if (filter->queue >= pf->dev_data->nb_rx_queues) { - PMD_DRV_LOG(ERR, "Invalid queue ID"); - return -EINVAL; - } - if (filter->ether_type == ETHER_TYPE_IPv4 || - filter->ether_type == ETHER_TYPE_IPv6) { - PMD_DRV_LOG(ERR, "unsupported ether_type(0x%04x) in" - " control packet filter.", filter->ether_type); - return -EINVAL; - } - if (filter->ether_type == ETHER_TYPE_VLAN) - PMD_DRV_LOG(WARNING, "filter vlan ether_type in first tag is" - " not supported."); - - if (!(filter->flags & RTE_ETHTYPE_FLAGS_MAC)) - flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_IGNORE_MAC; - if (filter->flags & RTE_ETHTYPE_FLAGS_DROP) - flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_DROP; - flags |= I40E_AQC_ADD_CONTROL_PACKET_FLAGS_TO_QUEUE; - - memset(&stats, 0, sizeof(stats)); - ret = i40e_aq_add_rem_control_packet_filter(hw, - filter->mac_addr.addr_bytes, - filter->ether_type, flags, - pf->main_vsi->seid, - filter->queue, add, &stats, NULL); - - PMD_DRV_LOG(INFO, "add/rem control packet filter, return %d," - " mac_etype_used = %u, etype_used = %u," - " mac_etype_free = %u, etype_free = %u\n", - ret, stats.mac_etype_used, stats.etype_used, - stats.mac_etype_free, stats.etype_free); - if (ret < 0) - return -ENOSYS; - return 0; -} - -/* - * Handle operations for ethertype filter. - */ -static int -i40e_ethertype_filter_handle(struct rte_eth_dev *dev, - enum rte_filter_op filter_op, - void *arg) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - int ret = 0; - - if (filter_op == RTE_ETH_FILTER_NOP) - return ret; - - if (arg == NULL) { - PMD_DRV_LOG(ERR, "arg shouldn't be NULL for operation %u", - filter_op); - return -EINVAL; - } - - switch (filter_op) { - case RTE_ETH_FILTER_ADD: - ret = i40e_ethertype_filter_set(pf, - (struct rte_eth_ethertype_filter *)arg, - TRUE); - break; - case RTE_ETH_FILTER_DELETE: - ret = i40e_ethertype_filter_set(pf, - (struct rte_eth_ethertype_filter *)arg, - FALSE); - break; - default: - PMD_DRV_LOG(ERR, "unsupported operation %u\n", filter_op); - ret = -ENOSYS; - break; - } - return ret; -} - -static int -i40e_dev_filter_ctrl(struct rte_eth_dev *dev, - enum rte_filter_type filter_type, - enum rte_filter_op filter_op, - void *arg) -{ - int ret = 0; - - if (dev == NULL) - return -EINVAL; - - switch (filter_type) { - case RTE_ETH_FILTER_HASH: - ret = i40e_hash_filter_ctrl(dev, filter_op, arg); - break; - case RTE_ETH_FILTER_MACVLAN: - ret = i40e_mac_filter_handle(dev, filter_op, arg); - break; - case RTE_ETH_FILTER_ETHERTYPE: - ret = i40e_ethertype_filter_handle(dev, filter_op, arg); - break; - case RTE_ETH_FILTER_TUNNEL: - ret = i40e_tunnel_filter_handle(dev, filter_op, arg); - break; - case RTE_ETH_FILTER_FDIR: - ret = i40e_fdir_ctrl_func(dev, filter_op, arg); - break; - default: - PMD_DRV_LOG(WARNING, "Filter type (%d) not supported", - filter_type); - ret = -EINVAL; - break; - } - - return ret; -} - -/* - * As some registers wouldn't be reset unless a global hardware reset, - * hardware initialization is needed to put those registers into an - * expected initial state. 
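The ethertype path above translates the generic RTE_ETHTYPE_FLAGS_* bits into admin-queue flags: a filter without the MAC flag becomes "ignore MAC", the drop flag maps straight through, and steering to a queue is always requested. A reduced model of that translation (the flag values below are made up; the real encodings live in i40e_adminq_cmd.h):

	#include <assert.h>
	#include <stdint.h>

	#define FLAGS_MAC     0x0001   /* match MAC address as well as ethertype */
	#define FLAGS_DROP    0x0002   /* drop rather than steer */

	#define AQ_IGNORE_MAC 0x0001
	#define AQ_DROP       0x0002
	#define AQ_TO_QUEUE   0x0004

	static uint16_t xlate(uint16_t f)
	{
		uint16_t aq = AQ_TO_QUEUE;     /* always request queue steering */

		if (!(f & FLAGS_MAC))
			aq |= AQ_IGNORE_MAC;   /* ethertype-only match */
		if (f & FLAGS_DROP)
			aq |= AQ_DROP;
		return aq;
	}

	int main(void)
	{
		assert(xlate(0) == (AQ_TO_QUEUE | AQ_IGNORE_MAC));
		assert(xlate(FLAGS_MAC | FLAGS_DROP) == (AQ_TO_QUEUE | AQ_DROP));
		return 0;
	}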
- */ -static void -i40e_hw_init(struct i40e_hw *hw) -{ - /* clear the PF Queue Filter control register */ - I40E_WRITE_REG(hw, I40E_PFQF_CTL_0, 0); - - /* Disable symmetric hash per port */ - i40e_set_symmetric_hash_enable_per_port(hw, 0); -} - -enum i40e_filter_pctype -i40e_flowtype_to_pctype(uint16_t flow_type) -{ - static const enum i40e_filter_pctype pctype_table[] = { - [RTE_ETH_FLOW_FRAG_IPV4] = I40E_FILTER_PCTYPE_FRAG_IPV4, - [RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = - I40E_FILTER_PCTYPE_NONF_IPV4_UDP, - [RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = - I40E_FILTER_PCTYPE_NONF_IPV4_TCP, - [RTE_ETH_FLOW_NONFRAG_IPV4_SCTP] = - I40E_FILTER_PCTYPE_NONF_IPV4_SCTP, - [RTE_ETH_FLOW_NONFRAG_IPV4_OTHER] = - I40E_FILTER_PCTYPE_NONF_IPV4_OTHER, - [RTE_ETH_FLOW_FRAG_IPV6] = I40E_FILTER_PCTYPE_FRAG_IPV6, - [RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = - I40E_FILTER_PCTYPE_NONF_IPV6_UDP, - [RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = - I40E_FILTER_PCTYPE_NONF_IPV6_TCP, - [RTE_ETH_FLOW_NONFRAG_IPV6_SCTP] = - I40E_FILTER_PCTYPE_NONF_IPV6_SCTP, - [RTE_ETH_FLOW_NONFRAG_IPV6_OTHER] = - I40E_FILTER_PCTYPE_NONF_IPV6_OTHER, - [RTE_ETH_FLOW_L2_PAYLOAD] = I40E_FILTER_PCTYPE_L2_PAYLOAD, - }; - - return pctype_table[flow_type]; -} - -uint16_t -i40e_pctype_to_flowtype(enum i40e_filter_pctype pctype) -{ - static const uint16_t flowtype_table[] = { - [I40E_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_FLOW_FRAG_IPV4, - [I40E_FILTER_PCTYPE_NONF_IPV4_UDP] = - RTE_ETH_FLOW_NONFRAG_IPV4_UDP, - [I40E_FILTER_PCTYPE_NONF_IPV4_TCP] = - RTE_ETH_FLOW_NONFRAG_IPV4_TCP, - [I40E_FILTER_PCTYPE_NONF_IPV4_SCTP] = - RTE_ETH_FLOW_NONFRAG_IPV4_SCTP, - [I40E_FILTER_PCTYPE_NONF_IPV4_OTHER] = - RTE_ETH_FLOW_NONFRAG_IPV4_OTHER, - [I40E_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_FLOW_FRAG_IPV6, - [I40E_FILTER_PCTYPE_NONF_IPV6_UDP] = - RTE_ETH_FLOW_NONFRAG_IPV6_UDP, - [I40E_FILTER_PCTYPE_NONF_IPV6_TCP] = - RTE_ETH_FLOW_NONFRAG_IPV6_TCP, - [I40E_FILTER_PCTYPE_NONF_IPV6_SCTP] = - RTE_ETH_FLOW_NONFRAG_IPV6_SCTP, - [I40E_FILTER_PCTYPE_NONF_IPV6_OTHER] = - RTE_ETH_FLOW_NONFRAG_IPV6_OTHER, - [I40E_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_FLOW_L2_PAYLOAD, - }; - - return flowtype_table[pctype]; -} - -static int -i40e_debug_read_register(struct i40e_hw *hw, uint32_t addr, uint64_t *val) -{ - struct i40e_aq_desc desc; - enum i40e_status_code status; - - i40e_fill_default_direct_cmd_desc(&desc, i40e_aqc_opc_debug_read_reg); - desc.params.internal.param1 = rte_cpu_to_le_32(addr); - status = i40e_asq_send_command(hw, &desc, NULL, 0, NULL); - if (status < 0) - return status; - - *val = ((uint64_t)(rte_le_to_cpu_32(desc.params.internal.param2)) << - (CHAR_BIT * sizeof(uint32_t))) + - rte_le_to_cpu_32(desc.params.internal.param3); - - return status; -} - -/* - * On X710, performance number is far from the expectation on recent firmware - * versions; on XL710, performance number is also far from the expectation on - * recent firmware versions, if promiscuous mode is disabled, or promiscuous - * mode is enabled and port MAC address is equal to the packet destination MAC - * address. The fix for this issue may not be integrated in the following - * firmware version. So the workaround in software driver is needed. It needs - * to modify the initial values of 3 internal only registers for both X710 and - * XL710. Note that the values for X710 or XL710 could be different, and the - * workaround can be removed when it is fixed in firmware in the future. 
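i40e_debug_read_register above rebuilds a 64-bit register value from the two little-endian 32-bit words in the descriptor, param2 supplying the high half and param3 the low half. A short sanity check of that recombination:

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t hi = 0x12345678, lo = 0x9ABCDEF0;  /* param2, param3 */
		uint64_t val = ((uint64_t)hi << 32) + lo;   /* CHAR_BIT * sizeof(uint32_t) == 32 */

		assert(val == 0x123456789ABCDEF0ULL);
		return 0;
	}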
- */ - -/* For both X710 and XL710 */ -#define I40E_GL_SWR_PRI_JOIN_MAP_0_VALUE 0x10000200 -#define I40E_GL_SWR_PRI_JOIN_MAP_0 0x26CE00 - -#define I40E_GL_SWR_PRI_JOIN_MAP_2_VALUE 0x011f0200 -#define I40E_GL_SWR_PRI_JOIN_MAP_2 0x26CE08 - -/* For X710 */ -#define I40E_GL_SWR_PM_UP_THR_EF_VALUE 0x03030303 -/* For XL710 */ -#define I40E_GL_SWR_PM_UP_THR_SF_VALUE 0x06060606 -#define I40E_GL_SWR_PM_UP_THR 0x269FBC - -static void -i40e_configure_registers(struct i40e_hw *hw) -{ - static struct { - uint32_t addr; - uint64_t val; - } reg_table[] = { - {I40E_GL_SWR_PRI_JOIN_MAP_0, I40E_GL_SWR_PRI_JOIN_MAP_0_VALUE}, - {I40E_GL_SWR_PRI_JOIN_MAP_2, I40E_GL_SWR_PRI_JOIN_MAP_2_VALUE}, - {I40E_GL_SWR_PM_UP_THR, 0}, /* Compute value dynamically */ - }; - uint64_t reg; - uint32_t i; - int ret; - - for (i = 0; i < RTE_DIM(reg_table); i++) { - if (reg_table[i].addr == I40E_GL_SWR_PM_UP_THR) { - if (i40e_is_40G_device(hw->device_id)) /* For XL710 */ - reg_table[i].val = - I40E_GL_SWR_PM_UP_THR_SF_VALUE; - else /* For X710 */ - reg_table[i].val = - I40E_GL_SWR_PM_UP_THR_EF_VALUE; - } - - ret = i40e_debug_read_register(hw, reg_table[i].addr, ®); - if (ret < 0) { - PMD_DRV_LOG(ERR, "Failed to read from 0x%"PRIx32, - reg_table[i].addr); - break; - } - PMD_DRV_LOG(DEBUG, "Read from 0x%"PRIx32": 0x%"PRIx64, - reg_table[i].addr, reg); - if (reg == reg_table[i].val) - continue; - - ret = i40e_aq_debug_write_register(hw, reg_table[i].addr, - reg_table[i].val, NULL); - if (ret < 0) { - PMD_DRV_LOG(ERR, "Failed to write 0x%"PRIx64" to the " - "address of 0x%"PRIx32, reg_table[i].val, - reg_table[i].addr); - break; - } - PMD_DRV_LOG(DEBUG, "Write 0x%"PRIx64" to the address of " - "0x%"PRIx32, reg_table[i].val, reg_table[i].addr); - } -} diff --git a/lib/librte_pmd_i40e/i40e_ethdev.h b/lib/librte_pmd_i40e/i40e_ethdev.h deleted file mode 100644 index b9bed5a..0000000 --- a/lib/librte_pmd_i40e/i40e_ethdev.h +++ /dev/null @@ -1,567 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef _I40E_ETHDEV_H_ -#define _I40E_ETHDEV_H_ - -#include - -#define I40E_AQ_LEN 32 -#define I40E_AQ_BUF_SZ 4096 -/* Number of queues per TC should be one of 1, 2, 4, 8, 16, 32, 64 */ -#define I40E_MAX_Q_PER_TC 64 -#define I40E_NUM_DESC_DEFAULT 512 -#define I40E_NUM_DESC_ALIGN 32 -#define I40E_BUF_SIZE_MIN 1024 -#define I40E_FRAME_SIZE_MAX 9728 -#define I40E_QUEUE_BASE_ADDR_UNIT 128 -/* number of VSIs and queue default setting */ -#define I40E_MAX_QP_NUM_PER_VF 16 -#define I40E_DEFAULT_QP_NUM_FDIR 1 -#define I40E_UINT32_BIT_SIZE (CHAR_BIT * sizeof(uint32_t)) -#define I40E_VFTA_SIZE (4096 / I40E_UINT32_BIT_SIZE) -/* - * vlan_id is a 12 bit number. - * The VFTA array is actually a 4096 bit array, 128 of 32bit elements. - * 2^5 = 32. The val of lower 5 bits specifies the bit in the 32bit element. - * The higher 7 bit val specifies VFTA array index. - */ -#define I40E_VFTA_BIT(vlan_id) (1 << ((vlan_id) & 0x1F)) -#define I40E_VFTA_IDX(vlan_id) ((vlan_id) >> 5) - -/* Default TC traffic in case DCB is not enabled */ -#define I40E_DEFAULT_TCMAP 0x1 -#define I40E_FDIR_QUEUE_ID 0 - -/* Always assign pool 0 to main VSI, VMDQ will start from 1 */ -#define I40E_VMDQ_POOL_BASE 1 - -#define I40E_DEFAULT_RX_FREE_THRESH 32 -#define I40E_DEFAULT_RX_PTHRESH 8 -#define I40E_DEFAULT_RX_HTHRESH 8 -#define I40E_DEFAULT_RX_WTHRESH 0 - -#define I40E_DEFAULT_TX_FREE_THRESH 32 -#define I40E_DEFAULT_TX_PTHRESH 32 -#define I40E_DEFAULT_TX_HTHRESH 0 -#define I40E_DEFAULT_TX_WTHRESH 0 -#define I40E_DEFAULT_TX_RSBIT_THRESH 32 - -/* Bit shift and mask */ -#define I40E_4_BIT_WIDTH (CHAR_BIT / 2) -#define I40E_4_BIT_MASK RTE_LEN2MASK(I40E_4_BIT_WIDTH, uint8_t) -#define I40E_8_BIT_WIDTH CHAR_BIT -#define I40E_8_BIT_MASK UINT8_MAX -#define I40E_16_BIT_WIDTH (CHAR_BIT * 2) -#define I40E_16_BIT_MASK UINT16_MAX -#define I40E_32_BIT_WIDTH (CHAR_BIT * 4) -#define I40E_32_BIT_MASK UINT32_MAX -#define I40E_48_BIT_WIDTH (CHAR_BIT * 6) -#define I40E_48_BIT_MASK RTE_LEN2MASK(I40E_48_BIT_WIDTH, uint64_t) - -/* index flex payload per layer */ -enum i40e_flxpld_layer_idx { - I40E_FLXPLD_L2_IDX = 0, - I40E_FLXPLD_L3_IDX = 1, - I40E_FLXPLD_L4_IDX = 2, - I40E_MAX_FLXPLD_LAYER = 3, -}; -#define I40E_MAX_FLXPLD_FIED 3 /* max number of flex payload fields */ -#define I40E_FDIR_BITMASK_NUM_WORD 2 /* max number of bitmask words */ -#define I40E_FDIR_MAX_FLEXWORD_NUM 8 /* max number of flexpayload words */ -#define I40E_FDIR_MAX_FLEX_LEN 16 /* len in bytes of flex payload */ - -/* i40e flags */ -#define I40E_FLAG_RSS (1ULL << 0) -#define I40E_FLAG_DCB (1ULL << 1) -#define I40E_FLAG_VMDQ (1ULL << 2) -#define I40E_FLAG_SRIOV (1ULL << 3) -#define I40E_FLAG_HEADER_SPLIT_DISABLED (1ULL << 4) -#define I40E_FLAG_HEADER_SPLIT_ENABLED (1ULL << 5) -#define I40E_FLAG_FDIR (1ULL << 6) -#define I40E_FLAG_VXLAN (1ULL << 7) -#define I40E_FLAG_ALL (I40E_FLAG_RSS | \ - I40E_FLAG_DCB | \ - I40E_FLAG_VMDQ | \ - I40E_FLAG_SRIOV | \ - I40E_FLAG_HEADER_SPLIT_DISABLED | \ - I40E_FLAG_HEADER_SPLIT_ENABLED | \ - I40E_FLAG_FDIR | 
\
-			I40E_FLAG_VXLAN)
-
-#define I40E_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
-
-/* All bits of RSS hash enable */
-#define I40E_RSS_HENA_ALL ( \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_UDP) | \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_TCP) | \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_SCTP) | \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) | \
-	(1ULL << I40E_FILTER_PCTYPE_FRAG_IPV4) | \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_UDP) | \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_TCP) | \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_SCTP) | \
-	(1ULL << I40E_FILTER_PCTYPE_NONF_IPV6_OTHER) | \
-	(1ULL << I40E_FILTER_PCTYPE_FRAG_IPV6) | \
-	(1ULL << I40E_FILTER_PCTYPE_FCOE_OX) | \
-	(1ULL << I40E_FILTER_PCTYPE_FCOE_RX) | \
-	(1ULL << I40E_FILTER_PCTYPE_FCOE_OTHER) | \
-	(1ULL << I40E_FILTER_PCTYPE_L2_PAYLOAD))
-
-struct i40e_adapter;
-
-/**
- * MAC filter structure
- */
-struct i40e_mac_filter_info {
-	enum rte_mac_filter_type filter_type;
-	struct ether_addr mac_addr;
-};
-
-TAILQ_HEAD(i40e_mac_filter_list, i40e_mac_filter);
-
-/* MAC filter list structure */
-struct i40e_mac_filter {
-	TAILQ_ENTRY(i40e_mac_filter) next;
-	struct i40e_mac_filter_info mac_info;
-};
-
-TAILQ_HEAD(i40e_vsi_list_head, i40e_vsi_list);
-
-struct i40e_vsi;
-
-/* VSI list structure */
-struct i40e_vsi_list {
-	TAILQ_ENTRY(i40e_vsi_list) list;
-	struct i40e_vsi *vsi;
-};
-
-struct i40e_rx_queue;
-struct i40e_tx_queue;
-
-/* Structure that defines a VEB */
-struct i40e_veb {
-	struct i40e_vsi_list_head head;
-	struct i40e_vsi *associate_vsi; /* Associated VSI that owns the VEB */
-	uint16_t seid;        /* The seid of the VEB itself */
-	uint16_t uplink_seid; /* The uplink seid of this VEB */
-	uint16_t stats_idx;
-	struct i40e_eth_stats stats;
-};
-
-/* i40e MACVLAN filter structure */
-struct i40e_macvlan_filter {
-	struct ether_addr macaddr;
-	enum rte_mac_filter_type filter_type;
-	uint16_t vlan_id;
-};
-
-/*
- * Structure that defines a VSI, associated with an adapter.
- */
-struct i40e_vsi {
-	struct i40e_adapter *adapter; /* Backreference to associated adapter */
-	struct i40e_aqc_vsi_properties_data info; /* VSI properties */
-
-	struct i40e_eth_stats eth_stats_offset;
-	struct i40e_eth_stats eth_stats;
-	/*
-	 * When the driver loads, only a default main VSI exists. When a new
-	 * VSI needs to be added, the HW needs to know how the VSIs are
-	 * organized. Moreover, a VSI is an element that cannot itself switch
-	 * packets, so a new VEB component has to be added to perform the
-	 * switching. A new VSI therefore needs to specify its uplink VSI
-	 * (parent VSI) before it is created. The uplink VSI checks whether it
-	 * already has a VEB to switch packets; if not, it tries to create one.
-	 * The uplink VSI then moves the new VSI into its sib_vsi_list to
-	 * manage all the downlink VSIs.
-	 *  sib_vsi_list: the VSI list that shares the same uplink VSI.
-	 *  parent_vsi : the uplink VSI. It's NULL for the main VSI.
-	 *  veb : the VEB associated with the VSI.
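As a rough model of the hierarchy just described, attaching a VSI under an uplink amounts to lazily creating the uplink's VEB and linking the new node into the sibling list. A sketch with all types pared down to bare bookkeeping (names are illustrative, not the driver's):

	#include <stdlib.h>

	struct veb;

	struct vsi {
		struct vsi *parent;    /* uplink VSI; NULL for the main VSI */
		struct vsi *next_sib;  /* links siblings sharing one uplink */
		struct veb *veb;       /* switching element, created on demand */
	};

	struct veb {
		struct vsi *owner;     /* the VSI this VEB hangs off */
		struct vsi *downlinks; /* head of the sibling list */
	};

	static int vsi_attach(struct vsi *uplink, struct vsi *child)
	{
		if (uplink->veb == NULL) {     /* first downlink: make a VEB */
			uplink->veb = calloc(1, sizeof(*uplink->veb));
			if (uplink->veb == NULL)
				return -1;
			uplink->veb->owner = uplink;
		}
		child->parent = uplink;
		child->next_sib = uplink->veb->downlinks;
		uplink->veb->downlinks = child; /* push onto sibling list */
		return 0;
	}

	int main(void)
	{
		struct vsi main_vsi = {0}, vmdq = {0};

		return vsi_attach(&main_vsi, &vmdq);
	}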
- */
-	struct i40e_vsi_list sib_vsi_list; /* sibling vsi list */
-	struct i40e_vsi *parent_vsi;
-	struct i40e_veb *veb;    /* Associated veb, could be null */
-	bool offset_loaded;
-	enum i40e_vsi_type type; /* VSI types */
-	uint16_t vlan_num;       /* Total VLAN number */
-	uint16_t mac_num;        /* Total mac number */
-	uint32_t vfta[I40E_VFTA_SIZE];        /* VLAN bitmap */
-	struct i40e_mac_filter_list mac_list; /* macvlan filter list */
-	/* specific VSI-defined parameters; SRIOV stores the vf_id here */
-	uint32_t user_param;
-	uint16_t seid;          /* The seid of the VSI itself */
-	uint16_t uplink_seid;   /* The uplink seid of this VSI */
-	uint16_t nb_qps;        /* Number of queue pairs the VSI can occupy */
-	uint16_t max_macaddrs;  /* Maximum number of MAC addresses */
-	uint16_t base_queue;    /* The first queue index of this VSI */
-	/*
-	 * The offset used to access VSI-related registers, assigned by HW
-	 * when creating the VSI
-	 */
-	uint16_t vsi_id;
-	uint16_t msix_intr; /* The MSIX interrupt bound to the VSI */
-	uint8_t enabled_tc; /* The traffic classes enabled */
-};
-
-struct pool_entry {
-	LIST_ENTRY(pool_entry) next;
-	uint16_t base;
-	uint16_t len;
-};
-
-LIST_HEAD(res_list, pool_entry);
-
-struct i40e_res_pool_info {
-	uint32_t base;              /* Resource start index */
-	uint32_t num_alloc;         /* Allocated resource number */
-	uint32_t num_free;          /* Total available resource number */
-	struct res_list alloc_list; /* Allocated resource list */
-	struct res_list free_list;  /* Available resource list */
-};
-
-enum I40E_VF_STATE {
-	I40E_VF_INACTIVE = 0,
-	I40E_VF_INRESET,
-	I40E_VF_ININIT,
-	I40E_VF_ACTIVE,
-};
-
-/*
- * Structure to store private data for PF host.
- */
-struct i40e_pf_vf {
-	struct i40e_pf *pf;
-	struct i40e_vsi *vsi;
-	enum I40E_VF_STATE state; /* The current VF state */
-	uint16_t vf_idx;      /* VF index in pf->vfs */
-	uint16_t lan_nb_qps;  /* Actual queues allocated */
-	uint16_t reset_cnt;   /* Total vf reset times */
-};
-
-/*
- * Structure to store private data for VMDQ instance
- */
-struct i40e_vmdq_info {
-	struct i40e_pf *pf;
-	struct i40e_vsi *vsi;
-};
-
-/*
- * Structure to store flex pit for flow director.
- */
-struct i40e_fdir_flex_pit {
-	uint8_t src_offset; /* offset in words from the beginning of payload */
-	uint8_t size;       /* size in words */
-	uint8_t dst_offset; /* offset in words of flexible payload */
-};
-
-struct i40e_fdir_flex_mask {
-	uint8_t word_mask;  /**< Bit i enables word i of flexible payload */
-	struct {
-		uint8_t offset;
-		uint16_t mask;
-	} bitmask[I40E_FDIR_BITMASK_NUM_WORD];
-};
-
-#define I40E_FILTER_PCTYPE_MAX 64
-/*
- * A structure used to define the fields of FDIR related info.
- */
-struct i40e_fdir_info {
-	struct i40e_vsi *fdir_vsi;    /* pointer to fdir VSI structure */
-	uint16_t match_counter_index; /* Statistic counter index used for fdir */
-	struct i40e_tx_queue *txq;
-	struct i40e_rx_queue *rxq;
-	void *prg_pkt;     /* memory for fdir program packet */
-	uint64_t dma_addr; /* physical address of packet memory */
-	/*
-	 * The rule describing how the byte stream is extracted as flexible
-	 * payload; for each payload layer, the setting can have up to three
-	 * elements.
-	 */
-	struct i40e_fdir_flex_pit flex_set[I40E_MAX_FLXPLD_LAYER * I40E_MAX_FLXPLD_FIED];
-	struct i40e_fdir_flex_mask flex_mask[I40E_FILTER_PCTYPE_MAX];
-};
-
-/*
- * Structure to store private data specific to a PF instance.
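The flex PIT entries above are all expressed in 16-bit words: where the field starts in the raw payload (src_offset), how many words it covers (size), and where it lands in the flexible-payload area (dst_offset). A sketch of the extraction those three numbers describe (buffer management omitted):

	#include <stdint.h>
	#include <string.h>

	struct flex_pit {
		uint8_t src_offset; /* words from the start of the payload */
		uint8_t size;       /* words to copy */
		uint8_t dst_offset; /* words into the flex payload area */
	};

	static void apply_pit(const struct flex_pit *p,
			      const uint16_t *payload, uint16_t *flex)
	{
		/* All three fields are word-granular, hence the uint16_t units. */
		memcpy(&flex[p->dst_offset], &payload[p->src_offset],
		       (size_t)p->size * sizeof(uint16_t));
	}

	int main(void)
	{
		uint16_t payload[8] = {1, 2, 3, 4, 5, 6, 7, 8}, flex[8] = {0};
		struct flex_pit pit = {2, 3, 0}; /* copy words 2..4 to flex[0..2] */

		apply_pit(&pit, payload, flex);  /* flex now starts 3, 4, 5 */
		return 0;
	}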
- */
-struct i40e_pf {
-	struct i40e_adapter *adapter; /* The adapter this PF is associated with */
-	struct i40e_vsi *main_vsi;    /* pointer to main VSI structure */
-	uint16_t mac_seid;            /* The seid of the MAC of this PF */
-	uint16_t main_vsi_seid;       /* The seid of the main VSI */
-	uint16_t max_num_vsi;
-	struct i40e_res_pool_info qp_pool;   /* Queue pair pool */
-	struct i40e_res_pool_info msix_pool; /* MSIX interrupt pool */
-
-	struct i40e_hw_port_stats stats_offset;
-	struct i40e_hw_port_stats stats;
-	bool offset_loaded;
-
-	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
-	struct ether_addr dev_addr;        /* PF device mac address */
-	uint64_t flags;                    /* PF feature flags */
-	/* All kinds of queue pair settings for different VSIs */
-	struct i40e_pf_vf *vfs;
-	uint16_t vf_num;
-	/*
-	 * Each of the queue pair counts below should be a power of 2, since
-	 * that is a precondition once the TC configuration is applied.
-	 */
-	uint16_t lan_nb_qps;  /* The number of queue pairs of LAN */
-	uint16_t vmdq_nb_qps; /* The number of queue pairs of VMDq */
-	uint16_t vf_nb_qps;   /* The number of queue pairs of VF */
-	uint16_t fdir_nb_qps; /* The number of queue pairs of Flow Director */
-	uint16_t hash_lut_size; /* The size of the hash lookup table */
-	/* store VXLAN UDP ports */
-	uint16_t vxlan_ports[I40E_MAX_PF_UDP_OFFLOAD_PORTS];
-	uint16_t vxlan_bitmap; /* VXLAN bitmap */
-
-	/* VMDQ related info */
-	uint16_t max_nb_vmdq_vsi; /* Max number of VMDQ VSIs supported */
-	uint16_t nb_cfg_vmdq_vsi; /* number of VMDQ VSIs configured */
-	struct i40e_vmdq_info *vmdq;
-
-	struct i40e_fdir_info fdir; /* flow director info */
-};
-
-enum pending_msg {
-	PFMSG_LINK_CHANGE = 0x1,
-	PFMSG_RESET_IMPENDING = 0x2,
-	PFMSG_DRIVER_CLOSE = 0x4,
-};
-
-struct i40e_vsi_vlan_pvid_info {
-	uint16_t on; /* Enable or disable pvid */
-	union {
-		uint16_t pvid; /* Valid when 'on' is set; the pvid to program */
-		struct {
-			/*
-			 * Valid when 'on' is cleared. 'tagged' will reject
-			 * tagged packets, while 'untagged' will reject
-			 * untagged packets.
-			 */
-			uint8_t tagged;
-			uint8_t untagged;
-		} reject;
-	} config;
-};
-
-struct i40e_vf_rx_queues {
-	uint64_t rx_dma_addr;
-	uint32_t rx_ring_len;
-	uint32_t buff_size;
-};
-
-struct i40e_vf_tx_queues {
-	uint64_t tx_dma_addr;
-	uint32_t tx_ring_len;
-};
-
-/*
- * Structure to store private data specific to a VF instance.
- */
-struct i40e_vf {
-	struct i40e_adapter *adapter; /* The adapter this VF is associated with */
-	struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
-	uint16_t num_queue_pairs;
-	uint16_t max_pkt_len; /* Maximum packet length */
-	bool promisc_unicast_enabled;
-	bool promisc_multicast_enabled;
-
-	uint32_t version_major; /* Major version number */
-	uint32_t version_minor; /* Minor version number */
-	uint16_t promisc_flags; /* Promiscuous setting */
-	uint32_t vlan[I40E_VFTA_SIZE]; /* VLAN bit map */
-
-	/* Event from pf */
-	bool dev_closed;
-	bool link_up;
-	bool vf_reset;
-	volatile uint32_t pend_cmd; /* pending command not finished yet */
-	u16 pend_msg; /* flags indicating events from the PF not yet handled */
-
-	/* VSI info */
-	struct i40e_virtchnl_vf_resource *vf_res;   /* All VSIs */
-	struct i40e_virtchnl_vsi_resource *vsi_res; /* LAN VSI */
-	struct i40e_vsi vsi;
-};
-
-/*
- * Structure to store private data for each PF/VF instance.
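The adapter defined next keeps the PF- and VF-specific blocks in an anonymous union behind the shared hw handle, and i40e_get_vsi_from_adapter() picks the live arm from the MAC type. A compressed model of that pattern (field types shrunk to ints for brevity):

	#include <stdio.h>

	enum mac_type { MAC_PF, MAC_VF };

	struct hw { enum mac_type type; };

	struct adapter {
		struct hw hw;   /* common to both personalities */
		union {         /* only one arm is ever live */
			struct { int main_vsi; } pf;
			struct { int vsi; } vf;
		};
	};

	static int *get_vsi(struct adapter *a)
	{
		/* Mirrors i40e_get_vsi_from_adapter(): dispatch on MAC type. */
		return (a->hw.type == MAC_VF) ? &a->vf.vsi : &a->pf.main_vsi;
	}

	int main(void)
	{
		struct adapter a = { .hw = { MAC_PF } };

		*get_vsi(&a) = 7;
		printf("%d\n", a.pf.main_vsi); /* prints: 7 */
		return 0;
	}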
- */ -struct i40e_adapter { - /* Common for both PF and VF */ - struct i40e_hw hw; - struct rte_eth_dev *eth_dev; - - /* Specific for PF or VF */ - union { - struct i40e_pf pf; - struct i40e_vf vf; - }; -}; - -int i40e_dev_switch_queues(struct i40e_pf *pf, bool on); -int i40e_vsi_release(struct i40e_vsi *vsi); -struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf, - enum i40e_vsi_type type, - struct i40e_vsi *uplink_vsi, - uint16_t user_param); -int i40e_switch_rx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on); -int i40e_switch_tx_queue(struct i40e_hw *hw, uint16_t q_idx, bool on); -int i40e_vsi_add_vlan(struct i40e_vsi *vsi, uint16_t vlan); -int i40e_vsi_delete_vlan(struct i40e_vsi *vsi, uint16_t vlan); -int i40e_vsi_add_mac(struct i40e_vsi *vsi, struct i40e_mac_filter_info *filter); -int i40e_vsi_delete_mac(struct i40e_vsi *vsi, struct ether_addr *addr); -void i40e_update_vsi_stats(struct i40e_vsi *vsi); -void i40e_pf_disable_irq0(struct i40e_hw *hw); -void i40e_pf_enable_irq0(struct i40e_hw *hw); -int i40e_dev_link_update(struct rte_eth_dev *dev, - __rte_unused int wait_to_complete); -void i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi); -void i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi); -int i40e_vsi_vlan_pvid_set(struct i40e_vsi *vsi, - struct i40e_vsi_vlan_pvid_info *info); -int i40e_vsi_config_vlan_stripping(struct i40e_vsi *vsi, bool on); -uint64_t i40e_config_hena(uint64_t flags); -uint64_t i40e_parse_hena(uint64_t flags); -enum i40e_status_code i40e_fdir_setup_tx_resources(struct i40e_pf *pf); -enum i40e_status_code i40e_fdir_setup_rx_resources(struct i40e_pf *pf); -int i40e_fdir_setup(struct i40e_pf *pf); -const struct rte_memzone *i40e_memzone_reserve(const char *name, - uint32_t len, - int socket_id); -int i40e_fdir_configure(struct rte_eth_dev *dev); -void i40e_fdir_teardown(struct i40e_pf *pf); -enum i40e_filter_pctype i40e_flowtype_to_pctype(uint16_t flow_type); -uint16_t i40e_pctype_to_flowtype(enum i40e_filter_pctype pctype); -int i40e_fdir_ctrl_func(struct rte_eth_dev *dev, - enum rte_filter_op filter_op, - void *arg); - -/* I40E_DEV_PRIVATE_TO */ -#define I40E_DEV_PRIVATE_TO_PF(adapter) \ - (&((struct i40e_adapter *)adapter)->pf) -#define I40E_DEV_PRIVATE_TO_HW(adapter) \ - (&((struct i40e_adapter *)adapter)->hw) -#define I40E_DEV_PRIVATE_TO_ADAPTER(adapter) \ - ((struct i40e_adapter *)adapter) - -/* I40EVF_DEV_PRIVATE_TO */ -#define I40EVF_DEV_PRIVATE_TO_VF(adapter) \ - (&((struct i40e_adapter *)adapter)->vf) - -static inline struct i40e_vsi * -i40e_get_vsi_from_adapter(struct i40e_adapter *adapter) -{ - struct i40e_hw *hw; - - if (!adapter) - return NULL; - - hw = I40E_DEV_PRIVATE_TO_HW(adapter); - if (hw->mac.type == I40E_MAC_VF) { - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(adapter); - return &vf->vsi; - } else { - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(adapter); - return pf->main_vsi; - } -} -#define I40E_DEV_PRIVATE_TO_MAIN_VSI(adapter) \ - i40e_get_vsi_from_adapter((struct i40e_adapter *)adapter) - -/* I40E_VSI_TO */ -#define I40E_VSI_TO_HW(vsi) \ - (&(((struct i40e_vsi *)vsi)->adapter->hw)) -#define I40E_VSI_TO_PF(vsi) \ - (&(((struct i40e_vsi *)vsi)->adapter->pf)) -#define I40E_VSI_TO_DEV_DATA(vsi) \ - (((struct i40e_vsi *)vsi)->adapter->pf.dev_data) -#define I40E_VSI_TO_ETH_DEV(vsi) \ - (((struct i40e_vsi *)vsi)->adapter->eth_dev) - -/* I40E_PF_TO */ -#define I40E_PF_TO_HW(pf) \ - (&(((struct i40e_pf *)pf)->adapter->hw)) -#define I40E_PF_TO_ADAPTER(pf) \ - ((struct i40e_adapter *)pf->adapter) - -/* I40E_VF_TO */ -#define I40E_VF_TO_HW(vf) \ - 
(&(((struct i40e_vf *)vf)->adapter->hw)) - -static inline void -i40e_init_adminq_parameter(struct i40e_hw *hw) -{ - hw->aq.num_arq_entries = I40E_AQ_LEN; - hw->aq.num_asq_entries = I40E_AQ_LEN; - hw->aq.arq_buf_size = I40E_AQ_BUF_SZ; - hw->aq.asq_buf_size = I40E_AQ_BUF_SZ; -} - -#define I40E_VALID_FLOW(flow_type) \ - ((flow_type) == RTE_ETH_FLOW_FRAG_IPV4 || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_TCP || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_UDP || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_SCTP || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV4_OTHER || \ - (flow_type) == RTE_ETH_FLOW_FRAG_IPV6 || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_TCP || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_UDP || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_SCTP || \ - (flow_type) == RTE_ETH_FLOW_NONFRAG_IPV6_OTHER || \ - (flow_type) == RTE_ETH_FLOW_L2_PAYLOAD) - -#define I40E_VALID_PCTYPE(pctype) \ - ((pctype) == I40E_FILTER_PCTYPE_FRAG_IPV4 || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_TCP || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_UDP || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_SCTP || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV4_OTHER || \ - (pctype) == I40E_FILTER_PCTYPE_FRAG_IPV6 || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_UDP || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_TCP || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_SCTP || \ - (pctype) == I40E_FILTER_PCTYPE_NONF_IPV6_OTHER || \ - (pctype) == I40E_FILTER_PCTYPE_L2_PAYLOAD) - -#endif /* _I40E_ETHDEV_H_ */ diff --git a/lib/librte_pmd_i40e/i40e_ethdev_vf.c b/lib/librte_pmd_i40e/i40e_ethdev_vf.c deleted file mode 100644 index a0d808f..0000000 --- a/lib/librte_pmd_i40e/i40e_ethdev_vf.c +++ /dev/null @@ -1,1893 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "i40e_logs.h" -#include "i40e/i40e_prototype.h" -#include "i40e/i40e_adminq_cmd.h" -#include "i40e/i40e_type.h" - -#include "i40e_rxtx.h" -#include "i40e_ethdev.h" -#include "i40e_pf.h" -#define I40EVF_VSI_DEFAULT_MSIX_INTR 1 - -/* busy wait delay in msec */ -#define I40EVF_BUSY_WAIT_DELAY 10 -#define I40EVF_BUSY_WAIT_COUNT 50 -#define MAX_RESET_WAIT_CNT 20 - -struct i40evf_arq_msg_info { - enum i40e_virtchnl_ops ops; - enum i40e_status_code result; - uint16_t buf_len; - uint16_t msg_len; - uint8_t *msg; -}; - -struct vf_cmd_info { - enum i40e_virtchnl_ops ops; - uint8_t *in_args; - uint32_t in_args_size; - uint8_t *out_buffer; - /* Input & output parameter: pass in the buffer size, pass out - * the actual length of the returned result - */ - uint32_t out_size; -}; - -enum i40evf_aq_result { - I40EVF_MSG_ERR = -1, /* Error when accessing the admin queue */ - I40EVF_MSG_NON, /* Read nothing from admin queue */ - I40EVF_MSG_SYS, /* Read system msg from admin queue */ - I40EVF_MSG_CMD, /* Read async command result */ -}; - -/* A shared buffer to store the command result from the PF driver */ -static uint8_t cmd_result_buffer[I40E_AQ_BUF_SZ]; - -static int i40evf_dev_configure(struct rte_eth_dev *dev); -static int i40evf_dev_start(struct rte_eth_dev *dev); -static void i40evf_dev_stop(struct rte_eth_dev *dev); -static void i40evf_dev_info_get(struct rte_eth_dev *dev, - struct rte_eth_dev_info *dev_info); -static int i40evf_dev_link_update(struct rte_eth_dev *dev, - __rte_unused int wait_to_complete); -static void i40evf_dev_stats_get(struct rte_eth_dev *dev, - struct rte_eth_stats *stats); -static int i40evf_vlan_filter_set(struct rte_eth_dev *dev, - uint16_t vlan_id, int on); -static void i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask); -static int i40evf_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, - int on); -static void i40evf_dev_close(struct rte_eth_dev *dev); -static void i40evf_dev_promiscuous_enable(struct rte_eth_dev *dev); -static void i40evf_dev_promiscuous_disable(struct rte_eth_dev *dev); -static void i40evf_dev_allmulticast_enable(struct rte_eth_dev *dev); -static void i40evf_dev_allmulticast_disable(struct rte_eth_dev *dev); -static int i40evf_get_link_status(struct rte_eth_dev *dev, - struct rte_eth_link *link); -static int i40evf_init_vlan(struct rte_eth_dev *dev); -static int i40evf_dev_rx_queue_start(struct rte_eth_dev *dev, - uint16_t rx_queue_id); -static int i40evf_dev_rx_queue_stop(struct rte_eth_dev *dev, - uint16_t rx_queue_id); -static int i40evf_dev_tx_queue_start(struct rte_eth_dev *dev, - uint16_t tx_queue_id); -static int i40evf_dev_tx_queue_stop(struct rte_eth_dev *dev, - uint16_t tx_queue_id); -static int i40evf_dev_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); -static int i40evf_dev_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); -static int i40evf_config_rss(struct i40e_vf *vf); -static int i40evf_dev_rss_hash_update(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); -static int i40evf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); - -/* Default hash key buffer for RSS */ -static uint32_t
rss_key_default[I40E_VFQF_HKEY_MAX_INDEX + 1]; - -static const struct eth_dev_ops i40evf_eth_dev_ops = { - .dev_configure = i40evf_dev_configure, - .dev_start = i40evf_dev_start, - .dev_stop = i40evf_dev_stop, - .promiscuous_enable = i40evf_dev_promiscuous_enable, - .promiscuous_disable = i40evf_dev_promiscuous_disable, - .allmulticast_enable = i40evf_dev_allmulticast_enable, - .allmulticast_disable = i40evf_dev_allmulticast_disable, - .link_update = i40evf_dev_link_update, - .stats_get = i40evf_dev_stats_get, - .dev_close = i40evf_dev_close, - .dev_infos_get = i40evf_dev_info_get, - .vlan_filter_set = i40evf_vlan_filter_set, - .vlan_offload_set = i40evf_vlan_offload_set, - .vlan_pvid_set = i40evf_vlan_pvid_set, - .rx_queue_start = i40evf_dev_rx_queue_start, - .rx_queue_stop = i40evf_dev_rx_queue_stop, - .tx_queue_start = i40evf_dev_tx_queue_start, - .tx_queue_stop = i40evf_dev_tx_queue_stop, - .rx_queue_setup = i40e_dev_rx_queue_setup, - .rx_queue_release = i40e_dev_rx_queue_release, - .tx_queue_setup = i40e_dev_tx_queue_setup, - .tx_queue_release = i40e_dev_tx_queue_release, - .reta_update = i40evf_dev_rss_reta_update, - .reta_query = i40evf_dev_rss_reta_query, - .rss_hash_update = i40evf_dev_rss_hash_update, - .rss_hash_conf_get = i40evf_dev_rss_hash_conf_get, -}; - -static int -i40evf_set_mac_type(struct i40e_hw *hw) -{ - int status = I40E_ERR_DEVICE_NOT_SUPPORTED; - - if (hw->vendor_id == I40E_INTEL_VENDOR_ID) { - switch (hw->device_id) { - case I40E_DEV_ID_VF: - case I40E_DEV_ID_VF_HV: - hw->mac.type = I40E_MAC_VF; - status = I40E_SUCCESS; - break; - default: - ; - } - } - - return status; -} - -/* - * Parse admin queue message. - * - * return value: - * < 0: an error occurred - * 0: read sys msg - * > 0: read cmd result - */ -static enum i40evf_aq_result -i40evf_parse_pfmsg(struct i40e_vf *vf, - struct i40e_arq_event_info *event, - struct i40evf_arq_msg_info *data) -{ - enum i40e_virtchnl_ops opcode = (enum i40e_virtchnl_ops)\ - rte_le_to_cpu_32(event->desc.cookie_high); - enum i40e_status_code retval = (enum i40e_status_code)\ - rte_le_to_cpu_32(event->desc.cookie_low); - enum i40evf_aq_result ret = I40EVF_MSG_CMD; - - /* pf sys event */ - if (opcode == I40E_VIRTCHNL_OP_EVENT) { - struct i40e_virtchnl_pf_event *vpe = - (struct i40e_virtchnl_pf_event *)event->msg_buf; - - /* Initialize ret to sys event */ - ret = I40EVF_MSG_SYS; - switch (vpe->event) { - case I40E_VIRTCHNL_EVENT_LINK_CHANGE: - vf->link_up = - vpe->event_data.link_event.link_status; - vf->pend_msg |= PFMSG_LINK_CHANGE; - PMD_DRV_LOG(INFO, "Link status update: %s", - vf->link_up ?
"up" : "down"); - break; - case I40E_VIRTCHNL_EVENT_RESET_IMPENDING: - vf->vf_reset = true; - vf->pend_msg |= PFMSG_RESET_IMPENDING; - PMD_DRV_LOG(INFO, "vf is reseting"); - break; - case I40E_VIRTCHNL_EVENT_PF_DRIVER_CLOSE: - vf->dev_closed = true; - vf->pend_msg |= PFMSG_DRIVER_CLOSE; - PMD_DRV_LOG(INFO, "PF driver closed"); - break; - default: - PMD_DRV_LOG(ERR, "%s: Unknown event %d from pf", - __func__, vpe->event); - } - } else { - /* async reply msg on command issued by vf previously */ - ret = I40EVF_MSG_CMD; - /* Actual data length read from PF */ - data->msg_len = event->msg_len; - } - /* fill the ops and result to notify VF */ - data->result = retval; - data->ops = opcode; - - return ret; -} - -/* - * Read data in admin queue to get msg from pf driver - */ -static enum i40evf_aq_result -i40evf_read_pfmsg(struct rte_eth_dev *dev, struct i40evf_arq_msg_info *data) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_arq_event_info event; - int ret; - enum i40evf_aq_result result = I40EVF_MSG_NON; - - event.buf_len = data->buf_len; - event.msg_buf = data->msg; - ret = i40e_clean_arq_element(hw, &event, NULL); - /* Can't read any msg from adminQ */ - if (ret) { - if (ret == I40E_ERR_ADMIN_QUEUE_NO_WORK) - result = I40EVF_MSG_NON; - else - result = I40EVF_MSG_ERR; - return result; - } - - /* Parse the event */ - result = i40evf_parse_pfmsg(vf, &event, data); - - return result; -} - -/* - * Polling read until command result return from pf driver or meet error. - */ -static int -i40evf_wait_cmd_done(struct rte_eth_dev *dev, - struct i40evf_arq_msg_info *data) -{ - int i = 0; - enum i40evf_aq_result ret; - -#define MAX_TRY_TIMES 10 -#define ASQ_DELAY_MS 50 - do { - /* Delay some time first */ - rte_delay_ms(ASQ_DELAY_MS); - ret = i40evf_read_pfmsg(dev, data); - if (ret == I40EVF_MSG_CMD) - return 0; - else if (ret == I40EVF_MSG_ERR) - return -1; - - /* If don't read msg or read sys event, continue */ - } while(i++ < MAX_TRY_TIMES); - - return -1; -} - -/** - * clear current command. Only call in case execute - * _atomic_set_cmd successfully. - */ -static inline void -_clear_cmd(struct i40e_vf *vf) -{ - rte_wmb(); - vf->pend_cmd = I40E_VIRTCHNL_OP_UNKNOWN; -} - -/* - * Check there is pending cmd in execution. If none, set new command. 
- -/* - * Check whether there is a pending cmd in execution. If none, set the - * new command. - */ -static inline int -_atomic_set_cmd(struct i40e_vf *vf, enum i40e_virtchnl_ops ops) -{ - int ret = rte_atomic32_cmpset(&vf->pend_cmd, - I40E_VIRTCHNL_OP_UNKNOWN, ops); - - if (!ret) - PMD_DRV_LOG(ERR, "There is incomplete cmd %d", vf->pend_cmd); - - return !ret; -} - -static int -i40evf_execute_vf_cmd(struct rte_eth_dev *dev, struct vf_cmd_info *args) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int err = -1; - struct i40evf_arq_msg_info info; - - if (_atomic_set_cmd(vf, args->ops)) - return -1; - - info.msg = args->out_buffer; - info.buf_len = args->out_size; - info.ops = I40E_VIRTCHNL_OP_UNKNOWN; - info.result = I40E_SUCCESS; - - err = i40e_aq_send_msg_to_pf(hw, args->ops, I40E_SUCCESS, - args->in_args, args->in_args_size, NULL); - if (err) { - PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops); - return err; - } - - err = i40evf_wait_cmd_done(dev, &info); - /* a message was read and it is the expected one */ - if (!err && args->ops == info.ops) - _clear_cmd(vf); - else if (err) - PMD_DRV_LOG(ERR, "Failed to read message from AdminQ"); - else if (args->ops != info.ops) - PMD_DRV_LOG(ERR, "command mismatch, expect %u, got %u", - args->ops, info.ops); - - return (err | info.result); -} - -/* - * Check the API version, waiting synchronously until the version is read - * or reading from the admin queue fails - */ -static int -i40evf_check_api_version(struct rte_eth_dev *dev) -{ - struct i40e_virtchnl_version_info version, *pver; - int err; - struct vf_cmd_info args; - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - - version.major = I40E_VIRTCHNL_VERSION_MAJOR; - version.minor = I40E_VIRTCHNL_VERSION_MINOR; - - args.ops = I40E_VIRTCHNL_OP_VERSION; - args.in_args = (uint8_t *)&version; - args.in_args_size = sizeof(version); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - - err = i40evf_execute_vf_cmd(dev, &args); - if (err) { - PMD_INIT_LOG(ERR, "fail to execute command OP_VERSION"); - return err; - } - - pver = (struct i40e_virtchnl_version_info *)args.out_buffer; - vf->version_major = pver->major; - vf->version_minor = pver->minor; - if (vf->version_major == I40E_DPDK_VERSION_MAJOR) - PMD_DRV_LOG(INFO, "Peer is DPDK PF host"); - else if ((vf->version_major == I40E_VIRTCHNL_VERSION_MAJOR) && - (vf->version_minor == I40E_VIRTCHNL_VERSION_MINOR)) - PMD_DRV_LOG(INFO, "Peer is Linux PF host"); - else { - PMD_INIT_LOG(ERR, "PF/VF API version mismatch:(%u.%u)-(%u.%u)", - vf->version_major, vf->version_minor, - I40E_VIRTCHNL_VERSION_MAJOR, - I40E_VIRTCHNL_VERSION_MINOR); - return -1; - } - - return 0; -} - -static int -i40evf_get_vf_resource(struct rte_eth_dev *dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int err; - struct vf_cmd_info args; - uint32_t len; - - args.ops = I40E_VIRTCHNL_OP_GET_VF_RESOURCES; - args.in_args = NULL; - args.in_args_size = 0; - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - - err = i40evf_execute_vf_cmd(dev, &args); - - if (err) { - PMD_DRV_LOG(ERR, "fail to execute command OP_GET_VF_RESOURCE"); - return err; - } - - len = sizeof(struct i40e_virtchnl_vf_resource) + - I40E_MAX_VF_VSI * sizeof(struct i40e_virtchnl_vsi_resource); - - (void)rte_memcpy(vf->vf_res, args.out_buffer, - RTE_MIN(args.out_size, len)); - i40e_vf_parse_hw_config(hw, vf->vf_res); - - return 0; -} - -static int -i40evf_config_promisc(struct rte_eth_dev *dev,
- bool enable_unicast, - bool enable_multicast) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int err; - struct vf_cmd_info args; - struct i40e_virtchnl_promisc_info promisc; - - promisc.flags = 0; - promisc.vsi_id = vf->vsi_res->vsi_id; - - if (enable_unicast) - promisc.flags |= I40E_FLAG_VF_UNICAST_PROMISC; - - if (enable_multicast) - promisc.flags |= I40E_FLAG_VF_MULTICAST_PROMISC; - - args.ops = I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE; - args.in_args = (uint8_t *)&promisc; - args.in_args_size = sizeof(promisc); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - - err = i40evf_execute_vf_cmd(dev, &args); - - if (err) - PMD_DRV_LOG(ERR, "fail to execute command " - "CONFIG_PROMISCUOUS_MODE"); - return err; -} - -/* Configure vlan and double vlan offload. Use the flag to specify which part to configure */ -static int -i40evf_config_vlan_offload(struct rte_eth_dev *dev, - bool enable_vlan_strip) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int err; - struct vf_cmd_info args; - struct i40e_virtchnl_vlan_offload_info offload; - - offload.vsi_id = vf->vsi_res->vsi_id; - offload.enable_vlan_strip = enable_vlan_strip; - - args.ops = (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD; - args.in_args = (uint8_t *)&offload; - args.in_args_size = sizeof(offload); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to execute command CFG_VLAN_OFFLOAD"); - - return err; -} - -static int -i40evf_config_vlan_pvid(struct rte_eth_dev *dev, - struct i40e_vsi_vlan_pvid_info *info) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int err; - struct vf_cmd_info args; - struct i40e_virtchnl_pvid_info tpid_info; - - if (dev == NULL || info == NULL) { - PMD_DRV_LOG(ERR, "invalid parameters"); - return I40E_ERR_PARAM; - } - - memset(&tpid_info, 0, sizeof(tpid_info)); - tpid_info.vsi_id = vf->vsi_res->vsi_id; - (void)rte_memcpy(&tpid_info.info, info, sizeof(*info)); - - args.ops = (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_CFG_VLAN_PVID; - args.in_args = (uint8_t *)&tpid_info; - args.in_args_size = sizeof(tpid_info); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to execute command CFG_VLAN_PVID"); - - return err; -} - -static void -i40evf_fill_virtchnl_vsi_txq_info(struct i40e_virtchnl_txq_info *txq_info, - uint16_t vsi_id, - uint16_t queue_id, - uint16_t nb_txq, - struct i40e_tx_queue *txq) -{ - txq_info->vsi_id = vsi_id; - txq_info->queue_id = queue_id; - if (queue_id < nb_txq) { - txq_info->ring_len = txq->nb_tx_desc; - txq_info->dma_ring_addr = txq->tx_ring_phys_addr; - } -} - -static void -i40evf_fill_virtchnl_vsi_rxq_info(struct i40e_virtchnl_rxq_info *rxq_info, - uint16_t vsi_id, - uint16_t queue_id, - uint16_t nb_rxq, - uint32_t max_pkt_size, - struct i40e_rx_queue *rxq) -{ - rxq_info->vsi_id = vsi_id; - rxq_info->queue_id = queue_id; - rxq_info->max_pkt_size = max_pkt_size; - if (queue_id < nb_rxq) { - rxq_info->ring_len = rxq->nb_rx_desc; - rxq_info->dma_ring_addr = rxq->rx_ring_phys_addr; - rxq_info->databuffer_size = - (rte_pktmbuf_data_room_size(rxq->mp) - - RTE_PKTMBUF_HEADROOM); - } -} - -/* It configures VSI queues to work with the Linux PF host */ -static int -i40evf_configure_vsi_queues(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf =
I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_rx_queue **rxq = - (struct i40e_rx_queue **)dev->data->rx_queues; - struct i40e_tx_queue **txq = - (struct i40e_tx_queue **)dev->data->tx_queues; - struct i40e_virtchnl_vsi_queue_config_info *vc_vqci; - struct i40e_virtchnl_queue_pair_info *vc_qpi; - struct vf_cmd_info args; - uint16_t i, nb_qp = vf->num_queue_pairs; - const uint32_t size = - I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqci, nb_qp); - uint8_t buff[size]; - int ret; - - memset(buff, 0, sizeof(buff)); - vc_vqci = (struct i40e_virtchnl_vsi_queue_config_info *)buff; - vc_vqci->vsi_id = vf->vsi_res->vsi_id; - vc_vqci->num_queue_pairs = nb_qp; - - for (i = 0, vc_qpi = vc_vqci->qpair; i < nb_qp; i++, vc_qpi++) { - i40evf_fill_virtchnl_vsi_txq_info(&vc_qpi->txq, - vc_vqci->vsi_id, i, dev->data->nb_tx_queues, txq[i]); - i40evf_fill_virtchnl_vsi_rxq_info(&vc_qpi->rxq, - vc_vqci->vsi_id, i, dev->data->nb_rx_queues, - vf->max_pkt_len, rxq[i]); - } - memset(&args, 0, sizeof(args)); - args.ops = I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES; - args.in_args = (uint8_t *)vc_vqci; - args.in_args_size = size; - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - ret = i40evf_execute_vf_cmd(dev, &args); - if (ret) - PMD_DRV_LOG(ERR, "Failed to execute command of " - "I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES\n"); - - return ret; -}
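Both queue-configuration paths here size their request as one fixed header plus nb_qp trailing queue-pair records, compute that size at run time with I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(), and build the message in a C99 variable-length array on the stack. A minimal sketch of that header-plus-trailing-records pattern, using hypothetical msg_hdr/qpair_rec types rather than the real virtchnl structures:

    #include <stdint.h>
    #include <string.h>

    struct qpair_rec { uint16_t txq_ring_len; uint16_t rxq_ring_len; };
    struct msg_hdr   { uint16_t vsi_id; uint16_t num_queue_pairs; };

    static void
    build_queue_cfg_msg(uint16_t vsi_id, uint16_t nb_qp)
    {
            /* header followed by nb_qp records, sized at run time */
            const uint32_t size = sizeof(struct msg_hdr) +
                                  nb_qp * sizeof(struct qpair_rec);
            uint8_t buff[size];             /* C99 VLA, like buff[] above */
            struct msg_hdr *hdr = (struct msg_hdr *)buff;
            struct qpair_rec *rec = (struct qpair_rec *)(hdr + 1);
            uint16_t i;

            memset(buff, 0, size);
            hdr->vsi_id = vsi_id;
            hdr->num_queue_pairs = nb_qp;
            for (i = 0; i < nb_qp; i++) {
                    rec[i].txq_ring_len = 0;        /* filled per queue */
                    rec[i].rxq_ring_len = 0;
            }
    }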
- -/* It configures VSI queues to work with the DPDK PF host */ -static int -i40evf_configure_vsi_queues_ext(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_rx_queue **rxq = - (struct i40e_rx_queue **)dev->data->rx_queues; - struct i40e_tx_queue **txq = - (struct i40e_tx_queue **)dev->data->tx_queues; - struct i40e_virtchnl_vsi_queue_config_ext_info *vc_vqcei; - struct i40e_virtchnl_queue_pair_ext_info *vc_qpei; - struct vf_cmd_info args; - uint16_t i, nb_qp = vf->num_queue_pairs; - const uint32_t size = - I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqcei, nb_qp); - uint8_t buff[size]; - int ret; - - memset(buff, 0, sizeof(buff)); - vc_vqcei = (struct i40e_virtchnl_vsi_queue_config_ext_info *)buff; - vc_vqcei->vsi_id = vf->vsi_res->vsi_id; - vc_vqcei->num_queue_pairs = nb_qp; - vc_qpei = vc_vqcei->qpair; - for (i = 0; i < nb_qp; i++, vc_qpei++) { - i40evf_fill_virtchnl_vsi_txq_info(&vc_qpei->txq, - vc_vqcei->vsi_id, i, dev->data->nb_tx_queues, txq[i]); - i40evf_fill_virtchnl_vsi_rxq_info(&vc_qpei->rxq, - vc_vqcei->vsi_id, i, dev->data->nb_rx_queues, - vf->max_pkt_len, rxq[i]); - if (i < dev->data->nb_rx_queues) - /* - * It adds extra info for configuring VSI queues, which - * is needed to enable the configurable crc stripping - * in VF. - */ - vc_qpei->rxq_ext.crcstrip = - dev->data->dev_conf.rxmode.hw_strip_crc; - } - memset(&args, 0, sizeof(args)); - args.ops = - (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT; - args.in_args = (uint8_t *)vc_vqcei; - args.in_args_size = size; - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - ret = i40evf_execute_vf_cmd(dev, &args); - if (ret) - PMD_DRV_LOG(ERR, "Failed to execute command of " - "I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT\n"); - - return ret; -} - -static int -i40evf_configure_queues(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - - if (vf->version_major == I40E_DPDK_VERSION_MAJOR) - /* To support DPDK PF host */ - return i40evf_configure_vsi_queues_ext(dev); - else - /* To support Linux PF host */ - return i40evf_configure_vsi_queues(dev); -} - -static int -i40evf_config_irq_map(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct vf_cmd_info args; - uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_irq_map_info) + \ - sizeof(struct i40e_virtchnl_vector_map)]; - struct i40e_virtchnl_irq_map_info *map_info; - int i, err; - map_info = (struct i40e_virtchnl_irq_map_info *)cmd_buffer; - map_info->num_vectors = 1; - map_info->vecmap[0].rxitr_idx = RTE_LIBRTE_I40E_ITR_INTERVAL / 2; - map_info->vecmap[0].txitr_idx = RTE_LIBRTE_I40E_ITR_INTERVAL / 2; - map_info->vecmap[0].vsi_id = vf->vsi_res->vsi_id; - /* Always use the default dynamic MSIX interrupt */ - map_info->vecmap[0].vector_id = I40EVF_VSI_DEFAULT_MSIX_INTR; - /* Don't map any tx queue */ - map_info->vecmap[0].txq_map = 0; - map_info->vecmap[0].rxq_map = 0; - for (i = 0; i < dev->data->nb_rx_queues; i++) - map_info->vecmap[0].rxq_map |= 1 << i; - - args.ops = I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP; - args.in_args = (u8 *)cmd_buffer; - args.in_args_size = sizeof(cmd_buffer); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP"); - - return err; -} - -static int -i40evf_switch_queue(struct rte_eth_dev *dev, bool isrx, uint16_t qid, - bool on) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_virtchnl_queue_select queue_select; - int err; - struct vf_cmd_info args; - memset(&queue_select, 0, sizeof(queue_select)); - queue_select.vsi_id = vf->vsi_res->vsi_id; - - if (isrx) - queue_select.rx_queues |= 1 << qid; - else - queue_select.tx_queues |= 1 << qid; - - if (on) - args.ops = I40E_VIRTCHNL_OP_ENABLE_QUEUES; - else - args.ops = I40E_VIRTCHNL_OP_DISABLE_QUEUES; - args.in_args = (u8 *)&queue_select; - args.in_args_size = sizeof(queue_select); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to switch %s %u %s", - isrx ? "RX" : "TX", qid, on ?
"on" : "off"); - - return err; -} - -static int -i40evf_start_queues(struct rte_eth_dev *dev) -{ - struct rte_eth_dev_data *dev_data = dev->data; - int i; - struct i40e_rx_queue *rxq; - struct i40e_tx_queue *txq; - - for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq = dev_data->rx_queues[i]; - if (rxq->rx_deferred_start) - continue; - if (i40evf_dev_rx_queue_start(dev, i) != 0) { - PMD_DRV_LOG(ERR, "Fail to start queue %u", i); - return -1; - } - } - - for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = dev_data->tx_queues[i]; - if (txq->tx_deferred_start) - continue; - if (i40evf_dev_tx_queue_start(dev, i) != 0) { - PMD_DRV_LOG(ERR, "Fail to start queue %u", i); - return -1; - } - } - - return 0; -} - -static int -i40evf_stop_queues(struct rte_eth_dev *dev) -{ - int i; - - /* Stop TX queues first */ - for (i = 0; i < dev->data->nb_tx_queues; i++) { - if (i40evf_dev_tx_queue_stop(dev, i) != 0) { - PMD_DRV_LOG(ERR, "Fail to start queue %u", i); - return -1; - } - } - - /* Then stop RX queues */ - for (i = 0; i < dev->data->nb_rx_queues; i++) { - if (i40evf_dev_rx_queue_stop(dev, i) != 0) { - PMD_DRV_LOG(ERR, "Fail to start queue %u", i); - return -1; - } - } - - return 0; -} - -static int -i40evf_add_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr) -{ - struct i40e_virtchnl_ether_addr_list *list; - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_ether_addr_list) + \ - sizeof(struct i40e_virtchnl_ether_addr)]; - int err; - struct vf_cmd_info args; - - if (i40e_validate_mac_addr(addr->addr_bytes) != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Invalid mac:%x:%x:%x:%x:%x:%x", - addr->addr_bytes[0], addr->addr_bytes[1], - addr->addr_bytes[2], addr->addr_bytes[3], - addr->addr_bytes[4], addr->addr_bytes[5]); - return -1; - } - - list = (struct i40e_virtchnl_ether_addr_list *)cmd_buffer; - list->vsi_id = vf->vsi_res->vsi_id; - list->num_elements = 1; - (void)rte_memcpy(list->list[0].addr, addr->addr_bytes, - sizeof(addr->addr_bytes)); - - args.ops = I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS; - args.in_args = cmd_buffer; - args.in_args_size = sizeof(cmd_buffer); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to execute command " - "OP_ADD_ETHER_ADDRESS"); - - return err; -} - -static int -i40evf_del_mac_addr(struct rte_eth_dev *dev, struct ether_addr *addr) -{ - struct i40e_virtchnl_ether_addr_list *list; - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_ether_addr_list) + \ - sizeof(struct i40e_virtchnl_ether_addr)]; - int err; - struct vf_cmd_info args; - - if (i40e_validate_mac_addr(addr->addr_bytes) != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Invalid mac:%x-%x-%x-%x-%x-%x", - addr->addr_bytes[0], addr->addr_bytes[1], - addr->addr_bytes[2], addr->addr_bytes[3], - addr->addr_bytes[4], addr->addr_bytes[5]); - return -1; - } - - list = (struct i40e_virtchnl_ether_addr_list *)cmd_buffer; - list->vsi_id = vf->vsi_res->vsi_id; - list->num_elements = 1; - (void)rte_memcpy(list->list[0].addr, addr->addr_bytes, - sizeof(addr->addr_bytes)); - - args.ops = I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS; - args.in_args = cmd_buffer; - args.in_args_size = sizeof(cmd_buffer); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to execute command " - "OP_DEL_ETHER_ADDRESS"); - - 
return err; -} - -static int -i40evf_get_statics(struct rte_eth_dev *dev, struct rte_eth_stats *stats) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_virtchnl_queue_select q_stats; - struct i40e_eth_stats *pstats; - int err; - struct vf_cmd_info args; - - memset(&q_stats, 0, sizeof(q_stats)); - q_stats.vsi_id = vf->vsi_res->vsi_id; - args.ops = I40E_VIRTCHNL_OP_GET_STATS; - args.in_args = (u8 *)&q_stats; - args.in_args_size = sizeof(q_stats); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - - err = i40evf_execute_vf_cmd(dev, &args); - if (err) { - PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS"); - return err; - } - pstats = (struct i40e_eth_stats *)args.out_buffer; - stats->ipackets = pstats->rx_unicast + pstats->rx_multicast + - pstats->rx_broadcast; - stats->opackets = pstats->tx_broadcast + pstats->tx_multicast + - pstats->tx_unicast; - stats->ierrors = pstats->rx_discards; - stats->oerrors = pstats->tx_errors + pstats->tx_discards; - stats->ibytes = pstats->rx_bytes; - stats->obytes = pstats->tx_bytes; - - return 0; -} - -static int -i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t vlanid) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_virtchnl_vlan_filter_list *vlan_list; - uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_vlan_filter_list) + - sizeof(uint16_t)]; - int err; - struct vf_cmd_info args; - - vlan_list = (struct i40e_virtchnl_vlan_filter_list *)cmd_buffer; - vlan_list->vsi_id = vf->vsi_res->vsi_id; - vlan_list->num_elements = 1; - vlan_list->vlan_id[0] = vlanid; - - args.ops = I40E_VIRTCHNL_OP_ADD_VLAN; - args.in_args = (u8 *)&cmd_buffer; - args.in_args_size = sizeof(cmd_buffer); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to execute command OP_ADD_VLAN"); - - return err; -} - -static int -i40evf_del_vlan(struct rte_eth_dev *dev, uint16_t vlanid) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_virtchnl_vlan_filter_list *vlan_list; - uint8_t cmd_buffer[sizeof(struct i40e_virtchnl_vlan_filter_list) + - sizeof(uint16_t)]; - int err; - struct vf_cmd_info args; - - vlan_list = (struct i40e_virtchnl_vlan_filter_list *)cmd_buffer; - vlan_list->vsi_id = vf->vsi_res->vsi_id; - vlan_list->num_elements = 1; - vlan_list->vlan_id[0] = vlanid; - - args.ops = I40E_VIRTCHNL_OP_DEL_VLAN; - args.in_args = (u8 *)&cmd_buffer; - args.in_args_size = sizeof(cmd_buffer); - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - err = i40evf_execute_vf_cmd(dev, &args); - if (err) - PMD_DRV_LOG(ERR, "fail to execute command OP_DEL_VLAN"); - - return err; -} - -static int -i40evf_get_link_status(struct rte_eth_dev *dev, struct rte_eth_link *link) -{ - int err; - struct vf_cmd_info args; - struct rte_eth_link *new_link; - - args.ops = (enum i40e_virtchnl_ops)I40E_VIRTCHNL_OP_GET_LINK_STAT; - args.in_args = NULL; - args.in_args_size = 0; - args.out_buffer = cmd_result_buffer; - args.out_size = I40E_AQ_BUF_SZ; - err = i40evf_execute_vf_cmd(dev, &args); - if (err) { - PMD_DRV_LOG(ERR, "fail to execute command OP_GET_LINK_STAT"); - return err; - } - - new_link = (struct rte_eth_link *)args.out_buffer; - (void)rte_memcpy(link, new_link, sizeof(*link)); - - return 0; -} - -static const struct rte_pci_id pci_id_i40evf_map[] = { -#define RTE_PCI_DEV_ID_DECL_I40EVF(vend, dev) {RTE_PCI_DEVICE(vend, dev)}, -#include 
"rte_pci_dev_ids.h" -{ .vendor_id = 0, /* sentinel */ }, -}; - -static inline int -i40evf_dev_atomic_write_link_status(struct rte_eth_dev *dev, - struct rte_eth_link *link) -{ - struct rte_eth_link *dst = &(dev->data->dev_link); - struct rte_eth_link *src = link; - - if (rte_atomic64_cmpset((uint64_t *)dst, *(uint64_t *)dst, - *(uint64_t *)src) == 0) - return -1; - - return 0; -} - -static int -i40evf_reset_vf(struct i40e_hw *hw) -{ - int i, reset; - - if (i40e_vf_reset(hw) != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "Reset VF NIC failed"); - return -1; - } - /** - * After issuing vf reset command to pf, pf won't necessarily - * reset vf, it depends on what state it exactly is. If it's not - * initialized yet, it won't have vf reset since it's in a certain - * state. If not, it will try to reset. Even vf is reset, pf will - * set I40E_VFGEN_RSTAT to COMPLETE first, then wait 10ms and set - * it to ACTIVE. In this duration, vf may not catch the moment that - * COMPLETE is set. So, for vf, we'll try to wait a long time. - */ - rte_delay_ms(200); - - for (i = 0; i < MAX_RESET_WAIT_CNT; i++) { - reset = rd32(hw, I40E_VFGEN_RSTAT) & - I40E_VFGEN_RSTAT_VFR_STATE_MASK; - reset = reset >> I40E_VFGEN_RSTAT_VFR_STATE_SHIFT; - if (I40E_VFR_COMPLETED == reset || I40E_VFR_VFACTIVE == reset) - break; - else - rte_delay_ms(50); - } - - if (i >= MAX_RESET_WAIT_CNT) { - PMD_INIT_LOG(ERR, "Reset VF NIC failed"); - return -1; - } - - return 0; -} - -static int -i40evf_init_vf(struct rte_eth_dev *dev) -{ - int i, err, bufsz; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - - vf->adapter = I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); - vf->dev_data = dev->data; - err = i40evf_set_mac_type(hw); - if (err) { - PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err); - goto err; - } - - i40e_init_adminq_parameter(hw); - err = i40e_init_adminq(hw); - if (err) { - PMD_INIT_LOG(ERR, "init_adminq failed: %d", err); - goto err; - } - - - /* Reset VF and wait until it's complete */ - if (i40evf_reset_vf(hw)) { - PMD_INIT_LOG(ERR, "reset NIC failed"); - goto err_aq; - } - - /* VF reset, shutdown admin queue and initialize again */ - if (i40e_shutdown_adminq(hw) != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "i40e_shutdown_adminq failed"); - return -1; - } - - i40e_init_adminq_parameter(hw); - if (i40e_init_adminq(hw) != I40E_SUCCESS) { - PMD_INIT_LOG(ERR, "init_adminq failed"); - return -1; - } - if (i40evf_check_api_version(dev) != 0) { - PMD_INIT_LOG(ERR, "check_api version failed"); - goto err_aq; - } - bufsz = sizeof(struct i40e_virtchnl_vf_resource) + - (I40E_MAX_VF_VSI * sizeof(struct i40e_virtchnl_vsi_resource)); - vf->vf_res = rte_zmalloc("vf_res", bufsz, 0); - if (!vf->vf_res) { - PMD_INIT_LOG(ERR, "unable to allocate vf_res memory"); - goto err_aq; - } - - if (i40evf_get_vf_resource(dev) != 0) { - PMD_INIT_LOG(ERR, "i40evf_get_vf_config failed"); - goto err_alloc; - } - - /* got VF config message back from PF, now we can parse it */ - for (i = 0; i < vf->vf_res->num_vsis; i++) { - if (vf->vf_res->vsi_res[i].vsi_type == I40E_VSI_SRIOV) - vf->vsi_res = &vf->vf_res->vsi_res[i]; - } - - if (!vf->vsi_res) { - PMD_INIT_LOG(ERR, "no LAN VSI found"); - goto err_alloc; - } - - vf->vsi.vsi_id = vf->vsi_res->vsi_id; - vf->vsi.type = vf->vsi_res->vsi_type; - vf->vsi.nb_qps = vf->vsi_res->num_queue_pairs; - - /* check mac addr, if it's not valid, genrate one */ - if (I40E_SUCCESS != i40e_validate_mac_addr(\ - 
vf->vsi_res->default_mac_addr)) - eth_random_addr(vf->vsi_res->default_mac_addr); - - ether_addr_copy((struct ether_addr *)vf->vsi_res->default_mac_addr, - (struct ether_addr *)hw->mac.addr); - - return 0; - -err_alloc: - rte_free(vf->vf_res); -err_aq: - i40e_shutdown_adminq(hw); /* ignore error */ -err: - return -1; -} - -static int -i40evf_dev_init(struct rte_eth_dev *eth_dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(\ - eth_dev->data->dev_private); - - PMD_INIT_FUNC_TRACE(); - - /* assign ops func pointer */ - eth_dev->dev_ops = &i40evf_eth_dev_ops; - eth_dev->rx_pkt_burst = &i40e_recv_pkts; - eth_dev->tx_pkt_burst = &i40e_xmit_pkts; - - /* - * For secondary processes, we don't initialise any further as primary - * has already done this work. - */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) { - if (eth_dev->data->scattered_rx) - eth_dev->rx_pkt_burst = i40e_recv_scattered_pkts; - return 0; - } - - hw->vendor_id = eth_dev->pci_dev->id.vendor_id; - hw->device_id = eth_dev->pci_dev->id.device_id; - hw->subsystem_vendor_id = eth_dev->pci_dev->id.subsystem_vendor_id; - hw->subsystem_device_id = eth_dev->pci_dev->id.subsystem_device_id; - hw->bus.device = eth_dev->pci_dev->addr.devid; - hw->bus.func = eth_dev->pci_dev->addr.function; - hw->hw_addr = (void *)eth_dev->pci_dev->mem_resource[0].addr; - - if (i40evf_init_vf(eth_dev) != 0) { - PMD_INIT_LOG(ERR, "Init vf failed"); - return -1; - } - - /* copy mac addr */ - eth_dev->data->mac_addrs = rte_zmalloc("i40evf_mac", - ETHER_ADDR_LEN, 0); - if (eth_dev->data->mac_addrs == NULL) { - PMD_INIT_LOG(ERR, "Failed to allocate %d bytes needed to " - "store MAC addresses", ETHER_ADDR_LEN); - return -ENOMEM; - } - ether_addr_copy((struct ether_addr *)hw->mac.addr, - (struct ether_addr *)eth_dev->data->mac_addrs); - - return 0; -} - -/* - * virtual function driver struct - */ -static struct eth_driver rte_i40evf_pmd = { - { - .name = "rte_i40evf_pmd", - .id_table = pci_id_i40evf_map, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING, - }, - .eth_dev_init = i40evf_dev_init, - .dev_private_size = sizeof(struct i40e_vf), -};
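For orientation, the registration chain wired up by the structures above and the init routine below runs roughly as follows (a summary of the flow, not new code):

    PMD_REGISTER_DRIVER(rte_i40evf_driver)     /* constructor at program load */
      -> rte_i40evf_pmd_init()                 /* called during rte_eal_init() */
        -> rte_eth_driver_register(&rte_i40evf_pmd)
          -> PCI probe matches pci_id_i40evf_map
            -> i40evf_dev_init()               /* once per matching VF device */

with dev_private_size bytes (sizeof(struct i40e_vf)) allocated per port for dev->data->dev_private before the per-device init runs.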
- -/* - * VF Driver initialization routine. - * Invoked once at EAL init time. - * Registers itself as the [Virtual Poll Mode] Driver of PCI Fortville devices. - */ -static int -rte_i40evf_pmd_init(const char *name __rte_unused, - const char *params __rte_unused) -{ - PMD_INIT_FUNC_TRACE(); - - rte_eth_driver_register(&rte_i40evf_pmd); - - return 0; -} - -static struct rte_driver rte_i40evf_driver = { - .type = PMD_PDEV, - .init = rte_i40evf_pmd_init, -}; - -PMD_REGISTER_DRIVER(rte_i40evf_driver); - -static int -i40evf_dev_configure(struct rte_eth_dev *dev) -{ - return i40evf_init_vlan(dev); -} - -static int -i40evf_init_vlan(struct rte_eth_dev *dev) -{ - struct rte_eth_dev_data *data = dev->data; - int ret; - - /* Apply vlan offload setting */ - i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK); - - /* Apply pvid setting */ - ret = i40evf_vlan_pvid_set(dev, data->dev_conf.txmode.pvid, - data->dev_conf.txmode.hw_vlan_insert_pvid); - return ret; -} - -static void -i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask) -{ - bool enable_vlan_strip = 0; - struct rte_eth_conf *dev_conf = &dev->data->dev_conf; - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - - /* Linux pf host doesn't support vlan offload yet */ - if (vf->version_major == I40E_DPDK_VERSION_MAJOR) { - /* Vlan stripping setting */ - if (mask & ETH_VLAN_STRIP_MASK) { - /* Enable or disable VLAN stripping */ - if (dev_conf->rxmode.hw_vlan_strip) - enable_vlan_strip = 1; - else - enable_vlan_strip = 0; - - i40evf_config_vlan_offload(dev, enable_vlan_strip); - } - } -} - -static int -i40evf_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t pvid, int on) -{ - struct rte_eth_conf *dev_conf = &dev->data->dev_conf; - struct i40e_vsi_vlan_pvid_info info; - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - - memset(&info, 0, sizeof(info)); - info.on = on; - - /* Linux pf host doesn't support vlan offload yet */ - if (vf->version_major == I40E_DPDK_VERSION_MAJOR) { - if (info.on) - info.config.pvid = pvid; - else { - info.config.reject.tagged = - dev_conf->txmode.hw_vlan_reject_tagged; - info.config.reject.untagged = - dev_conf->txmode.hw_vlan_reject_untagged; - } - return i40evf_config_vlan_pvid(dev, &info); - } - - return 0; -} - -static int -i40evf_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) -{ - struct i40e_rx_queue *rxq; - int err = 0; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - PMD_INIT_FUNC_TRACE(); - - if (rx_queue_id < dev->data->nb_rx_queues) { - rxq = dev->data->rx_queues[rx_queue_id]; - - err = i40e_alloc_rx_queue_mbufs(rxq); - if (err) { - PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf"); - return err; - } - - rte_wmb(); -
- /* Init the RX tail register. */ - I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); - I40EVF_WRITE_FLUSH(hw); - - /* Ready to switch the queue on */ - err = i40evf_switch_queue(dev, TRUE, rx_queue_id, TRUE); - - if (err) - PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on", - rx_queue_id); - } - - return err; -} - -static int -i40evf_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) -{ - struct i40e_rx_queue *rxq; - int err; - - if (rx_queue_id < dev->data->nb_rx_queues) { - rxq = dev->data->rx_queues[rx_queue_id]; - - err = i40evf_switch_queue(dev, TRUE, rx_queue_id, FALSE); - - if (err) { - PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off", - rx_queue_id); - return err; - } - - i40e_rx_queue_release_mbufs(rxq); - i40e_reset_rx_queue(rxq); - } - - return 0; -} - -static int -i40evf_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) -{ - int err = 0; - - PMD_INIT_FUNC_TRACE(); - - if (tx_queue_id < dev->data->nb_tx_queues) { - - /* Ready to switch the queue on */ - err = i40evf_switch_queue(dev, FALSE, tx_queue_id, TRUE); - - if (err) - PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", - tx_queue_id); - } - - return err; -} - -static int -i40evf_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id) -{ - struct i40e_tx_queue *txq; - int err; - - if (tx_queue_id < dev->data->nb_tx_queues) { - txq = dev->data->tx_queues[tx_queue_id]; - - err = i40evf_switch_queue(dev, FALSE, tx_queue_id, FALSE); - - if (err) { - PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off", - tx_queue_id); - return err; - } - - i40e_tx_queue_release_mbufs(txq); - i40e_reset_tx_queue(txq); - } - - return 0; -} - -static int -i40evf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) -{ - int ret; - - if (on) - ret = i40evf_add_vlan(dev, vlan_id); - else - ret = i40evf_del_vlan(dev, vlan_id); - - return ret; -} - -static int -i40evf_rx_init(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - uint16_t i; - struct i40e_rx_queue **rxq = - (struct i40e_rx_queue **)dev->data->rx_queues; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - i40evf_config_rss(vf); - for (i = 0; i < dev->data->nb_rx_queues; i++) { - rxq[i]->qrx_tail = hw->hw_addr + I40E_QRX_TAIL1(i); - I40E_PCI_REG_WRITE(rxq[i]->qrx_tail, rxq[i]->nb_rx_desc - 1); - } - - /* Flush the operation to write registers */ - I40EVF_WRITE_FLUSH(hw); - - return 0; -} - -static void -i40evf_tx_init(struct rte_eth_dev *dev) -{ - uint16_t i; - struct i40e_tx_queue **txq = - (struct i40e_tx_queue **)dev->data->tx_queues; - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - for (i = 0; i < dev->data->nb_tx_queues; i++) - txq[i]->qtx_tail = hw->hw_addr + I40E_QTX_TAIL1(i); -} - -static inline void -i40evf_enable_queues_intr(struct i40e_hw *hw) -{ - I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTLN1(I40EVF_VSI_DEFAULT_MSIX_INTR - 1), - I40E_VFINT_DYN_CTLN1_INTENA_MASK | - I40E_VFINT_DYN_CTLN_CLEARPBA_MASK); -} - -static inline void -i40evf_disable_queues_intr(struct i40e_hw *hw) -{ - I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTLN1(I40EVF_VSI_DEFAULT_MSIX_INTR - 1), - 0); -}
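i40evf_dev_start() below validates max_rx_pkt_len against one of two windows, selected by the jumbo_frame flag. Restated as a small predicate for clarity (a sketch assuming the driver's ETHER_MIN_LEN/ETHER_MAX_LEN/I40E_FRAME_SIZE_MAX constants are in scope; not driver code):

    /* jumbo enabled:  ETHER_MAX_LEN <  len <= I40E_FRAME_SIZE_MAX
     * jumbo disabled: ETHER_MIN_LEN <= len <= ETHER_MAX_LEN
     */
    static int
    max_pkt_len_ok(uint32_t len, int jumbo_frame)
    {
            if (jumbo_frame)
                    return len > ETHER_MAX_LEN && len <= I40E_FRAME_SIZE_MAX;
            return len >= ETHER_MIN_LEN && len <= ETHER_MAX_LEN;
    }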
- -static int -i40evf_dev_start(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct ether_addr mac_addr; - - PMD_INIT_FUNC_TRACE(); - - vf->max_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; - if (dev->data->dev_conf.rxmode.jumbo_frame == 1) { - if (vf->max_pkt_len <= ETHER_MAX_LEN || - vf->max_pkt_len > I40E_FRAME_SIZE_MAX) { - PMD_DRV_LOG(ERR, "maximum packet length must " - "be larger than %u and smaller than %u, " - "as jumbo frame is enabled", - (uint32_t)ETHER_MAX_LEN, - (uint32_t)I40E_FRAME_SIZE_MAX); - return I40E_ERR_CONFIG; - } - } else { - if (vf->max_pkt_len < ETHER_MIN_LEN || - vf->max_pkt_len > ETHER_MAX_LEN) { - PMD_DRV_LOG(ERR, "maximum packet length must be " - "larger than %u and smaller than %u, " - "as jumbo frame is disabled", - (uint32_t)ETHER_MIN_LEN, - (uint32_t)ETHER_MAX_LEN); - return I40E_ERR_CONFIG; - } - } - - vf->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, - dev->data->nb_tx_queues); - - if (i40evf_rx_init(dev) != 0) { - PMD_DRV_LOG(ERR, "failed to do RX init"); - return -1; - } - - i40evf_tx_init(dev); - - if (i40evf_configure_queues(dev) != 0) { - PMD_DRV_LOG(ERR, "configure queues failed"); - goto err_queue; - } - if (i40evf_config_irq_map(dev)) { - PMD_DRV_LOG(ERR, "config_irq_map failed"); - goto err_queue; - } - - /* Set mac addr */ - (void)rte_memcpy(mac_addr.addr_bytes, hw->mac.addr, - sizeof(mac_addr.addr_bytes)); - if (i40evf_add_mac_addr(dev, &mac_addr)) { - PMD_DRV_LOG(ERR, "Failed to add mac addr"); - goto err_queue; - } - - if (i40evf_start_queues(dev) != 0) { - PMD_DRV_LOG(ERR, "enable queues failed"); - goto err_mac; - } - - i40evf_enable_queues_intr(hw); - return 0; - -err_mac: - i40evf_del_mac_addr(dev, &mac_addr); -err_queue: - return -1; -} - -static void -i40evf_dev_stop(struct rte_eth_dev *dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - PMD_INIT_FUNC_TRACE(); - - i40evf_disable_queues_intr(hw); - i40evf_stop_queues(dev); -} - -static int -i40evf_dev_link_update(struct rte_eth_dev *dev, - __rte_unused int wait_to_complete) -{ - struct rte_eth_link new_link; - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - /* - * The DPDK pf host provides an interface to acquire the link status, - * while the Linux driver does not - */ - if (vf->version_major == I40E_DPDK_VERSION_MAJOR) - i40evf_get_link_status(dev, &new_link); - else { - /* Always assume it's up, for Linux driver PF host */ - new_link.link_duplex = ETH_LINK_AUTONEG_DUPLEX; - new_link.link_speed = ETH_LINK_SPEED_10000; - new_link.link_status = 1; - } - i40evf_dev_atomic_write_link_status(dev, &new_link); - - return 0; -} - -static void -i40evf_dev_promiscuous_enable(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int ret; - - /* If enabled, just return */ - if (vf->promisc_unicast_enabled) - return; - - ret = i40evf_config_promisc(dev, 1, vf->promisc_multicast_enabled); - if (ret == 0) - vf->promisc_unicast_enabled = TRUE; -} - -static void -i40evf_dev_promiscuous_disable(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int ret; - - /* If disabled, just return */ - if (!vf->promisc_unicast_enabled) - return; - - ret = i40evf_config_promisc(dev, 0, vf->promisc_multicast_enabled); - if (ret == 0) - vf->promisc_unicast_enabled = FALSE; -} - -static void -i40evf_dev_allmulticast_enable(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int ret; - - /* If enabled, just return */ - if (vf->promisc_multicast_enabled) - return; - - ret = i40evf_config_promisc(dev, vf->promisc_unicast_enabled, 1); - if (ret == 0) - vf->promisc_multicast_enabled = TRUE; -} - -static void -i40evf_dev_allmulticast_disable(struct rte_eth_dev *dev) -{ - struct i40e_vf *vf =
I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - int ret; - - /* If disabled, just return */ - if (!vf->promisc_multicast_enabled) - return; - - ret = i40evf_config_promisc(dev, vf->promisc_unicast_enabled, 0); - if (ret == 0) - vf->promisc_multicast_enabled = FALSE; -} - -static void -i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) -{ - struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); - - memset(dev_info, 0, sizeof(*dev_info)); - dev_info->max_rx_queues = vf->vsi_res->num_queue_pairs; - dev_info->max_tx_queues = vf->vsi_res->num_queue_pairs; - dev_info->min_rx_bufsize = I40E_BUF_SIZE_MIN; - dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX; - dev_info->reta_size = ETH_RSS_RETA_SIZE_64; - dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL; - - dev_info->default_rxconf = (struct rte_eth_rxconf) { - .rx_thresh = { - .pthresh = I40E_DEFAULT_RX_PTHRESH, - .hthresh = I40E_DEFAULT_RX_HTHRESH, - .wthresh = I40E_DEFAULT_RX_WTHRESH, - }, - .rx_free_thresh = I40E_DEFAULT_RX_FREE_THRESH, - .rx_drop_en = 0, - }; - - dev_info->default_txconf = (struct rte_eth_txconf) { - .tx_thresh = { - .pthresh = I40E_DEFAULT_TX_PTHRESH, - .hthresh = I40E_DEFAULT_TX_HTHRESH, - .wthresh = I40E_DEFAULT_TX_WTHRESH, - }, - .tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH, - .tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH, - .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | - ETH_TXQ_FLAGS_NOOFFLOADS, - }; -} - -static void -i40evf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) -{ - if (i40evf_get_statics(dev, stats)) - PMD_DRV_LOG(ERR, "Get statistics failed"); -} - -static void -i40evf_dev_close(struct rte_eth_dev *dev) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - - i40evf_dev_stop(dev); - i40evf_reset_vf(hw); - i40e_shutdown_adminq(hw); -} - -static int -i40evf_dev_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t lut, l; - uint16_t i, j; - uint16_t idx, shift; - uint8_t mask; - - if (reta_size != ETH_RSS_RETA_SIZE_64) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can " - "support (%d)\n", reta_size, ETH_RSS_RETA_SIZE_64); - return -EINVAL; - } - - for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) { - idx = i / RTE_RETA_GROUP_SIZE; - shift = i % RTE_RETA_GROUP_SIZE; - mask = (uint8_t)((reta_conf[idx].mask >> shift) & - I40E_4_BIT_MASK); - if (!mask) - continue; - if (mask == I40E_4_BIT_MASK) - l = 0; - else - l = I40E_READ_REG(hw, I40E_VFQF_HLUT(i >> 2)); - - for (j = 0, lut = 0; j < I40E_4_BIT_WIDTH; j++) { - if (mask & (0x1 << j)) - lut |= reta_conf[idx].reta[shift + j] << - (CHAR_BIT * j); - else - lut |= l & (I40E_8_BIT_MASK << (CHAR_BIT * j)); - } - I40E_WRITE_REG(hw, I40E_VFQF_HLUT(i >> 2), lut); - } - - return 0; -} - -static int -i40evf_dev_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t lut; - uint16_t i, j; - uint16_t idx, shift; - uint8_t mask; - - if (reta_size != ETH_RSS_RETA_SIZE_64) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can " - "support (%d)\n", reta_size, ETH_RSS_RETA_SIZE_64); - return -EINVAL; - } - - for (i = 0; i < reta_size; i += I40E_4_BIT_WIDTH) { - idx = i / RTE_RETA_GROUP_SIZE; - shift = i
% RTE_RETA_GROUP_SIZE; - mask = (uint8_t)((reta_conf[idx].mask >> shift) & - I40E_4_BIT_MASK); - if (!mask) - continue; - - lut = I40E_READ_REG(hw, I40E_VFQF_HLUT(i >> 2)); - for (j = 0; j < I40E_4_BIT_WIDTH; j++) { - if (mask & (0x1 << j)) - reta_conf[idx].reta[shift + j] = - ((lut >> (CHAR_BIT * j)) & - I40E_8_BIT_MASK); - } - } - - return 0; -} - -static int -i40evf_hw_rss_hash_set(struct i40e_hw *hw, struct rte_eth_rss_conf *rss_conf) -{ - uint32_t *hash_key; - uint8_t hash_key_len; - uint64_t rss_hf, hena; - - hash_key = (uint32_t *)(rss_conf->rss_key); - hash_key_len = rss_conf->rss_key_len; - if (hash_key != NULL && hash_key_len >= - (I40E_VFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t)) { - uint16_t i; - - for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) - I40E_WRITE_REG(hw, I40E_VFQF_HKEY(i), hash_key[i]); - } - - rss_hf = rss_conf->rss_hf; - hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; - hena &= ~I40E_RSS_HENA_ALL; - hena |= i40e_config_hena(rss_hf); - I40E_WRITE_REG(hw, I40E_VFQF_HENA(0), (uint32_t)hena); - I40E_WRITE_REG(hw, I40E_VFQF_HENA(1), (uint32_t)(hena >> 32)); - I40EVF_WRITE_FLUSH(hw); - - return 0; -} - -static void -i40evf_disable_rss(struct i40e_vf *vf) -{ - struct i40e_hw *hw = I40E_VF_TO_HW(vf); - uint64_t hena; - - hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; - hena &= ~I40E_RSS_HENA_ALL; - I40E_WRITE_REG(hw, I40E_VFQF_HENA(0), (uint32_t)hena); - I40E_WRITE_REG(hw, I40E_VFQF_HENA(1), (uint32_t)(hena >> 32)); - I40EVF_WRITE_FLUSH(hw); -} - -static int -i40evf_config_rss(struct i40e_vf *vf) -{ - struct i40e_hw *hw = I40E_VF_TO_HW(vf); - struct rte_eth_rss_conf rss_conf; - uint32_t i, j, lut = 0, nb_q = (I40E_VFQF_HLUT_MAX_INDEX + 1) * 4; - - if (vf->dev_data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) { - i40evf_disable_rss(vf); - PMD_DRV_LOG(DEBUG, "RSS not configured\n"); - return 0; - } - - /* Fill out the look up table */ - for (i = 0, j = 0; i < nb_q; i++, j++) { - if (j >= vf->num_queue_pairs) - j = 0; - lut = (lut << 8) | j; - if ((i & 3) == 3) - I40E_WRITE_REG(hw, I40E_VFQF_HLUT(i >> 2), lut); - } - - rss_conf = vf->dev_data->dev_conf.rx_adv_conf.rss_conf; - if ((rss_conf.rss_hf & I40E_RSS_OFFLOAD_ALL) == 0) { - i40evf_disable_rss(vf); - PMD_DRV_LOG(DEBUG, "No hash flag is set\n"); - return 0; - } - - if (rss_conf.rss_key == NULL || rss_conf.rss_key_len < nb_q) { - /* Calculate the default hash key */ - for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) - rss_key_default[i] = (uint32_t)rte_rand(); - rss_conf.rss_key = (uint8_t *)rss_key_default; - rss_conf.rss_key_len = nb_q; - } - - return i40evf_hw_rss_hash_set(hw, &rss_conf); -} - -static int -i40evf_dev_rss_hash_update(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf) -{ - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint64_t rss_hf = rss_conf->rss_hf & I40E_RSS_OFFLOAD_ALL; - uint64_t hena; - - hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; - if (!(hena & I40E_RSS_HENA_ALL)) { /* RSS disabled */ - if (rss_hf != 0) /* Enable RSS */ - return -EINVAL; - return 0; - } - - /* RSS enabled */ - if (rss_hf == 0) /* Disable RSS */ - return -EINVAL; - - return i40evf_hw_rss_hash_set(hw, rss_conf); -} - -static int -i40evf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf) -{ - struct i40e_hw *hw = 
I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t *hash_key = (uint32_t *)(rss_conf->rss_key); - uint64_t hena; - uint16_t i; - - if (hash_key) { - for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++) - hash_key[i] = I40E_READ_REG(hw, I40E_VFQF_HKEY(i)); - rss_conf->rss_key_len = i * sizeof(uint32_t); - } - hena = (uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(0)); - hena |= ((uint64_t)I40E_READ_REG(hw, I40E_VFQF_HENA(1))) << 32; - rss_conf->rss_hf = i40e_parse_hena(hena); - - return 0; -} diff --git a/lib/librte_pmd_i40e/i40e_fdir.c b/lib/librte_pmd_i40e/i40e_fdir.c deleted file mode 100644 index 7b68c78..0000000 --- a/lib/librte_pmd_i40e/i40e_fdir.c +++ /dev/null @@ -1,1361 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */ - -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "i40e_logs.h" -#include "i40e/i40e_type.h" -#include "i40e_ethdev.h" -#include "i40e_rxtx.h" - -#define I40E_FDIR_MZ_NAME "FDIR_MEMZONE" -#ifndef IPV6_ADDR_LEN -#define IPV6_ADDR_LEN 16 -#endif - -#define I40E_FDIR_PKT_LEN 512 -#define I40E_FDIR_IP_DEFAULT_LEN 420 -#define I40E_FDIR_IP_DEFAULT_TTL 0x40 -#define I40E_FDIR_IP_DEFAULT_VERSION_IHL 0x45 -#define I40E_FDIR_TCP_DEFAULT_DATAOFF 0x50 -#define I40E_FDIR_IPv6_DEFAULT_VTC_FLOW 0x60300000 -#define I40E_FDIR_IPv6_DEFAULT_HOP_LIMITS 0xFF -#define I40E_FDIR_IPv6_PAYLOAD_LEN 380 -#define I40E_FDIR_UDP_DEFAULT_LEN 400 - -/* Wait count and interval for fdir filter programming */ -#define I40E_FDIR_WAIT_COUNT 10 -#define I40E_FDIR_WAIT_INTERVAL_US 1000 - -/* Wait count and interval for fdir filter flush */ -#define I40E_FDIR_FLUSH_RETRY 50 -#define I40E_FDIR_FLUSH_INTERVAL_MS 5 - -#define I40E_COUNTER_PF 2 -/* Statistic counter index for one pf */ -#define I40E_COUNTER_INDEX_FDIR(pf_id) (0 + (pf_id) * I40E_COUNTER_PF) -#define I40E_MAX_FLX_SOURCE_OFF 480 -#define I40E_FLX_OFFSET_IN_FIELD_VECTOR 50 - -#define NONUSE_FLX_PIT_DEST_OFF 63 -#define NONUSE_FLX_PIT_FSIZE 1 -#define MK_FLX_PIT(src_offset, fsize, dst_offset) ( \ - (((src_offset) << I40E_PRTQF_FLX_PIT_SOURCE_OFF_SHIFT) & \ - I40E_PRTQF_FLX_PIT_SOURCE_OFF_MASK) | \ - (((fsize) << I40E_PRTQF_FLX_PIT_FSIZE_SHIFT) & \ - I40E_PRTQF_FLX_PIT_FSIZE_MASK) | \ - ((((dst_offset) + I40E_FLX_OFFSET_IN_FIELD_VECTOR) << \ - I40E_PRTQF_FLX_PIT_DEST_OFF_SHIFT) & \ - I40E_PRTQF_FLX_PIT_DEST_OFF_MASK)) - -#define I40E_FDIR_FLOWS ( \ - (1 << RTE_ETH_FLOW_FRAG_IPV4) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV4_UDP) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV4_TCP) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV4_SCTP) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV4_OTHER) | \ - (1 << RTE_ETH_FLOW_FRAG_IPV6) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV6_UDP) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV6_TCP) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV6_SCTP) | \ - (1 << RTE_ETH_FLOW_NONFRAG_IPV6_OTHER)) - -#define I40E_FLEX_WORD_MASK(off) (0x80 >> (off)) - -static int i40e_fdir_rx_queue_init(struct i40e_rx_queue *rxq); -static int i40e_check_fdir_flex_conf( - const struct rte_eth_fdir_flex_conf *conf); -static void i40e_set_flx_pld_cfg(struct i40e_pf *pf, - const struct rte_eth_flex_payload_cfg *cfg); -static void i40e_set_flex_mask_on_pctype(struct i40e_pf *pf, - enum i40e_filter_pctype pctype, - const struct rte_eth_fdir_flex_mask *mask_cfg); -static int i40e_fdir_construct_pkt(struct i40e_pf *pf, - const struct rte_eth_fdir_input *fdir_input, - unsigned char *raw_pkt); -static int i40e_add_del_fdir_filter(struct rte_eth_dev *dev, - const struct rte_eth_fdir_filter *filter, - bool add); -static int i40e_fdir_filter_programming(struct i40e_pf *pf, - enum i40e_filter_pctype pctype, - const struct rte_eth_fdir_filter *filter, - bool add); -static int i40e_fdir_flush(struct rte_eth_dev *dev); -static void i40e_fdir_info_get(struct rte_eth_dev *dev, - struct rte_eth_fdir_info *fdir); -static void i40e_fdir_stats_get(struct rte_eth_dev *dev, - struct rte_eth_fdir_stats *stat); - -static int -i40e_fdir_rx_queue_init(struct i40e_rx_queue *rxq) -{ - struct i40e_hw *hw = I40E_VSI_TO_HW(rxq->vsi); - struct i40e_hmc_obj_rxq rx_ctx; - int err = I40E_SUCCESS; - - memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq)); - /* Init the RX queue in hardware */ - rx_ctx.dbuff = I40E_RXBUF_SZ_1024 >> 
I40E_RXQ_CTX_DBUFF_SHIFT; - rx_ctx.hbuff = 0; - rx_ctx.base = rxq->rx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT; - rx_ctx.qlen = rxq->nb_rx_desc; -#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC - rx_ctx.dsize = 1; -#endif - rx_ctx.dtype = i40e_header_split_none; - rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_NONE; - rx_ctx.rxmax = ETHER_MAX_LEN; - rx_ctx.tphrdesc_ena = 1; - rx_ctx.tphwdesc_ena = 1; - rx_ctx.tphdata_ena = 1; - rx_ctx.tphhead_ena = 1; - rx_ctx.lrxqthresh = 2; - rx_ctx.crcstrip = 0; - rx_ctx.l2tsel = 1; - rx_ctx.showiv = 1; - rx_ctx.prefena = 1; - - err = i40e_clear_lan_rx_queue_context(hw, rxq->reg_idx); - if (err != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to clear FDIR RX queue context."); - return err; - } - err = i40e_set_lan_rx_queue_context(hw, rxq->reg_idx, &rx_ctx); - if (err != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Failed to set FDIR RX queue context."); - return err; - } - rxq->qrx_tail = hw->hw_addr + - I40E_QRX_TAIL(rxq->vsi->base_queue); - - rte_wmb(); - /* Init the RX tail register. */ - I40E_PCI_REG_WRITE(rxq->qrx_tail, 0); - I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1); - - return err; -} - -/* - * i40e_fdir_setup - reserve and initialize the Flow Director resources - * @pf: board private structure - */ -int -i40e_fdir_setup(struct i40e_pf *pf) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_vsi *vsi; - int err = I40E_SUCCESS; - char z_name[RTE_MEMZONE_NAMESIZE]; - const struct rte_memzone *mz = NULL; - struct rte_eth_dev *eth_dev = pf->adapter->eth_dev; - - if ((pf->flags & I40E_FLAG_FDIR) == 0) { - PMD_INIT_LOG(ERR, "HW doesn't support FDIR"); - return I40E_NOT_SUPPORTED; - } - - PMD_DRV_LOG(INFO, "FDIR HW Capabilities: num_filters_guaranteed = %u," - " num_filters_best_effort = %u.", - hw->func_caps.fd_filters_guaranteed, - hw->func_caps.fd_filters_best_effort); - - vsi = pf->fdir.fdir_vsi; - if (vsi) { - PMD_DRV_LOG(INFO, "FDIR initialization has been done."); - return I40E_SUCCESS; - } - /* make new FDIR VSI */ - vsi = i40e_vsi_setup(pf, I40E_VSI_FDIR, pf->main_vsi, 0); - if (!vsi) { - PMD_DRV_LOG(ERR, "Couldn't create FDIR VSI."); - return I40E_ERR_NO_AVAILABLE_VSI; - } - pf->fdir.fdir_vsi = vsi; - - /* Fdir tx queue setup */ - err = i40e_fdir_setup_tx_resources(pf); - if (err) { - PMD_DRV_LOG(ERR, "Failed to setup FDIR TX resources."); - goto fail_setup_tx; - } - - /* Fdir rx queue setup */ - err = i40e_fdir_setup_rx_resources(pf); - if (err) { - PMD_DRV_LOG(ERR, "Failed to setup FDIR RX resources."); - goto fail_setup_rx; - } - - err = i40e_tx_queue_init(pf->fdir.txq); - if (err) { - PMD_DRV_LOG(ERR, "Failed to do FDIR TX initialization."); - goto fail_mem; - } - - /* needs to be switched on before dev start */ - err = i40e_switch_tx_queue(hw, vsi->base_queue, TRUE); - if (err) { - PMD_DRV_LOG(ERR, "Failed to do fdir TX switch on."); - goto fail_mem; - } - - /* Init the rx queue in hardware */ - err = i40e_fdir_rx_queue_init(pf->fdir.rxq); - if (err) { - PMD_DRV_LOG(ERR, "Failed to do FDIR RX initialization."); - goto fail_mem; - } - - /* switch on rx queue */ - err = i40e_switch_rx_queue(hw, vsi->base_queue, TRUE); - if (err) { - PMD_DRV_LOG(ERR, "Failed to do FDIR RX switch on."); - goto fail_mem; - } - - /* reserve memory for the fdir programming packet */ - snprintf(z_name, sizeof(z_name), "%s_%s_%d", - eth_dev->driver->pci_drv.name, - I40E_FDIR_MZ_NAME, - eth_dev->data->port_id); - mz = i40e_memzone_reserve(z_name, I40E_FDIR_PKT_LEN, SOCKET_ID_ANY); - if (!mz) { - PMD_DRV_LOG(ERR, "Cannot init memzone for " - "flow director program packet."); - err =
I40E_ERR_NO_MEMORY; - goto fail_mem; - } - pf->fdir.prg_pkt = mz->addr; -#ifdef RTE_LIBRTE_XEN_DOM0 - pf->fdir.dma_addr = rte_mem_phy2mch(mz->memseg_id, mz->phys_addr); -#else - pf->fdir.dma_addr = (uint64_t)mz->phys_addr; -#endif - pf->fdir.match_counter_index = I40E_COUNTER_INDEX_FDIR(hw->pf_id); - PMD_DRV_LOG(INFO, "FDIR setup successfully, with programming queue %u.", - vsi->base_queue); - return I40E_SUCCESS; - -fail_mem: - i40e_dev_rx_queue_release(pf->fdir.rxq); - pf->fdir.rxq = NULL; -fail_setup_rx: - i40e_dev_tx_queue_release(pf->fdir.txq); - pf->fdir.txq = NULL; -fail_setup_tx: - i40e_vsi_release(vsi); - pf->fdir.fdir_vsi = NULL; - return err; -} - -/* - * i40e_fdir_teardown - release the Flow Director resources - * @pf: board private structure - */ -void -i40e_fdir_teardown(struct i40e_pf *pf) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_vsi *vsi; - - vsi = pf->fdir.fdir_vsi; - if (!vsi) - return; - i40e_switch_tx_queue(hw, vsi->base_queue, FALSE); - i40e_switch_rx_queue(hw, vsi->base_queue, FALSE); - i40e_dev_rx_queue_release(pf->fdir.rxq); - pf->fdir.rxq = NULL; - i40e_dev_tx_queue_release(pf->fdir.txq); - pf->fdir.txq = NULL; - i40e_vsi_release(vsi); - pf->fdir.fdir_vsi = NULL; -} - -/* check whether the flow director table in empty */ -static inline int -i40e_fdir_empty(struct i40e_hw *hw) -{ - uint32_t guarant_cnt, best_cnt; - - guarant_cnt = (uint32_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) & - I40E_PFQF_FDSTAT_GUARANT_CNT_MASK) >> - I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT); - best_cnt = (uint32_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) & - I40E_PFQF_FDSTAT_BEST_CNT_MASK) >> - I40E_PFQF_FDSTAT_BEST_CNT_SHIFT); - if (best_cnt + guarant_cnt > 0) - return -1; - - return 0; -} - -/* - * Initialize the configuration about bytes stream extracted as flexible payload - * and mask setting - */ -static inline void -i40e_init_flx_pld(struct i40e_pf *pf) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - uint8_t pctype; - int i, index; - - /* - * Define the bytes stream extracted as flexible payload in - * field vector. By default, select 8 words from the beginning - * of payload as flexible payload. 
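/*
 * Aside: the fail_mem/fail_setup_rx/fail_setup_tx labels in
 * i40e_fdir_setup() above follow the usual unwind idiom: resources are
 * released in the reverse order of acquisition, and each failure jumps
 * past the cleanup of things not yet acquired.  A stripped-down sketch
 * of the same shape; acquire_*/release_* are stand-ins for the
 * VSI/queue setup calls:
 */
static int acquire_a(void) { return 0; }
static int acquire_b(void) { return 0; }
static int acquire_c(void) { return 0; }
static void release_a(void) { }
static void release_b(void) { }

static int
setup_sketch(void)
{
	int err;

	if ((err = acquire_a()) != 0)
		return err;
	if ((err = acquire_b()) != 0)
		goto fail_a;
	if ((err = acquire_c()) != 0)
		goto fail_b;
	return 0;

fail_b:
	release_b();
fail_a:
	release_a();
	return err;
}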
- */ - for (i = I40E_FLXPLD_L2_IDX; i < I40E_MAX_FLXPLD_LAYER; i++) { - index = i * I40E_MAX_FLXPLD_FIED; - pf->fdir.flex_set[index].src_offset = 0; - pf->fdir.flex_set[index].size = I40E_FDIR_MAX_FLEXWORD_NUM; - pf->fdir.flex_set[index].dst_offset = 0; - I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(index), 0x0000C900); - I40E_WRITE_REG(hw, - I40E_PRTQF_FLX_PIT(index + 1), 0x0000FC29);/*non-used*/ - I40E_WRITE_REG(hw, - I40E_PRTQF_FLX_PIT(index + 2), 0x0000FC2A);/*non-used*/ - } - - /* initialize the masks */ - for (pctype = I40E_FILTER_PCTYPE_NONF_IPV4_UDP; - pctype <= I40E_FILTER_PCTYPE_FRAG_IPV6; pctype++) { - pf->fdir.flex_mask[pctype].word_mask = 0; - I40E_WRITE_REG(hw, I40E_PRTQF_FD_FLXINSET(pctype), 0); - for (i = 0; i < I40E_FDIR_BITMASK_NUM_WORD; i++) { - pf->fdir.flex_mask[pctype].bitmask[i].offset = 0; - pf->fdir.flex_mask[pctype].bitmask[i].mask = 0; - I40E_WRITE_REG(hw, I40E_PRTQF_FD_MSK(pctype, i), 0); - } - } -} - -#define I40E_WORD(hi, lo) (uint16_t)((((hi) << 8) & 0xFF00) | ((lo) & 0xFF)) - -#define I40E_VALIDATE_FLEX_PIT(flex_pit1, flex_pit2) do { \ - if ((flex_pit2).src_offset < \ - (flex_pit1).src_offset + (flex_pit1).size) { \ - PMD_DRV_LOG(ERR, "src_offset should be not" \ - " less than than previous offset" \ - " + previous FSIZE."); \ - return -EINVAL; \ - } \ -} while (0) - -/* - * i40e_srcoff_to_flx_pit - transform the src_offset into flex_pit structure, - * and the flex_pit will be sorted by it's src_offset value - */ -static inline uint16_t -i40e_srcoff_to_flx_pit(const uint16_t *src_offset, - struct i40e_fdir_flex_pit *flex_pit) -{ - uint16_t src_tmp, size, num = 0; - uint16_t i, k, j = 0; - - while (j < I40E_FDIR_MAX_FLEX_LEN) { - size = 1; - for (; j < I40E_FDIR_MAX_FLEX_LEN - 1; j++) { - if (src_offset[j + 1] == src_offset[j] + 1) - size++; - else - break; - } - src_tmp = src_offset[j] + 1 - size; - /* the flex_pit need to be sort by src_offset */ - for (i = 0; i < num; i++) { - if (src_tmp < flex_pit[i].src_offset) - break; - } - /* if insert required, move backward */ - for (k = num; k > i; k--) - flex_pit[k] = flex_pit[k - 1]; - /* insert */ - flex_pit[i].dst_offset = j + 1 - size; - flex_pit[i].src_offset = src_tmp; - flex_pit[i].size = size; - j++; - num++; - } - return num; -} - -/* i40e_check_fdir_flex_payload -check flex payload configuration arguments */ -static inline int -i40e_check_fdir_flex_payload(const struct rte_eth_flex_payload_cfg *flex_cfg) -{ - struct i40e_fdir_flex_pit flex_pit[I40E_FDIR_MAX_FLEX_LEN]; - uint16_t num, i; - - for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i++) { - if (flex_cfg->src_offset[i] >= I40E_MAX_FLX_SOURCE_OFF) { - PMD_DRV_LOG(ERR, "exceeds maxmial payload limit."); - return -EINVAL; - } - } - - memset(flex_pit, 0, sizeof(flex_pit)); - num = i40e_srcoff_to_flx_pit(flex_cfg->src_offset, flex_pit); - if (num > I40E_MAX_FLXPLD_FIED) { - PMD_DRV_LOG(ERR, "exceeds maxmial number of flex fields."); - return -EINVAL; - } - for (i = 0; i < num; i++) { - if (flex_pit[i].size & 0x01 || flex_pit[i].dst_offset & 0x01 || - flex_pit[i].src_offset & 0x01) { - PMD_DRV_LOG(ERR, "flexpayload should be measured" - " in word"); - return -EINVAL; - } - if (i != num - 1) - I40E_VALIDATE_FLEX_PIT(flex_pit[i], flex_pit[i + 1]); - } - return 0; -} - -/* - * i40e_check_fdir_flex_conf -check if the flex payload and mask configuration - * arguments are valid - */ -static int -i40e_check_fdir_flex_conf(const struct rte_eth_fdir_flex_conf *conf) -{ - const struct rte_eth_flex_payload_cfg *flex_cfg; - const struct rte_eth_fdir_flex_mask *flex_mask; - 
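/*
 * Aside: i40e_srcoff_to_flx_pit() above is essentially a run-length
 * grouping pass: consecutive source offsets collapse into a single
 * (src_offset, size) field.  This standalone sketch shows only the
 * grouping step; the insertion-sort by src_offset that the real
 * function also performs is omitted here.
 */
#include <stdint.h>
#include <stddef.h>

struct run { uint16_t start; uint16_t len; };

static size_t
group_runs(const uint16_t *off, size_t n, struct run *out)
{
	size_t i = 0, nruns = 0;

	while (i < n) {
		size_t j = i;

		while (j + 1 < n && off[j + 1] == off[j] + 1)
			j++;				/* extend the run */
		out[nruns].start = off[i];
		out[nruns].len = (uint16_t)(j - i + 1);
		nruns++;
		i = j + 1;
	}
	return nruns;
}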
uint16_t mask_tmp; - uint8_t nb_bitmask; - uint16_t i, j; - int ret = 0; - - if (conf == NULL) { - PMD_DRV_LOG(INFO, "NULL pointer."); - return -EINVAL; - } - /* check flexible payload setting configuration */ - if (conf->nb_payloads > RTE_ETH_L4_PAYLOAD) { - PMD_DRV_LOG(ERR, "invalid number of payload setting."); - return -EINVAL; - } - for (i = 0; i < conf->nb_payloads; i++) { - flex_cfg = &conf->flex_set[i]; - if (flex_cfg->type > RTE_ETH_L4_PAYLOAD) { - PMD_DRV_LOG(ERR, "invalid payload type."); - return -EINVAL; - } - ret = i40e_check_fdir_flex_payload(flex_cfg); - if (ret < 0) { - PMD_DRV_LOG(ERR, "invalid flex payload arguments."); - return -EINVAL; - } - } - - /* check flex mask setting configuration */ - if (conf->nb_flexmasks >= RTE_ETH_FLOW_MAX) { - PMD_DRV_LOG(ERR, "invalid number of flex masks."); - return -EINVAL; - } - for (i = 0; i < conf->nb_flexmasks; i++) { - flex_mask = &conf->flex_mask[i]; - if (!I40E_VALID_FLOW(flex_mask->flow_type)) { - PMD_DRV_LOG(WARNING, "invalid flow type."); - return -EINVAL; - } - nb_bitmask = 0; - for (j = 0; j < I40E_FDIR_MAX_FLEX_LEN; j += sizeof(uint16_t)) { - mask_tmp = I40E_WORD(flex_mask->mask[j], - flex_mask->mask[j + 1]); - if (mask_tmp != 0x0 && mask_tmp != UINT16_MAX) { - nb_bitmask++; - if (nb_bitmask > I40E_FDIR_BITMASK_NUM_WORD) { - PMD_DRV_LOG(ERR, " exceed maximal" - " number of bitmasks."); - return -EINVAL; - } - } - } - } - return 0; -} - -/* - * i40e_set_flx_pld_cfg -configure the rule how bytes stream is extracted as flexible payload - * @pf: board private structure - * @cfg: the rule how bytes stream is extracted as flexible payload - */ -static void -i40e_set_flx_pld_cfg(struct i40e_pf *pf, - const struct rte_eth_flex_payload_cfg *cfg) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_fdir_flex_pit flex_pit[I40E_MAX_FLXPLD_FIED]; - uint32_t flx_pit; - uint16_t num, min_next_off; /* in words */ - uint8_t field_idx = 0; - uint8_t layer_idx = 0; - uint16_t i; - - if (cfg->type == RTE_ETH_L2_PAYLOAD) - layer_idx = I40E_FLXPLD_L2_IDX; - else if (cfg->type == RTE_ETH_L3_PAYLOAD) - layer_idx = I40E_FLXPLD_L3_IDX; - else if (cfg->type == RTE_ETH_L4_PAYLOAD) - layer_idx = I40E_FLXPLD_L4_IDX; - - memset(flex_pit, 0, sizeof(flex_pit)); - num = i40e_srcoff_to_flx_pit(cfg->src_offset, flex_pit); - - for (i = 0; i < num; i++) { - field_idx = layer_idx * I40E_MAX_FLXPLD_FIED + i; - /* record the info in fdir structure */ - pf->fdir.flex_set[field_idx].src_offset = - flex_pit[i].src_offset / sizeof(uint16_t); - pf->fdir.flex_set[field_idx].size = - flex_pit[i].size / sizeof(uint16_t); - pf->fdir.flex_set[field_idx].dst_offset = - flex_pit[i].dst_offset / sizeof(uint16_t); - flx_pit = MK_FLX_PIT(pf->fdir.flex_set[field_idx].src_offset, - pf->fdir.flex_set[field_idx].size, - pf->fdir.flex_set[field_idx].dst_offset); - - I40E_WRITE_REG(hw, I40E_PRTQF_FLX_PIT(field_idx), flx_pit); - } - min_next_off = pf->fdir.flex_set[field_idx].src_offset + - pf->fdir.flex_set[field_idx].size; - - for (; i < I40E_MAX_FLXPLD_FIED; i++) { - /* set the non-used register obeying register's constrain */ - flx_pit = MK_FLX_PIT(min_next_off, NONUSE_FLX_PIT_FSIZE, - NONUSE_FLX_PIT_DEST_OFF); - I40E_WRITE_REG(hw, - I40E_PRTQF_FLX_PIT(layer_idx * I40E_MAX_FLXPLD_FIED + i), - flx_pit); - min_next_off++; - } -} - -/* - * i40e_set_flex_mask_on_pctype - configure the mask on flexible payload - * @pf: board private structure - * @pctype: packet classify type - * @flex_masks: mask for flexible payload - */ -static void -i40e_set_flex_mask_on_pctype(struct i40e_pf 
*pf, - enum i40e_filter_pctype pctype, - const struct rte_eth_fdir_flex_mask *mask_cfg) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - struct i40e_fdir_flex_mask *flex_mask; - uint32_t flxinset, fd_mask; - uint16_t mask_tmp; - uint8_t i, nb_bitmask = 0; - - flex_mask = &pf->fdir.flex_mask[pctype]; - memset(flex_mask, 0, sizeof(struct i40e_fdir_flex_mask)); - for (i = 0; i < I40E_FDIR_MAX_FLEX_LEN; i += sizeof(uint16_t)) { - mask_tmp = I40E_WORD(mask_cfg->mask[i], mask_cfg->mask[i + 1]); - if (mask_tmp != 0x0) { - flex_mask->word_mask |= - I40E_FLEX_WORD_MASK(i / sizeof(uint16_t)); - if (mask_tmp != UINT16_MAX) { - /* set bit mask */ - flex_mask->bitmask[nb_bitmask].mask = ~mask_tmp; - flex_mask->bitmask[nb_bitmask].offset = - i / sizeof(uint16_t); - nb_bitmask++; - } - } - } - /* write mask to hw */ - flxinset = (flex_mask->word_mask << - I40E_PRTQF_FD_FLXINSET_INSET_SHIFT) & - I40E_PRTQF_FD_FLXINSET_INSET_MASK; - I40E_WRITE_REG(hw, I40E_PRTQF_FD_FLXINSET(pctype), flxinset); - - for (i = 0; i < nb_bitmask; i++) { - fd_mask = (flex_mask->bitmask[i].mask << - I40E_PRTQF_FD_MSK_MASK_SHIFT) & - I40E_PRTQF_FD_MSK_MASK_MASK; - fd_mask |= ((flex_mask->bitmask[i].offset + - I40E_FLX_OFFSET_IN_FIELD_VECTOR) << - I40E_PRTQF_FD_MSK_OFFSET_SHIFT) & - I40E_PRTQF_FD_MSK_OFFSET_MASK; - I40E_WRITE_REG(hw, I40E_PRTQF_FD_MSK(pctype, i), fd_mask); - } -} - -/* - * Configure flow director related setting - */ -int -i40e_fdir_configure(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct rte_eth_fdir_flex_conf *conf; - enum i40e_filter_pctype pctype; - uint32_t val; - uint8_t i; - int ret = 0; - - /* - * configuration need to be done before - * flow director filters are added - * If filters exist, flush them. 
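/*
 * Aside: i40e_set_flex_mask_on_pctype() above classifies each 16-bit
 * flex word of the mask three ways: 0x0000 means the word is ignored,
 * 0xFFFF means exact match (a word_mask bit only), and anything else
 * needs a separate per-word bitmask entry.  Standalone sketch of that
 * classification, assuming at most 8 flex words:
 */
#include <stdint.h>

struct bitmask_ent { uint8_t offset; uint16_t mask; };

static uint8_t
classify_flex_mask(const uint8_t *mask, int nwords,
		   struct bitmask_ent *bm, uint8_t *word_mask)
{
	uint8_t nb = 0;
	int w;

	*word_mask = 0;
	for (w = 0; w < nwords; w++) {
		uint16_t m = (uint16_t)((mask[2 * w] << 8) | mask[2 * w + 1]);

		if (m == 0)
			continue;		/* word not matched at all */
		*word_mask |= (uint8_t)(0x80 >> w); /* word participates */
		if (m != UINT16_MAX) {		/* partial: record bitmask */
			bm[nb].offset = (uint8_t)w;
			bm[nb].mask = (uint16_t)~m;
			nb++;
		}
	}
	return nb;
}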
- */ - if (i40e_fdir_empty(hw) < 0) { - ret = i40e_fdir_flush(dev); - if (ret) { - PMD_DRV_LOG(ERR, "failed to flush fdir table."); - return ret; - } - } - - /* enable FDIR filter */ - val = I40E_READ_REG(hw, I40E_PFQF_CTL_0); - val |= I40E_PFQF_CTL_0_FD_ENA_MASK; - I40E_WRITE_REG(hw, I40E_PFQF_CTL_0, val); - - i40e_init_flx_pld(pf); /* set flex config to default value */ - - conf = &dev->data->dev_conf.fdir_conf.flex_conf; - ret = i40e_check_fdir_flex_conf(conf); - if (ret < 0) { - PMD_DRV_LOG(ERR, " invalid configuration arguments."); - return -EINVAL; - } - /* configure flex payload */ - for (i = 0; i < conf->nb_payloads; i++) - i40e_set_flx_pld_cfg(pf, &conf->flex_set[i]); - /* configure flex mask*/ - for (i = 0; i < conf->nb_flexmasks; i++) { - pctype = i40e_flowtype_to_pctype(conf->flex_mask[i].flow_type); - i40e_set_flex_mask_on_pctype(pf, pctype, &conf->flex_mask[i]); - } - - return ret; -} - -static inline void -i40e_fdir_fill_eth_ip_head(const struct rte_eth_fdir_input *fdir_input, - unsigned char *raw_pkt) -{ - struct ether_hdr *ether = (struct ether_hdr *)raw_pkt; - struct ipv4_hdr *ip; - struct ipv6_hdr *ip6; - static const uint8_t next_proto[] = { - [RTE_ETH_FLOW_FRAG_IPV4] = IPPROTO_IP, - [RTE_ETH_FLOW_NONFRAG_IPV4_TCP] = IPPROTO_TCP, - [RTE_ETH_FLOW_NONFRAG_IPV4_UDP] = IPPROTO_UDP, - [RTE_ETH_FLOW_NONFRAG_IPV4_SCTP] = IPPROTO_SCTP, - [RTE_ETH_FLOW_NONFRAG_IPV4_OTHER] = IPPROTO_IP, - [RTE_ETH_FLOW_FRAG_IPV6] = IPPROTO_NONE, - [RTE_ETH_FLOW_NONFRAG_IPV6_TCP] = IPPROTO_TCP, - [RTE_ETH_FLOW_NONFRAG_IPV6_UDP] = IPPROTO_UDP, - [RTE_ETH_FLOW_NONFRAG_IPV6_SCTP] = IPPROTO_SCTP, - [RTE_ETH_FLOW_NONFRAG_IPV6_OTHER] = IPPROTO_NONE, - }; - - switch (fdir_input->flow_type) { - case RTE_ETH_FLOW_NONFRAG_IPV4_TCP: - case RTE_ETH_FLOW_NONFRAG_IPV4_UDP: - case RTE_ETH_FLOW_NONFRAG_IPV4_SCTP: - case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER: - case RTE_ETH_FLOW_FRAG_IPV4: - ip = (struct ipv4_hdr *)(raw_pkt + sizeof(struct ether_hdr)); - - ether->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4); - ip->version_ihl = I40E_FDIR_IP_DEFAULT_VERSION_IHL; - /* set len to by default */ - ip->total_length = rte_cpu_to_be_16(I40E_FDIR_IP_DEFAULT_LEN); - ip->time_to_live = I40E_FDIR_IP_DEFAULT_TTL; - /* - * The source and destination fields in the transmitted packet - * need to be presented in a reversed order with respect - * to the expected received packets. - */ - ip->src_addr = fdir_input->flow.ip4_flow.dst_ip; - ip->dst_addr = fdir_input->flow.ip4_flow.src_ip; - ip->next_proto_id = next_proto[fdir_input->flow_type]; - break; - case RTE_ETH_FLOW_NONFRAG_IPV6_TCP: - case RTE_ETH_FLOW_NONFRAG_IPV6_UDP: - case RTE_ETH_FLOW_NONFRAG_IPV6_SCTP: - case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER: - case RTE_ETH_FLOW_FRAG_IPV6: - ip6 = (struct ipv6_hdr *)(raw_pkt + sizeof(struct ether_hdr)); - - ether->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv6); - ip6->vtc_flow = - rte_cpu_to_be_32(I40E_FDIR_IPv6_DEFAULT_VTC_FLOW); - ip6->payload_len = - rte_cpu_to_be_16(I40E_FDIR_IPv6_PAYLOAD_LEN); - ip6->hop_limits = I40E_FDIR_IPv6_DEFAULT_HOP_LIMITS; - - /* - * The source and destination fields in the transmitted packet - * need to be presented in a reversed order with respect - * to the expected received packets. 
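/*
 * Aside: note the deliberate swap above -- the FDIR programming packet
 * is transmitted, so its source/destination fields must be mirror
 * images of the packets the filter is meant to receive.  Minimal
 * sketch of the swap for the IPv4 case:
 */
#include <stdint.h>

struct flow_key { uint32_t src_ip, dst_ip; };

static void
fill_probe_addrs(const struct flow_key *want_rx,
		 uint32_t *pkt_src, uint32_t *pkt_dst)
{
	*pkt_src = want_rx->dst_ip;	/* reversed on purpose */
	*pkt_dst = want_rx->src_ip;
}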
- */ - rte_memcpy(&(ip6->src_addr), - &(fdir_input->flow.ipv6_flow.dst_ip), - IPV6_ADDR_LEN); - rte_memcpy(&(ip6->dst_addr), - &(fdir_input->flow.ipv6_flow.src_ip), - IPV6_ADDR_LEN); - ip6->proto = next_proto[fdir_input->flow_type]; - break; - default: - PMD_DRV_LOG(ERR, "unknown flow type %u.", - fdir_input->flow_type); - break; - } -} - - -/* - * i40e_fdir_construct_pkt - construct packet based on fields in input - * @pf: board private structure - * @fdir_input: input set of the flow director entry - * @raw_pkt: a packet to be constructed - */ -static int -i40e_fdir_construct_pkt(struct i40e_pf *pf, - const struct rte_eth_fdir_input *fdir_input, - unsigned char *raw_pkt) -{ - unsigned char *payload, *ptr; - struct udp_hdr *udp; - struct tcp_hdr *tcp; - struct sctp_hdr *sctp; - uint8_t size, dst = 0; - uint8_t i, pit_idx, set_idx = I40E_FLXPLD_L4_IDX; /* use l4 by default*/ - - /* fill the ethernet and IP head */ - i40e_fdir_fill_eth_ip_head(fdir_input, raw_pkt); - - /* fill the L4 head */ - switch (fdir_input->flow_type) { - case RTE_ETH_FLOW_NONFRAG_IPV4_UDP: - udp = (struct udp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv4_hdr)); - payload = (unsigned char *)udp + sizeof(struct udp_hdr); - /* - * The source and destination fields in the transmitted packet - * need to be presented in a reversed order with respect - * to the expected received packets. - */ - udp->src_port = fdir_input->flow.udp4_flow.dst_port; - udp->dst_port = fdir_input->flow.udp4_flow.src_port; - udp->dgram_len = rte_cpu_to_be_16(I40E_FDIR_UDP_DEFAULT_LEN); - break; - - case RTE_ETH_FLOW_NONFRAG_IPV4_TCP: - tcp = (struct tcp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv4_hdr)); - payload = (unsigned char *)tcp + sizeof(struct tcp_hdr); - /* - * The source and destination fields in the transmitted packet - * need to be presented in a reversed order with respect - * to the expected received packets. - */ - tcp->src_port = fdir_input->flow.tcp4_flow.dst_port; - tcp->dst_port = fdir_input->flow.tcp4_flow.src_port; - tcp->data_off = I40E_FDIR_TCP_DEFAULT_DATAOFF; - break; - - case RTE_ETH_FLOW_NONFRAG_IPV4_SCTP: - sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv4_hdr)); - payload = (unsigned char *)sctp + sizeof(struct sctp_hdr); - sctp->tag = fdir_input->flow.sctp4_flow.verify_tag; - break; - - case RTE_ETH_FLOW_NONFRAG_IPV4_OTHER: - case RTE_ETH_FLOW_FRAG_IPV4: - payload = raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv4_hdr); - set_idx = I40E_FLXPLD_L3_IDX; - break; - - case RTE_ETH_FLOW_NONFRAG_IPV6_UDP: - udp = (struct udp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv6_hdr)); - payload = (unsigned char *)udp + sizeof(struct udp_hdr); - /* - * The source and destination fields in the transmitted packet - * need to be presented in a reversed order with respect - * to the expected received packets. - */ - udp->src_port = fdir_input->flow.udp6_flow.dst_port; - udp->dst_port = fdir_input->flow.udp6_flow.src_port; - udp->dgram_len = rte_cpu_to_be_16(I40E_FDIR_IPv6_PAYLOAD_LEN); - break; - - case RTE_ETH_FLOW_NONFRAG_IPV6_TCP: - tcp = (struct tcp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv6_hdr)); - payload = (unsigned char *)tcp + sizeof(struct tcp_hdr); - /* - * The source and destination fields in the transmitted packet - * need to be presented in a reversed order with respect - * to the expected received packets. 
- */ - tcp->data_off = I40E_FDIR_TCP_DEFAULT_DATAOFF; - tcp->src_port = fdir_input->flow.udp6_flow.dst_port; - tcp->dst_port = fdir_input->flow.udp6_flow.src_port; - break; - - case RTE_ETH_FLOW_NONFRAG_IPV6_SCTP: - sctp = (struct sctp_hdr *)(raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv6_hdr)); - payload = (unsigned char *)sctp + sizeof(struct sctp_hdr); - sctp->tag = fdir_input->flow.sctp6_flow.verify_tag; - break; - - case RTE_ETH_FLOW_NONFRAG_IPV6_OTHER: - case RTE_ETH_FLOW_FRAG_IPV6: - payload = raw_pkt + sizeof(struct ether_hdr) + - sizeof(struct ipv6_hdr); - set_idx = I40E_FLXPLD_L3_IDX; - break; - default: - PMD_DRV_LOG(ERR, "unknown flow type %u.", fdir_input->flow_type); - return -EINVAL; - } - - /* fill the flexbytes to payload */ - for (i = 0; i < I40E_MAX_FLXPLD_FIED; i++) { - pit_idx = set_idx * I40E_MAX_FLXPLD_FIED + i; - size = pf->fdir.flex_set[pit_idx].size; - if (size == 0) - continue; - dst = pf->fdir.flex_set[pit_idx].dst_offset * sizeof(uint16_t); - ptr = payload + - pf->fdir.flex_set[pit_idx].src_offset * sizeof(uint16_t); - (void)rte_memcpy(ptr, - &fdir_input->flow_ext.flexbytes[dst], - size * sizeof(uint16_t)); - } - - return 0; -} - -/* Construct the tx flags */ -static inline uint64_t -i40e_build_ctob(uint32_t td_cmd, - uint32_t td_offset, - unsigned int size, - uint32_t td_tag) -{ - return rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DATA | - ((uint64_t)td_cmd << I40E_TXD_QW1_CMD_SHIFT) | - ((uint64_t)td_offset << I40E_TXD_QW1_OFFSET_SHIFT) | - ((uint64_t)size << I40E_TXD_QW1_TX_BUF_SZ_SHIFT) | - ((uint64_t)td_tag << I40E_TXD_QW1_L2TAG1_SHIFT)); -} - -/* - * check the programming status descriptor in rx queue. - * done after Programming Flow Director is programmed on - * tx queue - */ -static inline int -i40e_check_fdir_programming_status(struct i40e_rx_queue *rxq) -{ - volatile union i40e_rx_desc *rxdp; - uint64_t qword1; - uint32_t rx_status; - uint32_t len, id; - uint32_t error; - int ret = 0; - - rxdp = &rxq->rx_ring[rxq->rx_tail]; - qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); - rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) - >> I40E_RXD_QW1_STATUS_SHIFT; - - if (rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT)) { - len = qword1 >> I40E_RX_PROG_STATUS_DESC_LENGTH_SHIFT; - id = (qword1 & I40E_RX_PROG_STATUS_DESC_QW1_PROGID_MASK) >> - I40E_RX_PROG_STATUS_DESC_QW1_PROGID_SHIFT; - - if (len == I40E_RX_PROG_STATUS_DESC_LENGTH && - id == I40E_RX_PROG_STATUS_DESC_FD_FILTER_STATUS) { - error = (qword1 & - I40E_RX_PROG_STATUS_DESC_QW1_ERROR_MASK) >> - I40E_RX_PROG_STATUS_DESC_QW1_ERROR_SHIFT; - if (error == (0x1 << - I40E_RX_PROG_STATUS_DESC_FD_TBL_FULL_SHIFT)) { - PMD_DRV_LOG(ERR, "Failed to add FDIR filter" - " (FD_ID %u): programming status" - " reported.", - rxdp->wb.qword0.hi_dword.fd_id); - ret = -1; - } else if (error == (0x1 << - I40E_RX_PROG_STATUS_DESC_NO_FD_ENTRY_SHIFT)) { - PMD_DRV_LOG(ERR, "Failed to delete FDIR filter" - " (FD_ID %u): programming status" - " reported.", - rxdp->wb.qword0.hi_dword.fd_id); - ret = -1; - } else - PMD_DRV_LOG(ERR, "invalid programming status" - " reported, error = %u.", error); - } else - PMD_DRV_LOG(ERR, "unknown programming status" - " reported, len = %d, id = %u.", len, id); - rxdp->wb.qword1.status_error_len = 0; - rxq->rx_tail++; - if (unlikely(rxq->rx_tail == rxq->nb_rx_desc)) - rxq->rx_tail = 0; - } - return ret; -} - -/* - * i40e_add_del_fdir_filter - add or remove a flow director filter. 
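/*
 * Aside: i40e_build_ctob() above packs four descriptor fields into one
 * little-endian 64-bit qword.  Sketch of the same packing; the shift
 * values are illustrative placeholders for the I40E_TXD_QW1_*
 * definitions:
 */
#include <stdint.h>

#define DTYPE_DATA   0x0ull	/* placeholder */
#define CMD_SHIFT     4u	/* placeholder */
#define OFFSET_SHIFT 16u	/* placeholder */
#define SIZE_SHIFT   34u	/* placeholder */
#define TAG_SHIFT    48u	/* placeholder */

static inline uint64_t
build_ctob_sketch(uint32_t cmd, uint32_t off, uint32_t size, uint32_t tag)
{
	return DTYPE_DATA |
	       ((uint64_t)cmd << CMD_SHIFT) |
	       ((uint64_t)off << OFFSET_SHIFT) |
	       ((uint64_t)size << SIZE_SHIFT) |
	       ((uint64_t)tag << TAG_SHIFT);
}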
- * @pf: board private structure - * @filter: fdir filter entry - * @add: 0 - delete, 1 - add - */ -static int -i40e_add_del_fdir_filter(struct rte_eth_dev *dev, - const struct rte_eth_fdir_filter *filter, - bool add) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - unsigned char *pkt = (unsigned char *)pf->fdir.prg_pkt; - enum i40e_filter_pctype pctype; - int ret = 0; - - if (dev->data->dev_conf.fdir_conf.mode != RTE_FDIR_MODE_PERFECT) { - PMD_DRV_LOG(ERR, "FDIR is not enabled, please" - " check the mode in fdir_conf."); - return -ENOTSUP; - } - - if (!I40E_VALID_FLOW(filter->input.flow_type)) { - PMD_DRV_LOG(ERR, "invalid flow_type input."); - return -EINVAL; - } - if (filter->action.rx_queue >= pf->dev_data->nb_rx_queues) { - PMD_DRV_LOG(ERR, "Invalid queue ID"); - return -EINVAL; - } - - memset(pkt, 0, I40E_FDIR_PKT_LEN); - - ret = i40e_fdir_construct_pkt(pf, &filter->input, pkt); - if (ret < 0) { - PMD_DRV_LOG(ERR, "construct packet for fdir fails."); - return ret; - } - pctype = i40e_flowtype_to_pctype(filter->input.flow_type); - ret = i40e_fdir_filter_programming(pf, pctype, filter, add); - if (ret < 0) { - PMD_DRV_LOG(ERR, "fdir programming fails for PCTYPE(%u).", - pctype); - return ret; - } - return ret; -} - -/* - * i40e_fdir_filter_programming - Program a flow director filter rule. - * Is done by Flow Director Programming Descriptor followed by packet - * structure that contains the filter fields need to match. - * @pf: board private structure - * @pctype: pctype - * @filter: fdir filter entry - * @add: 0 - delelet, 1 - add - */ -static int -i40e_fdir_filter_programming(struct i40e_pf *pf, - enum i40e_filter_pctype pctype, - const struct rte_eth_fdir_filter *filter, - bool add) -{ - struct i40e_tx_queue *txq = pf->fdir.txq; - struct i40e_rx_queue *rxq = pf->fdir.rxq; - const struct rte_eth_fdir_action *fdir_action = &filter->action; - volatile struct i40e_tx_desc *txdp; - volatile struct i40e_filter_program_desc *fdirdp; - uint32_t td_cmd; - uint16_t i; - uint8_t dest; - - PMD_DRV_LOG(INFO, "filling filter programming descriptor."); - fdirdp = (volatile struct i40e_filter_program_desc *) - (&(txq->tx_ring[txq->tx_tail])); - - fdirdp->qindex_flex_ptype_vsi = - rte_cpu_to_le_32((fdir_action->rx_queue << - I40E_TXD_FLTR_QW0_QINDEX_SHIFT) & - I40E_TXD_FLTR_QW0_QINDEX_MASK); - - fdirdp->qindex_flex_ptype_vsi |= - rte_cpu_to_le_32((fdir_action->flex_off << - I40E_TXD_FLTR_QW0_FLEXOFF_SHIFT) & - I40E_TXD_FLTR_QW0_FLEXOFF_MASK); - - fdirdp->qindex_flex_ptype_vsi |= - rte_cpu_to_le_32((pctype << - I40E_TXD_FLTR_QW0_PCTYPE_SHIFT) & - I40E_TXD_FLTR_QW0_PCTYPE_MASK); - - /* Use LAN VSI Id by default */ - fdirdp->qindex_flex_ptype_vsi |= - rte_cpu_to_le_32((pf->main_vsi->vsi_id << - I40E_TXD_FLTR_QW0_DEST_VSI_SHIFT) & - I40E_TXD_FLTR_QW0_DEST_VSI_MASK); - - fdirdp->dtype_cmd_cntindex = - rte_cpu_to_le_32(I40E_TX_DESC_DTYPE_FILTER_PROG); - - if (add) - fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32( - I40E_FILTER_PROGRAM_DESC_PCMD_ADD_UPDATE << - I40E_TXD_FLTR_QW1_PCMD_SHIFT); - else - fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32( - I40E_FILTER_PROGRAM_DESC_PCMD_REMOVE << - I40E_TXD_FLTR_QW1_PCMD_SHIFT); - - if (fdir_action->behavior == RTE_ETH_FDIR_REJECT) - dest = I40E_FILTER_PROGRAM_DESC_DEST_DROP_PACKET; - else - dest = I40E_FILTER_PROGRAM_DESC_DEST_DIRECT_PACKET_QINDEX; - fdirdp->dtype_cmd_cntindex |= rte_cpu_to_le_32((dest << - I40E_TXD_FLTR_QW1_DEST_SHIFT) & - I40E_TXD_FLTR_QW1_DEST_MASK); - - fdirdp->dtype_cmd_cntindex |= - 
rte_cpu_to_le_32((fdir_action->report_status<< - I40E_TXD_FLTR_QW1_FD_STATUS_SHIFT) & - I40E_TXD_FLTR_QW1_FD_STATUS_MASK); - - fdirdp->dtype_cmd_cntindex |= - rte_cpu_to_le_32(I40E_TXD_FLTR_QW1_CNT_ENA_MASK); - fdirdp->dtype_cmd_cntindex |= - rte_cpu_to_le_32((pf->fdir.match_counter_index << - I40E_TXD_FLTR_QW1_CNTINDEX_SHIFT) & - I40E_TXD_FLTR_QW1_CNTINDEX_MASK); - - fdirdp->fd_id = rte_cpu_to_le_32(filter->soft_id); - - PMD_DRV_LOG(INFO, "filling transmit descriptor."); - txdp = &(txq->tx_ring[txq->tx_tail + 1]); - txdp->buffer_addr = rte_cpu_to_le_64(pf->fdir.dma_addr); - td_cmd = I40E_TX_DESC_CMD_EOP | - I40E_TX_DESC_CMD_RS | - I40E_TX_DESC_CMD_DUMMY; - - txdp->cmd_type_offset_bsz = - i40e_build_ctob(td_cmd, 0, I40E_FDIR_PKT_LEN, 0); - - txq->tx_tail += 2; /* set 2 descriptors above, fdirdp and txdp */ - if (txq->tx_tail >= txq->nb_tx_desc) - txq->tx_tail = 0; - /* Update the tx tail register */ - rte_wmb(); - I40E_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail); - - for (i = 0; i < I40E_FDIR_WAIT_COUNT; i++) { - rte_delay_us(I40E_FDIR_WAIT_INTERVAL_US); - if (txdp->cmd_type_offset_bsz & - rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) - break; - } - if (i >= I40E_FDIR_WAIT_COUNT) { - PMD_DRV_LOG(ERR, "Failed to program FDIR filter:" - " time out to get DD on tx queue."); - return -ETIMEDOUT; - } - /* totally delay 10 ms to check programming status*/ - rte_delay_us((I40E_FDIR_WAIT_COUNT - i) * I40E_FDIR_WAIT_INTERVAL_US); - if (i40e_check_fdir_programming_status(rxq) < 0) { - PMD_DRV_LOG(ERR, "Failed to program FDIR filter:" - " programming status reported."); - return -ENOSYS; - } - - return 0; -} - -/* - * i40e_fdir_flush - clear all filters of Flow Director table - * @pf: board private structure - */ -static int -i40e_fdir_flush(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - uint32_t reg; - uint16_t guarant_cnt, best_cnt; - uint16_t i; - - I40E_WRITE_REG(hw, I40E_PFQF_CTL_1, I40E_PFQF_CTL_1_CLEARFDTABLE_MASK); - I40E_WRITE_FLUSH(hw); - - for (i = 0; i < I40E_FDIR_FLUSH_RETRY; i++) { - rte_delay_ms(I40E_FDIR_FLUSH_INTERVAL_MS); - reg = I40E_READ_REG(hw, I40E_PFQF_CTL_1); - if (!(reg & I40E_PFQF_CTL_1_CLEARFDTABLE_MASK)) - break; - } - if (i >= I40E_FDIR_FLUSH_RETRY) { - PMD_DRV_LOG(ERR, "FD table did not flush, may need more time."); - return -ETIMEDOUT; - } - guarant_cnt = (uint16_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) & - I40E_PFQF_FDSTAT_GUARANT_CNT_MASK) >> - I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT); - best_cnt = (uint16_t)((I40E_READ_REG(hw, I40E_PFQF_FDSTAT) & - I40E_PFQF_FDSTAT_BEST_CNT_MASK) >> - I40E_PFQF_FDSTAT_BEST_CNT_SHIFT); - if (guarant_cnt != 0 || best_cnt != 0) { - PMD_DRV_LOG(ERR, "Failed to flush FD table."); - return -ENOSYS; - } else - PMD_DRV_LOG(INFO, "FD table Flush success."); - return 0; -} - -static inline void -i40e_fdir_info_get_flex_set(struct i40e_pf *pf, - struct rte_eth_flex_payload_cfg *flex_set, - uint16_t *num) -{ - struct i40e_fdir_flex_pit *flex_pit; - struct rte_eth_flex_payload_cfg *ptr = flex_set; - uint16_t src, dst, size, j, k; - uint8_t i, layer_idx; - - for (layer_idx = I40E_FLXPLD_L2_IDX; - layer_idx <= I40E_FLXPLD_L4_IDX; - layer_idx++) { - if (layer_idx == I40E_FLXPLD_L2_IDX) - ptr->type = RTE_ETH_L2_PAYLOAD; - else if (layer_idx == I40E_FLXPLD_L3_IDX) - ptr->type = RTE_ETH_L3_PAYLOAD; - else if (layer_idx == I40E_FLXPLD_L4_IDX) - ptr->type = RTE_ETH_L4_PAYLOAD; - - for (i = 0; i < I40E_MAX_FLXPLD_FIED; i++) { - flex_pit = 
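/*
 * Aside: the DD-bit wait in i40e_fdir_filter_programming() and the
 * CLEARFDTABLE wait in i40e_fdir_flush() above are the same bounded
 * poll-until-done idiom.  Generic sketch; delay_us() is an assumed
 * helper standing in for rte_delay_us()/rte_delay_ms():
 */
#include <stdbool.h>

extern void delay_us(unsigned int us);	/* assumed helper */

static int
poll_until(bool (*done)(void *), void *arg,
	   unsigned int tries, unsigned int interval_us)
{
	unsigned int i;

	for (i = 0; i < tries; i++) {
		delay_us(interval_us);
		if (done(arg))
			return 0;
	}
	return -1;	/* callers above map this to -ETIMEDOUT */
}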
&pf->fdir.flex_set[layer_idx * - I40E_MAX_FLXPLD_FIED + i]; - if (flex_pit->size == 0) - continue; - src = flex_pit->src_offset * sizeof(uint16_t); - dst = flex_pit->dst_offset * sizeof(uint16_t); - size = flex_pit->size * sizeof(uint16_t); - for (j = src, k = dst; j < src + size; j++, k++) - ptr->src_offset[k] = j; - } - (*num)++; - ptr++; - } -} - -static inline void -i40e_fdir_info_get_flex_mask(struct i40e_pf *pf, - struct rte_eth_fdir_flex_mask *flex_mask, - uint16_t *num) -{ - struct i40e_fdir_flex_mask *mask; - struct rte_eth_fdir_flex_mask *ptr = flex_mask; - uint16_t flow_type; - uint8_t i, j; - uint16_t off_bytes, mask_tmp; - - for (i = I40E_FILTER_PCTYPE_NONF_IPV4_UDP; - i <= I40E_FILTER_PCTYPE_FRAG_IPV6; - i++) { - mask = &pf->fdir.flex_mask[i]; - if (!I40E_VALID_PCTYPE((enum i40e_filter_pctype)i)) - continue; - flow_type = i40e_pctype_to_flowtype((enum i40e_filter_pctype)i); - for (j = 0; j < I40E_FDIR_MAX_FLEXWORD_NUM; j++) { - if (mask->word_mask & I40E_FLEX_WORD_MASK(j)) { - ptr->mask[j * sizeof(uint16_t)] = UINT8_MAX; - ptr->mask[j * sizeof(uint16_t) + 1] = UINT8_MAX; - } else { - ptr->mask[j * sizeof(uint16_t)] = 0x0; - ptr->mask[j * sizeof(uint16_t) + 1] = 0x0; - } - } - for (j = 0; j < I40E_FDIR_BITMASK_NUM_WORD; j++) { - off_bytes = mask->bitmask[j].offset * sizeof(uint16_t); - mask_tmp = ~mask->bitmask[j].mask; - ptr->mask[off_bytes] &= I40E_HI_BYTE(mask_tmp); - ptr->mask[off_bytes + 1] &= I40E_LO_BYTE(mask_tmp); - } - ptr->flow_type = flow_type; - ptr++; - (*num)++; - } -} - -/* - * i40e_fdir_info_get - get information of Flow Director - * @pf: ethernet device to get info from - * @fdir: a pointer to a structure of type *rte_eth_fdir_info* to be filled with - * the flow director information. - */ -static void -i40e_fdir_info_get(struct rte_eth_dev *dev, struct rte_eth_fdir_info *fdir) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - uint16_t num_flex_set = 0; - uint16_t num_flex_mask = 0; - - if (dev->data->dev_conf.fdir_conf.mode == RTE_FDIR_MODE_PERFECT) - fdir->mode = RTE_FDIR_MODE_PERFECT; - else - fdir->mode = RTE_FDIR_MODE_NONE; - - fdir->guarant_spc = - (uint32_t)hw->func_caps.fd_filters_guaranteed; - fdir->best_spc = - (uint32_t)hw->func_caps.fd_filters_best_effort; - fdir->max_flexpayload = I40E_FDIR_MAX_FLEX_LEN; - fdir->flow_types_mask[0] = I40E_FDIR_FLOWS; - fdir->flex_payload_unit = sizeof(uint16_t); - fdir->flex_bitmask_unit = sizeof(uint16_t); - fdir->max_flex_payload_segment_num = I40E_MAX_FLXPLD_FIED; - fdir->flex_payload_limit = I40E_MAX_FLX_SOURCE_OFF; - fdir->max_flex_bitmask_num = I40E_FDIR_BITMASK_NUM_WORD; - - i40e_fdir_info_get_flex_set(pf, - fdir->flex_conf.flex_set, - &num_flex_set); - i40e_fdir_info_get_flex_mask(pf, - fdir->flex_conf.flex_mask, - &num_flex_mask); - - fdir->flex_conf.nb_payloads = num_flex_set; - fdir->flex_conf.nb_flexmasks = num_flex_mask; -} - -/* - * i40e_fdir_stat_get - get statistics of Flow Director - * @pf: ethernet device to get info from - * @stat: a pointer to a structure of type *rte_eth_fdir_stats* to be filled with - * the flow director statistics. 
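/*
 * Aside: the PFQF_FDSTAT reads in i40e_fdir_stats_get() below are
 * plain mask-and-shift field extraction; the same pattern recurs all
 * through this file.  Generic sketch:
 */
#include <stdint.h>

static inline uint32_t
reg_field(uint32_t reg, uint32_t mask, uint32_t shift)
{
	return (reg & mask) >> shift;
}

/*
 * e.g. guarant_cnt = reg_field(fdstat,
 *	I40E_PFQF_FDSTAT_GUARANT_CNT_MASK,
 *	I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT);
 */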
- */ -static void -i40e_fdir_stats_get(struct rte_eth_dev *dev, struct rte_eth_fdir_stats *stat) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - uint32_t fdstat; - - fdstat = I40E_READ_REG(hw, I40E_PFQF_FDSTAT); - stat->guarant_cnt = - (uint32_t)((fdstat & I40E_PFQF_FDSTAT_GUARANT_CNT_MASK) >> - I40E_PFQF_FDSTAT_GUARANT_CNT_SHIFT); - stat->best_cnt = - (uint32_t)((fdstat & I40E_PFQF_FDSTAT_BEST_CNT_MASK) >> - I40E_PFQF_FDSTAT_BEST_CNT_SHIFT); -} - -/* - * i40e_fdir_ctrl_func - deal with all operations on flow director. - * @pf: board private structure - * @filter_op:operation will be taken. - * @arg: a pointer to specific structure corresponding to the filter_op - */ -int -i40e_fdir_ctrl_func(struct rte_eth_dev *dev, - enum rte_filter_op filter_op, - void *arg) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - int ret = 0; - - if ((pf->flags & I40E_FLAG_FDIR) == 0) - return -ENOTSUP; - - if (filter_op == RTE_ETH_FILTER_NOP) - return 0; - - if (arg == NULL && filter_op != RTE_ETH_FILTER_FLUSH) - return -EINVAL; - - switch (filter_op) { - case RTE_ETH_FILTER_ADD: - ret = i40e_add_del_fdir_filter(dev, - (struct rte_eth_fdir_filter *)arg, - TRUE); - break; - case RTE_ETH_FILTER_DELETE: - ret = i40e_add_del_fdir_filter(dev, - (struct rte_eth_fdir_filter *)arg, - FALSE); - break; - case RTE_ETH_FILTER_FLUSH: - ret = i40e_fdir_flush(dev); - break; - case RTE_ETH_FILTER_INFO: - i40e_fdir_info_get(dev, (struct rte_eth_fdir_info *)arg); - break; - case RTE_ETH_FILTER_STATS: - i40e_fdir_stats_get(dev, (struct rte_eth_fdir_stats *)arg); - break; - default: - PMD_DRV_LOG(ERR, "unknown operation %u.", filter_op); - ret = -EINVAL; - break; - } - return ret; -} diff --git a/lib/librte_pmd_i40e/i40e_logs.h b/lib/librte_pmd_i40e/i40e_logs.h deleted file mode 100644 index 399867f..0000000 --- a/lib/librte_pmd_i40e/i40e_logs.h +++ /dev/null @@ -1,78 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef _I40E_LOGS_H_ -#define _I40E_LOGS_H_ - -#define PMD_INIT_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ - "PMD: %s(): " fmt "\n", __func__, ##args) - -#ifdef RTE_LIBRTE_I40E_DEBUG_INIT -#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") -#else -#define PMD_INIT_FUNC_TRACE() do { } while(0) -#endif - -#ifdef RTE_LIBRTE_I40E_DEBUG_RX -#define PMD_RX_LOG(level, fmt, args...) \ - RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) -#else -#define PMD_RX_LOG(level, fmt, args...) do { } while(0) -#endif - -#ifdef RTE_LIBRTE_I40E_DEBUG_TX -#define PMD_TX_LOG(level, fmt, args...) \ - RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) -#else -#define PMD_TX_LOG(level, fmt, args...) do { } while(0) -#endif - -#ifdef RTE_LIBRTE_I40E_DEBUG_TX_FREE -#define PMD_TX_FREE_LOG(level, fmt, args...) \ - RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args) -#else -#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while(0) -#endif - -#ifdef RTE_LIBRTE_I40E_DEBUG_DRIVER -#define PMD_DRV_LOG_RAW(level, fmt, args...) \ - RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args) -#else -#define PMD_DRV_LOG_RAW(level, fmt, args...) do { } while (0) -#endif - -#define PMD_DRV_LOG(level, fmt, args...) \ - PMD_DRV_LOG_RAW(level, fmt "\n", ## args) - -#endif /* _I40E_LOGS_H_ */ diff --git a/lib/librte_pmd_i40e/i40e_pf.c b/lib/librte_pmd_i40e/i40e_pf.c deleted file mode 100644 index cbb2dcc..0000000 --- a/lib/librte_pmd_i40e/i40e_pf.c +++ /dev/null @@ -1,1063 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include - -#include "i40e_logs.h" -#include "i40e/i40e_prototype.h" -#include "i40e/i40e_adminq_cmd.h" -#include "i40e/i40e_type.h" -#include "i40e_ethdev.h" -#include "i40e_rxtx.h" -#include "i40e_pf.h" - -#define I40E_CFG_CRCSTRIP_DEFAULT 1 - -static int -i40e_pf_host_switch_queues(struct i40e_pf_vf *vf, - struct i40e_virtchnl_queue_select *qsel, - bool on); - -/** - * Bind PF queues with VSI and VF. - **/ -static int -i40e_pf_vf_queues_mapping(struct i40e_pf_vf *vf) -{ - int i; - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - uint16_t vsi_id = vf->vsi->vsi_id; - uint16_t vf_id = vf->vf_idx; - uint16_t nb_qps = vf->vsi->nb_qps; - uint16_t qbase = vf->vsi->base_queue; - uint16_t q1, q2; - uint32_t val; - - /* - * VF should use scatter range queues. So, it needn't - * to set QBASE in this register. - */ - I40E_WRITE_REG(hw, I40E_VSILAN_QBASE(vsi_id), - I40E_VSILAN_QBASE_VSIQTABLE_ENA_MASK); - - /* Set to enable VFLAN_QTABLE[] registers valid */ - I40E_WRITE_REG(hw, I40E_VPLAN_MAPENA(vf_id), - I40E_VPLAN_MAPENA_TXRX_ENA_MASK); - - /* map PF queues to VF */ - for (i = 0; i < nb_qps; i++) { - val = ((qbase + i) & I40E_VPLAN_QTABLE_QINDEX_MASK); - I40E_WRITE_REG(hw, I40E_VPLAN_QTABLE(i, vf_id), val); - } - - /* map PF queues to VSI */ - for (i = 0; i < I40E_MAX_QP_NUM_PER_VF / 2; i++) { - if (2 * i > nb_qps - 1) - q1 = I40E_VSILAN_QTABLE_QINDEX_0_MASK; - else - q1 = qbase + 2 * i; - - if (2 * i + 1 > nb_qps - 1) - q2 = I40E_VSILAN_QTABLE_QINDEX_0_MASK; - else - q2 = qbase + 2 * i + 1; - - val = (q2 << I40E_VSILAN_QTABLE_QINDEX_1_SHIFT) + q1; - I40E_WRITE_REG(hw, I40E_VSILAN_QTABLE(i, vsi_id), val); - } - I40E_WRITE_FLUSH(hw); - - return I40E_SUCCESS; -} - - -/** - * Proceed VF reset operation. - */ -int -i40e_pf_host_vf_reset(struct i40e_pf_vf *vf, bool do_hw_reset) -{ - uint32_t val, i; - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - uint16_t vf_id, abs_vf_id, vf_msix_num; - int ret; - struct i40e_virtchnl_queue_select qsel; - - if (vf == NULL) - return -EINVAL; - - vf_id = vf->vf_idx; - abs_vf_id = vf_id + hw->func_caps.vf_base_id; - - /* Notify VF that we are in VFR progress */ - I40E_WRITE_REG(hw, I40E_VFGEN_RSTAT1(vf_id), I40E_PF_VFR_INPROGRESS); - - /* - * If require a SW VF reset, a VFLR interrupt will be generated, - * this function will be called again. To avoid it, - * disable interrupt first. 
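/*
 * Aside: i40e_pf_vf_queues_mapping() above packs two queue indices per
 * VSILAN_QTABLE register, padding an odd tail with an "unused"
 * sentinel.  Standalone sketch of the pairing; the sentinel value and
 * high-slot shift are illustrative placeholders:
 */
#include <stdint.h>

#define QIDX_UNUSED 0x7FFu	/* placeholder sentinel */
#define QIDX1_SHIFT 16u		/* placeholder high-slot shift */

static uint32_t
pack_queue_pair(uint16_t base, uint16_t nb_qps, unsigned int pair)
{
	uint32_t q1 = (2 * pair < nb_qps) ?
		      base + 2 * pair : QIDX_UNUSED;
	uint32_t q2 = (2 * pair + 1 < nb_qps) ?
		      base + 2 * pair + 1 : QIDX_UNUSED;

	return (q2 << QIDX1_SHIFT) | q1;
}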
- */ - if (do_hw_reset) { - vf->state = I40E_VF_INRESET; - val = I40E_READ_REG(hw, I40E_VPGEN_VFRTRIG(vf_id)); - val |= I40E_VPGEN_VFRTRIG_VFSWR_MASK; - I40E_WRITE_REG(hw, I40E_VPGEN_VFRTRIG(vf_id), val); - I40E_WRITE_FLUSH(hw); - } - -#define VFRESET_MAX_WAIT_CNT 100 - /* Wait until VF reset is done */ - for (i = 0; i < VFRESET_MAX_WAIT_CNT; i++) { - rte_delay_us(10); - val = I40E_READ_REG(hw, I40E_VPGEN_VFRSTAT(vf_id)); - if (val & I40E_VPGEN_VFRSTAT_VFRD_MASK) - break; - } - - if (i >= VFRESET_MAX_WAIT_CNT) { - PMD_DRV_LOG(ERR, "VF reset timeout"); - return -ETIMEDOUT; - } - - /* This is not first time to do reset, do cleanup job first */ - if (vf->vsi) { - /* Disable queues */ - memset(&qsel, 0, sizeof(qsel)); - for (i = 0; i < vf->vsi->nb_qps; i++) - qsel.rx_queues |= 1 << i; - qsel.tx_queues = qsel.rx_queues; - ret = i40e_pf_host_switch_queues(vf, &qsel, false); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Disable VF queues failed"); - return -EFAULT; - } - - /* Disable VF interrupt setting */ - vf_msix_num = hw->func_caps.num_msix_vectors_vf; - for (i = 0; i < vf_msix_num; i++) { - if (!i) - val = I40E_VFINT_DYN_CTL0(vf_id); - else - val = I40E_VFINT_DYN_CTLN(((vf_msix_num - 1) * - (vf_id)) + (i - 1)); - I40E_WRITE_REG(hw, val, I40E_VFINT_DYN_CTLN_CLEARPBA_MASK); - } - I40E_WRITE_FLUSH(hw); - - /* remove VSI */ - ret = i40e_vsi_release(vf->vsi); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Release VSI failed"); - return -EFAULT; - } - } - -#define I40E_VF_PCI_ADDR 0xAA -#define I40E_VF_PEND_MASK 0x20 - /* Check the pending transactions of this VF */ - /* Use absolute VF id, refer to datasheet for details */ - I40E_WRITE_REG(hw, I40E_PF_PCI_CIAA, I40E_VF_PCI_ADDR | - (abs_vf_id << I40E_PF_PCI_CIAA_VF_NUM_SHIFT)); - for (i = 0; i < VFRESET_MAX_WAIT_CNT; i++) { - rte_delay_us(1); - val = I40E_READ_REG(hw, I40E_PF_PCI_CIAD); - if ((val & I40E_VF_PEND_MASK) == 0) - break; - } - - if (i >= VFRESET_MAX_WAIT_CNT) { - PMD_DRV_LOG(ERR, "Wait VF PCI transaction end timeout"); - return -ETIMEDOUT; - } - - /* Reset done, Set COMPLETE flag and clear reset bit */ - I40E_WRITE_REG(hw, I40E_VFGEN_RSTAT1(vf_id), I40E_PF_VFR_COMPLETED); - val = I40E_READ_REG(hw, I40E_VPGEN_VFRTRIG(vf_id)); - val &= ~I40E_VPGEN_VFRTRIG_VFSWR_MASK; - I40E_WRITE_REG(hw, I40E_VPGEN_VFRTRIG(vf_id), val); - vf->reset_cnt++; - I40E_WRITE_FLUSH(hw); - - /* Allocate resource again */ - vf->vsi = i40e_vsi_setup(vf->pf, I40E_VSI_SRIOV, - vf->pf->main_vsi, vf->vf_idx); - if (vf->vsi == NULL) { - PMD_DRV_LOG(ERR, "Add vsi failed"); - return -EFAULT; - } - - ret = i40e_pf_vf_queues_mapping(vf); - if (ret != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "queue mapping error"); - i40e_vsi_release(vf->vsi); - return -EFAULT; - } - - return ret; -} - -static int -i40e_pf_host_send_msg_to_vf(struct i40e_pf_vf *vf, - uint32_t opcode, - uint32_t retval, - uint8_t *msg, - uint16_t msglen) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - uint16_t abs_vf_id = hw->func_caps.vf_base_id + vf->vf_idx; - int ret; - - ret = i40e_aq_send_msg_to_vf(hw, abs_vf_id, opcode, retval, - msg, msglen, NULL); - if (ret) { - PMD_INIT_LOG(ERR, "Fail to send message to VF, err %u", - hw->aq.asq_last_status); - } - - return ret; -} - -static void -i40e_pf_host_process_cmd_version(struct i40e_pf_vf *vf) -{ - struct i40e_virtchnl_version_info info; - - info.major = I40E_DPDK_VERSION_MAJOR; - info.minor = I40E_DPDK_VERSION_MINOR; - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_VERSION, - I40E_SUCCESS, (uint8_t *)&info, sizeof(info)); -} - -static int 
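/*
 * Aside: the reset path above selects all nb_qps queues by OR-ing
 * single bits in a loop.  For counts below 32 this is equivalent to
 * the closed form below; a minimal sketch:
 */
#include <stdint.h>

static inline uint32_t
queue_bitmap(unsigned int nb_qps)
{
	/* the per-VF queue count used above is well below 32 */
	return (nb_qps >= 32) ? UINT32_MAX : ((1u << nb_qps) - 1u);
}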
-i40e_pf_host_process_cmd_reset_vf(struct i40e_pf_vf *vf) -{ - i40e_pf_host_vf_reset(vf, 1); - - /* No feedback will be sent to VF for VFLR */ - return I40E_SUCCESS; -} - -static int -i40e_pf_host_process_cmd_get_vf_resource(struct i40e_pf_vf *vf) -{ - struct i40e_virtchnl_vf_resource *vf_res = NULL; - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - uint32_t len = 0; - int ret = I40E_SUCCESS; - - /* only have 1 VSI by default */ - len = sizeof(struct i40e_virtchnl_vf_resource) + - I40E_DEFAULT_VF_VSI_NUM * - sizeof(struct i40e_virtchnl_vsi_resource); - - vf_res = rte_zmalloc("i40e_vf_res", len, 0); - if (vf_res == NULL) { - PMD_DRV_LOG(ERR, "failed to allocate mem"); - ret = I40E_ERR_NO_MEMORY; - vf_res = NULL; - len = 0; - goto send_msg; - } - - vf_res->vf_offload_flags = I40E_VIRTCHNL_VF_OFFLOAD_L2 | - I40E_VIRTCHNL_VF_OFFLOAD_VLAN; - vf_res->max_vectors = hw->func_caps.num_msix_vectors_vf; - vf_res->num_queue_pairs = vf->vsi->nb_qps; - vf_res->num_vsis = I40E_DEFAULT_VF_VSI_NUM; - - /* Change below setting if PF host can support more VSIs for VF */ - vf_res->vsi_res[0].vsi_type = I40E_VSI_SRIOV; - /* As assume Vf only has single VSI now, always return 0 */ - vf_res->vsi_res[0].vsi_id = 0; - vf_res->vsi_res[0].num_queue_pairs = vf->vsi->nb_qps; - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_VF_RESOURCES, - ret, (uint8_t *)vf_res, len); - rte_free(vf_res); - - return ret; -} - -static int -i40e_pf_host_hmc_config_rxq(struct i40e_hw *hw, - struct i40e_pf_vf *vf, - struct i40e_virtchnl_rxq_info *rxq, - uint8_t crcstrip) -{ - int err = I40E_SUCCESS; - struct i40e_hmc_obj_rxq rx_ctx; - uint16_t abs_queue_id = vf->vsi->base_queue + rxq->queue_id; - - /* Clear the context structure first */ - memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq)); - rx_ctx.dbuff = rxq->databuffer_size >> I40E_RXQ_CTX_DBUFF_SHIFT; - rx_ctx.hbuff = rxq->hdr_size >> I40E_RXQ_CTX_HBUFF_SHIFT; - rx_ctx.base = rxq->dma_ring_addr / I40E_QUEUE_BASE_ADDR_UNIT; - rx_ctx.qlen = rxq->ring_len; -#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC - rx_ctx.dsize = 1; -#endif - - if (rxq->splithdr_enabled) { - rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_ALL; - rx_ctx.dtype = i40e_header_split_enabled; - } else { - rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_NONE; - rx_ctx.dtype = i40e_header_split_none; - } - rx_ctx.rxmax = rxq->max_pkt_size; - rx_ctx.tphrdesc_ena = 1; - rx_ctx.tphwdesc_ena = 1; - rx_ctx.tphdata_ena = 1; - rx_ctx.tphhead_ena = 1; - rx_ctx.lrxqthresh = 2; - rx_ctx.crcstrip = crcstrip; - rx_ctx.l2tsel = 1; - rx_ctx.prefena = 1; - - err = i40e_clear_lan_rx_queue_context(hw, abs_queue_id); - if (err != I40E_SUCCESS) - return err; - err = i40e_set_lan_rx_queue_context(hw, abs_queue_id, &rx_ctx); - - return err; -} - -static int -i40e_pf_host_hmc_config_txq(struct i40e_hw *hw, - struct i40e_pf_vf *vf, - struct i40e_virtchnl_txq_info *txq) -{ - int err = I40E_SUCCESS; - struct i40e_hmc_obj_txq tx_ctx; - uint32_t qtx_ctl; - uint16_t abs_queue_id = vf->vsi->base_queue + txq->queue_id; - - - /* clear the context structure first */ - memset(&tx_ctx, 0, sizeof(tx_ctx)); - tx_ctx.new_context = 1; - tx_ctx.base = txq->dma_ring_addr / I40E_QUEUE_BASE_ADDR_UNIT; - tx_ctx.qlen = txq->ring_len; - tx_ctx.rdylist = rte_le_to_cpu_16(vf->vsi->info.qs_handle[0]); - err = i40e_clear_lan_tx_queue_context(hw, abs_queue_id); - if (err != I40E_SUCCESS) - return err; - - err = i40e_set_lan_tx_queue_context(hw, abs_queue_id, &tx_ctx); - if (err != I40E_SUCCESS) - return err; - - /* bind queue with VF function, since TX/QX will appear in pair, - * so 
only has QTX_CTL to set. - */ - qtx_ctl = (I40E_QTX_CTL_VF_QUEUE << I40E_QTX_CTL_PFVF_Q_SHIFT) | - ((hw->pf_id << I40E_QTX_CTL_PF_INDX_SHIFT) & - I40E_QTX_CTL_PF_INDX_MASK) | - (((vf->vf_idx + hw->func_caps.vf_base_id) << - I40E_QTX_CTL_VFVM_INDX_SHIFT) & - I40E_QTX_CTL_VFVM_INDX_MASK); - I40E_WRITE_REG(hw, I40E_QTX_CTL(abs_queue_id), qtx_ctl); - I40E_WRITE_FLUSH(hw); - - return I40E_SUCCESS; -} - -static int -i40e_pf_host_process_cmd_config_vsi_queues(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - struct i40e_vsi *vsi = vf->vsi; - struct i40e_virtchnl_vsi_queue_config_info *vc_vqci = - (struct i40e_virtchnl_vsi_queue_config_info *)msg; - struct i40e_virtchnl_queue_pair_info *vc_qpi; - int i, ret = I40E_SUCCESS; - - if (!msg || vc_vqci->num_queue_pairs > vsi->nb_qps || - vc_vqci->num_queue_pairs > I40E_MAX_VSI_QP || - msglen < I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqci, - vc_vqci->num_queue_pairs)) { - PMD_DRV_LOG(ERR, "vsi_queue_config_info argument wrong\n"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - vc_qpi = vc_vqci->qpair; - for (i = 0; i < vc_vqci->num_queue_pairs; i++) { - if (vc_qpi[i].rxq.queue_id > vsi->nb_qps - 1 || - vc_qpi[i].txq.queue_id > vsi->nb_qps - 1) { - ret = I40E_ERR_PARAM; - goto send_msg; - } - - /* - * Apply VF RX queue setting to HMC. - * If the opcode is I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT, - * then the extra information of - * 'struct i40e_virtchnl_queue_pair_extra_info' is needed, - * otherwise set the last parameter to NULL. - */ - if (i40e_pf_host_hmc_config_rxq(hw, vf, &vc_qpi[i].rxq, - I40E_CFG_CRCSTRIP_DEFAULT) != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Configure RX queue HMC failed"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - /* Apply VF TX queue setting to HMC */ - if (i40e_pf_host_hmc_config_txq(hw, vf, - &vc_qpi[i].txq) != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Configure TX queue HMC failed"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - } - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_config_vsi_queues_ext(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - struct i40e_vsi *vsi = vf->vsi; - struct i40e_virtchnl_vsi_queue_config_ext_info *vc_vqcei = - (struct i40e_virtchnl_vsi_queue_config_ext_info *)msg; - struct i40e_virtchnl_queue_pair_ext_info *vc_qpei; - int i, ret = I40E_SUCCESS; - - if (!msg || vc_vqcei->num_queue_pairs > vsi->nb_qps || - vc_vqcei->num_queue_pairs > I40E_MAX_VSI_QP || - msglen < I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(vc_vqcei, - vc_vqcei->num_queue_pairs)) { - PMD_DRV_LOG(ERR, "vsi_queue_config_ext_info argument wrong\n"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - vc_qpei = vc_vqcei->qpair; - for (i = 0; i < vc_vqcei->num_queue_pairs; i++) { - if (vc_qpei[i].rxq.queue_id > vsi->nb_qps - 1 || - vc_qpei[i].txq.queue_id > vsi->nb_qps - 1) { - ret = I40E_ERR_PARAM; - goto send_msg; - } - /* - * Apply VF RX queue setting to HMC. - * If the opcode is I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT, - * then the extra information of - * 'struct i40e_virtchnl_queue_pair_ext_info' is needed, - * otherwise set the last parameter to NULL. 
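/*
 * Aside: the virtchnl handlers above and below all validate a
 * variable-length message the same way: the element count claimed in
 * the header must be consistent with the byte length actually
 * received.  Generic sketch with an illustrative message layout:
 */
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

struct vl_msg {			/* illustrative layout */
	uint16_t num_elements;
	uint32_t elem[];
};

static bool
vl_msg_ok(const struct vl_msg *m, size_t msglen, uint16_t max_elems)
{
	if (m == NULL || msglen < sizeof(*m))
		return false;
	if (m->num_elements > max_elems)
		return false;
	return msglen >= sizeof(*m) +
	       (size_t)m->num_elements * sizeof(m->elem[0]);
}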
- */ - if (i40e_pf_host_hmc_config_rxq(hw, vf, &vc_qpei[i].rxq, - vc_qpei[i].rxq_ext.crcstrip) != I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Configure RX queue HMC failed"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - /* Apply VF TX queue setting to HMC */ - if (i40e_pf_host_hmc_config_txq(hw, vf, &vc_qpei[i].txq) != - I40E_SUCCESS) { - PMD_DRV_LOG(ERR, "Configure TX queue HMC failed"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - } - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_config_irq_map(struct i40e_pf_vf *vf, - uint8_t *msg, uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_irq_map_info *irqmap = - (struct i40e_virtchnl_irq_map_info *)msg; - - if (msg == NULL || msglen < sizeof(struct i40e_virtchnl_irq_map_info)) { - PMD_DRV_LOG(ERR, "buffer too short"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - /* Assume VF only have 1 vector to bind all queues */ - if (irqmap->num_vectors != 1) { - PMD_DRV_LOG(ERR, "DKDK host only support 1 vector"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - if (irqmap->vecmap[0].vector_id == 0) { - PMD_DRV_LOG(ERR, "DPDK host don't support use IRQ0"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - /* This MSIX intr store the intr in VF range */ - vf->vsi->msix_intr = irqmap->vecmap[0].vector_id; - - /* Don't care how the TX/RX queue mapping with this vector. - * Link all VF RX queues together. Only did mapping work. - * VF can disable/enable the intr by itself. - */ - i40e_vsi_queues_bind_intr(vf->vsi); -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_switch_queues(struct i40e_pf_vf *vf, - struct i40e_virtchnl_queue_select *qsel, - bool on) -{ - int ret = I40E_SUCCESS; - int i; - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - uint16_t baseq = vf->vsi->base_queue; - - if (qsel->rx_queues + qsel->tx_queues == 0) - return I40E_ERR_PARAM; - - /* always enable RX first and disable last */ - /* Enable RX if it's enable */ - if (on) { - for (i = 0; i < I40E_MAX_QP_NUM_PER_VF; i++) - if (qsel->rx_queues & (1 << i)) { - ret = i40e_switch_rx_queue(hw, baseq + i, on); - if (ret != I40E_SUCCESS) - return ret; - } - } - - /* Enable/Disable TX */ - for (i = 0; i < I40E_MAX_QP_NUM_PER_VF; i++) - if (qsel->tx_queues & (1 << i)) { - ret = i40e_switch_tx_queue(hw, baseq + i, on); - if (ret != I40E_SUCCESS) - return ret; - } - - /* disable RX last if it's disable */ - if (!on) { - /* disable RX */ - for (i = 0; i < I40E_MAX_QP_NUM_PER_VF; i++) - if (qsel->rx_queues & (1 << i)) { - ret = i40e_switch_rx_queue(hw, baseq + i, on); - if (ret != I40E_SUCCESS) - return ret; - } - } - - return ret; -} - -static int -i40e_pf_host_process_cmd_enable_queues(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_queue_select *q_sel = - (struct i40e_virtchnl_queue_select *)msg; - - if (msg == NULL || msglen != sizeof(*q_sel)) { - ret = I40E_ERR_PARAM; - goto send_msg; - } - ret = i40e_pf_host_switch_queues(vf, q_sel, true); - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_ENABLE_QUEUES, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_disable_queues(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_queue_select *q_sel = - (struct i40e_virtchnl_queue_select *)msg; - - if (msg == NULL || msglen != 
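/*
 * Aside: i40e_pf_host_switch_queues() above encodes an ordering rule:
 * RX queues are switched on before TX and off after TX.  Sketch of
 * that shape; rx_switch()/tx_switch() are assumed stand-ins for
 * i40e_switch_rx_queue()/i40e_switch_tx_queue():
 */
#include <stdbool.h>

extern int rx_switch(bool on);	/* assumed */
extern int tx_switch(bool on);	/* assumed */

static int
switch_queues_ordered(bool on)
{
	int ret;

	if (on && (ret = rx_switch(true)) != 0)
		return ret;		/* enable RX first */
	if ((ret = tx_switch(on)) != 0)
		return ret;
	if (!on && (ret = rx_switch(false)) != 0)
		return ret;		/* disable RX last */
	return 0;
}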
sizeof(*q_sel)) { - ret = I40E_ERR_PARAM; - goto send_msg; - } - ret = i40e_pf_host_switch_queues(vf, q_sel, false); - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_DISABLE_QUEUES, - ret, NULL, 0); - - return ret; -} - - -static int -i40e_pf_host_process_cmd_add_ether_address(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_ether_addr_list *addr_list = - (struct i40e_virtchnl_ether_addr_list *)msg; - struct i40e_mac_filter_info filter; - int i; - struct ether_addr *mac; - - memset(&filter, 0 , sizeof(struct i40e_mac_filter_info)); - - if (msg == NULL || msglen <= sizeof(*addr_list)) { - PMD_DRV_LOG(ERR, "add_ether_address argument too short"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - for (i = 0; i < addr_list->num_elements; i++) { - mac = (struct ether_addr *)(addr_list->list[i].addr); - (void)rte_memcpy(&filter.mac_addr, mac, ETHER_ADDR_LEN); - filter.filter_type = RTE_MACVLAN_PERFECT_MATCH; - if(!is_valid_assigned_ether_addr(mac) || - i40e_vsi_add_mac(vf->vsi, &filter)) { - ret = I40E_ERR_INVALID_MAC_ADDR; - goto send_msg; - } - } - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_del_ether_address(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_ether_addr_list *addr_list = - (struct i40e_virtchnl_ether_addr_list *)msg; - int i; - struct ether_addr *mac; - - if (msg == NULL || msglen <= sizeof(*addr_list)) { - PMD_DRV_LOG(ERR, "delete_ether_address argument too short"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - for (i = 0; i < addr_list->num_elements; i++) { - mac = (struct ether_addr *)(addr_list->list[i].addr); - if(!is_valid_assigned_ether_addr(mac) || - i40e_vsi_delete_mac(vf->vsi, mac)) { - ret = I40E_ERR_INVALID_MAC_ADDR; - goto send_msg; - } - } - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_add_vlan(struct i40e_pf_vf *vf, - uint8_t *msg, uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_vlan_filter_list *vlan_filter_list = - (struct i40e_virtchnl_vlan_filter_list *)msg; - int i; - uint16_t *vid; - - if (msg == NULL || msglen <= sizeof(*vlan_filter_list)) { - PMD_DRV_LOG(ERR, "add_vlan argument too short"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - vid = vlan_filter_list->vlan_id; - - for (i = 0; i < vlan_filter_list->num_elements; i++) { - ret = i40e_vsi_add_vlan(vf->vsi, vid[i]); - if(ret != I40E_SUCCESS) - goto send_msg; - } - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_ADD_VLAN, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_del_vlan(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_vlan_filter_list *vlan_filter_list = - (struct i40e_virtchnl_vlan_filter_list *)msg; - int i; - uint16_t *vid; - - if (msg == NULL || msglen <= sizeof(*vlan_filter_list)) { - PMD_DRV_LOG(ERR, "delete_vlan argument too short"); - ret = I40E_ERR_PARAM; - goto send_msg; - } - - vid = vlan_filter_list->vlan_id; - for (i = 0; i < vlan_filter_list->num_elements; i++) { - ret = i40e_vsi_delete_vlan(vf->vsi, vid[i]); - if(ret != I40E_SUCCESS) - goto send_msg; - } - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_DEL_VLAN, - ret, NULL, 0); - - return ret; -} - -static int 
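/*
 * Aside: the MAC list handlers above reject any address that is not a
 * valid assigned unicast address.  A minimal standalone equivalent of
 * that check (group bit clear, not all-zero):
 */
#include <stdint.h>
#include <stdbool.h>

static bool
mac_is_valid_assigned(const uint8_t mac[6])
{
	uint8_t acc = 0;
	int i;

	if (mac[0] & 0x01)	/* group (multicast/broadcast) bit */
		return false;
	for (i = 0; i < 6; i++)
		acc |= mac[i];
	return acc != 0;	/* all-zero is invalid */
}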
-i40e_pf_host_process_cmd_config_promisc_mode( - struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_promisc_info *promisc = - (struct i40e_virtchnl_promisc_info *)msg; - struct i40e_hw *hw = I40E_PF_TO_HW(vf->pf); - bool unicast = FALSE, multicast = FALSE; - - if (msg == NULL || msglen != sizeof(*promisc)) { - ret = I40E_ERR_PARAM; - goto send_msg; - } - - if (promisc->flags & I40E_FLAG_VF_UNICAST_PROMISC) - unicast = TRUE; - ret = i40e_aq_set_vsi_unicast_promiscuous(hw, - vf->vsi->seid, unicast, NULL); - if (ret != I40E_SUCCESS) - goto send_msg; - - if (promisc->flags & I40E_FLAG_VF_MULTICAST_PROMISC) - multicast = TRUE; - ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vf->vsi->seid, - multicast, NULL); - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, - I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE, ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_get_stats(struct i40e_pf_vf *vf) -{ - i40e_update_vsi_stats(vf->vsi); - - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_STATS, - I40E_SUCCESS, (uint8_t *)&vf->vsi->eth_stats, - sizeof(vf->vsi->eth_stats)); - - return I40E_SUCCESS; -} - -static void -i40e_pf_host_process_cmd_get_link_status(struct i40e_pf_vf *vf) -{ - struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vf->pf->main_vsi); - - /* Update link status first to acquire latest link change */ - i40e_dev_link_update(dev, 1); - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_LINK_STAT, - I40E_SUCCESS, (uint8_t *)&dev->data->dev_link, - sizeof(struct rte_eth_link)); -} - -static int -i40e_pf_host_process_cmd_cfg_vlan_offload( - struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_vlan_offload_info *offload = - (struct i40e_virtchnl_vlan_offload_info *)msg; - - if (msg == NULL || msglen != sizeof(*offload)) { - ret = I40E_ERR_PARAM; - goto send_msg; - } - - ret = i40e_vsi_config_vlan_stripping(vf->vsi, - !!offload->enable_vlan_strip); - if (ret != 0) - PMD_DRV_LOG(ERR, "Failed to configure vlan stripping"); - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD, - ret, NULL, 0); - - return ret; -} - -static int -i40e_pf_host_process_cmd_cfg_pvid(struct i40e_pf_vf *vf, - uint8_t *msg, - uint16_t msglen) -{ - int ret = I40E_SUCCESS; - struct i40e_virtchnl_pvid_info *tpid_info = - (struct i40e_virtchnl_pvid_info *)msg; - - if (msg == NULL || msglen != sizeof(*tpid_info)) { - ret = I40E_ERR_PARAM; - goto send_msg; - } - - ret = i40e_vsi_vlan_pvid_set(vf->vsi, &tpid_info->info); - -send_msg: - i40e_pf_host_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_CFG_VLAN_PVID, - ret, NULL, 0); - - return ret; -} - -void -i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev, - uint16_t abs_vf_id, uint32_t opcode, - __rte_unused uint32_t retval, - uint8_t *msg, - uint16_t msglen) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private); - struct i40e_pf_vf *vf; - /* AdminQ will pass absolute VF id, transfer to internal vf id */ - uint16_t vf_id = abs_vf_id - hw->func_caps.vf_base_id; - - if (!dev || vf_id > pf->vf_num - 1 || !pf->vfs) { - PMD_DRV_LOG(ERR, "invalid argument"); - return; - } - - vf = &pf->vfs[vf_id]; - if (!vf->vsi) { - PMD_DRV_LOG(ERR, "NO VSI associated with VF found"); - i40e_pf_host_send_msg_to_vf(vf, opcode, - I40E_ERR_NO_AVAILABLE_VSI, NULL, 0); - return; - } - - switch (opcode) { - case I40E_VIRTCHNL_OP_VERSION : - PMD_DRV_LOG(INFO, "OP_VERSION 
received"); - i40e_pf_host_process_cmd_version(vf); - break; - case I40E_VIRTCHNL_OP_RESET_VF : - PMD_DRV_LOG(INFO, "OP_RESET_VF received"); - i40e_pf_host_process_cmd_reset_vf(vf); - break; - case I40E_VIRTCHNL_OP_GET_VF_RESOURCES: - PMD_DRV_LOG(INFO, "OP_GET_VF_RESOURCES received"); - i40e_pf_host_process_cmd_get_vf_resource(vf); - break; - case I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES: - PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES received"); - i40e_pf_host_process_cmd_config_vsi_queues(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT: - PMD_DRV_LOG(INFO, "OP_CONFIG_VSI_QUEUES_EXT received"); - i40e_pf_host_process_cmd_config_vsi_queues_ext(vf, msg, - msglen); - break; - case I40E_VIRTCHNL_OP_CONFIG_IRQ_MAP: - PMD_DRV_LOG(INFO, "OP_CONFIG_IRQ_MAP received"); - i40e_pf_host_process_cmd_config_irq_map(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_ENABLE_QUEUES: - PMD_DRV_LOG(INFO, "OP_ENABLE_QUEUES received"); - i40e_pf_host_process_cmd_enable_queues(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_DISABLE_QUEUES: - PMD_DRV_LOG(INFO, "OP_DISABLE_QUEUE received"); - i40e_pf_host_process_cmd_disable_queues(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_ADD_ETHER_ADDRESS: - PMD_DRV_LOG(INFO, "OP_ADD_ETHER_ADDRESS received"); - i40e_pf_host_process_cmd_add_ether_address(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_DEL_ETHER_ADDRESS: - PMD_DRV_LOG(INFO, "OP_DEL_ETHER_ADDRESS received"); - i40e_pf_host_process_cmd_del_ether_address(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_ADD_VLAN: - PMD_DRV_LOG(INFO, "OP_ADD_VLAN received"); - i40e_pf_host_process_cmd_add_vlan(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_DEL_VLAN: - PMD_DRV_LOG(INFO, "OP_DEL_VLAN received"); - i40e_pf_host_process_cmd_del_vlan(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE: - PMD_DRV_LOG(INFO, "OP_CONFIG_PROMISCUOUS_MODE received"); - i40e_pf_host_process_cmd_config_promisc_mode(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_GET_STATS: - PMD_DRV_LOG(INFO, "OP_GET_STATS received"); - i40e_pf_host_process_cmd_get_stats(vf); - break; - case I40E_VIRTCHNL_OP_GET_LINK_STAT: - PMD_DRV_LOG(INFO, "OP_GET_LINK_STAT received"); - i40e_pf_host_process_cmd_get_link_status(vf); - break; - case I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD: - PMD_DRV_LOG(INFO, "OP_CFG_VLAN_OFFLOAD received"); - i40e_pf_host_process_cmd_cfg_vlan_offload(vf, msg, msglen); - break; - case I40E_VIRTCHNL_OP_CFG_VLAN_PVID: - PMD_DRV_LOG(INFO, "OP_CFG_VLAN_PVID received"); - i40e_pf_host_process_cmd_cfg_pvid(vf, msg, msglen); - break; - /* Don't add command supported below, which will - * return an error code. - */ - case I40E_VIRTCHNL_OP_FCOE: - PMD_DRV_LOG(ERR, "OP_FCOE received, not supported"); - default: - PMD_DRV_LOG(ERR, "%u received, not supported", opcode); - i40e_pf_host_send_msg_to_vf(vf, opcode, I40E_ERR_PARAM, - NULL, 0); - break; - } -} - -int -i40e_pf_host_init(struct rte_eth_dev *dev) -{ - struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private); - struct i40e_hw *hw = I40E_PF_TO_HW(pf); - int ret, i; - uint32_t val; - - PMD_INIT_FUNC_TRACE(); - - /** - * return if SRIOV not enabled, VF number not configured or - * no queue assigned. 
- */ - if(!hw->func_caps.sr_iov_1_1 || pf->vf_num == 0 || pf->vf_nb_qps == 0) - return I40E_SUCCESS; - - /* Allocate memory to store VF structure */ - pf->vfs = rte_zmalloc("i40e_pf_vf",sizeof(*pf->vfs) * pf->vf_num, 0); - if(pf->vfs == NULL) - return -ENOMEM; - - /* Disable irq0 for VFR event */ - i40e_pf_disable_irq0(hw); - - /* Disable VF link status interrupt */ - val = I40E_READ_REG(hw, I40E_PFGEN_PORTMDIO_NUM); - val &= ~I40E_PFGEN_PORTMDIO_NUM_VFLINK_STAT_ENA_MASK; - I40E_WRITE_REG(hw, I40E_PFGEN_PORTMDIO_NUM, val); - I40E_WRITE_FLUSH(hw); - - for (i = 0; i < pf->vf_num; i++) { - pf->vfs[i].pf = pf; - pf->vfs[i].state = I40E_VF_INACTIVE; - pf->vfs[i].vf_idx = i; - ret = i40e_pf_host_vf_reset(&pf->vfs[i], 0); - if (ret != I40E_SUCCESS) - goto fail; - } - - /* restore irq0 */ - i40e_pf_enable_irq0(hw); - - return I40E_SUCCESS; - -fail: - rte_free(pf->vfs); - i40e_pf_enable_irq0(hw); - - return ret; -} diff --git a/lib/librte_pmd_i40e/i40e_pf.h b/lib/librte_pmd_i40e/i40e_pf.h deleted file mode 100644 index 8bf83c2..0000000 --- a/lib/librte_pmd_i40e/i40e_pf.h +++ /dev/null @@ -1,127 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. - * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#ifndef _I40E_PF_H_ -#define _I40E_PF_H_ - -/* VERSION info to exchange between VF and PF host. In case VF works with - * ND kernel driver, it reads I40E_VIRTCHNL_VERSION_MAJOR/MINOR. In - * case works with DPDK host, it reads version below. Then VF realize who it - * is talking to and use proper language to communicate. 
- * */ -#define I40E_DPDK_SIGNATURE ('D' << 24 | 'P' << 16 | 'D' << 8 | 'K') -#define I40E_DPDK_VERSION_MAJOR I40E_DPDK_SIGNATURE -#define I40E_DPDK_VERSION_MINOR 0 - -/* Default setting on number of VSIs that VF can contain */ -#define I40E_DEFAULT_VF_VSI_NUM 1 - -#define I40E_DPDK_OFFSET 0x100 - -enum i40e_pf_vfr_state { - I40E_PF_VFR_INPROGRESS = 0, - I40E_PF_VFR_COMPLETED = 1, -}; - -/* DPDK pf driver specific command to VF */ -enum i40e_virtchnl_ops_dpdk { - /* - * Keep some gap between Linux PF commands and - * DPDK PF extended commands. - */ - I40E_VIRTCHNL_OP_GET_LINK_STAT = I40E_VIRTCHNL_OP_VERSION + - I40E_DPDK_OFFSET, - I40E_VIRTCHNL_OP_CFG_VLAN_OFFLOAD, - I40E_VIRTCHNL_OP_CFG_VLAN_PVID, - I40E_VIRTCHNL_OP_CONFIG_VSI_QUEUES_EXT, -}; - -/* A structure to support extended info of a receive queue. */ -struct i40e_virtchnl_rxq_ext_info { - uint8_t crcstrip; -}; - -/* - * A structure to support extended info of queue pairs, an additional field - * is added, comparing to original 'struct i40e_virtchnl_queue_pair_info'. - */ -struct i40e_virtchnl_queue_pair_ext_info { - /* vsi_id and queue_id should be identical for both rx and tx queues.*/ - struct i40e_virtchnl_txq_info txq; - struct i40e_virtchnl_rxq_info rxq; - struct i40e_virtchnl_rxq_ext_info rxq_ext; -}; - -/* - * A structure to support extended info of VSI queue pairs, - * 'struct i40e_virtchnl_queue_pair_ext_info' is used, see its original - * of 'struct i40e_virtchnl_queue_pair_info'. - */ -struct i40e_virtchnl_vsi_queue_config_ext_info { - uint16_t vsi_id; - uint16_t num_queue_pairs; - struct i40e_virtchnl_queue_pair_ext_info qpair[0]; -}; - -struct i40e_virtchnl_vlan_offload_info { - uint16_t vsi_id; - uint8_t enable_vlan_strip; - uint8_t reserved; -}; - -/* - * Macro to calculate the memory size for configuring VSI queues - * via virtual channel. - */ -#define I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE(x, n) \ - (sizeof(*(x)) + sizeof((x)->qpair[0]) * (n)) - -/* - * I40E_VIRTCHNL_OP_CFG_VLAN_PVID - * VF sends this message to enable/disable pvid. If it's - * enable op, needs to specify the pvid. PF returns status - * code in retval. - */ -struct i40e_virtchnl_pvid_info { - uint16_t vsi_id; - struct i40e_vsi_vlan_pvid_info info; -}; - -int i40e_pf_host_vf_reset(struct i40e_pf_vf *vf, bool do_hw_reset); -void i40e_pf_host_handle_vf_msg(struct rte_eth_dev *dev, - uint16_t abs_vf_id, uint32_t opcode, - __rte_unused uint32_t retval, - uint8_t *msg, uint16_t msglen); -int i40e_pf_host_init(struct rte_eth_dev *dev); - -#endif /* _I40E_PF_H_ */ diff --git a/lib/librte_pmd_i40e/i40e_rxtx.c b/lib/librte_pmd_i40e/i40e_rxtx.c deleted file mode 100644 index 493cfa3..0000000 --- a/lib/librte_pmd_i40e/i40e_rxtx.c +++ /dev/null @@ -1,2709 +0,0 @@ -/*- - * BSD LICENSE - * - * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. - * All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions - * are met: - * - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in - * the documentation and/or other materials provided with the - * distribution. 
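The I40E_VIRTCHNL_CONFIG_VSI_QUEUES_SIZE macro above sizes a message that ends in a variable-length qpair[] array: the fixed header plus n trailing elements. A standalone sketch of the same sizing pattern, using a C99 flexible array member where the header above uses a zero-length array, and hypothetical struct names:

/* Illustrative sketch only -- not part of this patch. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct qpair { uint16_t txq_len, rxq_len; }; /* stand-in for the ext pair */
struct vsi_cfg {
    uint16_t vsi_id;
    uint16_t num_queue_pairs;
    struct qpair qpair[];                    /* C99 flexible array member */
};

/* Same shape as the macro above: header size plus n trailing elements. */
#define VSI_CFG_SIZE(x, n) (sizeof(*(x)) + sizeof((x)->qpair[0]) * (n))

int main(void)
{
    uint16_t n = 4;
    struct vsi_cfg *cfg = malloc(VSI_CFG_SIZE(cfg, n)); /* sizeof does not evaluate cfg */
    if (cfg == NULL)
        return 1;
    memset(cfg, 0, VSI_CFG_SIZE(cfg, n));
    cfg->num_queue_pairs = n;
    printf("message size for %u pairs: %zu bytes\n", n, VSI_CFG_SIZE(cfg, n));
    free(cfg);
    return 0;
}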
- * * Neither the name of Intel Corporation nor the names of its - * contributors may be used to endorse or promote products derived - * from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "i40e_logs.h" -#include "i40e/i40e_prototype.h" -#include "i40e/i40e_type.h" -#include "i40e_ethdev.h" -#include "i40e_rxtx.h" - -#define I40E_MIN_RING_DESC 64 -#define I40E_MAX_RING_DESC 4096 -#define I40E_ALIGN 128 -#define DEFAULT_TX_RS_THRESH 32 -#define DEFAULT_TX_FREE_THRESH 32 -#define I40E_MAX_PKT_TYPE 256 - -#define I40E_VLAN_TAG_SIZE 4 -#define I40E_TX_MAX_BURST 32 - -#define I40E_DMA_MEM_ALIGN 4096 - -#define I40E_SIMPLE_FLAGS ((uint32_t)ETH_TXQ_FLAGS_NOMULTSEGS | \ - ETH_TXQ_FLAGS_NOOFFLOADS) - -#define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS) - -#define I40E_TX_CKSUM_OFFLOAD_MASK ( \ - PKT_TX_IP_CKSUM | \ - PKT_TX_L4_MASK | \ - PKT_TX_OUTER_IP_CKSUM) - -#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \ - (uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM) - -#define RTE_MBUF_DATA_DMA_ADDR(mb) \ - ((uint64_t)((mb)->buf_physaddr + (mb)->data_off)) - -static const struct rte_memzone * -i40e_ring_dma_zone_reserve(struct rte_eth_dev *dev, - const char *ring_name, - uint16_t queue_id, - uint32_t ring_size, - int socket_id); -static uint16_t i40e_xmit_pkts_simple(void *tx_queue, - struct rte_mbuf **tx_pkts, - uint16_t nb_pkts); - -/* Translate the rx descriptor status to pkt flags */ -static inline uint64_t -i40e_rxd_status_to_pkt_flags(uint64_t qword) -{ - uint64_t flags; - - /* Check if VLAN packet */ - flags = qword & (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ? - PKT_RX_VLAN_PKT : 0; - - /* Check if RSS_HASH */ - flags |= (((qword >> I40E_RX_DESC_STATUS_FLTSTAT_SHIFT) & - I40E_RX_DESC_FLTSTAT_RSS_HASH) == - I40E_RX_DESC_FLTSTAT_RSS_HASH) ? PKT_RX_RSS_HASH : 0; - - /* Check if FDIR Match */ - flags |= (qword & (1 << I40E_RX_DESC_STATUS_FLM_SHIFT) ? 
- PKT_RX_FDIR : 0); - - return flags; -} - -static inline uint64_t -i40e_rxd_error_to_pkt_flags(uint64_t qword) -{ - uint64_t flags = 0; - uint64_t error_bits = (qword >> I40E_RXD_QW1_ERROR_SHIFT); - -#define I40E_RX_ERR_BITS 0x3f - if (likely((error_bits & I40E_RX_ERR_BITS) == 0)) - return flags; - /* If RXE bit set, all other status bits are meaningless */ - if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_RXE_SHIFT))) { - flags |= PKT_RX_MAC_ERR; - return flags; - } - - /* If RECIPE bit set, all other status indications should be ignored */ - if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_RECIPE_SHIFT))) { - flags |= PKT_RX_RECIP_ERR; - return flags; - } - if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_HBO_SHIFT))) - flags |= PKT_RX_HBUF_OVERFLOW; - if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_IPE_SHIFT))) - flags |= PKT_RX_IP_CKSUM_BAD; - if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_L4E_SHIFT))) - flags |= PKT_RX_L4_CKSUM_BAD; - if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_EIPE_SHIFT))) - flags |= PKT_RX_EIP_CKSUM_BAD; - if (unlikely(error_bits & (1 << I40E_RX_DESC_ERROR_OVERSIZE_SHIFT))) - flags |= PKT_RX_OVERSIZE; - - return flags; -} - -/* Translate pkt types to pkt flags */ -static inline uint64_t -i40e_rxd_ptype_to_pkt_flags(uint64_t qword) -{ - uint8_t ptype = (uint8_t)((qword & I40E_RXD_QW1_PTYPE_MASK) >> - I40E_RXD_QW1_PTYPE_SHIFT); - static const uint64_t ip_ptype_map[I40E_MAX_PKT_TYPE] = { - 0, /* PTYPE 0 */ - 0, /* PTYPE 1 */ - 0, /* PTYPE 2 */ - 0, /* PTYPE 3 */ - 0, /* PTYPE 4 */ - 0, /* PTYPE 5 */ - 0, /* PTYPE 6 */ - 0, /* PTYPE 7 */ - 0, /* PTYPE 8 */ - 0, /* PTYPE 9 */ - 0, /* PTYPE 10 */ - 0, /* PTYPE 11 */ - 0, /* PTYPE 12 */ - 0, /* PTYPE 13 */ - 0, /* PTYPE 14 */ - 0, /* PTYPE 15 */ - 0, /* PTYPE 16 */ - 0, /* PTYPE 17 */ - 0, /* PTYPE 18 */ - 0, /* PTYPE 19 */ - 0, /* PTYPE 20 */ - 0, /* PTYPE 21 */ - PKT_RX_IPV4_HDR, /* PTYPE 22 */ - PKT_RX_IPV4_HDR, /* PTYPE 23 */ - PKT_RX_IPV4_HDR, /* PTYPE 24 */ - 0, /* PTYPE 25 */ - PKT_RX_IPV4_HDR, /* PTYPE 26 */ - PKT_RX_IPV4_HDR, /* PTYPE 27 */ - PKT_RX_IPV4_HDR, /* PTYPE 28 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 29 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 30 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 31 */ - 0, /* PTYPE 32 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 33 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 34 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 35 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 36 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 37 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 38 */ - 0, /* PTYPE 39 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 40 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 41 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 42 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 43 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 44 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 45 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 46 */ - 0, /* PTYPE 47 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 48 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 49 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 50 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 51 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 52 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 53 */ - 0, /* PTYPE 54 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 55 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 56 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 57 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 58 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 59 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 60 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 61 */ - 0, /* PTYPE 62 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 63 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 64 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 65 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 66 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 67 */ - PKT_RX_TUNNEL_IPV4_HDR, /* 
PTYPE 68 */ - 0, /* PTYPE 69 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 70 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 71 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 72 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 73 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 74 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 75 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 76 */ - 0, /* PTYPE 77 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 78 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 79 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 80 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 81 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 82 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 83 */ - 0, /* PTYPE 84 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 85 */ - PKT_RX_TUNNEL_IPV4_HDR, /* PTYPE 86 */ - PKT_RX_IPV4_HDR_EXT, /* PTYPE 87 */ - PKT_RX_IPV6_HDR, /* PTYPE 88 */ - PKT_RX_IPV6_HDR, /* PTYPE 89 */ - PKT_RX_IPV6_HDR, /* PTYPE 90 */ - 0, /* PTYPE 91 */ - PKT_RX_IPV6_HDR, /* PTYPE 92 */ - PKT_RX_IPV6_HDR, /* PTYPE 93 */ - PKT_RX_IPV6_HDR, /* PTYPE 94 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 95 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 96 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 97 */ - 0, /* PTYPE 98 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 99 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 100 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 101 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 102 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 103 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 104 */ - 0, /* PTYPE 105 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 106 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 107 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 108 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 109 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 110 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 111 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 112 */ - 0, /* PTYPE 113 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 114 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 115 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 116 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 117 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 118 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 119 */ - 0, /* PTYPE 120 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 121 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 122 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 123 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 124 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 125 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 126 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 127 */ - 0, /* PTYPE 128 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 129 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 130 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 131 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 132 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 133 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 134 */ - 0, /* PTYPE 135 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 136 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 137 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 138 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 139 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 140 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 141 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 142 */ - 0, /* PTYPE 143 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 144 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 145 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 146 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 147 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 148 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 149 */ - 0, /* PTYPE 150 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 151 */ - PKT_RX_TUNNEL_IPV6_HDR, /* PTYPE 152 */ - PKT_RX_IPV6_HDR_EXT, /* PTYPE 153 */ - 0, /* PTYPE 154 */ - 0, /* PTYPE 155 */ - 0, /* PTYPE 156 */ - 0, /* PTYPE 157 */ - 0, /* PTYPE 158 */ - 0, /* PTYPE 159 */ - 0, /* PTYPE 160 */ - 0, /* PTYPE 161 */ - 0, /* PTYPE 162 */ - 0, /* PTYPE 163 */ - 0, /* PTYPE 164 */ - 0, /* PTYPE 165 */ - 0, /* PTYPE 166 */ - 0, /* PTYPE 167 */ - 0, /* PTYPE 168 */ - 0, /* PTYPE 169 */ - 0, /* PTYPE 170 */ - 0, /* PTYPE 171 */ - 
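The translation above is a straight table lookup: the 8-bit PTYPE extracted from the descriptor qword indexes a 256-entry array of precomputed flag words, so unknown types map to zero at no extra cost. A minimal sketch of the pattern, with hypothetical flag bits and a made-up field layout:

/* Illustrative sketch only -- not part of this patch. */
#include <stdint.h>
#include <stdio.h>

#define MAX_PKT_TYPE 256
#define F_IPV4 (1u << 0) /* stand-ins for PKT_RX_* flag bits */
#define F_IPV6 (1u << 1)

static uint64_t ptype_map[MAX_PKT_TYPE];

static void init_map(void)
{
    ptype_map[22] = F_IPV4; /* a couple of entries, as in the table above */
    ptype_map[88] = F_IPV6;
}

static uint64_t ptype_to_flags(uint64_t qword, unsigned ptype_shift,
                               uint64_t ptype_mask)
{
    uint8_t ptype = (uint8_t)((qword & ptype_mask) >> ptype_shift);
    return ptype_map[ptype]; /* unknown types yield 0: no flags */
}

int main(void)
{
    init_map();
    /* descriptor qword with PTYPE 22 at a hypothetical bit position 30 */
    uint64_t qword = (uint64_t)22 << 30;
    printf("flags=%#llx\n",
           (unsigned long long)ptype_to_flags(qword, 30, 0xffULL << 30));
    return 0;
}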
0, /* PTYPE 172 */ - 0, /* PTYPE 173 */ - 0, /* PTYPE 174 */ - 0, /* PTYPE 175 */ - 0, /* PTYPE 176 */ - 0, /* PTYPE 177 */ - 0, /* PTYPE 178 */ - 0, /* PTYPE 179 */ - 0, /* PTYPE 180 */ - 0, /* PTYPE 181 */ - 0, /* PTYPE 182 */ - 0, /* PTYPE 183 */ - 0, /* PTYPE 184 */ - 0, /* PTYPE 185 */ - 0, /* PTYPE 186 */ - 0, /* PTYPE 187 */ - 0, /* PTYPE 188 */ - 0, /* PTYPE 189 */ - 0, /* PTYPE 190 */ - 0, /* PTYPE 191 */ - 0, /* PTYPE 192 */ - 0, /* PTYPE 193 */ - 0, /* PTYPE 194 */ - 0, /* PTYPE 195 */ - 0, /* PTYPE 196 */ - 0, /* PTYPE 197 */ - 0, /* PTYPE 198 */ - 0, /* PTYPE 199 */ - 0, /* PTYPE 200 */ - 0, /* PTYPE 201 */ - 0, /* PTYPE 202 */ - 0, /* PTYPE 203 */ - 0, /* PTYPE 204 */ - 0, /* PTYPE 205 */ - 0, /* PTYPE 206 */ - 0, /* PTYPE 207 */ - 0, /* PTYPE 208 */ - 0, /* PTYPE 209 */ - 0, /* PTYPE 210 */ - 0, /* PTYPE 211 */ - 0, /* PTYPE 212 */ - 0, /* PTYPE 213 */ - 0, /* PTYPE 214 */ - 0, /* PTYPE 215 */ - 0, /* PTYPE 216 */ - 0, /* PTYPE 217 */ - 0, /* PTYPE 218 */ - 0, /* PTYPE 219 */ - 0, /* PTYPE 220 */ - 0, /* PTYPE 221 */ - 0, /* PTYPE 222 */ - 0, /* PTYPE 223 */ - 0, /* PTYPE 224 */ - 0, /* PTYPE 225 */ - 0, /* PTYPE 226 */ - 0, /* PTYPE 227 */ - 0, /* PTYPE 228 */ - 0, /* PTYPE 229 */ - 0, /* PTYPE 230 */ - 0, /* PTYPE 231 */ - 0, /* PTYPE 232 */ - 0, /* PTYPE 233 */ - 0, /* PTYPE 234 */ - 0, /* PTYPE 235 */ - 0, /* PTYPE 236 */ - 0, /* PTYPE 237 */ - 0, /* PTYPE 238 */ - 0, /* PTYPE 239 */ - 0, /* PTYPE 240 */ - 0, /* PTYPE 241 */ - 0, /* PTYPE 242 */ - 0, /* PTYPE 243 */ - 0, /* PTYPE 244 */ - 0, /* PTYPE 245 */ - 0, /* PTYPE 246 */ - 0, /* PTYPE 247 */ - 0, /* PTYPE 248 */ - 0, /* PTYPE 249 */ - 0, /* PTYPE 250 */ - 0, /* PTYPE 251 */ - 0, /* PTYPE 252 */ - 0, /* PTYPE 253 */ - 0, /* PTYPE 254 */ - 0, /* PTYPE 255 */ - }; - - return ip_ptype_map[ptype]; -} - -#define I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK 0x03 -#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID 0x01 -#define I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX 0x02 -#define I40E_RX_DESC_EXT_STATUS_FLEXBL_MASK 0x03 -#define I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX 0x01 - -static inline uint64_t -i40e_rxd_build_fdir(volatile union i40e_rx_desc *rxdp, struct rte_mbuf *mb) -{ - uint64_t flags = 0; -#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC - uint16_t flexbh, flexbl; - - flexbh = (rte_le_to_cpu_32(rxdp->wb.qword2.ext_status) >> - I40E_RX_DESC_EXT_STATUS_FLEXBH_SHIFT) & - I40E_RX_DESC_EXT_STATUS_FLEXBH_MASK; - flexbl = (rte_le_to_cpu_32(rxdp->wb.qword2.ext_status) >> - I40E_RX_DESC_EXT_STATUS_FLEXBL_SHIFT) & - I40E_RX_DESC_EXT_STATUS_FLEXBL_MASK; - - - if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FD_ID) { - mb->hash.fdir.hi = - rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.fd_id); - flags |= PKT_RX_FDIR_ID; - } else if (flexbh == I40E_RX_DESC_EXT_STATUS_FLEXBH_FLEX) { - mb->hash.fdir.hi = - rte_le_to_cpu_32(rxdp->wb.qword3.hi_dword.flex_bytes_hi); - flags |= PKT_RX_FDIR_FLX; - } - if (flexbl == I40E_RX_DESC_EXT_STATUS_FLEXBL_FLEX) { - mb->hash.fdir.lo = - rte_le_to_cpu_32(rxdp->wb.qword3.lo_dword.flex_bytes_lo); - flags |= PKT_RX_FDIR_FLX; - } -#else - mb->hash.fdir.hi = - rte_le_to_cpu_32(rxdp->wb.qword0.hi_dword.fd_id); - flags |= PKT_RX_FDIR_ID; -#endif - return flags; -} -static inline void -i40e_txd_enable_checksum(uint64_t ol_flags, - uint32_t *td_cmd, - uint32_t *td_offset, - union i40e_tx_offload tx_offload, - uint32_t *cd_tunneling) -{ - /* UDP tunneling packet TX checksum offload */ - if (ol_flags & PKT_TX_OUTER_IP_CKSUM) { - - *td_offset |= (tx_offload.outer_l2_len >> 1) - << I40E_TX_DESC_LENGTH_MACLEN_SHIFT; - - if (ol_flags & 
PKT_TX_OUTER_IP_CKSUM) - *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4; - else if (ol_flags & PKT_TX_OUTER_IPV4) - *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM; - else if (ol_flags & PKT_TX_OUTER_IPV6) - *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6; - - /* Now set the ctx descriptor fields */ - *cd_tunneling |= (tx_offload.outer_l3_len >> 2) << - I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT | - (tx_offload.l2_len >> 1) << - I40E_TXD_CTX_QW0_NATLEN_SHIFT; - - } else - *td_offset |= (tx_offload.l2_len >> 1) - << I40E_TX_DESC_LENGTH_MACLEN_SHIFT; - - /* Enable L3 checksum offloads */ - if (ol_flags & PKT_TX_IP_CKSUM) { - *td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4_CSUM; - *td_offset |= (tx_offload.l3_len >> 2) - << I40E_TX_DESC_LENGTH_IPLEN_SHIFT; - } else if (ol_flags & PKT_TX_IPV4) { - *td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV4; - *td_offset |= (tx_offload.l3_len >> 2) - << I40E_TX_DESC_LENGTH_IPLEN_SHIFT; - } else if (ol_flags & PKT_TX_IPV6) { - *td_cmd |= I40E_TX_DESC_CMD_IIPT_IPV6; - *td_offset |= (tx_offload.l3_len >> 2) - << I40E_TX_DESC_LENGTH_IPLEN_SHIFT; - } - - if (ol_flags & PKT_TX_TCP_SEG) { - *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP; - *td_offset |= (tx_offload.l4_len >> 2) - << I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; - return; - } - - /* Enable L4 checksum offloads */ - switch (ol_flags & PKT_TX_L4_MASK) { - case PKT_TX_TCP_CKSUM: - *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_TCP; - *td_offset |= (sizeof(struct tcp_hdr) >> 2) << - I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; - break; - case PKT_TX_SCTP_CKSUM: - *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_SCTP; - *td_offset |= (sizeof(struct sctp_hdr) >> 2) << - I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; - break; - case PKT_TX_UDP_CKSUM: - *td_cmd |= I40E_TX_DESC_CMD_L4T_EOFT_UDP; - *td_offset |= (sizeof(struct udp_hdr) >> 2) << - I40E_TX_DESC_LENGTH_L4_FC_LEN_SHIFT; - break; - default: - break; - } -} - -static inline struct rte_mbuf * -rte_rxmbuf_alloc(struct rte_mempool *mp) -{ - struct rte_mbuf *m; - - m = __rte_mbuf_raw_alloc(mp); - __rte_mbuf_sanity_check_raw(m, 0); - - return m; -} - -/* Construct the tx flags */ -static inline uint64_t -i40e_build_ctob(uint32_t td_cmd, - uint32_t td_offset, - unsigned int size, - uint32_t td_tag) -{ - return rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DATA | - ((uint64_t)td_cmd << I40E_TXD_QW1_CMD_SHIFT) | - ((uint64_t)td_offset << I40E_TXD_QW1_OFFSET_SHIFT) | - ((uint64_t)size << I40E_TXD_QW1_TX_BUF_SZ_SHIFT) | - ((uint64_t)td_tag << I40E_TXD_QW1_L2TAG1_SHIFT)); -} - -static inline int -i40e_xmit_cleanup(struct i40e_tx_queue *txq) -{ - struct i40e_tx_entry *sw_ring = txq->sw_ring; - volatile struct i40e_tx_desc *txd = txq->tx_ring; - uint16_t last_desc_cleaned = txq->last_desc_cleaned; - uint16_t nb_tx_desc = txq->nb_tx_desc; - uint16_t desc_to_clean_to; - uint16_t nb_tx_to_clean; - - desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->tx_rs_thresh); - if (desc_to_clean_to >= nb_tx_desc) - desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc); - - desc_to_clean_to = sw_ring[desc_to_clean_to].last_id; - if (!(txd[desc_to_clean_to].cmd_type_offset_bsz & - rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))) { - PMD_TX_FREE_LOG(DEBUG, "TX descriptor %4u is not done " - "(port=%d queue=%d)", desc_to_clean_to, - txq->port_id, txq->queue_id); - return -1; - } - - if (last_desc_cleaned > desc_to_clean_to) - nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) + - desc_to_clean_to); - else - nb_tx_to_clean = (uint16_t)(desc_to_clean_to - - last_desc_cleaned); - - txd[desc_to_clean_to].cmd_type_offset_bsz = 0; - - txq->last_desc_cleaned = 
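The checksum setup above packs header lengths into the descriptor offset field in hardware units -- MACLEN in 2-byte words, IPLEN and L4LEN in 4-byte words -- which is why each length is shifted right before being shifted into position. A sketch with hypothetical shift positions in place of the I40E_TX_DESC_LENGTH_* constants:

/* Illustrative sketch only -- not part of this patch. */
#include <stdint.h>
#include <stdio.h>

#define MACLEN_SHIFT 0  /* hypothetical field positions */
#define IPLEN_SHIFT  7
#define L4LEN_SHIFT  14

static uint32_t pack_td_offset(uint16_t l2_len, uint16_t l3_len,
                               uint16_t l4_len)
{
    uint32_t off = 0;

    off |= (uint32_t)(l2_len >> 1) << MACLEN_SHIFT; /* 14 B -> 7 two-byte words */
    off |= (uint32_t)(l3_len >> 2) << IPLEN_SHIFT;  /* 20 B -> 5 four-byte words */
    off |= (uint32_t)(l4_len >> 2) << L4LEN_SHIFT;  /* 20 B -> 5 four-byte words */
    return off;
}

int main(void)
{
    /* plain TCP/IPv4: 14-byte MAC, 20-byte IP, 20-byte TCP headers */
    printf("td_offset=%#x\n", pack_td_offset(14, 20, 20));
    return 0;
}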
desc_to_clean_to; - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + nb_tx_to_clean); - - return 0; -} - -static inline int -#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC -check_rx_burst_bulk_alloc_preconditions(struct i40e_rx_queue *rxq) -#else -check_rx_burst_bulk_alloc_preconditions(__rte_unused struct i40e_rx_queue *rxq) -#endif -{ - int ret = 0; - -#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC - if (!(rxq->rx_free_thresh >= RTE_PMD_I40E_RX_MAX_BURST)) { - PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " - "rxq->rx_free_thresh=%d, " - "RTE_PMD_I40E_RX_MAX_BURST=%d", - rxq->rx_free_thresh, RTE_PMD_I40E_RX_MAX_BURST); - ret = -EINVAL; - } else if (!(rxq->rx_free_thresh < rxq->nb_rx_desc)) { - PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " - "rxq->rx_free_thresh=%d, " - "rxq->nb_rx_desc=%d", - rxq->rx_free_thresh, rxq->nb_rx_desc); - ret = -EINVAL; - } else if (rxq->nb_rx_desc % rxq->rx_free_thresh != 0) { - PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " - "rxq->nb_rx_desc=%d, " - "rxq->rx_free_thresh=%d", - rxq->nb_rx_desc, rxq->rx_free_thresh); - ret = -EINVAL; - } else if (!(rxq->nb_rx_desc < (I40E_MAX_RING_DESC - - RTE_PMD_I40E_RX_MAX_BURST))) { - PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: " - "rxq->nb_rx_desc=%d, " - "I40E_MAX_RING_DESC=%d, " - "RTE_PMD_I40E_RX_MAX_BURST=%d", - rxq->nb_rx_desc, I40E_MAX_RING_DESC, - RTE_PMD_I40E_RX_MAX_BURST); - ret = -EINVAL; - } -#else - ret = -EINVAL; -#endif - - return ret; -} - -#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC -#define I40E_LOOK_AHEAD 8 -#if (I40E_LOOK_AHEAD != 8) -#error "PMD I40E: I40E_LOOK_AHEAD must be 8\n" -#endif -static inline int -i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq) -{ - volatile union i40e_rx_desc *rxdp; - struct i40e_rx_entry *rxep; - struct rte_mbuf *mb; - uint16_t pkt_len; - uint64_t qword1; - uint32_t rx_status; - int32_t s[I40E_LOOK_AHEAD], nb_dd; - int32_t i, j, nb_rx = 0; - uint64_t pkt_flags; - - rxdp = &rxq->rx_ring[rxq->rx_tail]; - rxep = &rxq->sw_ring[rxq->rx_tail]; - - qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); - rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> - I40E_RXD_QW1_STATUS_SHIFT; - - /* Make sure there is at least 1 packet to receive */ - if (!(rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT))) - return 0; - - /** - * Scan LOOK_AHEAD descriptors at a time to determine which - * descriptors reference packets that are ready to be received. - */ - for (i = 0; i < RTE_PMD_I40E_RX_MAX_BURST; i+=I40E_LOOK_AHEAD, - rxdp += I40E_LOOK_AHEAD, rxep += I40E_LOOK_AHEAD) { - /* Read desc statuses backwards to avoid race condition */ - for (j = I40E_LOOK_AHEAD - 1; j >= 0; j--) { - qword1 = rte_le_to_cpu_64(\ - rxdp[j].wb.qword1.status_error_len); - s[j] = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> - I40E_RXD_QW1_STATUS_SHIFT; - } - - /* Compute how many status bits were set */ - for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) - nb_dd += s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT); - - nb_rx += nb_dd; - - /* Translate descriptor info to mbuf parameters */ - for (j = 0; j < nb_dd; j++) { - mb = rxep[j].mbuf; - qword1 = rte_le_to_cpu_64(\ - rxdp[j].wb.qword1.status_error_len); - rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> - I40E_RXD_QW1_STATUS_SHIFT; - pkt_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> - I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len; - mb->data_len = pkt_len; - mb->pkt_len = pkt_len; - mb->vlan_tci = rx_status & - (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ? 
- rte_le_to_cpu_16(\ - rxdp[j].wb.qword0.lo_dword.l2tag1) : 0; - pkt_flags = i40e_rxd_status_to_pkt_flags(qword1); - pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1); - pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1); - - mb->packet_type = (uint16_t)((qword1 & - I40E_RXD_QW1_PTYPE_MASK) >> - I40E_RXD_QW1_PTYPE_SHIFT); - if (pkt_flags & PKT_RX_RSS_HASH) - mb->hash.rss = rte_le_to_cpu_32(\ - rxdp[j].wb.qword0.hi_dword.rss); - if (pkt_flags & PKT_RX_FDIR) - pkt_flags |= i40e_rxd_build_fdir(&rxdp[j], mb); - - mb->ol_flags = pkt_flags; - } - - for (j = 0; j < I40E_LOOK_AHEAD; j++) - rxq->rx_stage[i + j] = rxep[j].mbuf; - - if (nb_dd != I40E_LOOK_AHEAD) - break; - } - - /* Clear software ring entries */ - for (i = 0; i < nb_rx; i++) - rxq->sw_ring[rxq->rx_tail + i].mbuf = NULL; - - return nb_rx; -} - -static inline uint16_t -i40e_rx_fill_from_stage(struct i40e_rx_queue *rxq, - struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) -{ - uint16_t i; - struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail]; - - nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail); - - for (i = 0; i < nb_pkts; i++) - rx_pkts[i] = stage[i]; - - rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts); - rxq->rx_next_avail = (uint16_t)(rxq->rx_next_avail + nb_pkts); - - return nb_pkts; -} - -static inline int -i40e_rx_alloc_bufs(struct i40e_rx_queue *rxq) -{ - volatile union i40e_rx_desc *rxdp; - struct i40e_rx_entry *rxep; - struct rte_mbuf *mb; - uint16_t alloc_idx, i; - uint64_t dma_addr; - int diag; - - /* Allocate buffers in bulk */ - alloc_idx = (uint16_t)(rxq->rx_free_trigger - - (rxq->rx_free_thresh - 1)); - rxep = &(rxq->sw_ring[alloc_idx]); - diag = rte_mempool_get_bulk(rxq->mp, (void *)rxep, - rxq->rx_free_thresh); - if (unlikely(diag != 0)) { - PMD_DRV_LOG(ERR, "Failed to get mbufs in bulk"); - return -ENOMEM; - } - - rxdp = &rxq->rx_ring[alloc_idx]; - for (i = 0; i < rxq->rx_free_thresh; i++) { - mb = rxep[i].mbuf; - rte_mbuf_refcnt_set(mb, 1); - mb->next = NULL; - mb->data_off = RTE_PKTMBUF_HEADROOM; - mb->nb_segs = 1; - mb->port = rxq->port_id; - dma_addr = rte_cpu_to_le_64(\ - RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb)); - rxdp[i].read.hdr_addr = dma_addr; - rxdp[i].read.pkt_addr = dma_addr; - } - - /* Update rx tail regsiter */ - rte_wmb(); - I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->rx_free_trigger); - - rxq->rx_free_trigger = - (uint16_t)(rxq->rx_free_trigger + rxq->rx_free_thresh); - if (rxq->rx_free_trigger >= rxq->nb_rx_desc) - rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1); - - return 0; -} - -static inline uint16_t -rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) -{ - struct i40e_rx_queue *rxq = (struct i40e_rx_queue *)rx_queue; - uint16_t nb_rx = 0; - - if (!nb_pkts) - return 0; - - if (rxq->rx_nb_avail) - return i40e_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); - - nb_rx = (uint16_t)i40e_rx_scan_hw_ring(rxq); - rxq->rx_next_avail = 0; - rxq->rx_nb_avail = nb_rx; - rxq->rx_tail = (uint16_t)(rxq->rx_tail + nb_rx); - - if (rxq->rx_tail > rxq->rx_free_trigger) { - if (i40e_rx_alloc_bufs(rxq) != 0) { - uint16_t i, j; - - PMD_RX_LOG(DEBUG, "Rx mbuf alloc failed for " - "port_id=%u, queue_id=%u", - rxq->port_id, rxq->queue_id); - rxq->rx_nb_avail = 0; - rxq->rx_tail = (uint16_t)(rxq->rx_tail - nb_rx); - for (i = 0, j = rxq->rx_tail; i < nb_rx; i++, j++) - rxq->sw_ring[j].mbuf = rxq->rx_stage[i]; - - return 0; - } - } - - if (rxq->rx_tail >= rxq->nb_rx_desc) - rxq->rx_tail = 0; - - if (rxq->rx_nb_avail) - return i40e_rx_fill_from_stage(rxq, rx_pkts, nb_pkts); - - return 0; 
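The bulk-allocation RX path above refills the ring rx_free_thresh buffers at a time and keeps a trigger index that wraps back to rx_free_thresh - 1 at the end of the ring. A standalone sketch of that trigger arithmetic:

/* Illustrative sketch only -- not part of this patch. */
#include <stdint.h>
#include <stdio.h>

struct rxq {
    uint16_t nb_desc;
    uint16_t free_thresh;
    uint16_t free_trigger; /* last index of the next block to refill */
};

static void refill_block(struct rxq *q)
{
    uint16_t first = (uint16_t)(q->free_trigger - (q->free_thresh - 1));

    printf("refill descriptors [%u..%u]\n", first, q->free_trigger);
    q->free_trigger = (uint16_t)(q->free_trigger + q->free_thresh);
    if (q->free_trigger >= q->nb_desc)
        q->free_trigger = (uint16_t)(q->free_thresh - 1);
}

int main(void)
{
    struct rxq q = { .nb_desc = 128, .free_thresh = 32, .free_trigger = 31 };
    int i;

    for (i = 0; i < 5; i++)
        refill_block(&q); /* 0..31, 32..63, 64..95, 96..127, 0..31 again */
    return 0;
}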
-} - -static uint16_t -i40e_recv_pkts_bulk_alloc(void *rx_queue, - struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) -{ - uint16_t nb_rx = 0, n, count; - - if (unlikely(nb_pkts == 0)) - return 0; - - if (likely(nb_pkts <= RTE_PMD_I40E_RX_MAX_BURST)) - return rx_recv_pkts(rx_queue, rx_pkts, nb_pkts); - - while (nb_pkts) { - n = RTE_MIN(nb_pkts, RTE_PMD_I40E_RX_MAX_BURST); - count = rx_recv_pkts(rx_queue, &rx_pkts[nb_rx], n); - nb_rx = (uint16_t)(nb_rx + count); - nb_pkts = (uint16_t)(nb_pkts - count); - if (count < n) - break; - } - - return nb_rx; -} -#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */ - -uint16_t -i40e_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) -{ - struct i40e_rx_queue *rxq; - volatile union i40e_rx_desc *rx_ring; - volatile union i40e_rx_desc *rxdp; - union i40e_rx_desc rxd; - struct i40e_rx_entry *sw_ring; - struct i40e_rx_entry *rxe; - struct rte_mbuf *rxm; - struct rte_mbuf *nmb; - uint16_t nb_rx; - uint32_t rx_status; - uint64_t qword1; - uint16_t rx_packet_len; - uint16_t rx_id, nb_hold; - uint64_t dma_addr; - uint64_t pkt_flags; - - nb_rx = 0; - nb_hold = 0; - rxq = rx_queue; - rx_id = rxq->rx_tail; - rx_ring = rxq->rx_ring; - sw_ring = rxq->sw_ring; - - while (nb_rx < nb_pkts) { - rxdp = &rx_ring[rx_id]; - qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); - rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) - >> I40E_RXD_QW1_STATUS_SHIFT; - /* Check the DD bit first */ - if (!(rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT))) - break; - - nmb = rte_rxmbuf_alloc(rxq->mp); - if (unlikely(!nmb)) - break; - rxd = *rxdp; - - nb_hold++; - rxe = &sw_ring[rx_id]; - rx_id++; - if (unlikely(rx_id == rxq->nb_rx_desc)) - rx_id = 0; - - /* Prefetch next mbuf */ - rte_prefetch0(sw_ring[rx_id].mbuf); - - /** - * When next RX descriptor is on a cache line boundary, - * prefetch the next 4 RX descriptors and next 8 pointers - * to mbufs. - */ - if ((rx_id & 0x3) == 0) { - rte_prefetch0(&rx_ring[rx_id]); - rte_prefetch0(&sw_ring[rx_id]); - } - rxm = rxe->mbuf; - rxe->mbuf = nmb; - dma_addr = - rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb)); - rxdp->read.hdr_addr = dma_addr; - rxdp->read.pkt_addr = dma_addr; - - rx_packet_len = ((qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> - I40E_RXD_QW1_LENGTH_PBUF_SHIFT) - rxq->crc_len; - - rxm->data_off = RTE_PKTMBUF_HEADROOM; - rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM)); - rxm->nb_segs = 1; - rxm->next = NULL; - rxm->pkt_len = rx_packet_len; - rxm->data_len = rx_packet_len; - rxm->port = rxq->port_id; - - rxm->vlan_tci = rx_status & - (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT) ? - rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0; - pkt_flags = i40e_rxd_status_to_pkt_flags(qword1); - pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1); - pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1); - rxm->packet_type = (uint16_t)((qword1 & I40E_RXD_QW1_PTYPE_MASK) >> - I40E_RXD_QW1_PTYPE_SHIFT); - if (pkt_flags & PKT_RX_RSS_HASH) - rxm->hash.rss = - rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss); - if (pkt_flags & PKT_RX_FDIR) - pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm); - - rxm->ol_flags = pkt_flags; - - rx_pkts[nb_rx++] = rxm; - } - rxq->rx_tail = rx_id; - - /** - * If the number of free RX descriptors is greater than the RX free - * threshold of the queue, advance the receive tail register of queue. - * Update that register with the value of the last processed RX - * descriptor minus 1. 
- */ - nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold); - if (nb_hold > rxq->rx_free_thresh) { - rx_id = (uint16_t) ((rx_id == 0) ? - (rxq->nb_rx_desc - 1) : (rx_id - 1)); - I40E_PCI_REG_WRITE(rxq->qrx_tail, rx_id); - nb_hold = 0; - } - rxq->nb_rx_hold = nb_hold; - - return nb_rx; -} - -uint16_t -i40e_recv_scattered_pkts(void *rx_queue, - struct rte_mbuf **rx_pkts, - uint16_t nb_pkts) -{ - struct i40e_rx_queue *rxq = rx_queue; - volatile union i40e_rx_desc *rx_ring = rxq->rx_ring; - volatile union i40e_rx_desc *rxdp; - union i40e_rx_desc rxd; - struct i40e_rx_entry *sw_ring = rxq->sw_ring; - struct i40e_rx_entry *rxe; - struct rte_mbuf *first_seg = rxq->pkt_first_seg; - struct rte_mbuf *last_seg = rxq->pkt_last_seg; - struct rte_mbuf *nmb, *rxm; - uint16_t rx_id = rxq->rx_tail; - uint16_t nb_rx = 0, nb_hold = 0, rx_packet_len; - uint32_t rx_status; - uint64_t qword1; - uint64_t dma_addr; - uint64_t pkt_flags; - - while (nb_rx < nb_pkts) { - rxdp = &rx_ring[rx_id]; - qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len); - rx_status = (qword1 & I40E_RXD_QW1_STATUS_MASK) >> - I40E_RXD_QW1_STATUS_SHIFT; - /* Check the DD bit */ - if (!(rx_status & (1 << I40E_RX_DESC_STATUS_DD_SHIFT))) - break; - - nmb = rte_rxmbuf_alloc(rxq->mp); - if (unlikely(!nmb)) - break; - rxd = *rxdp; - nb_hold++; - rxe = &sw_ring[rx_id]; - rx_id++; - if (rx_id == rxq->nb_rx_desc) - rx_id = 0; - - /* Prefetch next mbuf */ - rte_prefetch0(sw_ring[rx_id].mbuf); - - /** - * When next RX descriptor is on a cache line boundary, - * prefetch the next 4 RX descriptors and next 8 pointers - * to mbufs. - */ - if ((rx_id & 0x3) == 0) { - rte_prefetch0(&rx_ring[rx_id]); - rte_prefetch0(&sw_ring[rx_id]); - } - - rxm = rxe->mbuf; - rxe->mbuf = nmb; - dma_addr = - rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(nmb)); - - /* Set data buffer address and data length of the mbuf */ - rxdp->read.hdr_addr = dma_addr; - rxdp->read.pkt_addr = dma_addr; - rx_packet_len = (qword1 & I40E_RXD_QW1_LENGTH_PBUF_MASK) >> - I40E_RXD_QW1_LENGTH_PBUF_SHIFT; - rxm->data_len = rx_packet_len; - rxm->data_off = RTE_PKTMBUF_HEADROOM; - - /** - * If this is the first buffer of the received packet, set the - * pointer to the first mbuf of the packet and initialize its - * context. Otherwise, update the total length and the number - * of segments of the current scattered packet, and update the - * pointer to the last mbuf of the current packet. - */ - if (!first_seg) { - first_seg = rxm; - first_seg->nb_segs = 1; - first_seg->pkt_len = rx_packet_len; - } else { - first_seg->pkt_len = - (uint16_t)(first_seg->pkt_len + - rx_packet_len); - first_seg->nb_segs++; - last_seg->next = rxm; - } - - /** - * If this is not the last buffer of the received packet, - * update the pointer to the last mbuf of the current scattered - * packet and continue to parse the RX ring. - */ - if (!(rx_status & (1 << I40E_RX_DESC_STATUS_EOF_SHIFT))) { - last_seg = rxm; - continue; - } - - /** - * This is the last buffer of the received packet. If the CRC - * is not stripped by the hardware: - * - Subtract the CRC length from the total packet length. - * - If the last buffer only contains the whole CRC or a part - * of it, free the mbuf associated to the last buffer. If part - * of the CRC is also contained in the previous mbuf, subtract - * the length of that CRC part from the data length of the - * previous mbuf. 
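The scattered-RX comment above describes trimming a CRC that may straddle the last two segments: if the final buffer holds only CRC bytes it is dropped, and the remainder of the CRC is shaved off the previous segment. A simplified two-segment sketch of that bookkeeping:

/* Illustrative sketch only -- not part of this patch. */
#include <stdint.h>
#include <stdio.h>

#define CRC_LEN 4 /* stand-in for ETHER_CRC_LEN */

struct seg { uint16_t data_len; };

static void trim_crc(struct seg *prev, struct seg *last,
                     uint16_t *pkt_len, uint16_t *nb_segs)
{
    *pkt_len -= CRC_LEN;
    if (last->data_len <= CRC_LEN) {
        /* last segment is all CRC: drop it, then shave the part of
         * the CRC that spilled into the previous segment */
        prev->data_len -= (uint16_t)(CRC_LEN - last->data_len);
        (*nb_segs)--;
    } else {
        last->data_len -= CRC_LEN;
    }
}

int main(void)
{
    struct seg a = { .data_len = 1500 }, b = { .data_len = 2 };
    uint16_t pkt_len = 1502, nb_segs = 2;

    trim_crc(&a, &b, &pkt_len, &nb_segs);
    printf("pkt_len=%u prev=%u segs=%u\n", pkt_len, a.data_len, nb_segs);
    return 0;
}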
- */ - rxm->next = NULL; - if (unlikely(rxq->crc_len > 0)) { - first_seg->pkt_len -= ETHER_CRC_LEN; - if (rx_packet_len <= ETHER_CRC_LEN) { - rte_pktmbuf_free_seg(rxm); - first_seg->nb_segs--; - last_seg->data_len = - (uint16_t)(last_seg->data_len - - (ETHER_CRC_LEN - rx_packet_len)); - last_seg->next = NULL; - } else - rxm->data_len = (uint16_t)(rx_packet_len - - ETHER_CRC_LEN); - } - - first_seg->port = rxq->port_id; - first_seg->vlan_tci = (rx_status & - (1 << I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ? - rte_le_to_cpu_16(rxd.wb.qword0.lo_dword.l2tag1) : 0; - pkt_flags = i40e_rxd_status_to_pkt_flags(qword1); - pkt_flags |= i40e_rxd_error_to_pkt_flags(qword1); - pkt_flags |= i40e_rxd_ptype_to_pkt_flags(qword1); - first_seg->packet_type = (uint16_t)((qword1 & - I40E_RXD_QW1_PTYPE_MASK) >> - I40E_RXD_QW1_PTYPE_SHIFT); - if (pkt_flags & PKT_RX_RSS_HASH) - rxm->hash.rss = - rte_le_to_cpu_32(rxd.wb.qword0.hi_dword.rss); - if (pkt_flags & PKT_RX_FDIR) - pkt_flags |= i40e_rxd_build_fdir(&rxd, rxm); - - first_seg->ol_flags = pkt_flags; - - /* Prefetch data of first segment, if configured to do so. */ - rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr, - first_seg->data_off)); - rx_pkts[nb_rx++] = first_seg; - first_seg = NULL; - } - - /* Record index of the next RX descriptor to probe. */ - rxq->rx_tail = rx_id; - rxq->pkt_first_seg = first_seg; - rxq->pkt_last_seg = last_seg; - - /** - * If the number of free RX descriptors is greater than the RX free - * threshold of the queue, advance the Receive Descriptor Tail (RDT) - * register. Update the RDT with the value of the last processed RX - * descriptor minus 1, to guarantee that the RDT register is never - * equal to the RDH register, which creates a "full" ring situtation - * from the hardware point of view. - */ - nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold); - if (nb_hold > rxq->rx_free_thresh) { - rx_id = (uint16_t)(rx_id == 0 ? - (rxq->nb_rx_desc - 1) : (rx_id - 1)); - I40E_PCI_REG_WRITE(rxq->qrx_tail, rx_id); - nb_hold = 0; - } - rxq->nb_rx_hold = nb_hold; - - return nb_rx; -} - -/* Check if the context descriptor is needed for TX offloading */ -static inline uint16_t -i40e_calc_context_desc(uint64_t flags) -{ - uint64_t mask = 0ULL; - - mask |= (PKT_TX_OUTER_IP_CKSUM | PKT_TX_TCP_SEG); - -#ifdef RTE_LIBRTE_IEEE1588 - mask |= PKT_TX_IEEE1588_TMST; -#endif - if (flags & mask) - return 1; - - return 0; -} - -/* set i40e TSO context descriptor */ -static inline uint64_t -i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload) -{ - uint64_t ctx_desc = 0; - uint32_t cd_cmd, hdr_len, cd_tso_len; - - if (!tx_offload.l4_len) { - PMD_DRV_LOG(DEBUG, "L4 length set to 0"); - return ctx_desc; - } - - /** - * in case of tunneling packet, the outer_l2_len and - * outer_l3_len must be 0. 
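The TSO context setup here derives the TSO length as the packet length minus all headers (payload only) and packs command, length and MSS into one 64-bit context qword. A sketch of that math, with hypothetical shift values standing in for the I40E_TXD_CTX_QW1_* constants:

/* Illustrative sketch only -- not part of this patch. */
#include <stdint.h>
#include <stdio.h>

#define CTX_CMD_TSO   0x1ULL
#define CMD_SHIFT     4   /* hypothetical field positions */
#define TSO_LEN_SHIFT 30
#define MSS_SHIFT     50

static uint64_t set_tso_ctx(uint32_t pkt_len, uint32_t hdr_len, uint16_t mss)
{
    uint32_t tso_len = pkt_len - hdr_len; /* L4 payload only */

    return (CTX_CMD_TSO << CMD_SHIFT) |
           ((uint64_t)tso_len << TSO_LEN_SHIFT) |
           ((uint64_t)mss << MSS_SHIFT);
}

int main(void)
{
    /* 9000-byte frame, 14 + 20 + 20 = 54 bytes of headers, MSS 1460 */
    printf("ctx qword=%#llx\n",
           (unsigned long long)set_tso_ctx(9000, 54, 1460));
    return 0;
}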
- */ - hdr_len = tx_offload.outer_l2_len + - tx_offload.outer_l3_len + - tx_offload.l2_len + - tx_offload.l3_len + - tx_offload.l4_len; - - cd_cmd = I40E_TX_CTX_DESC_TSO; - cd_tso_len = mbuf->pkt_len - hdr_len; - ctx_desc |= ((uint64_t)cd_cmd << I40E_TXD_CTX_QW1_CMD_SHIFT) | - ((uint64_t)cd_tso_len << - I40E_TXD_CTX_QW1_TSO_LEN_SHIFT) | - ((uint64_t)mbuf->tso_segsz << - I40E_TXD_CTX_QW1_MSS_SHIFT); - - return ctx_desc; -} - -uint16_t -i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - struct i40e_tx_queue *txq; - struct i40e_tx_entry *sw_ring; - struct i40e_tx_entry *txe, *txn; - volatile struct i40e_tx_desc *txd; - volatile struct i40e_tx_desc *txr; - struct rte_mbuf *tx_pkt; - struct rte_mbuf *m_seg; - uint32_t cd_tunneling_params; - uint16_t tx_id; - uint16_t nb_tx; - uint32_t td_cmd; - uint32_t td_offset; - uint32_t tx_flags; - uint32_t td_tag; - uint64_t ol_flags; - uint16_t nb_used; - uint16_t nb_ctx; - uint16_t tx_last; - uint16_t slen; - uint64_t buf_dma_addr; - union i40e_tx_offload tx_offload = {0}; - - txq = tx_queue; - sw_ring = txq->sw_ring; - txr = txq->tx_ring; - tx_id = txq->tx_tail; - txe = &sw_ring[tx_id]; - - /* Check if the descriptor ring needs to be cleaned. */ - if ((txq->nb_tx_desc - txq->nb_tx_free) > txq->tx_free_thresh) - i40e_xmit_cleanup(txq); - - for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) { - td_cmd = 0; - td_tag = 0; - td_offset = 0; - tx_flags = 0; - - tx_pkt = *tx_pkts++; - RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf); - - ol_flags = tx_pkt->ol_flags; - tx_offload.l2_len = tx_pkt->l2_len; - tx_offload.l3_len = tx_pkt->l3_len; - tx_offload.outer_l2_len = tx_pkt->outer_l2_len; - tx_offload.outer_l3_len = tx_pkt->outer_l3_len; - tx_offload.l4_len = tx_pkt->l4_len; - tx_offload.tso_segsz = tx_pkt->tso_segsz; - - /* Calculate the number of context descriptors needed. */ - nb_ctx = i40e_calc_context_desc(ol_flags); - - /** - * The number of descriptors that must be allocated for - * a packet equals to the number of the segments of that - * packet plus 1 context descriptor if needed. 
- */ - nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx); - tx_last = (uint16_t)(tx_id + nb_used - 1); - - /* Circular ring */ - if (tx_last >= txq->nb_tx_desc) - tx_last = (uint16_t)(tx_last - txq->nb_tx_desc); - - if (nb_used > txq->nb_tx_free) { - if (i40e_xmit_cleanup(txq) != 0) { - if (nb_tx == 0) - return 0; - goto end_of_tx; - } - if (unlikely(nb_used > txq->tx_rs_thresh)) { - while (nb_used > txq->nb_tx_free) { - if (i40e_xmit_cleanup(txq) != 0) { - if (nb_tx == 0) - return 0; - goto end_of_tx; - } - } - } - } - - /* Descriptor based VLAN insertion */ - if (ol_flags & PKT_TX_VLAN_PKT) { - tx_flags |= tx_pkt->vlan_tci << - I40E_TX_FLAG_L2TAG1_SHIFT; - tx_flags |= I40E_TX_FLAG_INSERT_VLAN; - td_cmd |= I40E_TX_DESC_CMD_IL2TAG1; - td_tag = (tx_flags & I40E_TX_FLAG_L2TAG1_MASK) >> - I40E_TX_FLAG_L2TAG1_SHIFT; - } - - /* Always enable CRC offload insertion */ - td_cmd |= I40E_TX_DESC_CMD_ICRC; - - /* Enable checksum offloading */ - cd_tunneling_params = 0; - if (unlikely(ol_flags & I40E_TX_CKSUM_OFFLOAD_MASK)) { - i40e_txd_enable_checksum(ol_flags, &td_cmd, &td_offset, - tx_offload, &cd_tunneling_params); - } - - if (unlikely(nb_ctx)) { - /* Setup TX context descriptor if required */ - volatile struct i40e_tx_context_desc *ctx_txd = - (volatile struct i40e_tx_context_desc *)\ - &txr[tx_id]; - uint16_t cd_l2tag2 = 0; - uint64_t cd_type_cmd_tso_mss = - I40E_TX_DESC_DTYPE_CONTEXT; - - txn = &sw_ring[txe->next_id]; - RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf); - if (txe->mbuf != NULL) { - rte_pktmbuf_free_seg(txe->mbuf); - txe->mbuf = NULL; - } - - /* TSO enabled means no timestamp */ - if (ol_flags & PKT_TX_TCP_SEG) - cd_type_cmd_tso_mss |= - i40e_set_tso_ctx(tx_pkt, tx_offload); - else { -#ifdef RTE_LIBRTE_IEEE1588 - if (ol_flags & PKT_TX_IEEE1588_TMST) - cd_type_cmd_tso_mss |= - ((uint64_t)I40E_TX_CTX_DESC_TSYN << - I40E_TXD_CTX_QW1_CMD_SHIFT); -#endif - } - - ctx_txd->tunneling_params = - rte_cpu_to_le_32(cd_tunneling_params); - ctx_txd->l2tag2 = rte_cpu_to_le_16(cd_l2tag2); - ctx_txd->type_cmd_tso_mss = - rte_cpu_to_le_64(cd_type_cmd_tso_mss); - - PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]:\n" - "tunneling_params: %#x;\n" - "l2tag2: %#hx;\n" - "rsvd: %#hx;\n" - "type_cmd_tso_mss: %#lx;\n", - tx_pkt, tx_id, - ctx_txd->tunneling_params, - ctx_txd->l2tag2, - ctx_txd->rsvd, - ctx_txd->type_cmd_tso_mss); - - txe->last_id = tx_last; - tx_id = txe->next_id; - txe = txn; - } - - m_seg = tx_pkt; - do { - txd = &txr[tx_id]; - txn = &sw_ring[txe->next_id]; - - if (txe->mbuf) - rte_pktmbuf_free_seg(txe->mbuf); - txe->mbuf = m_seg; - - /* Setup TX Descriptor */ - slen = m_seg->data_len; - buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg); - - PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n" - "buf_dma_addr: %#"PRIx64";\n" - "td_cmd: %#x;\n" - "td_offset: %#x;\n" - "td_len: %u;\n" - "td_tag: %#x;\n", - tx_pkt, tx_id, buf_dma_addr, - td_cmd, td_offset, slen, td_tag); - - txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr); - txd->cmd_type_offset_bsz = i40e_build_ctob(td_cmd, - td_offset, slen, td_tag); - txe->last_id = tx_last; - tx_id = txe->next_id; - txe = txn; - m_seg = m_seg->next; - } while (m_seg != NULL); - - /* The last packet data descriptor needs End Of Packet (EOP) */ - td_cmd |= I40E_TX_DESC_CMD_EOP; - txq->nb_tx_used = (uint16_t)(txq->nb_tx_used + nb_used); - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_used); - - if (txq->nb_tx_used >= txq->tx_rs_thresh) { - PMD_TX_FREE_LOG(DEBUG, - "Setting RS bit on TXD id=" - "%4u (port=%d queue=%d)", - tx_last, txq->port_id, txq->queue_id); - - td_cmd |= 
I40E_TX_DESC_CMD_RS; - - /* Update txq RS bit counters */ - txq->nb_tx_used = 0; - } - - txd->cmd_type_offset_bsz |= - rte_cpu_to_le_64(((uint64_t)td_cmd) << - I40E_TXD_QW1_CMD_SHIFT); - } - -end_of_tx: - rte_wmb(); - - PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u", - (unsigned) txq->port_id, (unsigned) txq->queue_id, - (unsigned) tx_id, (unsigned) nb_tx); - - I40E_PCI_REG_WRITE(txq->qtx_tail, tx_id); - txq->tx_tail = tx_id; - - return nb_tx; -} - -static inline int __attribute__((always_inline)) -i40e_tx_free_bufs(struct i40e_tx_queue *txq) -{ - struct i40e_tx_entry *txep; - uint16_t i; - - if (!(txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & - rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))) - return 0; - - txep = &(txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)]); - - for (i = 0; i < txq->tx_rs_thresh; i++) - rte_prefetch0((txep + i)->mbuf); - - if (!(txq->txq_flags & (uint32_t)ETH_TXQ_FLAGS_NOREFCOUNT)) { - for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) { - rte_mempool_put(txep->mbuf->pool, txep->mbuf); - txep->mbuf = NULL; - } - } else { - for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) { - rte_pktmbuf_free_seg(txep->mbuf); - txep->mbuf = NULL; - } - } - - txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); - txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); - if (txq->tx_next_dd >= txq->nb_tx_desc) - txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); - - return txq->tx_rs_thresh; -} - -#define I40E_TD_CMD (I40E_TX_DESC_CMD_ICRC |\ - I40E_TX_DESC_CMD_EOP) - -/* Populate 4 descriptors with data from 4 mbufs */ -static inline void -tx4(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts) -{ - uint64_t dma_addr; - uint32_t i; - - for (i = 0; i < 4; i++, txdp++, pkts++) { - dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts); - txdp->buffer_addr = rte_cpu_to_le_64(dma_addr); - txdp->cmd_type_offset_bsz = - i40e_build_ctob((uint32_t)I40E_TD_CMD, 0, - (*pkts)->data_len, 0); - } -} - -/* Populate 1 descriptor with data from 1 mbuf */ -static inline void -tx1(volatile struct i40e_tx_desc *txdp, struct rte_mbuf **pkts) -{ - uint64_t dma_addr; - - dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts); - txdp->buffer_addr = rte_cpu_to_le_64(dma_addr); - txdp->cmd_type_offset_bsz = - i40e_build_ctob((uint32_t)I40E_TD_CMD, 0, - (*pkts)->data_len, 0); -} - -/* Fill hardware descriptor ring with mbuf data */ -static inline void -i40e_tx_fill_hw_ring(struct i40e_tx_queue *txq, - struct rte_mbuf **pkts, - uint16_t nb_pkts) -{ - volatile struct i40e_tx_desc *txdp = &(txq->tx_ring[txq->tx_tail]); - struct i40e_tx_entry *txep = &(txq->sw_ring[txq->tx_tail]); - const int N_PER_LOOP = 4; - const int N_PER_LOOP_MASK = N_PER_LOOP - 1; - int mainpart, leftover; - int i, j; - - mainpart = (nb_pkts & ((uint32_t) ~N_PER_LOOP_MASK)); - leftover = (nb_pkts & ((uint32_t) N_PER_LOOP_MASK)); - for (i = 0; i < mainpart; i += N_PER_LOOP) { - for (j = 0; j < N_PER_LOOP; ++j) { - (txep + i + j)->mbuf = *(pkts + i + j); - } - tx4(txdp + i, pkts + i); - } - if (unlikely(leftover > 0)) { - for (i = 0; i < leftover; ++i) { - (txep + mainpart + i)->mbuf = *(pkts + mainpart + i); - tx1(txdp + mainpart + i, pkts + mainpart + i); - } - } -} - -static inline uint16_t -tx_xmit_pkts(struct i40e_tx_queue *txq, - struct rte_mbuf **tx_pkts, - uint16_t nb_pkts) -{ - volatile struct i40e_tx_desc *txr = txq->tx_ring; - uint16_t n = 0; - - /** - * Begin scanning the H/W ring for done descriptors when the number - * of available descriptors drops below tx_free_thresh. 
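The transmit path above requests a descriptor write-back (the RS bit) once every tx_rs_thresh descriptors, and i40e_tx_free_bufs() reclaims exactly one tx_rs_thresh-sized block whenever the corresponding DD bit has been set by hardware. A standalone sketch of that cadence:

/* Illustrative sketch only -- not part of this patch. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct txq {
    uint16_t nb_desc;
    uint16_t rs_thresh;
    uint16_t nb_used; /* descriptors consumed since the last RS */
    uint16_t next_dd; /* descriptor whose DD bit is polled next */
    uint16_t nb_free;
};

static bool dd_set(uint16_t idx) { (void)idx; return true; } /* pretend HW is done */

static void after_packet(struct txq *q, uint16_t nb_used, uint16_t last,
                         uint32_t *td_cmd)
{
    q->nb_used = (uint16_t)(q->nb_used + nb_used);
    if (q->nb_used >= q->rs_thresh) {
        *td_cmd |= 0x2; /* stand-in for I40E_TX_DESC_CMD_RS on 'last' */
        printf("RS set on descriptor %u\n", last);
        q->nb_used = 0;
    }
}

static void free_block(struct txq *q)
{
    if (!dd_set(q->next_dd))
        return;
    q->nb_free = (uint16_t)(q->nb_free + q->rs_thresh);
    q->next_dd = (uint16_t)(q->next_dd + q->rs_thresh);
    if (q->next_dd >= q->nb_desc)
        q->next_dd = (uint16_t)(q->rs_thresh - 1);
}

int main(void)
{
    struct txq q = { .nb_desc = 512, .rs_thresh = 32, .next_dd = 31 };
    uint32_t td_cmd = 0;
    uint16_t i;

    for (i = 0; i < 40; i++)
        after_packet(&q, 1, i, &td_cmd); /* RS fires once, at descriptor 31 */
    free_block(&q);
    printf("nb_free=%u next_dd=%u\n", q.nb_free, q.next_dd);
    return 0;
}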
For each done
- * descriptor, free the associated buffer.
- */
-	if (txq->nb_tx_free < txq->tx_free_thresh)
-		i40e_tx_free_bufs(txq);
-
-	/* Use available descriptors only */
-	nb_pkts = (uint16_t)RTE_MIN(txq->nb_tx_free, nb_pkts);
-	if (unlikely(!nb_pkts))
-		return 0;
-
-	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free - nb_pkts);
-	if ((txq->tx_tail + nb_pkts) > txq->nb_tx_desc) {
-		n = (uint16_t)(txq->nb_tx_desc - txq->tx_tail);
-		i40e_tx_fill_hw_ring(txq, tx_pkts, n);
-		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
-			rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
-				I40E_TXD_QW1_CMD_SHIFT);
-		txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
-		txq->tx_tail = 0;
-	}
-
-	/* Fill hardware descriptor ring with mbuf data */
-	i40e_tx_fill_hw_ring(txq, tx_pkts + n, (uint16_t)(nb_pkts - n));
-	txq->tx_tail = (uint16_t)(txq->tx_tail + (nb_pkts - n));
-
-	/* Determine if the RS bit needs to be set */
-	if (txq->tx_tail > txq->tx_next_rs) {
-		txr[txq->tx_next_rs].cmd_type_offset_bsz |=
-			rte_cpu_to_le_64(((uint64_t)I40E_TX_DESC_CMD_RS) <<
-				I40E_TXD_QW1_CMD_SHIFT);
-		txq->tx_next_rs =
-			(uint16_t)(txq->tx_next_rs + txq->tx_rs_thresh);
-		if (txq->tx_next_rs >= txq->nb_tx_desc)
-			txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
-	}
-
-	if (txq->tx_tail >= txq->nb_tx_desc)
-		txq->tx_tail = 0;
-
-	/* Update the tx tail register */
-	rte_wmb();
-	I40E_PCI_REG_WRITE(txq->qtx_tail, txq->tx_tail);
-
-	return nb_pkts;
-}
-
-static uint16_t
-i40e_xmit_pkts_simple(void *tx_queue,
-		      struct rte_mbuf **tx_pkts,
-		      uint16_t nb_pkts)
-{
-	uint16_t nb_tx = 0;
-
-	if (likely(nb_pkts <= I40E_TX_MAX_BURST))
-		return tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
-					tx_pkts, nb_pkts);
-
-	while (nb_pkts) {
-		uint16_t ret, num = (uint16_t)RTE_MIN(nb_pkts,
-					I40E_TX_MAX_BURST);
-
-		ret = tx_xmit_pkts((struct i40e_tx_queue *)tx_queue,
-					&tx_pkts[nb_tx], num);
-		nb_tx = (uint16_t)(nb_tx + ret);
-		nb_pkts = (uint16_t)(nb_pkts - ret);
-		if (ret < num)
-			break;
-	}
-
-	return nb_tx;
-}
-
-/*
- * Find the VSI the queue belongs to. 'queue_idx' is the queue index the
- * application uses, and the application assumes its queue indices are
- * sequential. From the driver's perspective they are not. For example, q0
- * belongs to the FDIR VSI, q1-q64 to the MAIN VSI, q65-96 to SRIOV VSIs and
- * q97-128 to VMDQ VSIs. An application running on the host can use q1-64
- * and q97-128, 96 queues in total, and addresses them with queue_idx 0 to
- * 95, while the real queue index is different. This function maps a
- * queue_idx to the VSI the queue belongs to.
- */
-static struct i40e_vsi*
-i40e_pf_get_vsi_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
-{
-	/* the queue in MAIN VSI range */
-	if (queue_idx < pf->main_vsi->nb_qps)
-		return pf->main_vsi;
-
-	queue_idx -= pf->main_vsi->nb_qps;
-
-	/* queue_idx is greater than VMDQ VSIs range */
-	if (queue_idx > pf->nb_cfg_vmdq_vsi * pf->vmdq_nb_qps - 1) {
-		PMD_INIT_LOG(ERR, "queue_idx out of range. VMDQ configured?");
-		return NULL;
-	}
-
-	return pf->vmdq[queue_idx / pf->vmdq_nb_qps].vsi;
-}
-
-static uint16_t
-i40e_get_queue_offset_by_qindex(struct i40e_pf *pf, uint16_t queue_idx)
-{
-	/* the queue in MAIN VSI range */
-	if (queue_idx < pf->main_vsi->nb_qps)
-		return queue_idx;
-
-	/* It's VMDQ queues */
-	queue_idx -= pf->main_vsi->nb_qps;
-
-	if (pf->nb_cfg_vmdq_vsi)
-		return queue_idx % pf->vmdq_nb_qps;
-	else {
-		PMD_INIT_LOG(ERR, "Failed to get queue offset");
-		return (uint16_t)(-1);
-	}
-}
-
-int
-i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
-{
-	struct i40e_rx_queue *rxq;
-	int err = -1;
-	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
-	PMD_INIT_FUNC_TRACE();
-
-	if (rx_queue_id < dev->data->nb_rx_queues) {
-		rxq = dev->data->rx_queues[rx_queue_id];
-
-		err = i40e_alloc_rx_queue_mbufs(rxq);
-		if (err) {
-			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
-			return err;
-		}
-
-		rte_wmb();
-
-		/* Init the RX tail register. */
-		I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
-
-		err = i40e_switch_rx_queue(hw, rxq->reg_idx, TRUE);
-
-		if (err) {
-			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
-				    rx_queue_id);
-
-			i40e_rx_queue_release_mbufs(rxq);
-			i40e_reset_rx_queue(rxq);
-		}
-	}
-
-	return err;
-}
-
-int
-i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
-{
-	struct i40e_rx_queue *rxq;
-	int err;
-	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
-	if (rx_queue_id < dev->data->nb_rx_queues) {
-		rxq = dev->data->rx_queues[rx_queue_id];
-
-		/*
-		 * rx_queue_id is the queue id the application refers to,
-		 * while rxq->reg_idx is the real queue index.
-		 */
-		err = i40e_switch_rx_queue(hw, rxq->reg_idx, FALSE);
-
-		if (err) {
-			PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
-				    rx_queue_id);
-			return err;
-		}
-		i40e_rx_queue_release_mbufs(rxq);
-		i40e_reset_rx_queue(rxq);
-	}
-
-	return 0;
-}
-
-int
-i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
-{
-	int err = -1;
-	struct i40e_tx_queue *txq;
-	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
-	PMD_INIT_FUNC_TRACE();
-
-	if (tx_queue_id < dev->data->nb_tx_queues) {
-		txq = dev->data->tx_queues[tx_queue_id];
-
-		/*
-		 * tx_queue_id is the queue id the application refers to,
-		 * while txq->reg_idx is the real queue index.
-		 */
-		err = i40e_switch_tx_queue(hw, txq->reg_idx, TRUE);
-		if (err)
-			PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
-				    tx_queue_id);
-	}
-
-	return err;
-}
-
-int
-i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
-{
-	struct i40e_tx_queue *txq;
-	int err;
-	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
-	if (tx_queue_id < dev->data->nb_tx_queues) {
-		txq = dev->data->tx_queues[tx_queue_id];
-
-		/*
-		 * tx_queue_id is the queue id the application refers to,
-		 * while txq->reg_idx is the real queue index.
-		 */
-		err = i40e_switch_tx_queue(hw, txq->reg_idx, FALSE);
-
-		if (err) {
-			PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
-				    tx_queue_id);
-			return err;
-		}
-
-		i40e_tx_queue_release_mbufs(txq);
-		i40e_reset_tx_queue(txq);
-	}
-
-	return 0;
-}
-
-int
-i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
-			uint16_t queue_idx,
-			uint16_t nb_desc,
-			unsigned int socket_id,
-			const struct rte_eth_rxconf *rx_conf,
-			struct rte_mempool *mp)
-{
-	struct i40e_vsi *vsi;
-	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct i40e_rx_queue *rxq;
-	const struct rte_memzone *rz;
-	uint32_t ring_size;
-	uint16_t len;
-	int use_def_burst_func = 1;
-
-	if (hw->mac.type == I40E_MAC_VF) {
-		struct i40e_vf *vf =
-			I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
-		vsi = &vf->vsi;
-	} else
-		vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
-
-	if (vsi == NULL) {
-		PMD_DRV_LOG(ERR, "VSI not available or queue "
-			    "index exceeds the maximum");
-		return I40E_ERR_PARAM;
-	}
-	if (((nb_desc * sizeof(union i40e_rx_desc)) % I40E_ALIGN) != 0 ||
-	    (nb_desc > I40E_MAX_RING_DESC) ||
-	    (nb_desc < I40E_MIN_RING_DESC)) {
-		PMD_DRV_LOG(ERR, "Number (%u) of receive descriptors is "
-			    "invalid", nb_desc);
-		return I40E_ERR_PARAM;
-	}
-
-	/* Free memory if needed */
-	if (dev->data->rx_queues[queue_idx]) {
-		i40e_dev_rx_queue_release(dev->data->rx_queues[queue_idx]);
-		dev->data->rx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocate the rx queue data structure */
-	rxq = rte_zmalloc_socket("i40e rx queue",
-				 sizeof(struct i40e_rx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (!rxq) {
-		PMD_DRV_LOG(ERR, "Failed to allocate memory for "
-			    "rx queue data structure");
-		return (-ENOMEM);
-	}
-	rxq->mp = mp;
-	rxq->nb_rx_desc = nb_desc;
-	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
-	rxq->queue_id = queue_idx;
-	if (hw->mac.type == I40E_MAC_VF)
-		rxq->reg_idx = queue_idx;
-	else /* PF device */
-		rxq->reg_idx = vsi->base_queue +
-			i40e_get_queue_offset_by_qindex(pf, queue_idx);
-
-	rxq->port_id = dev->data->port_id;
-	rxq->crc_len = (uint8_t) ((dev->data->dev_conf.rxmode.hw_strip_crc) ?
-							0 : ETHER_CRC_LEN);
-	rxq->drop_en = rx_conf->rx_drop_en;
-	rxq->vsi = vsi;
-	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
-
-	/* Allocate the maximum number of RX ring hardware descriptors. */
-	ring_size = sizeof(union i40e_rx_desc) * I40E_MAX_RING_DESC;
-	ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
-	rz = i40e_ring_dma_zone_reserve(dev,
-					"rx_ring",
-					queue_idx,
-					ring_size,
-					socket_id);
-	if (!rz) {
-		i40e_dev_rx_queue_release(rxq);
-		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX");
-		return (-ENOMEM);
-	}
-
-	/* Zero all the descriptors in the ring. */
-	memset(rz->addr, 0, ring_size);
-
-#ifdef RTE_LIBRTE_XEN_DOM0
-	rxq->rx_ring_phys_addr = rte_mem_phy2mch(rz->memseg_id, rz->phys_addr);
-#else
-	rxq->rx_ring_phys_addr = (uint64_t)rz->phys_addr;
-#endif
-
-	rxq->rx_ring = (union i40e_rx_desc *)rz->addr;
-
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-	len = (uint16_t)(nb_desc + RTE_PMD_I40E_RX_MAX_BURST);
-#else
-	len = nb_desc;
-#endif
-
-	/* Allocate the software ring.
*/ - rxq->sw_ring = - rte_zmalloc_socket("i40e rx sw ring", - sizeof(struct i40e_rx_entry) * len, - RTE_CACHE_LINE_SIZE, - socket_id); - if (!rxq->sw_ring) { - i40e_dev_rx_queue_release(rxq); - PMD_DRV_LOG(ERR, "Failed to allocate memory for SW ring"); - return (-ENOMEM); - } - - i40e_reset_rx_queue(rxq); - rxq->q_set = TRUE; - dev->data->rx_queues[queue_idx] = rxq; - - use_def_burst_func = check_rx_burst_bulk_alloc_preconditions(rxq); - - if (!use_def_burst_func && !dev->data->scattered_rx) { -#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC - PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " - "satisfied. Rx Burst Bulk Alloc function will be " - "used on port=%d, queue=%d.", - rxq->port_id, rxq->queue_id); - dev->rx_pkt_burst = i40e_recv_pkts_bulk_alloc; -#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */ - } else { - PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " - "not satisfied, Scattered Rx is requested, " - "or RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC is " - "not enabled on port=%d, queue=%d.", - rxq->port_id, rxq->queue_id); - } - - return 0; -} - -void -i40e_dev_rx_queue_release(void *rxq) -{ - struct i40e_rx_queue *q = (struct i40e_rx_queue *)rxq; - - if (!q) { - PMD_DRV_LOG(DEBUG, "Pointer to rxq is NULL"); - return; - } - - i40e_rx_queue_release_mbufs(q); - rte_free(q->sw_ring); - rte_free(q); -} - -uint32_t -i40e_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id) -{ -#define I40E_RXQ_SCAN_INTERVAL 4 - volatile union i40e_rx_desc *rxdp; - struct i40e_rx_queue *rxq; - uint16_t desc = 0; - - if (unlikely(rx_queue_id >= dev->data->nb_rx_queues)) { - PMD_DRV_LOG(ERR, "Invalid RX queue id %u", rx_queue_id); - return 0; - } - - rxq = dev->data->rx_queues[rx_queue_id]; - rxdp = &(rxq->rx_ring[rxq->rx_tail]); - while ((desc < rxq->nb_rx_desc) && - ((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) & - I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) & - (1 << I40E_RX_DESC_STATUS_DD_SHIFT)) { - /** - * Check the DD bit of a rx descriptor of each 4 in a group, - * to avoid checking too frequently and downgrading performance - * too much. 
- */
-		desc += I40E_RXQ_SCAN_INTERVAL;
-		rxdp += I40E_RXQ_SCAN_INTERVAL;
-		if (rxq->rx_tail + desc >= rxq->nb_rx_desc)
-			rxdp = &(rxq->rx_ring[rxq->rx_tail +
-					desc - rxq->nb_rx_desc]);
-	}
-
-	return desc;
-}
-
-int
-i40e_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
-{
-	volatile union i40e_rx_desc *rxdp;
-	struct i40e_rx_queue *rxq = rx_queue;
-	uint16_t desc;
-	int ret;
-
-	if (unlikely(offset >= rxq->nb_rx_desc)) {
-		PMD_DRV_LOG(ERR, "Invalid RX descriptor offset %u", offset);
-		return 0;
-	}
-
-	desc = rxq->rx_tail + offset;
-	if (desc >= rxq->nb_rx_desc)
-		desc -= rxq->nb_rx_desc;
-
-	rxdp = &(rxq->rx_ring[desc]);
-
-	ret = !!(((rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len) &
-		I40E_RXD_QW1_STATUS_MASK) >> I40E_RXD_QW1_STATUS_SHIFT) &
-				(1 << I40E_RX_DESC_STATUS_DD_SHIFT));
-
-	return ret;
-}
-
-int
-i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
-			uint16_t queue_idx,
-			uint16_t nb_desc,
-			unsigned int socket_id,
-			const struct rte_eth_txconf *tx_conf)
-{
-	struct i40e_vsi *vsi;
-	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	struct i40e_tx_queue *txq;
-	const struct rte_memzone *tz;
-	uint32_t ring_size;
-	uint16_t tx_rs_thresh, tx_free_thresh;
-
-	if (hw->mac.type == I40E_MAC_VF) {
-		struct i40e_vf *vf =
-			I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
-		vsi = &vf->vsi;
-	} else
-		vsi = i40e_pf_get_vsi_by_qindex(pf, queue_idx);
-
-	if (vsi == NULL) {
-		PMD_DRV_LOG(ERR, "VSI is NULL, or queue index (%u) "
-			    "exceeds the maximum", queue_idx);
-		return I40E_ERR_PARAM;
-	}
-
-	if (((nb_desc * sizeof(struct i40e_tx_desc)) % I40E_ALIGN) != 0 ||
-	    (nb_desc > I40E_MAX_RING_DESC) ||
-	    (nb_desc < I40E_MIN_RING_DESC)) {
-		PMD_DRV_LOG(ERR, "Number (%u) of transmit descriptors is "
-			    "invalid", nb_desc);
-		return I40E_ERR_PARAM;
-	}
-
-	/**
-	 * The following two parameters control the setting of the RS bit on
-	 * transmit descriptors. TX descriptors will have their RS bit set
-	 * after txq->tx_rs_thresh descriptors have been used. The TX
-	 * descriptor ring will be cleaned after txq->tx_free_thresh
-	 * descriptors are used or if the number of descriptors required to
-	 * transmit a packet is greater than the number of free TX descriptors.
-	 *
-	 * The following constraints must be satisfied:
-	 *  - tx_rs_thresh must be greater than 0.
-	 *  - tx_rs_thresh must be less than the size of the ring minus 2.
-	 *  - tx_rs_thresh must be less than or equal to tx_free_thresh.
-	 *  - tx_rs_thresh must be a divisor of the ring size.
-	 *  - tx_free_thresh must be greater than 0.
-	 *  - tx_free_thresh must be less than the size of the ring minus 3.
-	 *
-	 * One descriptor in the TX ring is used as a sentinel to avoid a H/W
-	 * race condition, hence the maximum threshold constraints. When set
-	 * to zero use default values.
-	 */
-	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
-		tx_conf->tx_rs_thresh : DEFAULT_TX_RS_THRESH);
-	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
-		tx_conf->tx_free_thresh : DEFAULT_TX_FREE_THRESH);
-	if (tx_rs_thresh >= (nb_desc - 2)) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than the "
-			     "number of TX descriptors minus 2. "
-			     "(tx_rs_thresh=%u port=%d queue=%d)",
-			     (unsigned int)tx_rs_thresh,
-			     (int)dev->data->port_id,
-			     (int)queue_idx);
-		return I40E_ERR_PARAM;
-	}
-	if (tx_free_thresh >= (nb_desc - 3)) {
-		PMD_INIT_LOG(ERR, "tx_free_thresh must be less than the "
-			     "number of TX descriptors minus 3. "
-			     "(tx_free_thresh=%u port=%d queue=%d)",
-			     (unsigned int)tx_free_thresh,
-			     (int)dev->data->port_id,
-			     (int)queue_idx);
-		return I40E_ERR_PARAM;
-	}
-	if (tx_rs_thresh > tx_free_thresh) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh must be less than or "
-			     "equal to tx_free_thresh. (tx_free_thresh=%u"
-			     " tx_rs_thresh=%u port=%d queue=%d)",
-			     (unsigned int)tx_free_thresh,
-			     (unsigned int)tx_rs_thresh,
-			     (int)dev->data->port_id,
-			     (int)queue_idx);
-		return I40E_ERR_PARAM;
-	}
-	if ((nb_desc % tx_rs_thresh) != 0) {
-		PMD_INIT_LOG(ERR, "tx_rs_thresh must be a divisor of the "
-			     "number of TX descriptors. (tx_rs_thresh=%u"
-			     " port=%d queue=%d)",
-			     (unsigned int)tx_rs_thresh,
-			     (int)dev->data->port_id,
-			     (int)queue_idx);
-		return I40E_ERR_PARAM;
-	}
-	if ((tx_rs_thresh > 1) && (tx_conf->tx_thresh.wthresh != 0)) {
-		PMD_INIT_LOG(ERR, "TX WTHRESH must be set to 0 if "
-			     "tx_rs_thresh is greater than 1. "
-			     "(tx_rs_thresh=%u port=%d queue=%d)",
-			     (unsigned int)tx_rs_thresh,
-			     (int)dev->data->port_id,
-			     (int)queue_idx);
-		return I40E_ERR_PARAM;
-	}
-
-	/* Free memory if needed. */
-	if (dev->data->tx_queues[queue_idx]) {
-		i40e_dev_tx_queue_release(dev->data->tx_queues[queue_idx]);
-		dev->data->tx_queues[queue_idx] = NULL;
-	}
-
-	/* Allocate the TX queue data structure. */
-	txq = rte_zmalloc_socket("i40e tx queue",
-				 sizeof(struct i40e_tx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 socket_id);
-	if (!txq) {
-		PMD_DRV_LOG(ERR, "Failed to allocate memory for "
-			    "tx queue structure");
-		return (-ENOMEM);
-	}
-
-	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct i40e_tx_desc) * I40E_MAX_RING_DESC;
-	ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
-	tz = i40e_ring_dma_zone_reserve(dev,
-					"tx_ring",
-					queue_idx,
-					ring_size,
-					socket_id);
-	if (!tz) {
-		i40e_dev_tx_queue_release(txq);
-		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX");
-		return (-ENOMEM);
-	}
-
-	txq->nb_tx_desc = nb_desc;
-	txq->tx_rs_thresh = tx_rs_thresh;
-	txq->tx_free_thresh = tx_free_thresh;
-	txq->pthresh = tx_conf->tx_thresh.pthresh;
-	txq->hthresh = tx_conf->tx_thresh.hthresh;
-	txq->wthresh = tx_conf->tx_thresh.wthresh;
-	txq->queue_id = queue_idx;
-	if (hw->mac.type == I40E_MAC_VF)
-		txq->reg_idx = queue_idx;
-	else /* PF device */
-		txq->reg_idx = vsi->base_queue +
-			i40e_get_queue_offset_by_qindex(pf, queue_idx);
-
-	txq->port_id = dev->data->port_id;
-	txq->txq_flags = tx_conf->txq_flags;
-	txq->vsi = vsi;
-	txq->tx_deferred_start = tx_conf->tx_deferred_start;
-
-#ifdef RTE_LIBRTE_XEN_DOM0
-	txq->tx_ring_phys_addr = rte_mem_phy2mch(tz->memseg_id, tz->phys_addr);
-#else
-	txq->tx_ring_phys_addr = (uint64_t)tz->phys_addr;
-#endif
-	txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
-
-	/* Allocate software ring */
-	txq->sw_ring =
-		rte_zmalloc_socket("i40e tx sw ring",
-				   sizeof(struct i40e_tx_entry) * nb_desc,
-				   RTE_CACHE_LINE_SIZE,
-				   socket_id);
-	if (!txq->sw_ring) {
-		i40e_dev_tx_queue_release(txq);
-		PMD_DRV_LOG(ERR, "Failed to allocate memory for SW TX ring");
-		return (-ENOMEM);
-	}
-
-	i40e_reset_tx_queue(txq);
-	txq->q_set = TRUE;
-	dev->data->tx_queues[queue_idx] = txq;
-
-	/* Use a simple TX queue without offloads or multi segs if possible */
-	if (((txq->txq_flags & I40E_SIMPLE_FLAGS) == I40E_SIMPLE_FLAGS) &&
-	    (txq->tx_rs_thresh >= I40E_TX_MAX_BURST)) {
-		PMD_INIT_LOG(INFO, "Using simple tx path");
-		dev->tx_pkt_burst = i40e_xmit_pkts_simple;
-	} else {
-		PMD_INIT_LOG(INFO, "Using full-featured tx path");
-		dev->tx_pkt_burst = i40e_xmit_pkts;
-	}
-
-	return 0;
-}
-
-void
-i40e_dev_tx_queue_release(void *txq)
-{
-	struct i40e_tx_queue *q = (struct i40e_tx_queue *)txq;
-
-	if (!q) {
-		PMD_DRV_LOG(DEBUG, "Pointer to TX queue is NULL");
-		return;
-	}
-
-	i40e_tx_queue_release_mbufs(q);
-	rte_free(q->sw_ring);
-	rte_free(q);
-}
-
-static const struct rte_memzone *
-i40e_ring_dma_zone_reserve(struct rte_eth_dev *dev,
-			   const char *ring_name,
-			   uint16_t queue_id,
-			   uint32_t ring_size,
-			   int socket_id)
-{
-	char z_name[RTE_MEMZONE_NAMESIZE];
-	const struct rte_memzone *mz;
-
-	snprintf(z_name, sizeof(z_name), "%s_%s_%d_%d",
-		 dev->driver->pci_drv.name, ring_name,
-		 dev->data->port_id, queue_id);
-	mz = rte_memzone_lookup(z_name);
-	if (mz)
-		return mz;
-
-#ifdef RTE_LIBRTE_XEN_DOM0
-	return rte_memzone_reserve_bounded(z_name, ring_size,
-		socket_id, 0, I40E_ALIGN, RTE_PGSIZE_2M);
-#else
-	return rte_memzone_reserve_aligned(z_name, ring_size,
-		socket_id, 0, I40E_ALIGN);
-#endif
-}
-
-const struct rte_memzone *
-i40e_memzone_reserve(const char *name, uint32_t len, int socket_id)
-{
-	const struct rte_memzone *mz = NULL;
-
-	mz = rte_memzone_lookup(name);
-	if (mz)
-		return mz;
-#ifdef RTE_LIBRTE_XEN_DOM0
-	mz = rte_memzone_reserve_bounded(name, len,
-		socket_id, 0, I40E_ALIGN, RTE_PGSIZE_2M);
-#else
-	mz = rte_memzone_reserve_aligned(name, len,
-		socket_id, 0, I40E_ALIGN);
-#endif
-	return mz;
-}
-
-void
-i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq)
-{
-	uint16_t i;
-
-	if (!rxq || !rxq->sw_ring) {
-		PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
-		return;
-	}
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		if (rxq->sw_ring[i].mbuf) {
-			rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
-			rxq->sw_ring[i].mbuf = NULL;
-		}
-	}
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-	if (rxq->rx_nb_avail == 0)
-		return;
-	for (i = 0; i < rxq->rx_nb_avail; i++) {
-		struct rte_mbuf *mbuf;
-
-		mbuf = rxq->rx_stage[rxq->rx_next_avail + i];
-		rte_pktmbuf_free_seg(mbuf);
-	}
-	rxq->rx_nb_avail = 0;
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
-}
-
-void
-i40e_reset_rx_queue(struct i40e_rx_queue *rxq)
-{
-	unsigned i;
-	uint16_t len;
-
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-	if (check_rx_burst_bulk_alloc_preconditions(rxq) == 0)
-		len = (uint16_t)(rxq->nb_rx_desc + RTE_PMD_I40E_RX_MAX_BURST);
-	else
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
-		len = rxq->nb_rx_desc;
-
-	for (i = 0; i < len * sizeof(union i40e_rx_desc); i++)
-		((volatile char *)rxq->rx_ring)[i] = 0;
-
-#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
-	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
-	for (i = 0; i < RTE_PMD_I40E_RX_MAX_BURST; ++i)
-		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
-
-	rxq->rx_nb_avail = 0;
-	rxq->rx_next_avail = 0;
-	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
-#endif /* RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC */
-	rxq->rx_tail = 0;
-	rxq->nb_rx_hold = 0;
-	rxq->pkt_first_seg = NULL;
-	rxq->pkt_last_seg = NULL;
-}
-
-void
-i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq)
-{
-	uint16_t i;
-
-	if (!txq || !txq->sw_ring) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq or sw_ring is NULL");
-		return;
-	}
-
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		if (txq->sw_ring[i].mbuf) {
-			rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
-			txq->sw_ring[i].mbuf = NULL;
-		}
-	}
-}
-
-void
-i40e_reset_tx_queue(struct i40e_tx_queue *txq)
-{
-	struct i40e_tx_entry *txe;
-	uint16_t i, prev, size;
-
-	if (!txq) {
-		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
-		return;
-	}
-
-	txe = txq->sw_ring;
-	size = sizeof(struct i40e_tx_desc) * txq->nb_tx_desc;
-	for (i = 0; i < size; i++)
-		((volatile char *)txq->tx_ring)[i] = 0;
-
-	prev = (uint16_t)(txq->nb_tx_desc - 1);
-	for (i = 0; i < txq->nb_tx_desc; i++) {
-		volatile struct i40e_tx_desc *txd = &txq->tx_ring[i];
-
-		txd->cmd_type_offset_bsz =
-			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE);
-		txe[i].mbuf = NULL;
-		txe[i].last_id = i;
-		txe[prev].next_id = i;
-		prev = i;
-	}
-
-	txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
-	txq->tx_next_rs = (uint16_t)(txq->tx_rs_thresh - 1);
-
-	txq->tx_tail = 0;
-	txq->nb_tx_used = 0;
-
-	txq->last_desc_cleaned = (uint16_t)(txq->nb_tx_desc - 1);
-	txq->nb_tx_free = (uint16_t)(txq->nb_tx_desc - 1);
-}
-
-/* Init the TX queue in hardware */
-int
-i40e_tx_queue_init(struct i40e_tx_queue *txq)
-{
-	enum i40e_status_code err = I40E_SUCCESS;
-	struct i40e_vsi *vsi = txq->vsi;
-	struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
-	uint16_t pf_q = txq->reg_idx;
-	struct i40e_hmc_obj_txq tx_ctx;
-	uint32_t qtx_ctl;
-
-	/* clear the context structure first */
-	memset(&tx_ctx, 0, sizeof(tx_ctx));
-	tx_ctx.new_context = 1;
-	tx_ctx.base = txq->tx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT;
-	tx_ctx.qlen = txq->nb_tx_desc;
-	tx_ctx.rdylist = rte_le_to_cpu_16(vsi->info.qs_handle[0]);
-	if (vsi->type == I40E_VSI_FDIR)
-		tx_ctx.fd_ena = TRUE;
-
-	err = i40e_clear_lan_tx_queue_context(hw, pf_q);
-	if (err != I40E_SUCCESS) {
-		PMD_DRV_LOG(ERR, "Failed to clear LAN TX queue context");
-		return err;
-	}
-
-	err = i40e_set_lan_tx_queue_context(hw, pf_q, &tx_ctx);
-	if (err != I40E_SUCCESS) {
-		PMD_DRV_LOG(ERR, "Failed to set LAN TX queue context");
-		return err;
-	}
-
-	/* Now associate this queue with this PCI function */
-	qtx_ctl = I40E_QTX_CTL_PF_QUEUE;
-	qtx_ctl |= ((hw->pf_id << I40E_QTX_CTL_PF_INDX_SHIFT) &
-					I40E_QTX_CTL_PF_INDX_MASK);
-	I40E_WRITE_REG(hw, I40E_QTX_CTL(pf_q), qtx_ctl);
-	I40E_WRITE_FLUSH(hw);
-
-	txq->qtx_tail = hw->hw_addr + I40E_QTX_TAIL(pf_q);
-
-	return err;
-}
-
-int
-i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq)
-{
-	struct i40e_rx_entry *rxe = rxq->sw_ring;
-	uint64_t dma_addr;
-	uint16_t i;
-
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		volatile union i40e_rx_desc *rxd;
-		struct rte_mbuf *mbuf = rte_rxmbuf_alloc(rxq->mp);
-
-		if (unlikely(!mbuf)) {
-			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
-			return -ENOMEM;
-		}
-
-		rte_mbuf_refcnt_set(mbuf, 1);
-		mbuf->next = NULL;
-		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-		mbuf->nb_segs = 1;
-		mbuf->port = rxq->port_id;
-
-		dma_addr =
-			rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));
-
-		rxd = &rxq->rx_ring[i];
-		rxd->read.pkt_addr = dma_addr;
-		rxd->read.hdr_addr = dma_addr;
-#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC
-		rxd->read.rsvd1 = 0;
-		rxd->read.rsvd2 = 0;
-#endif /* RTE_LIBRTE_I40E_16BYTE_RX_DESC */
-
-		rxe[i].mbuf = mbuf;
-	}
-
-	return 0;
-}
-
-/*
- * Calculate the buffer length, and check the jumbo frame
- * and maximum packet length.
- */ -static int -i40e_rx_queue_config(struct i40e_rx_queue *rxq) -{ - struct i40e_pf *pf = I40E_VSI_TO_PF(rxq->vsi); - struct i40e_hw *hw = I40E_VSI_TO_HW(rxq->vsi); - struct rte_eth_dev_data *data = pf->dev_data; - uint16_t buf_size, len; - - buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - - RTE_PKTMBUF_HEADROOM); - - switch (pf->flags & (I40E_FLAG_HEADER_SPLIT_DISABLED | - I40E_FLAG_HEADER_SPLIT_ENABLED)) { - case I40E_FLAG_HEADER_SPLIT_ENABLED: /* Not supported */ - rxq->rx_hdr_len = RTE_ALIGN(I40E_RXBUF_SZ_1024, - (1 << I40E_RXQ_CTX_HBUFF_SHIFT)); - rxq->rx_buf_len = RTE_ALIGN(I40E_RXBUF_SZ_2048, - (1 << I40E_RXQ_CTX_DBUFF_SHIFT)); - rxq->hs_mode = i40e_header_split_enabled; - break; - case I40E_FLAG_HEADER_SPLIT_DISABLED: - default: - rxq->rx_hdr_len = 0; - rxq->rx_buf_len = RTE_ALIGN(buf_size, - (1 << I40E_RXQ_CTX_DBUFF_SHIFT)); - rxq->hs_mode = i40e_header_split_none; - break; - } - - len = hw->func_caps.rx_buf_chain_len * rxq->rx_buf_len; - rxq->max_pkt_len = RTE_MIN(len, data->dev_conf.rxmode.max_rx_pkt_len); - if (data->dev_conf.rxmode.jumbo_frame == 1) { - if (rxq->max_pkt_len <= ETHER_MAX_LEN || - rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) { - PMD_DRV_LOG(ERR, "maximum packet length must " - "be larger than %u and smaller than %u," - "as jumbo frame is enabled", - (uint32_t)ETHER_MAX_LEN, - (uint32_t)I40E_FRAME_SIZE_MAX); - return I40E_ERR_CONFIG; - } - } else { - if (rxq->max_pkt_len < ETHER_MIN_LEN || - rxq->max_pkt_len > ETHER_MAX_LEN) { - PMD_DRV_LOG(ERR, "maximum packet length must be " - "larger than %u and smaller than %u, " - "as jumbo frame is disabled", - (uint32_t)ETHER_MIN_LEN, - (uint32_t)ETHER_MAX_LEN); - return I40E_ERR_CONFIG; - } - } - - return 0; -} - -/* Init the RX queue in hardware */ -int -i40e_rx_queue_init(struct i40e_rx_queue *rxq) -{ - int err = I40E_SUCCESS; - struct i40e_hw *hw = I40E_VSI_TO_HW(rxq->vsi); - struct rte_eth_dev_data *dev_data = I40E_VSI_TO_DEV_DATA(rxq->vsi); - struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(rxq->vsi); - uint16_t pf_q = rxq->reg_idx; - uint16_t buf_size; - struct i40e_hmc_obj_rxq rx_ctx; - - err = i40e_rx_queue_config(rxq); - if (err < 0) { - PMD_DRV_LOG(ERR, "Failed to config RX queue"); - return err; - } - - /* Clear the context structure first */ - memset(&rx_ctx, 0, sizeof(struct i40e_hmc_obj_rxq)); - rx_ctx.dbuff = rxq->rx_buf_len >> I40E_RXQ_CTX_DBUFF_SHIFT; - rx_ctx.hbuff = rxq->rx_hdr_len >> I40E_RXQ_CTX_HBUFF_SHIFT; - - rx_ctx.base = rxq->rx_ring_phys_addr / I40E_QUEUE_BASE_ADDR_UNIT; - rx_ctx.qlen = rxq->nb_rx_desc; -#ifndef RTE_LIBRTE_I40E_16BYTE_RX_DESC - rx_ctx.dsize = 1; -#endif - rx_ctx.dtype = rxq->hs_mode; - if (rxq->hs_mode) - rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_ALL; - else - rx_ctx.hsplit_0 = I40E_HEADER_SPLIT_NONE; - rx_ctx.rxmax = rxq->max_pkt_len; - rx_ctx.tphrdesc_ena = 1; - rx_ctx.tphwdesc_ena = 1; - rx_ctx.tphdata_ena = 1; - rx_ctx.tphhead_ena = 1; - rx_ctx.lrxqthresh = 2; - rx_ctx.crcstrip = (rxq->crc_len == 0) ? 
1 : 0;
-	rx_ctx.l2tsel = 1;
-	rx_ctx.showiv = 1;
-	rx_ctx.prefena = 1;
-
-	err = i40e_clear_lan_rx_queue_context(hw, pf_q);
-	if (err != I40E_SUCCESS) {
-		PMD_DRV_LOG(ERR, "Failed to clear LAN RX queue context");
-		return err;
-	}
-	err = i40e_set_lan_rx_queue_context(hw, pf_q, &rx_ctx);
-	if (err != I40E_SUCCESS) {
-		PMD_DRV_LOG(ERR, "Failed to set LAN RX queue context");
-		return err;
-	}
-
-	rxq->qrx_tail = hw->hw_addr + I40E_QRX_TAIL(pf_q);
-
-	buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) -
-		RTE_PKTMBUF_HEADROOM);
-
-	/* Check if scattered RX needs to be used. */
-	if ((rxq->max_pkt_len + 2 * I40E_VLAN_TAG_SIZE) > buf_size) {
-		dev_data->scattered_rx = 1;
-		dev->rx_pkt_burst = i40e_recv_scattered_pkts;
-	}
-
-	/* Init the RX tail register. */
-	I40E_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
-
-	return 0;
-}
-
-void
-i40e_dev_clear_queues(struct rte_eth_dev *dev)
-{
-	uint16_t i;
-
-	PMD_INIT_FUNC_TRACE();
-
-	for (i = 0; i < dev->data->nb_tx_queues; i++) {
-		i40e_tx_queue_release_mbufs(dev->data->tx_queues[i]);
-		i40e_reset_tx_queue(dev->data->tx_queues[i]);
-	}
-
-	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		i40e_rx_queue_release_mbufs(dev->data->rx_queues[i]);
-		i40e_reset_rx_queue(dev->data->rx_queues[i]);
-	}
-}
-
-#define I40E_FDIR_NUM_TX_DESC  I40E_MIN_RING_DESC
-#define I40E_FDIR_NUM_RX_DESC  I40E_MIN_RING_DESC
-
-enum i40e_status_code
-i40e_fdir_setup_tx_resources(struct i40e_pf *pf)
-{
-	struct i40e_tx_queue *txq;
-	const struct rte_memzone *tz = NULL;
-	uint32_t ring_size;
-	struct rte_eth_dev *dev = pf->adapter->eth_dev;
-
-	if (!pf) {
-		PMD_DRV_LOG(ERR, "PF is not available");
-		return I40E_ERR_BAD_PTR;
-	}
-
-	/* Allocate the TX queue data structure. */
-	txq = rte_zmalloc_socket("i40e fdir tx queue",
-				 sizeof(struct i40e_tx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 SOCKET_ID_ANY);
-	if (!txq) {
-		PMD_DRV_LOG(ERR, "Failed to allocate memory for "
-			    "tx queue structure.");
-		return I40E_ERR_NO_MEMORY;
-	}
-
-	/* Allocate TX hardware ring descriptors. */
-	ring_size = sizeof(struct i40e_tx_desc) * I40E_FDIR_NUM_TX_DESC;
-	ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
-
-	tz = i40e_ring_dma_zone_reserve(dev,
-					"fdir_tx_ring",
-					I40E_FDIR_QUEUE_ID,
-					ring_size,
-					SOCKET_ID_ANY);
-	if (!tz) {
-		i40e_dev_tx_queue_release(txq);
-		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX.");
-		return I40E_ERR_NO_MEMORY;
-	}
-
-	txq->nb_tx_desc = I40E_FDIR_NUM_TX_DESC;
-	txq->queue_id = I40E_FDIR_QUEUE_ID;
-	txq->reg_idx = pf->fdir.fdir_vsi->base_queue;
-	txq->vsi = pf->fdir.fdir_vsi;
-
-#ifdef RTE_LIBRTE_XEN_DOM0
-	txq->tx_ring_phys_addr = rte_mem_phy2mch(tz->memseg_id, tz->phys_addr);
-#else
-	txq->tx_ring_phys_addr = (uint64_t)tz->phys_addr;
-#endif
-	txq->tx_ring = (struct i40e_tx_desc *)tz->addr;
-	/*
-	 * No need to allocate a software ring or to reset the FDIR
-	 * program queue; just mark the queue as configured.
-	 */
-	txq->q_set = TRUE;
-	pf->fdir.txq = txq;
-
-	return I40E_SUCCESS;
-}
-
-enum i40e_status_code
-i40e_fdir_setup_rx_resources(struct i40e_pf *pf)
-{
-	struct i40e_rx_queue *rxq;
-	const struct rte_memzone *rz = NULL;
-	uint32_t ring_size;
-	struct rte_eth_dev *dev = pf->adapter->eth_dev;
-
-	if (!pf) {
-		PMD_DRV_LOG(ERR, "PF is not available");
-		return I40E_ERR_BAD_PTR;
-	}
-
-	/* Allocate the RX queue data structure. */
-	rxq = rte_zmalloc_socket("i40e fdir rx queue",
-				 sizeof(struct i40e_rx_queue),
-				 RTE_CACHE_LINE_SIZE,
-				 SOCKET_ID_ANY);
-	if (!rxq) {
-		PMD_DRV_LOG(ERR, "Failed to allocate memory for "
-			    "rx queue structure.");
-		return I40E_ERR_NO_MEMORY;
-	}
-
-	/* Allocate RX hardware ring descriptors. */
-	ring_size = sizeof(union i40e_rx_desc) * I40E_FDIR_NUM_RX_DESC;
-	ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
-
-	rz = i40e_ring_dma_zone_reserve(dev,
-					"fdir_rx_ring",
-					I40E_FDIR_QUEUE_ID,
-					ring_size,
-					SOCKET_ID_ANY);
-	if (!rz) {
-		i40e_dev_rx_queue_release(rxq);
-		PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for RX.");
-		return I40E_ERR_NO_MEMORY;
-	}
-
-	rxq->nb_rx_desc = I40E_FDIR_NUM_RX_DESC;
-	rxq->queue_id = I40E_FDIR_QUEUE_ID;
-	rxq->reg_idx = pf->fdir.fdir_vsi->base_queue;
-	rxq->vsi = pf->fdir.fdir_vsi;
-
-#ifdef RTE_LIBRTE_XEN_DOM0
-	rxq->rx_ring_phys_addr = rte_mem_phy2mch(rz->memseg_id, rz->phys_addr);
-#else
-	rxq->rx_ring_phys_addr = (uint64_t)rz->phys_addr;
-#endif
-	rxq->rx_ring = (union i40e_rx_desc *)rz->addr;
-
-	/*
-	 * No need to allocate a software ring or to reset the FDIR
-	 * RX queue; just mark the queue as configured.
-	 */
-	rxq->q_set = TRUE;
-	pf->fdir.rxq = rxq;
-
-	return I40E_SUCCESS;
-}
diff --git a/lib/librte_pmd_i40e/i40e_rxtx.h b/lib/librte_pmd_i40e/i40e_rxtx.h
deleted file mode 100644
index 5b2984a..0000000
--- a/lib/librte_pmd_i40e/i40e_rxtx.h
+++ /dev/null
@@ -1,211 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _I40E_RXTX_H_
-#define _I40E_RXTX_H_
-
-/**
- * 32-bit tx flags: the high 16 bits are for L2TAG1 (VLAN),
- * the low 16 bits for others.
- */ -#define I40E_TX_FLAG_L2TAG1_SHIFT 16 -#define I40E_TX_FLAG_L2TAG1_MASK 0xffff0000 -#define I40E_TX_FLAG_CSUM ((uint32_t)(1 << 0)) -#define I40E_TX_FLAG_INSERT_VLAN ((uint32_t)(1 << 1)) -#define I40E_TX_FLAG_TSYN ((uint32_t)(1 << 2)) - -#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC -#define RTE_PMD_I40E_RX_MAX_BURST 32 -#endif - -#define I40E_RXBUF_SZ_1024 1024 -#define I40E_RXBUF_SZ_2048 2048 - -enum i40e_header_split_mode { - i40e_header_split_none = 0, - i40e_header_split_enabled = 1, - i40e_header_split_always = 2, - i40e_header_split_reserved -}; - -#define I40E_HEADER_SPLIT_NONE ((uint8_t)0) -#define I40E_HEADER_SPLIT_L2 ((uint8_t)(1 << 0)) -#define I40E_HEADER_SPLIT_IP ((uint8_t)(1 << 1)) -#define I40E_HEADER_SPLIT_UDP_TCP ((uint8_t)(1 << 2)) -#define I40E_HEADER_SPLIT_SCTP ((uint8_t)(1 << 3)) -#define I40E_HEADER_SPLIT_ALL (I40E_HEADER_SPLIT_L2 | \ - I40E_HEADER_SPLIT_IP | \ - I40E_HEADER_SPLIT_UDP_TCP | \ - I40E_HEADER_SPLIT_SCTP) - -/* HW desc structure, both 16-byte and 32-byte types are supported */ -#ifdef RTE_LIBRTE_I40E_16BYTE_RX_DESC -#define i40e_rx_desc i40e_16byte_rx_desc -#else -#define i40e_rx_desc i40e_32byte_rx_desc -#endif - -struct i40e_rx_entry { - struct rte_mbuf *mbuf; -}; - -/* - * Structure associated with each RX queue. - */ -struct i40e_rx_queue { - struct rte_mempool *mp; /**< mbuf pool to populate RX ring */ - volatile union i40e_rx_desc *rx_ring;/**< RX ring virtual address */ - uint64_t rx_ring_phys_addr; /**< RX ring DMA address */ - struct i40e_rx_entry *sw_ring; /**< address of RX soft ring */ - uint16_t nb_rx_desc; /**< number of RX descriptors */ - uint16_t rx_free_thresh; /**< max free RX desc to hold */ - uint16_t rx_tail; /**< current value of tail */ - uint16_t nb_rx_hold; /**< number of held free RX desc */ - struct rte_mbuf *pkt_first_seg; /**< first segment of current packet */ - struct rte_mbuf *pkt_last_seg; /**< last segment of current packet */ -#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC - uint16_t rx_nb_avail; /**< number of staged packets ready */ - uint16_t rx_next_avail; /**< index of next staged packets */ - uint16_t rx_free_trigger; /**< triggers rx buffer allocation */ - struct rte_mbuf fake_mbuf; /**< dummy mbuf */ - struct rte_mbuf *rx_stage[RTE_PMD_I40E_RX_MAX_BURST * 2]; -#endif - uint8_t port_id; /**< device port ID */ - uint8_t crc_len; /**< 0 if CRC stripped, 4 otherwise */ - uint16_t queue_id; /**< RX queue index */ - uint16_t reg_idx; /**< RX queue register index */ - uint8_t drop_en; /**< if not 0, set register bit */ - volatile uint8_t *qrx_tail; /**< register address of tail */ - struct i40e_vsi *vsi; /**< the VSI this queue belongs to */ - uint16_t rx_buf_len; /* The packet buffer size */ - uint16_t rx_hdr_len; /* The header buffer size */ - uint16_t max_pkt_len; /* Maximum packet length */ - uint8_t hs_mode; /* Header Split mode */ - bool q_set; /**< indicate if rx queue has been configured */ - bool rx_deferred_start; /**< don't start this queue in dev start */ -}; - -struct i40e_tx_entry { - struct rte_mbuf *mbuf; - uint16_t next_id; - uint16_t last_id; -}; - -/* - * Structure associated with each TX queue. 
- */
-struct i40e_tx_queue {
-	uint16_t nb_tx_desc; /**< number of TX descriptors */
-	uint64_t tx_ring_phys_addr; /**< TX ring DMA address */
-	volatile struct i40e_tx_desc *tx_ring; /**< TX ring virtual address */
-	struct i40e_tx_entry *sw_ring; /**< virtual address of SW ring */
-	uint16_t tx_tail; /**< current value of tail register */
-	volatile uint8_t *qtx_tail; /**< register address of tail */
-	uint16_t nb_tx_used; /**< number of TX desc used since RS bit set */
-	/**< index to last TX descriptor to have been cleaned */
-	uint16_t last_desc_cleaned;
-	/**< Total number of TX descriptors ready to be allocated. */
-	uint16_t nb_tx_free;
-	/**< Start freeing TX buffers if there are less free descriptors
-	     than this value. */
-	uint16_t tx_free_thresh; /**< minimum TX before freeing. */
-	/** Number of TX descriptors to use before RS bit is set. */
-	uint16_t tx_rs_thresh;
-	uint8_t pthresh; /**< Prefetch threshold register. */
-	uint8_t hthresh; /**< Host threshold register. */
-	uint8_t wthresh; /**< Write-back threshold reg. */
-	uint8_t port_id; /**< Device port identifier. */
-	uint16_t queue_id; /**< TX queue index. */
-	uint16_t reg_idx;
-	uint32_t txq_flags;
-	struct i40e_vsi *vsi; /**< the VSI this queue belongs to */
-	uint16_t tx_next_dd;
-	uint16_t tx_next_rs;
-	bool q_set; /**< indicate if tx queue has been configured */
-	bool tx_deferred_start; /**< don't start this queue in dev start */
-};
-
-/** Offload features */
-union i40e_tx_offload {
-	uint64_t data;
-	struct {
-		uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
-		uint64_t l3_len:9; /**< L3 (IP) Header Length. */
-		uint64_t l4_len:8; /**< L4 Header Length. */
-		uint64_t tso_segsz:16; /**< TCP TSO segment size */
-		uint64_t outer_l2_len:8; /**< outer L2 Header Length */
-		uint64_t outer_l3_len:16; /**< outer L3 Header Length */
-	};
-};
-
-int i40e_dev_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
-int i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
-int i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
-int i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
-int i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
-			    uint16_t queue_idx,
-			    uint16_t nb_desc,
-			    unsigned int socket_id,
-			    const struct rte_eth_rxconf *rx_conf,
-			    struct rte_mempool *mp);
-int i40e_dev_tx_queue_setup(struct rte_eth_dev *dev,
-			    uint16_t queue_idx,
-			    uint16_t nb_desc,
-			    unsigned int socket_id,
-			    const struct rte_eth_txconf *tx_conf);
-void i40e_dev_rx_queue_release(void *rxq);
-void i40e_dev_tx_queue_release(void *txq);
-uint16_t i40e_recv_pkts(void *rx_queue,
-			struct rte_mbuf **rx_pkts,
-			uint16_t nb_pkts);
-uint16_t i40e_recv_scattered_pkts(void *rx_queue,
-				  struct rte_mbuf **rx_pkts,
-				  uint16_t nb_pkts);
-uint16_t i40e_xmit_pkts(void *tx_queue,
-			struct rte_mbuf **tx_pkts,
-			uint16_t nb_pkts);
-int i40e_tx_queue_init(struct i40e_tx_queue *txq);
-int i40e_rx_queue_init(struct i40e_rx_queue *rxq);
-void i40e_free_tx_resources(struct i40e_tx_queue *txq);
-void i40e_free_rx_resources(struct i40e_rx_queue *rxq);
-void i40e_dev_clear_queues(struct rte_eth_dev *dev);
-void i40e_reset_rx_queue(struct i40e_rx_queue *rxq);
-void i40e_reset_tx_queue(struct i40e_tx_queue *txq);
-void i40e_tx_queue_release_mbufs(struct i40e_tx_queue *txq);
-int i40e_alloc_rx_queue_mbufs(struct i40e_rx_queue *rxq);
-void i40e_rx_queue_release_mbufs(struct i40e_rx_queue *rxq);
-
-uint32_t i40e_dev_rx_queue_count(struct rte_eth_dev *dev,
-				 uint16_t rx_queue_id);
-int i40e_dev_rx_descriptor_done(void *rx_queue,
uint16_t offset); - -#endif /* _I40E_RXTX_H_ */ diff --git a/lib/librte_pmd_i40e/rte_pmd_i40e_version.map b/lib/librte_pmd_i40e/rte_pmd_i40e_version.map deleted file mode 100644 index ef35398..0000000 --- a/lib/librte_pmd_i40e/rte_pmd_i40e_version.map +++ /dev/null @@ -1,4 +0,0 @@ -DPDK_2.0 { - - local: *; -}; -- 2.1.0
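
For reference, the RS/free threshold rules documented in the i40e_dev_tx_queue_setup() comment above can be exercised in isolation. Below is a minimal standalone sketch: check_tx_thresh() and the SKETCH_* defaults are illustrative stand-ins, not symbols from this driver, and the assumed default values may differ from the PMD's actual DEFAULT_TX_RS_THRESH/DEFAULT_TX_FREE_THRESH.

#include <stdint.h>
#include <stdio.h>

#define SKETCH_DEFAULT_TX_RS_THRESH   32 /* assumed default, not the PMD's */
#define SKETCH_DEFAULT_TX_FREE_THRESH 32 /* assumed default, not the PMD's */

/* Return 0 when the thresholds satisfy the constraints listed in the
 * i40e_dev_tx_queue_setup() comment, -1 otherwise. */
static int
check_tx_thresh(uint16_t nb_desc, uint16_t rs_thresh, uint16_t free_thresh)
{
	/* A value of zero selects the default, as in the driver. */
	if (rs_thresh == 0)
		rs_thresh = SKETCH_DEFAULT_TX_RS_THRESH;
	if (free_thresh == 0)
		free_thresh = SKETCH_DEFAULT_TX_FREE_THRESH;

	/* One descriptor is kept back as a sentinel against a H/W race,
	 * hence the "minus 2" / "minus 3" upper bounds. */
	if (rs_thresh >= (uint16_t)(nb_desc - 2))
		return -1;
	if (free_thresh >= (uint16_t)(nb_desc - 3))
		return -1;
	/* Descriptors must be reported done (RS) before they are freed. */
	if (rs_thresh > free_thresh)
		return -1;
	/* The RS positions must tile the ring exactly. */
	if (nb_desc % rs_thresh != 0)
		return -1;
	return 0;
}

int
main(void)
{
	/* 512 descriptors with the defaults satisfy every constraint. */
	printf("defaults ok: %d\n", check_tx_thresh(512, 0, 0) == 0);
	/* rs_thresh (64) > free_thresh (32) violates the ordering rule. */
	printf("bad order rejected: %d\n", check_tx_thresh(512, 64, 32) != 0);
	return 0;
}

Built with any C compiler, the first call passes and the second is rejected on the rs_thresh <= free_thresh ordering rule, matching the error paths in i40e_dev_tx_queue_setup().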